(Sept 5 update at bottom)
At Techgage, we care a lot about benchmarks, especially real-world ones. Over time, we’ve phased out irrelevant benchmarks to make way for more realistic tests that our readers can actually make use of. A few years ago, our Cinebench testing was joined by real Cinema 4D testing, which helped confirm that the standalone benchmark accurately reflects the full application.
Our real-world Cinema 4D test
Cinebench in particular has been the target of much ire over the years, but because we’ve found it to scale with the real Cinema 4D, we continue to put faith in it. One could argue that a single benchmark can’t summarize the total capabilities of a chip, and that’s a point we’ve tried to make ourselves. It’s why we constantly harp on the fact that it “pays to know your workload”.
During performance briefings at IFA, Intel doubled down on the claim that Cinebench is not a real-world benchmark, and went further to suggest that the entire Cinema 4D family just doesn’t matter too much:
The slide above states that C4D claims 0.22% of the mobile user market (told to us on a call; we’d assume it refers to creator use), and that it’s “not really” real-world. This message is untrue. We’ve already shown that Cinebench represents real-world performance, and anyone who claims Cinema 4D isn’t a popular application clearly doesn’t follow the market too closely.
Intel’s position was very different just last year. The company worked with Maxon and others to extol the virtues of big CPUs. Take a peek at this whitepaper posted in May 2018:
It took Marc around 80 working hours to develop the Cinema 4D Mountainvista Scene Workload. Marc used onboard shading tools in the Cinema 4D R19 software to create the realistic pebbles, cliffs, and landscapes that make up the mountain scene.
Intel benchmarked the performance of the Intel Core i9-7980XE Extreme Edition processor with 18 cores against the performance of a 4-core processor to render in Cinema 4D R19 using the Mountainvista Scene Workload.
We found that the Intel Core i9-7980XE Extreme Edition processor was able to reduce the render time from 19 minutes to just six minutes.
But sure, it’s not a “real-world” workload.
Let’s look at this from another viewpoint. Intel has focused entirely on Cinebench, as if it’s the only renderer in the neighborhood. It chooses to ignore the same kind of scaling seen in other popular renderers we’ve tested, such as Adobe Dimension, Autodesk Arnold, Blender Cycles, Luxion KeyShot, Chaos Czech Corona, and Chaos Group V-Ray (in which Intel generally beats AMD core-for-core, so maybe the company should focus on that).
We first learned of Cinebench R15’s release during our Intel meeting at CES 2014. At the time, the Core i7-4790K ‘Devil’s Canyon’ quad-core was on the horizon, and the company used the benchmark to highlight its improvements over the previous-gen chip. If Intel thought Cinebench had a place in 2014, when quad-cores were the norm, it certainly has a place in 2019, when 16+ core chips exist.
Chaos Group’s V-Ray generally performs better on Intel than AMD
At the 9900K launch last fall, the message was all about gaming. It’s hard to forget the Principled Technologies debacle that happened around that time, but if you have forgotten, here’s a reminder. It’s odd, then, that gaming was the focus back then, but today, it seems to be about Microsoft Excel and other mundane tests run through BAPCo SYSmark (which the company seemingly downplayed at this event).
We can honestly say that in the fifteen years Techgage has been around, we have not been asked about Excel performance (or Office performance in general) even once. But renderers and other creative software solutions? We can’t produce enough results to satiate our readers’ appetites. That doesn’t mean Office performance doesn’t matter, but in 2019, it’s so inconsequential that we haven’t heard a single request to test it. And which is more interesting: a 5% boost to Excel, or a render completing 20% faster?
Intel’s stance on important CPU performance
To be clear, Intel’s CPUs still offer some great benefits versus the competition, but this brings us back to the important point that it pays to know your workload. As it stands right now, Ryzen tends to offer explosive value for rendering, while Intel offers an obvious leg-up with video encoding. If you’re not sure where your workload sits, you can check out our most recent performance look. You don’t even need to read our own words, because the graphs speak for themselves.
September 5 Addendum:
Intel reached out to offer its thoughts and corrections. The company says that it has nothing against Maxon’s Cinema 4D or its standalone Cinebench benchmark. It does believe, though, that too many people rely on it for an overall view of processor performance. That could be true, and it shouldn’t be done. Again, it “pays to know your workload”, and not all workloads are built alike.
At Intel’s IFA briefing, the company’s focus on C4D/CB performance revolved entirely around notebooks, not desktops. That was a little confusing to us, since the “not really” real-world mentions of C4D/CB came directly after the mention of the next Core X-series launch in the slides.
Our overall take hasn’t changed. After AMD released higher core-count CPUs, Intel began downplaying the importance of Cinebench, despite having touted its own scores many times before. That feels like too much of a coincidence to us. Leaked slides from a few weeks ago did include a desktop focus, pitting the 9900K against the 3900X; in them, Cinebench was the only win for the AMD side.
We think Intel should focus on its own strengths instead of downplaying the importance of a real-world, realistic benchmark. We mentioned above that V-Ray runs better on Intel, and it also happens to be available as a standalone real-world benchmark.
At the end of the day, it shouldn’t matter so much what benchmarks companies are promoting. It’s real-world testing that truly matters. So the ultimate moral of the story is: never (ever) use a single metric to decide on your new CPU.