Date: August 29, 2014
Author(s): Rob Williams
In late 2011, I wagered that Intel would follow up its i7-3960X with an eight-core model within the year. That didn’t happen. Instead, we have had to wait nearly three years since that release to finally see an eight-core Intel desktop chip become a reality. Now for the big question: Was the company’s Core i7-5960X worth the wait?
It was a glorious thing when Intel released its Gulftown-based Core i7-980X in early 2010. It was the first desktop CPU out of the company that featured six cores, and it felt like a relative boon to those, like me, who run virtual machines, encode video, compile software, and more. In the end, I was left seriously impressed with that chip.
At that time, I naively believed that an eight-core chip was in our near future. In fact, I outright expected Sandy Bridge-E, which released in late 2011, to have an eight-core model. It didn’t. I didn’t let anything dampen my spirits though, as at the end of my look at the Core i7-3960X, I said something to the effect of, “I can’t imagine that eight-core models will not be released within the next six months or so.”
I’m not sure what got that into my head. A staggering 1,020 days have passed since the 3960X’s announcement, and we finally have an eight-core chip at our disposal. Admittedly, I wasn’t exactly enthralled by what the Core i7-4960X brought to the table, but with eight cores, I think the i7-5960X has a great chance at having a different conclusion.
Welcome to Haswell-E:
That’s a chip that means business. All Haswell-E processors are built on a 22nm process, but the eight-core of course tops the transistor count charts, with 2.6 billion – 800 million more than the Core i7-4960X. As expected, these transistors are built using Intel’s much-touted Tri-Gate 3D design.
Like Sandy Bridge-E and Ivy Bridge-E, Haswell-E has 2011 contact points on its belly, but its processors will only support the latest socket revision, called LGA2011-3. At the moment, those are only going to be found on motherboards featuring Intel’s X99 chipset.
As for the functional layout, nothing has really changed between Ivy Bridge-E and Haswell-E, and that’s because the design isn’t broken. All of the cores still sandwich the cache, and other key functions are found at the top and bottom.
X99 is the first desktop chipset from Intel (or AMD, for that matter) to support DDR4 memory, and somewhat humorously, it officially supports a very modest 2133MHz speed. Chances are good that if you buy into this platform, you’ll wind up with memory faster – or perhaps much faster – than that.
One important thing to note about this block diagram is the mention of “up to” 40 PCIe lanes. Last-gen, that didn’t matter too much, but this gen, it does. Intel’s decided to separate its smallest LGA2011-3 chip, the i7-5820K, further from the others by limiting its PCIe lanes to 28. Both the i7-5960X and i7-5930K have the expected 40, so for those who are building the highest-end gaming PCs possible, the i7-5820K may not be an ideal choice.
Intel’s sticking to the three-model scheme it started with Sandy Bridge-E for this launch, with the bottom CPU set to sell for $389 in quantities of 1,000. That should result in e-tail prices of about $420 or so. The middle model bumps the price to $583, which nets you slightly faster stock speeds, as well as the full 40 PCIe lanes. The big gun is the i7-5960X, Intel’s debut eight-core. As is tradition for Intel’s top-end parts, this chip is priced at $999.
|All 4th-gen Core processors are built on a 22nm process, utilizing 3D tri-gate transistors.|
Intel’s Core i7-5820K, despite its fewer PCIe lanes, is quite an attractive chip compared to the i7-4790K. It has a much slower clock, but it has 50% more cores, and far more L3 cache. For the most part, it’s similar to last-gen’s i7-4930K. That chip was clocked 100MHz higher at stock and 200MHz higher at Turbo, but had 3MB less L3 cache and cost $200 more.
As for the other two models, there’s quite a bit to talk about. For starters, the i7-5960X has a glaring flaw: Its stock speed is 3.0GHz. I don’t recall the last time an Intel enthusiast chip dipped that low, and it’s hard to see that spec next to a $999 price tag. But given that the chip has eight cores – something none of the other chips do, and something no AMD chip can hold a candle to – that drop in clock can be forgiven. Still, it doesn’t look right.
I haven’t put even two seconds into overclocking the i7-5960X as of the time of writing, but after talking to a couple of different vendors, I’ve come to the conclusion that a 4.5GHz overclock is going to be somewhat of a pipe dream. 4.0GHz, however, should be no problem at all to hit, especially if you don’t mind giving the chip a bit of extra voltage.
Even boosting the chip to a static 3.7GHz or 3.8GHz across all of the cores would dramatically enhance its appeal, and negate the fact that the i7-5930K is clocked a bit higher at stock.
Above is a shot of our i7-5960X sample cuddling with ASUS’ X99-DELUXE motherboard. This board is the first X99 offering I’ve touched, and so far, I’m beyond impressed with it. As you can probably imagine, this board will be the focus of my first X99 review, so stay tuned for that – there’s a lot to talk about.
Before moving on, there’s something interesting in that shot above: If you look closely at the CPU, you’ll see it say “USA”. I couldn’t help but ask Intel about this, since all of the CPUs I’ve received from the company have largely come from Costa Rica or Malaysia, and I was told that the USA label will not be carrying over to the retail products. So now that I’ve wasted your time with this information, let’s talk about DDR4.
Corsair was kind enough to send along a kit of its Vengeance LPX for our testing. This 16GB kit is clocked at DDR4-2800 speeds, although as I quickly found out, that speed is a little hard to hit at this point in time, and even after talking to ASUS and Corsair, I’m not entirely sure why. At 2800MHz speeds, my bandwidth score in Sandra was actually less than it was with the sticks clocked at 2400 or 2666. It could be that I have a bunk memory controller in my chip, or ASUS’ EFI needs some further tweaking. What I do know for sure is that there are some definite launch quirks here.
Desktop DDR4 at this point is bleeding-edge, and its pricing reflects it. I tackled this a couple of times recently in our news section, so I’m going to borrow a table from a post I wrote earlier this week to help illustrate things here.
|Kit||DDR3 Price||DDR4 Price||Premium|
|2133MHz 2x8GB||$163 (G.SKILL, 1.5V, CL 11)||$220 (Crucial, 1.2V, CL 15)||+35%|
|2133MHz 4x8GB||$315 (G.SKILL, 1.65V, CL 10)||$440 (Crucial, 1.2V, CL 15)||+40%|
|2400MHz 2x8GB||$153 (ADATA, 1.65V, CL 11)||$230 (Crucial, 1.2V, CL 16)||+50%|
|2400MHz 4x8GB||$320 (Mushkin, 1.65V, CL 11)||$460 (Crucial, 1.2V, CL 15)||+44%|
|CL = CAS Latency. All DDR3 kits were the least-expensive non-sale options on Newegg as of the time of this post.|
If you’ll be building an X99-based PC, you’re quite obviously going to be shelling out a fair bit of money to get up and running. I think I’m safe to assume that 16GB is going to be the amount of memory most people will go for, and at the moment, that will set you back just over $200 for a modest kit. Couple that with a $300 motherboard and at least the $400 i7-5820K, and that becomes a $900 base to your new PC.
Hey, Intel! I think I see an AMD engineer snooping around outside!
Alright, with Intel not looking, I have this to say: You probably don’t need an eight-core Intel processor. It could be that such a thing doesn’t even need to be said, but I just feel better by saying it anyway.
The fact of the matter is, even six cores is overkill for most people. It takes specific scenarios to take proper advantage of the resources that such a chip provides, and at the moment, that doesn’t include gaming. Intel does mention that 3DMark’s Physics test benefits from the extra cores, but that’s not exactly representative of actual gameplay.
Those who’d benefit most from such a big CPU are people who encode lots of video, render a lot of 3D projects, compile large applications, run multiple virtual machines, and in general, do some pretty hardcore stuff with their PC.
Because the i7-5960X is clocked at a mere 3.0GHz, the i7-5930K stands out. It might lose two cores, but it’s at least 500MHz faster, and of course, $400 less than the eight-core.
That all being said, let’s tackle a couple of quick numbers. Intel claims that its eight-core Haswell-E will prove up to 32% faster in 3D rendering (based on Cinebench scores), and up to 20% faster in 4K video editing (based on Adobe Premiere Pro CC). Further, in its test of converting a 4K source video to 1080p using HandBrake, the i7-5960X proved 69% faster than the quad-core i7-4790K, and up to 34% faster than the i7-4960X.
With that all taken care of, we can now move on to our own performance tests, to see what the i7-5960X is truly made of. For the sake of this review, I’m going to be comparing the i7-5960X directly to the i7-4960X. I would have loved to include i7-4770K results as well, but due to time constraints, I was unable to re-test that chip using our updated test suite. Fortunately, the real comparison involves the chips that are included, so let’s get a move on and get to comparing – well, after a quick look at our testing system and methodologies.
At Techgage, we strive to make sure our results are as accurate as possible. Our testing is rigorous and time-consuming, but we feel the effort is worth it. In an attempt to leave no question unanswered, this page contains not only our testbed specifications, but also a detailed look at how we conduct our testing.
If there is a bit of information that we’ve omitted, or you wish to offer thoughts or suggest changes, please feel free to shoot us an e-mail or post in our forums.
The tables below list all of the hardware we use in our current CPU-testing machines.
|Intel X99 Test Machine|
|Processor||Intel Core i7-5960X (Eight-core, 3.0GHz, 3.5GHz Turbo)|
|Motherboard||ASUS X99-DELUXE|
|Memory||Corsair Vengeance LPX (4x4GB) – DDR4-2666 16-16-16|
|Graphics||GIGABYTE GeForce GTX 650 Ti 1GB|
|Storage||Kingston HyperX 240GB SSD|
|Power Supply||Corsair HX850|
|Chassis||Corsair Obsidian 700D Full-Tower|
|Cooling||Noctua NH-U14S Air Cooler|
|Et cetera||Windows 7 Ultimate 64-bit|
|Intel X79 Test Machine|
|Processor||Intel Core i7-4960X (Six-core, 3.6GHz, 4.0GHz Turbo)|
|Motherboard||ASUS P9X79-E WS|
|Memory||Kingston HyperX Beast (4x8GB) – DDR3-2133 11-12-11|
|Graphics||GIGABYTE GeForce GTX 650 Ti 1GB|
|Storage||Kingston HyperX 240GB SSD|
|Power Supply||Cooler Master Silent Pro Hybrid 1300W|
|Chassis||Cooler Master Storm Trooper Full-Tower|
|Cooling||Thermaltake WATER3.0 Extreme Liquid Cooler|
|Et cetera||Windows 7 Ultimate 64-bit|
When preparing our testbeds for any type of performance testing, we follow these guidelines:
To aid in the goal of achieving accurate and repeatable results, we stop certain services in Windows 7 from starting up at boot. These services have a tendency to start up in the background without notice, potentially skewing test results. For example, disabling “Windows Search” turns off the OS’ indexing, which can utilize the hard drive and memory at random times.
The most important services we disable are:
To ease the tedium of setting up an OS for a round of benchmarking, we rely on Acronis True Image to restore an install that we previously set up. These images include most of our benchmarks, a minimal number of drivers (LAN, graphics), an up-to-date OS, and all of our above-mentioned tweaks.
To help us deliver a well-rounded set of test results for each processor we evaluate, we use a variety of real-world applications and synthetic benchmarks.
Wallpaper Credit: Mohsen Kamalzadeh
Our current test suite consists of:
Most tests are run twice over with the results averaged. If there is an unnatural variance between the first two runs, then we continue to run the test until we receive a result we believe to be accurate.
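In code terms, that retry logic amounts to something like the sketch below. The 3% spread threshold and five-run cap are illustrative assumptions of mine, not Techgage’s actual parameters:

```python
def measure(run_benchmark, tolerance=0.03, max_runs=5):
    """Run a benchmark at least twice; keep re-running while the spread
    between the fastest and slowest result exceeds the tolerance."""
    results = [run_benchmark(), run_benchmark()]
    while (max(results) / min(results) - 1) > tolerance and len(results) < max_runs:
        results.append(run_benchmark())
    return sum(results) / len(results)  # report the average
```

For a perfectly repeatable benchmark, e.g. `measure(lambda: 120.0)`, this returns the value straight away after the two baseline runs.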
If there’s design work that needs to be done, then Autodesk is sure to have the right tool. From 3D modeling to architectural design, Autodesk’s selection of highly-regarded tools is almost mind-numbing, and because both its 3ds Max and Maya applications have long been considered to be some of the best in their respective class, we opt to use them for our benchmarking here.
For the sake of all-around testing, we perform most of our benchmarking on this page with the help of SPEC’s SPECapc 3ds Max 2015 and SPECapc Maya 2012, although we also render an in-depth model/scene in the former. We’ll explain each benchmark as we go along.
We kick off our testing with one of the most comprehensive benchmarks in our test suite: SPECapc 3ds Max 2015. The overarching goal of those responsible for producing SPEC’s benchmarks is to deliver as well-rounded a test suite as possible for a respective field, such as 3D rendering and modeling, to produce accurate results that those responsible for purchasing hardware can take advantage of.
Designed to utilize both the CPU and GPU, SPECapc 3ds Max 2015 comes in both free and professional flavors, with the latter being the version we use. It’s comprised of 48 individual tests and takes a couple of hours to complete on a high-end machine.
Considering that the i7-5960X has a 33% core-count advantage, we’re likely to see similar gains in most benchmarks that can take proper advantage of all of its cores – tempered, of course, by its lower 3.0GHz clock speed. With a 22% boost in this particular benchmark, we can see some proof of that theory.
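As a rough sanity check, here’s the back-of-envelope math. This deliberately ignores Turbo behavior, cache sizes, and per-clock (IPC) improvements, so treat it as a lower bound rather than a prediction:

```python
# Naive peak-throughput comparison: cores x base clock (GHz).
# Ignores Turbo bins, cache, and IPC changes -- illustration only.
i7_5960x = 8 * 3.0   # 24.0 "core-GHz"
i7_4960x = 6 * 3.6   # 21.6 "core-GHz"
advantage = i7_5960x / i7_4960x - 1
print(f"{advantage:.1%}")  # ~11.1% on base clocks alone
```

The fact that the measured gain lands well above this naive ~11% figure hints that Haswell-E’s per-clock improvements are doing some of the lifting too.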
For our second 3ds Max 2015 test, we render a scene commissioned from Bulgarian artist Nikola Bechev, entitled “Naomi: The Black Pearl”. The woman is built from over 7,000 polys, with the entire scene totaling just over 106,000 vertices. Three light sources are used, with the entire scene being enhanced with HDR and ray tracing techniques, and subsurface scattering applied to certain objects. The scene is rendered at 1800×3600 as a production release, with HQ detail levels being used all-around.
The i7-5960X impresses a bit more in our real-world test than it did in SPEC’s apc test, rendering Naomi 28% faster. Had the eight-core been clocked like the six-core, we would have undoubtedly hit that 33% mark.
Like its 3ds Max 2015 variant, SPECapc Maya 2012 is designed to stress various aspects of the tool, such as rendering with standard and HQ methods, working in wireframe mode and so forth across numerous models and one overarching scene titled “Toy Store”.
Interestingly, it was the GFX score that saw the greatest boost here, despite the fact that the same graphics card was used. On the CPU side, the gain was about 6%, which might say more about this being an outdated benchmark than the i7-5960X failing to offer Maya users a proper boost.
Like Autodesk’s 3ds Max and Maya 3D tools, Maxon’s Cinema 4D is a popular cross-platform 3D design tool that’s used by new users and experts alike. Maxon is well-aware that its users are in need of some rather beefy PC hardware to help speed up rendering times, which is one of the reasons the company itself releases its own benchmark, Cinebench.
There are a couple of reasons we like to use Cinebench in our testing. For one, it’s freely available for anyone to download, unlike our Autodesk-based tests. Second, it has the capability to scale up to 64 threads, which means we’ll easily be able to rely on it for quite some time. As a faster CPU can also help improve the GPU computational pipeline, we also like that it includes an OpenGL benchmark as well. The fact that the benchmark completes in a minute or so is another perk.
Here’s that 33% gain we were looking for. And yes – our test on the i7-5960X did actually score 1337.
The “Persistence of Vision Ray Tracer” is a multi-platform ray tracing tool that allows you to take your previously-created environments and models and apply a ray tracing algorithm based on a script you either created yourself or borrowed from others. The tool is free and has become a standard in the ray tracing community; some of its ‘Hall of Fame’ results can be found here.
For our testing, we run the built-in benchmark in both single-threaded and multi-threaded mode. The results are presented in “pixels-per-second” – a simple metric, but one that’s easy to understand.
It’s with this benchmark that I found myself a little dumbfounded. The i7-4960X has a Turbo speed of 4GHz, while the i7-5960X peaks at 3.5GHz. Despite that 500MHz deficit, Intel’s eight-core managed to perform better in the single-thread test. Also interesting is that it managed to surpass a 33% gain in the multi-thread test, hitting 35%.
At this point, I can’t establish the reason for this gain, although I think it’s safe to say that clock-for-clock, Haswell-E is going to be quicker at synthetic tests anyway. Still, this is a notable result given the reduced clock speed of the eight-core chip.
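There’s a quick way to bound that per-clock improvement: for the 3.5GHz (Turbo) i7-5960X to merely match the 4.0GHz i7-4960X in a single-threaded test, its per-clock throughput has to be at least 4.0/3.5 higher. A hedged sketch, assuming both chips actually sit at their peak Turbo clocks under single-threaded load:

```python
# Minimum per-clock (IPC) uplift implied by the single-thread result.
# Assumes both chips run pinned at peak Turbo -- a simplification,
# since real Turbo behavior varies with load and temperature.
turbo_4960x = 4.0  # GHz
turbo_5960x = 3.5  # GHz
min_ipc_uplift = turbo_4960x / turbo_5960x - 1
print(f"{min_ipc_uplift:.1%}")  # at least ~14.3% faster per clock
```

Since the eight-core didn’t just match the i7-4960X but beat it, the real per-clock gain in this test is higher still.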
With our 3D modeling and rendering tests out of the way, let’s dive right into another popular use for high-end machines: video editing and encoding. Scenarios here could include encoding a large movie into a mobile format, ripping a Blu-ray to your PC and encoding it for HTPC use, or encoding a family video you painstakingly edited.
Adobe’s Premiere Pro likely needs no introduction. It’s a tool used by the amateur and professional video content creator alike due to the extreme control it provides along with all of the important codecs, presets, filters and tweaking options. Premiere Pro can be used for any sort of video, be it real-life, animated, 3D or even game footage.
For our benchmarking, we encode a project that consists of 35GB worth of game footage from Payday: The Heist, which we encode to MPEG2 Blu-ray 1080p/30. The resulting video can be seen here.
Intel told us that we should expect up to a 20% gain in performance in Premiere Pro, although 4K video was specifically mentioned. It seems our test isn’t quite hardcore enough, as the i7-5960X’s decrease in clock speed reflected itself here. Due to time constraints, I was unable to test using the “Maximum Quality Render” setting, but I’m betting some gains would be seen there. However, that’s not a setting that most people will use.
Premiere Pro is meant to be used as a professional tool for editing and encoding, while HandBrake acts strictly as an encoder, able to take one video format and encode it to another according to your specifications. While there are many presets available from the get-go, you’re able to customize whatever’s available, or create your own. It’s a simple tool with complex capabilities.
Here, we have a project that makes use of a Blu-ray rip of Pixies: Live at the Paradise in Boston. With it, we encode the first 15 minutes of the concert to an archival-quality 1080p MKV. The archival-quality encode is time-consuming, but it can take full advantage of a high-end processor. For those interested, our H.264 options are:
On the last page, we saw Intel’s eight-core i7-5960X perform 33% better in Cinebench, and 35% better in POV-Ray. Being that those are synthetic benchmarks, it’s not that hard to believe such gains. Here, we see performance very close to that hit: The Core i7-5960X encoded our video 25% faster than the i7-4960X. Just imagine if the i7-5960X was clocked the same as the i7-4960X! Intel claimed to us that HandBrake should see about a 33% gain, but as the Premiere Pro test above proved, the company’s tests differ a bit from ours.
Photo manipulation benchmarks are more relevant than ever given the proliferation of high-end digital photography hardware. For this benchmark, we test the system’s handling of RAW photo data using Adobe Lightroom 5.6, an excellent RAW photo editor and organizer that’s easy to use and looks fantastic. You can check out our full review of the tool here.
For our testing, we take a total of 500 RAW files spread across 250 .NEFs captured with a Nikon D80 and 250 .CR2 captured across a Canon 40D and 5D Mark II. We export all of these files to a matte-sharpened quality 90 JPEG resized to a resolution of 1000×660 – similar to a lot of photos we use here on the website. The test is timed indirectly using a stopwatch as the program doesn’t record the duration itself.
Here’s good proof that more cores don’t necessarily mean better performance – or at least not much better. Despite Lightroom being a high-end editing tool, the i7-5960X exported our photos in 94% of the time it took the i7-4960X. That’s not exactly what I’d call much of a gain. It’s my hope that future versions of Lightroom will be designed to take far better advantage of multi-threading – there’s no reason the results here couldn’t be as impressive as what we saw with the HandBrake test on the previous page.
You own hundreds, thousands, or even tens of thousands of songs, all encoded to a pristine lossless format such as FLAC. Your mobile device on the other hand, supports either MP3 or AAC. What’s the solution? There are several, but the one I’ve relied on for almost ten years has been dBpoweramp. It’s both flexible and powerful, which happen to be two important factors for those who take their music seriously.
Recent versions of dBpoweramp have opened up the ability to encode more than one track at once, up to a limit of one per thread. With 12- and 16-thread CPUs on the market, that ability can greatly improve overall times. For our testing here, we take 500 unique FLAC files that average about 30MB apiece and encode them using the “high-quality” setting to 320Kbit/s MP3.
Unlike Lightroom, which doesn’t do much to take advantage of hugely threaded processors, dBpoweramp does. But even in this case, with 16 threads being utilized, the gains are still only about 15%. Part of the reason for this could be due to I/O, despite using a fast SSD. It’d be fun to run this same test on a RAMDisk. That’s a test for a future time.
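Amdahl’s law offers one hedged explanation for why 16 threads over 12 buys so little: if a chunk of each job (disk I/O, tag handling) stays serial, thread scaling flattens out quickly. The 90% parallel fraction below is an assumed figure for illustration, not something we measured:

```python
def amdahl_speedup(threads, parallel_fraction):
    """Classic Amdahl's law: speedup over a single thread when only
    parallel_fraction of the work can be spread across threads."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / threads)

# If ~90% of the encode pipeline parallelizes (assumed, not measured),
# moving from 12 threads to 16 buys only about 12%:
gain = amdahl_speedup(16, 0.9) / amdahl_speedup(12, 0.9) - 1
print(f"{gain:.0%}")  # ~12%, in the same ballpark as our ~15% result
```

That’s consistent with the I/O hypothesis: the serial slice, not the cores, becomes the ceiling.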
SiSoftware’s Sandra is a piece of software that needs no introduction. It’s been around as long as the Internet, and has long provided both diagnostic and benchmarking features to its users. The folks who develop Sandra take things very seriously, and are often the first ones to add support to the program long before consumers can even get their hands on the product.
As a synthetic tool, Sandra can give us the best possible look at the top-end performance from the hardware it can benchmark, which is the reason we use it to test much of our PC’s hardware. The fact that a free version exists so that you can also benchmark against our results is something we greatly appreciate.
The more threads a CPU has – coupled with its frequency and architectural refinements – the faster it should be able to calculate complex math. We’re not talking about simple math that can be done on a calculator, but rather the advanced calculation that often happens behind the scenes. Sandra’s Arithmetic test stresses the popular Dhrystone integer and Whetstone floating-point algorithms, which have acted as a base for countless benchmarks dating as far back as the 1970s.
As I mentioned earlier, if a test is able to take proper advantage of a full eight-core processor, then chances are good that we’ll see up to a 33% speed advantage on the i7-5960X. While gains so far do scale like that, in this first Sandra test we’ll have to settle for a 29.5% performance gain rather than 33%, due to the lower clock speed. Even so, performance gains close to 30% are impressive – remember our match-up between the i7-3960X and i7-4960X? At least here, we’re seeing gains worth getting excited about.
One of the best reasons for upgrading or building a new PC is to increase the performance for multi-media work, whether it be editing or encoding. As we saw earlier in our results, faster CPUs can save minutes or even hours of time. To test such capabilities here, Sandra renders the famous Mandelbrot set in a total of 255 iterations and in 32 colors.
This is a test that’s been around for close to forever, but it still scales extremely well with thread counts and can benefit from new media-centric instruction sets, including AVX.
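The kernel at the heart of a test like this is tiny: each pixel just iterates z = z² + c until it escapes or hits the iteration cap (255 here, matching Sandra’s setup). A minimal sketch of that per-pixel computation:

```python
def mandelbrot_iters(c, max_iter=255):
    """Return how many iterations the point c takes to escape the
    Mandelbrot set (|z| > 2), up to max_iter."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i
    return max_iter  # never escaped: the point is (likely) in the set

print(mandelbrot_iters(0j))      # 0 is in the set -> 255
print(mandelbrot_iters(2 + 0j))  # escapes almost immediately -> 1
```

Every pixel is independent of every other, which is exactly why this workload splits so cleanly across all 16 threads.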
We see proof of the extra cores in the Float test, with a gain in performance of about 26%, but in the Integer test, we see proof of substantial design improvements, with a staggering gain of 56%.
You might not be aware of it, but cryptography plays a major role in computing. With some algorithms proving more complex than others, having a faster processor can dramatically improve performance – especially important on the server front. In Sandra’s benchmark, the mega-popular AES and SHA algorithms are computed, both in 256-bit flavors (a 256-bit key for AES, a 256-bit digest for SHA).
Here’s a test that makes me regret not being able to retest the i7-4770K for the sake of this review. With Haswell, Intel’s introduction of AVX2 instructions made a substantial difference in SHA hashing performance – about +50% in SHA-256. Well, I think it’s safe to say that with Haswell-E’s brawn behind this benchmark, the results are simply incredible when compared to the i7-4960X – we’re talking a performance gain of 2.35x. I suppose the 10% gain seen with AES-256 is also impressive, but anyone who’s built an Intel rig over the past four years has already been enjoying great AES performance.
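The SHA-256 half of that workload is easy to picture: hash a stream of data as fast as the cores allow. Python’s hashlib shows the operation itself (though obviously not Sandra’s hand-tuned throughput), and the one-shot hash below uses the standard FIPS 180 test vector for “abc”:

```python
import hashlib

# FIPS 180 test vector: SHA-256 of the ASCII string "abc".
digest = hashlib.sha256(b"abc").hexdigest()
print(digest)
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad

# A throughput-style loop hashes large buffers incrementally instead:
h = hashlib.sha256()
for _ in range(4):
    h.update(b"\x00" * 1024 * 1024)  # 4MB total, fed 1MB at a time
```

A benchmark like Sandra’s runs that update loop over gigabytes of data per second, which is where AVX2’s wider integer operations pay off.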
There’s little that can stress a CPU’s worth quite as much as number-crunching, and for that reason, we take full advantage of both Sandra’s financial and scientific analysis benchmarks.
We’re seeing at least 20% gains here, with 25% being more typical. In the GEMM test of the scientific analysis benchmark, we see an enormous gain in performance of 50%. I admit, I find gains like that to be exciting, especially since Intel wasn’t going to let additional cores be the sole reason for gains.
When hard drive densities were measured in megabytes or single-digit gigabytes, data compression was something that even the layman computer user took advantage of. In fact, entire hard drives could be used in compressed mode to help increase overall storage. Today, such methods aren’t required thanks to hard drives ranging in the thousands of gigabytes, but compression is still used on a regular basis by many people, whether for storing a folder for backup, encoding music, converting a photo, et cetera. On servers, compression is often used to shrink mega-large log files.
For our compression testing, we enlist the help of 7-zip 9.30. We take a 772MB folder that consists of 39,236 highly-compressible files and archive it using an ‘Ultra’ level of compression using the LZMA2 algorithm. This results in an archive weighing in at about 137MB.
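7-zip’s LZMA2 is essentially a multi-threaded container around LZMA, and Python’s stdlib lzma module exposes the same underlying algorithm (single-threaded). This toy example shows why highly-compressible input like our test folder shrinks so dramatically – the repetitive data here is my own stand-in, not the actual test files:

```python
import lzma

# Highly repetitive data, like much of our test folder, compresses hard.
data = b"All work and no play makes Jack a dull boy.\n" * 10_000
packed = lzma.compress(data, preset=9)  # preset 9 ~ 7-zip's 'Ultra'

ratio = len(packed) / len(data)
print(f"{len(data)} -> {len(packed)} bytes ({ratio:.1%})")
```

LZMA’s match-finding stage is the CPU-hungry part, which is what lets an archiver scale across cores when the workload cooperates.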
It seems our project isn’t demanding enough to show off the benefits of the eight-core processor; I am certain that other archival projects could see real gains, but this test proves those would be far more specific projects than what most people would use an archiver for.
In terms of complexity, Euler3D is one of our most advanced benchmarks, and also one of the quickest to run. It calculates the fluid dynamics properties of the AGARD 445.6 aeroelastic test wing as it was tested in-house at NASA’s Langley Research Center. It’s calculated using Euler equations, with results printed out as Hz and time-to-complete (seconds). A benchmark such as this is useful to those who design products where physics has to be considered, whether it be a wing, a car, a ship and so on.
Yet another example of where the i7-5960X can exceed the 33% performance gain. We’re seeing a 35% gain here despite the decreased clock speed – imagine the result if this sucker were overclocked!
Futuremark’s no stranger to most enthusiasts, as its benchmarking software has been considered a de facto standard for about as long as it’s been fun to benchmark. While its 3DMark software is undoubtedly the company’s most popular offering, PCMark is a great tool for summing up the performance of a PC with gaming being a minor focus rather than a major one.
Futuremark’s latest PCMark, 8, consists of five main test suites: Home, Creative, Work, Storage, and Applications. The goal of each is to show how a system will perform in a given scenario, and their titles sum up each respective goal nicely. The Applications suite consists of two sub-suites: one for Adobe’s Creative Suite (or Creative Cloud), and the other for Microsoft Office. We run all of these suites except Storage, as it’s not that relevant here.
For fun, we also include the overall test results with PCMark 7 (just can’t bear to let it go!).
Note: Our PCMark 8 installation for some reason couldn’t detect that After Effects was installed, so I didn’t run the application benchmarks in time for publishing.
I admit that I expected stronger gains than this. It seems PCMark is far more frequency-dependent than core-dependent – at least when we’re dealing with processors with 6 or more cores.
The faster the processor, the better its memory bandwidth and latencies tend to be. Where memory is concerned, however, there are many more factors at play. While frequency plays a major role in overall memory performance, the memory controller can make an even greater difference, based on its implementation and its capabilities.
With Intel’s X79 and X99 platforms, we’re given a quad-channel controller, while Intel’s (and AMD’s) other platforms stick to a dual-channel design. A quad-channel controller could in theory provide twice as much bandwidth as a dual-channel one. How the controller is integrated into its chip along with the memory’s frequency determines the latency.
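The theoretical peak behind that claim is simple arithmetic: transfer rate × 8 bytes per channel × channel count. For the DDR4-2133 speed X99 officially supports, it works out like this (decimal GB, and a ceiling that real systems never actually reach):

```python
def peak_bandwidth_gbs(mt_per_s, channels, bus_bytes=8):
    """Theoretical peak memory bandwidth in GB/s (decimal).
    mt_per_s: megatransfers per second, e.g. 2133 for DDR4-2133."""
    return mt_per_s * bus_bytes * channels / 1000

print(peak_bandwidth_gbs(2133, channels=4))  # X99 quad-channel: ~68.3 GB/s
print(peak_bandwidth_gbs(2133, channels=2))  # dual-channel:     ~34.1 GB/s
```

Measured results land well below these ceilings, but the 2x relationship between quad- and dual-channel holds on paper.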
While faster memory bandwidth and lower latencies can improve overall computer performance, how quickly the cores can communicate with one another, along with how much bandwidth the caches can handle, rounds out the most important factors. The results for all of these are tackled on this page.
At this point, DDR4 isn’t blowing our socks off. In fact, given the premium that must be paid on DDR4 modules, these gains are downright underwhelming. Of course, I should note that 2666MHz is not the “max” speed that DDR4 can manage. As it happens, the modules I’m using support up to 3000MHz (despite being advertised for 2800MHz). But that said, some memory frequencies require the CPU’s base clock to be overclocked; 2666MHz is one of those that doesn’t.
I mentioned on the first page of this article that scaling above 2666MHz on Haswell-E is a little sketchy at the moment, and it’s not clear to me why. When I benchmarked our memory kit at 2800MHz, Sandra spit out a bandwidth result of 49.5GB/s – roughly 3GB/s less than when the kit was clocked at 2666MHz. That’s pretty nonsensical.
Here’s the reality: People who buy DDR4 for their Haswell-E system are going to be paying for roughly the same performance they could get from DDR3, or perhaps even worse. Buying a very fast kit of DDR4 and seeing lower performance than 2133MHz DDR3 sticks is bizarre, so hopefully it won’t take long for board vendors – or whoever needs to fix things – to fix them. Once this launch settles, I plan to take a deeper look at this subject. For now, let’s see how other memory-related performance fares.
Interestingly, L3 speed decreased slightly, while L1 speed increased dramatically – it’s more than 2x!
Latencies on the X99 platform fared worse than those on the X79 one. On the DDR4 side, that was to be expected, given the looser timings (a side-effect of the lower voltages). Bandwidth-wise, the eight-core chip reigns supreme, but even in that test, the inter-core latency is higher as well.
Game benchmarks stand to see the least amount of gain in comparison to our other tests, but they’re necessary for the sake of completeness. Also, while we benchmark hands-on for our graphics card content, we opt for synthetic testing here, as we’re utilizing the same GPU across each setup.
First up is the ever-popular 3DMark benchmark; for the sake of completeness, we run two of its four tests (Sky Diver and Fire Strike).
I admit I’m a little surprised to see such gains in the overall test scores, but if we have one thing to thank, it might be the higher physics scores. Intel told us that we should expect big gains there, and lo and behold, we got them. Still, I have to stress that 3DMark isn’t a game, so seeing this kind of gain in the real world is going to be tough.
When I took a look at Intel’s Core i7-4960X last fall, I wasn’t left impressed in the end. The fact of the matter was, we were given a new enthusiast chip two years after the i7-3960X that offered a negligible gain in performance. It almost seemed like a bit of a joke, because while any gains are appreciated, everyone expected a lot more given the amount of time that passed between each launch. In fact, the Core i7-4960X should have been an eight-core in my mind – that would have made all the difference.
Ignoring all that, though, we have an eight-core now, and ultimately, I’m left very impressed with it. I’m not that keen on its 3GHz clock, but as many of our tests show, the extra cores sometimes compensate for that easily. Even at 3GHz, for example, our HandBrake encode proved 25% quicker on the eight-core chip. It’s gains like that that make me want to get right into overclocking – at least to better match it up against the i7-4960X.
One thing that came as no surprise was the reality that in many scenarios, a bump from six to eight cores will make no real difference. In particular, our Premiere Pro encode proved that in some cases, the higher frequency matters more than those extra cores. In order to see gains, a scenario needs to take as full advantage of the extra cores as possible – but that’s a no-brainer. As PCMark highlighted, overall usage will remain the same between a six-core and eight-core for most people.
However, there’s a reason why PCMark scores shouldn’t be treated as gospel. Each of the three suites I ran showed modest improvement – if you could even call it that – but some of our real-world tests showed significant gain. I already mentioned HandBrake, but other tests benefited almost as much. Our 3ds Max render, for example, completed in 70% of the time on the i7-5960X that it did on the i7-4960X. The Cinebench and POV-Ray scores back that kind of gain up.
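“Completed in 70% of the time” translates to a bigger speedup than it might sound like. A quick sketch of the conversion (the 100-second baseline here is purely hypothetical, for illustration):

```python
def speedup(old_time, new_time):
    """How many times faster the new result is than the old one."""
    return old_time / new_time

def percent_faster(old_time, new_time):
    """Speedup expressed as a 'percent faster' figure."""
    return (speedup(old_time, new_time) - 1) * 100

# A render finishing in 70% of a hypothetical 100-second baseline:
print(round(speedup(100, 70), 2))         # → 1.43
print(round(percent_faster(100, 70), 1))  # → 42.9
```

In other words, shaving a render down to 70% of its old time is roughly a 1.43x speedup, or about 43% faster – a substantial win for a chip clocked lower than its predecessor.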
But then we had some tests like 7-zip and Lightroom, which showed a gain, but one so small, it just doesn’t matter. In the intro, I mentioned that an eight-core chip is not for everyone, and this is why. To take full advantage of the CPU, you need workloads that can use a large number of threads. 3D rendering is definitely one, as is some video encoding. And of course, those who often run virtual machines will also stand to gain here.
As impressed as I am with the i7-5960X, I can’t help but wish Intel had clocked it at 3.5GHz stock. That would have negated the decrease in performance that we saw in Premiere Pro, and would have delivered even more impressive results across the board. Fortunately, overclocking the CPU to 3.5GHz isn’t difficult. Even 4GHz should be attainable on most chips, although for that, I’d recommend water cooling.
While this article focused on the i7-5960X, I’d be remiss to not talk about the other two available models. For those not planning to run exotic GPU configurations, the i7-5820K is an attractive chip. At $389, it’s $200 cheaper than last-gen’s least-expensive Intel six-core, plus it has a bit more L3 cache. Unfortunately, those who do want exotic GPU configurations will have to shell out another $200 for the i7-5930K, as it provides the full 40 PCIe lanes. Fortunately, that chip also includes a clock boost, although that will really only matter to those who don’t plan to do any overclocking.
The biggest downside with this platform is the same downside that came with X79, and especially with launch DDR3 kits: Pricing. CPUs aside, you can expect a good X99 motherboard to cost at least $300, with models like the ASUS X99-DELUXE I tested with bumping that to $400 (the board is packed, however). Of course, the biggest drawback is the pricing of DDR4 memory. You’ll be paying at least 40% more than you would for a comparable DDR3 kit, and for almost no gain in performance.
That performance problem can be fixed with faster kits, but those are going to be quite expensive, and as mentioned on the Sandra Memory page, there are quirks that might make speeds of 2800MHz or higher perform worse than lower frequencies. Again, this is a situation I’m investigating. After talking to a couple of vendors about it, I know that I’m not alone – even those in the labs at these companies are trying to get to the bottom of it.
If you’re planning to build an X99 PC but are not going with the top-end chip, I might recommend holding off for as long as possible for DDR4 prices to go down. While you’re going to be getting more CPU this year than you did last, most of the money saved is going to go right into DDR4. Of course, if you do need to build now, you’re going to be far from unhappy with your purchase.
Pricing aside, X99 is a great platform, and a proper successor to X79. While I didn’t test I/O (given that this is a CPU review), the X99 chipset finally gives enthusiasts a proper Intel SATA 6Gbps controller, which is a far better solution than the third-party chipsets vendors have had to implement.
In the end, it’s been a while since we were given an Intel enthusiast platform that didn’t bring with it a handful of caveats, so that in itself makes X99 excellent. As for the top-end CPU, if you’ve been eagerly awaiting an Intel eight-core chip, it’s finally here, and it brings with it some seriously impressive performance.
It’s about time, Intel.
Intel Core i7-5960X Processor