by Rob Williams on February 19, 2007 in Intel Processors
What do you get when you take one of the best performing CPUs from last year and double it? Twice as much to love, of course. We are taking a look at Intel's latest quad-core processor, the Q6600. Let's see how it performs when compared to its siblings.
Before we jump further into the review, I think we need to look at why such a chip exists today. When I first learned of Kentsfield, like so many people, I thought, "Who needs four cores? Most people are still looking for a use for two." If you have to ask why you need four cores, then you probably don't need them. However, workstations, graphic and video designers, and even computers in the research field could well benefit from a single CPU with four cores under the hood.
That's not to say normal users couldn't take advantage of four cores, though, because there is a slew of applications out there today that *will* take advantage of every core available to them, and I am not just talking about applications such as 3D Studio Max or AutoCAD. When dual-core CPUs first launched less than two years ago, many people didn't understand why you would need more than one core, or how both could be utilized in a realistic scenario.
I think it's fair to say that today, most people who use dual-core CPUs love them. Just last week, I was fixing a friend's computer that is still stuck with a single-core processor, and I was thankful that I could go back to my dual-core later that evening. Spreading heavy workloads over multiple cores just makes sense, and it leads to a smoother, more responsive computing experience.
So two we understand, but what about four? As we will find out from a few specific tests, some everyday applications we already use are designed to utilize more than one core. Nero Recode, for example, is one of them. At its peak, it will push all available cores to around 90% load, effectively cutting the time of an encoding project in half compared to a dual-core processor at the same clock speed. This is just one of many examples of how even the average user can benefit from quad-core.
But as it stands today, the users who will really enjoy the benefits of a quad-core chip are those who are into heavy-duty multimedia work, whether it be rendering high-resolution models or encoding a home video. I've already mentioned 3D Studio Max, a popular 3D graphics application. Big movie studios use it, game designers use it, advanced hobbyists use it, and they all know what a resource hog 3DS Max can be. Some projects may take hours to render, while others take days. When you can install a CPU of identical physical size to the one you are using now and effectively cut rendering time in half… it's practically a no-brainer.
When dual-core CPUs first launched, a popular scenario was given: "Imagine playing your favorite game while ripping a DVD!" If you buy into that outlook, then quad-core takes it a step further. Imagine re-encoding a DVD at twice the speed (or triple, if the game is not fully using two cores) while playing your favorite game. That's now possible on multi-core machines.
With the launch of the QX6700 in November, Intel became the first out the door with a quad-core CPU for public consumption. Nobody has followed up on that launch yet, unless you count Quad FX, which is not actually a quad-core processor, but rather a quad-core machine. I haven't played with Quad FX personally, but it's AMD's Barcelona that we will all be waiting for. Whether it will be able to compete with Kentsfield remains to be seen later this year.
As mentioned in the intro, Kentsfield doesn't have a single die, but rather consists of two dual-core dies. Think of the Q6600 as essentially two E6600s joined at the hip. Some purists complain that because all four cores don't share a single die, Intel's Core 2 quads aren't "true" quad-cores, but since both dies share a single LGA775 package, it's reasonable to regard them as a functional unit. Intel has a couple of reasons for doing things this way, the primary one being that two smaller dies result in better yields and bins. A second reason is that it requires fewer engineering resources: the technology already existed, so Intel simply had to package two existing dies onto one processor. All of this saves both Intel and the consumer money in the long run.
The Q6600's 8MB of L2 cache is organized as 4MB x 2, and each die's 4MB is shared dynamically between the two cores on that die. If one core, for example, is doing most of the legwork and needs extra cache, it can use whatever its neighbor isn't occupying, rather than being limited to a fixed per-core allocation. Note that the two dies cannot share cache with each other directly, however; data passed between them has to travel over the front-side bus.
Greater memory bandwidth is another of Intel's design priorities, so all Core 2 CPUs utilize what Intel calls Smart Memory Access. It allows loads to be speculatively executed ahead of pending stores, rather than forcing every memory operation to wait its turn, and pairs this with improved prefetchers that pull data into cache before it's needed. The result is reduced effective latency and higher overall bandwidth. This still doesn't match AMD's on-die memory controller, however, which has larger bandwidth capabilities. Whether that extra bandwidth makes a difference in the grand scheme of things is still up for debate. As it stands, bandwidth of around 5,000MB/s might be all anyone needs, depending on what you are doing.
Looking to the future, Intel released its latest roadmap at the last IDF, which shows the Core 2 architecture's next steps: Penryn and Nehalem. These CPUs will bring a slew of benefits, including a 45nm process, improved transistors, lower power requirements, and higher clock speeds. For enthusiasts, lower power and a smaller process usually mean better overclocking. We will not see these released until late 2007 or early 2008.