
Intel Core 2 Duo E8400 3.0GHz – Wolfdale Arrives

Date: January 30, 2008
Author(s): Rob Williams

Intel’s 45nm Dual-Cores have finally arrived, so it’s only fitting that we take one for a spin. Our test subject is the 3.0GHz E8400, offering 6MB cache, SSE4 and more. Overclocking is impressive, with 3.8GHz stable being possible without even raising the voltage! This chip definitely proves itself a winner.


In the summer of 2006, Intel released their 65nm Conroe-based processors, and to say they won the hearts of many would be an understatement. It was one product launch that Intel didn’t take lightly, especially since AMD was actively eating into their customer base – most notably on the enthusiast side. When all was said and done, Intel accomplished what they set out to do. They put the industry through a blender and showed us how to be excited about processors again.

Although frequencies with Conroe were not as high as what we were used to seeing from Intel, the folks in Santa Clara proved that a high frequency didn’t mean much if the processor itself was inefficient. Indeed, a 2.4GHz Conroe Dual-Core proved just how much better an efficient processor could be, and it quickly became the most common processor choice for the enthusiast.

The following summer, follow-up processors were released, including the E6750 Dual-Core, which we evaluated at the time. Besides speed bumps, those processors didn’t bring much to the table in the way of new features, aside from native 1333FSB support. The processor we are taking a look at today, however, is one of the first new models to effectively replace the Conroe-based chips that we came to love so dearly in the summer of ’06.

I won’t delve deep into how 45nm improves on 65nm, as I explained all of that in our QX9650 review, but I will touch on things briefly. One large benefit that comes with all die shrinks is better power efficiency and lower temperatures. Chips have the capability to run just as fast, if not faster, than their predecessors, all while running cooler and drawing less power. It’s a win/win situation.

But with 45nm, Intel introduced more than just a die shrink. The biggest feature most people will be interested in is the SSE4 instruction set. It primarily benefits media buffs – those who encode videos – but the performance gains are so evident that developers of such applications are bound to begin supporting it sooner rather than later. The speed increases can be as large as 2x, difficult as that may be to believe.

Other improvements include increased L2 cache, half-multipliers (e.g., 9.5x), a faster front-side bus, an improved Super Shuffle Engine, Smart Cache (which improves how split loads are accessed and stored) and so many transistors on a single die, it can give people headaches to think about it!

The obvious downside of the QX9650 launch in November was that no other processors complemented it. Therefore, it was QX9650 or bust – until now, that is. During CES earlier this month, Intel officially announced their 45nm launch plans, covering the desktop, server and mobile sides. We found out at that time that the Quad-Core models (Q9300 – Q9550) were pushed back to sometime in Q1. Although a solid date was never settled on, original road maps showed January as the scheduled launch. However, the rumor is that due to poor Phenom Quad-Core sales, Intel decided to hold off on the launch to help push remaining 65nm models to consumers first.

So how does the road map stand now that some time has passed? Although Intel announced near-immediate availability of all 45nm desktop Dual-Cores at CES, only the E8400 has shown up at e-tailers. One popular e-tailer has the other models listed for availability in April. How true that is, I’m unsure, but it’s strange given that they were supposed to be available by now.

Processor Name                 L2 Cache    Availability
Intel Core 2 Extreme QX9775    2 x 6MB     Q1 2008
Intel Core 2 Extreme QX9770    2 x 6MB     Q1 2008
Intel Core 2 Extreme QX9650    2 x 6MB     Nov 2007
Intel Core 2 Quad Q9550        2 x 6MB     Q1 2008
Intel Core 2 Quad Q9450        2 x 6MB     Q1 2008
Intel Core 2 Quad Q9300        2 x 3MB     Q1 2008
Intel Core 2 Duo E8500         6MB         Jan 2008
Intel Core 2 Duo E8400         6MB         Jan 2008
Intel Core 2 Duo E8200         6MB         Jan 2008
Intel Core 2 Duo E8190         6MB         Jan 2008

The biggest downside to the road map is that the Q9xxx models are not yet available. Once they are, they will no doubt sell like hotcakes, given the improvements over the previous generation and the fact that prices do not increase. The upside, though, is that even though the E8400 is the lone desktop Dual-Core available right now, it happens to be the model most people would be after.

What makes the E8400 such a great choice is that it’s affordable, at $220 USD on average, and has a nice clock speed. Let’s face it… where overclocking is not concerned, a 3.0GHz CPU looks better to the ego than, say, 2.66GHz. It’s all about the smooth frequencies, baby.

Why Dual Core?, Refreshed Box Art, Stock Cooler

Why Dual-Core?

The computer hardware industry, like many others, moves at an incredible pace. Nineteen months ago when Conroe first launched, the X6800 processor was the ultimate high-end, a Dual-Core offering at 2.93GHz. Indeed, it was a blazing fast processor and with a little overclocking, new heights could be reached. But today, despite that processor still being fast, it seems so mundane when compared to the latest offerings.

One reason that’s the case is that half a year after the X6800, Intel followed up with their high-end QX6700 Quad-Core… effectively opening the gates to the multi-core era. Two months after that, the Q6600 was launched, becoming one of the most popular choices for hardware enthusiasts… which is still the case. That popularity was helped even further by summer price-drops, placing the processor at under $300. For anyone who could benefit from a Quad-Core, it almost felt like a steal.

With Quad-Cores so prevalent today, then, are Dual-Core offerings becoming less and less of a wise choice? Consider that regardless of which Dual-Core you choose, a Quad-Core variant is never that much more expensive – though its clock speed, and with it single-threaded performance, will typically be lower. As it stands, while the E8400 retails for around $220, the Q6600 Quad-Core hovers around $280. When the 2.66GHz Q9450 is released, it will retail for around $350.

It all comes down to personal preference and budget at this point. If you are an overclocker, it’s just as easy to purchase a Quad-Core offering and overclock it to 3.0GHz speeds with a sufficient cooler. Right now though, the choices are made even easier, since the Q9xxx 45nm models are not yet available. Purchasing a Q6600 at the current time is less appealing now than it was last year, since the new models are right around the corner.

For those who don’t require the flexibility of a Quad-Core, Dual-Cores still prove to be a great choice. They run fast, run cool and draw less power. As you will see in our overclocking reports in this review, we managed to overclock our $220 E8400 so high that it became faster than the stock $280 Q6600. Did I mention that I’m talking about a stable overclock? On air? Just thought I should mention it…

Refreshed Box Art, Low-Profile CPU Cooler

Box art doesn’t matter at all… or does it? Sure it does, to some degree. The old-school Core 2 non-Extreme box art was a sky-blue color, but with 45nm’s launch comes some even more colorful box art. When I tore this beast out of the box… it felt like summer on a winter’s day. Well, kind of. Perhaps it was just the excitement of a new CPU, I don’t know.

Other than the box art, not much has changed. The CPU cooler for dual-cores remains the same, with it being at about half-height of the Quad-Core version. During testing, I didn’t use the included cooler because it made more sense to go with something that would improve overclocking. If you have no intentions to overclock, the stock cooler will suffice, but overclockers should always invest a little bit of money into their cooling setup.

Taking a look at the back of the new Dual-Core CPUs reveals a decreased number of filter caps, which is interesting, because the QX9650 didn’t cut down on many at all when compared to the QX6850, for instance. The Dual-Core counterparts, however, show a very noticeable decrease. Even before popping it in the machine, we can tell it’s more efficient from that fact alone.

Left: E8400, Right: E6750

It doesn’t matter much right now, but all launch 45nm Dual-Cores are the C0 revision. If improvements are made, the revision might change, so bear this in mind months down the road if you want to purchase one.

CPU codes are as follows:

Yes… the E8190’s code is SLAQR. Hard to make something like that up! With that bit of humor, let’s move right into our testing methodology, followed by our testing results.

Testing Methodology

Regardless of the OS we are running or the product being reviewed, there are a few conditions that are met to ensure accurate, repeatable results.

All testing between processors is done on the same hardware. Our configuration is below:

For our processor reviews, we use three different operating systems: Windows XP, Windows Vista and Gentoo Linux. Although Vista has been out for close to a year, we’ve encountered numerous issues with our benchmarking, so we use it only where necessary, which at this time is only for PCMark Vantage.

No time demos are used in this review. Each level was manually played, with the Minimum and Average Frames-Per-Second captured with the help of FRAPS 2.9.4. Each play-through lasts between four and six minutes. Because no time demos are used, the average FPS will vary between runs, even on the same CPU, due to changing circumstances in the game. It’s for this reason that we play on each setting twice, then average the two. To cover the bases, both 1280×1024 and 2560×1600 resolutions are used, to see if benefits can be seen at either the low end or high end.
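The two-run averaging is simple enough, but for clarity, here is a quick sketch of the math (the FPS figures used are hypothetical, not results from our runs):

```python
# Average the Minimum and Average FPS captured by FRAPS across two
# manual play-throughs of the same level at the same setting.
def average_runs(run1, run2):
    """Each run is a (min_fps, avg_fps) tuple; return the averaged pair."""
    return tuple((a + b) / 2 for a, b in zip(run1, run2))

# Hypothetical example: two play-throughs at 1280x1024.
print(average_runs((42, 61), (38, 59)))  # -> (40.0, 60.0)
```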

Below, you can view all of the games we will be using, as well as the settings used.

Call of Duty 4



Half-Life 2: Episode Two

All other non-game benchmarks will be explained along the way.

SYSmark 2007, PCMark Vantage

There is no better way to evaluate a system and its components than to run a suite of real-world benchmarks. To begin our testing, we will use two popular benchmarking suites that emulate real-world scenarios and stress the machine the way it should be stressed… by emulating tasks that people actually perform on a day-to-day basis.

Both SYSmark and PCMark are hands-free, using scripts to execute all of the real-world scenarios, such as video editing and image manipulation. Each of these suites outputs easy-to-understand scores once the tests are completed, giving us a no-nonsense measure of which areas our computer excels in.

SYSmark 2007 Preview

SYSmark, from BAPCo, is a comprehensive benchmarking application that emulates real-world scenarios by installing popular applications that many people use every day, such as Microsoft Office, Adobe Photoshop, Sony Vegas and many others.

SYSmark grades the overall performance of your system based on different criteria, primarily how quickly it can complete certain tasks. Once the suite is completed, five scores are delivered: one overall average and one for each of the four categories.

Windows Vista used to act as our backdrop for our SYSmark testing, but over the course of the past few months, we’ve encountered many show-stopping errors and odd behaviour which we believe to be directly linked to the OS. So, we’ve moved back to Windows XP and have yet to run into an error.

Our E8400 kicks things off to a good start, even beating out the Q6600 in some regards.

PCMark Vantage

The most recent recruit to our testing suite is PCMark Vantage, an application that proves to be far more than a simple upgrade from a previous version. Vantage is a completely overhauled application, and this was evidenced by the fact that it took more than two full years to produce. Rather than having a PCMark that could complete in 15 minutes, Vantage’s entire run will take around 90 minutes, testing seven primary areas, such as high-definition video, image manipulation, music conversion, et cetera.

Like SYSmark, PCMark delivers simple scores once completed, one for each of the seven main categories and an overall “PCMark Suite” score, which is what most folks will use for comparisons. I left out two suites due to irrelevancy and to keep the graph a modest size.

The CPU shows great spirit here as well… once again beating out our Q6600 Quad-Core in a few of the tests.

Multi-Media: DivX 6.7, Nero Recode

DivX 6.7

One area where Intel’s 45nm processors excel is with multi-media encoders that utilize the SSE4 instruction set. Beginning with DivX 6.6.0, the set is fully supported and will make a huge difference when using the “Experimental Full Search” algorithm to encode.

When using DivX 6.6.0+, you will notice that the “Experimental Full Search” option is Disabled by default. This, as we found out, is for good reason, since enabling it does take far longer overall. If you are a media enthusiast who cares a lot about quality and doesn’t mind the extra wait, then this might be the route to take. The end result may vary depending on certain factors, such as the original video codec, original video quality and video length.

For our testing, we are using a 0.99GB high-quality DivX .AVI of Half-Life 2: Episode Two gameplay. The video is just under 4 minutes in length and is in 720p resolution, which equates to a video bit rate of ~45Mbps, not dissimilar to standard 720p movies. We converted the video two different ways.

First, we encoded the video at the same resolution but a lower quality, so as to achieve a far more acceptable file size (~150MB). The second method is encoding of the same video, but to a 480×272 resolution, similar to what some mobile devices use. This last method is not entirely realistic as it’s unlikely the exported video would work on such a device, but the test is to see the benefits of SSE4 in general.

It’s hard to see what benefit SSE4 had here, but the E8400 still easily beat the other Dual-Cores, not surprisingly.

Nero Recode

Where video conversion is concerned, one of the applications I’ve grown to enjoy over the years is Nero Recode. Though its export options are extremely limited, they offer high image quality and decent file weight. Nero 8 was released a few months ago, but it still lacks support for SSE4.

In a meeting with Nero in September, we asked whether or not we would see SSE4 support in a future update, but we were told that there are no immediate plans to implement it, although the “guys in the lab” are taking a look at it. Nero is confident that their application is optimized enough as-is, and that SSE4 is not needed.

For this test, we’ve first ripped our copy of our concert DVD, Killadelphia, by Lamb of God. The original DVD rip weighs in at 7.7GB, but we are using Nero to reconvert it to 4.5GB so that it will fit on a normal-sized DVD to use as a backup. Our “mobile” test consists of converting the main concert footage to the same resolution a Sony PSP uses (480×272) which results in a 700MB file.

The performance continues here, with the E8400 pushing ahead of our other dual-cores. It also far surpasses our Q6600 for our mobile video, thanks to the fact that the encoder will not utilize more than two cores.

Multi-Media: Adobe Lightroom, 3DS Max 9

Adobe Lightroom 1.2

Years ago, you’d have to fork over many Benjamins in order to get a piece of great technology, but that’s not the case anymore. For a modest fee, you can set yourself up with some absolutely killer hardware. Luckily, one area where that’s definitely the case is with digital cameras. It’s cheaper than ever to own a Digital-SLR, which is the reason why they are growing in popularity so quickly. As a result, RAW photo editing is also becoming more popular, hence the topic of our next benchmark.

Adobe Lightroom is an excellent RAW photo editor/organizer that’s easy to use and looks fantastic. For our test, we take 100 RAW files (Nikon .NEF) which are 10 Megapixel in resolution and then export them as JPEGs in 1000×669 resolution… a result that could be easily passed around online or saved elsewhere on your machine as a low-resolution backup.

It looks like we created a staircase here, with the E8400 easily beating out the other Dual-Cores but falling short of even the 2.4GHz Quad-Core. That’s thanks to Lightroom utilizing all available cores so well.

3DS Max 9

As an industry-leading 3D graphics application, Autodesk’s 3DS Max is one of our more important benchmarks. If there are people who will benefit from faster CPUs with lots of cores, it’s designers of 3D models and environments, and animators. Some of these projects are so comprehensive that they can take days to render. At this time, the application does not support SSE4, and likely won’t in the future, as the new instructions aren’t relevant to its workloads.

For our test, we are taking a dragon model which is included with the application, Dragon_Character_Rig.max, and rendering it to 1080p resolution (1920×1080). For a second test, we render the same model, but all 60 frames, to a 490×270 resolution .AVI.

Once again, our E8400 scaled well, with a time of 65 seconds for the single-frame, besting the 2.66GHz E6750 by twelve seconds. There’s no question… if you are a 3DS Max user, you need a Quad-Core.

Multi-Media: Cinebench 10, 7-Zip Archiving

Cinebench R10

Like 3DS Max, Cinema 4D is another popular cross-platform 3D graphics application that’s used by new users and experts alike. Its creators, Maxon, are well aware that their users are interested in huge computers to speed up rendering times, which is one reason why they released Cinebench to the public.

Cinebench R10 is based on the Cinema 4D engine; the test consists of rendering a high-resolution model of a motorcycle and delivers a score at the end. Like most other 3D applications on the market, Cinebench will take advantage of as many cores as you can throw at it.

Cinebench (and Cinema 4D) is one program that will take advantage of Intel’s 45nm benefits and it’s proven here. Our E8400 beat out the QX6850 in single-threaded mode, but was on par with our QX9650.

7-Zip Archiving

If you are a power-user and love free software (such as myself), then you no doubt have heard of 7-Zip. Although the application is similar to other compression applications on the market, such as WinZip and WinRAR, 7-Zip is completely free and lacks nothing that you need to effectively archive your documents.

For our test, we take a 4GB folder that’s complete with pictures, music, documents and other random files and compress it to a .7z file using both the LZMA and Bzip2 algorithms.

Bzip2 is my preferred algorithm, as it’s faster thanks to being multi-threaded, but the drawback is that the resulting file size is ~1% larger than what LZMA will export. Whether or not that extra 1% is worth your extra time is a personal decision. We are using both algorithms in our tests, since both are widely used.
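For the curious, the LZMA-vs-Bzip2 trade-off can be sketched with Python’s built-in bindings for the same two algorithms. Note that this is an illustration on synthetic data, not 7-Zip itself, so the exact size ratio will differ from our 4GB test:

```python
import bz2
import lzma
import os

# Synthetic, mildly repetitive payload standing in for our 4GB test folder.
data = (b"Techgage benchmark payload " * 1000) + os.urandom(4096)

lzma_out = lzma.compress(data)  # the algorithm behind 7-Zip's .7z default
bz2_out = bz2.compress(data)    # Bzip2; multi-threaded in 7-Zip, hence faster there

print(len(data), len(lzma_out), len(bz2_out))
```

Python’s bz2 binding is single-threaded, so the speed advantage we see in 7-Zip (which runs Bzip2 across both cores) won’t show up here; only the size comparison carries over.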

You know that multi-core processors are finally catching on when even your archiver takes advantage of all four cores. The difference between our Dual-Core and Quad-Core results is staggering.

Linux: GCC Compiler, Image Suite, Tar Archiving

GCC Compiler

When thinking about faster processors or processors with more cores, multi-media projects immediately come to mind as being the prime targets for having the greatest benefit. However, anyone who regularly uses Linux knows that a faster processor can greatly improve application compiling with GCC. Programmers themselves would see the greatest benefit here, but end-users who find themselves compiling large applications often would also reap the rewards.

Even if you don’t use Linux, the results found here can benefit all programmers, as long as your builds are multi-threaded. GCC compiles parallelize extremely well (via make’s -j flag), so the results found here should represent the average increase you would see in similar scenarios.

For our testing, we are using Gentoo 2007.0 under the 2.6.22 Gentoo-patched kernel. The system is command-lined-based, with no desktop environment installed, which helps to keep processes to an absolute minimum.

Our target is a copy of Wine 0.9.49 (with fontforge support). We are using GCC 4.1.2 as our compiler. For single core testing, “time make” was used while dual and quad core compilations used “time make -j 3” and “time make -j 5”, respectively.
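To illustrate what those -j flags are doing, here is a minimal sketch. The tiny generated project is a hypothetical stand-in for the Wine tree, not anything from our actual test; it simply shows make compiling independent object files in parallel:

```python
import os
import subprocess
import tempfile

# Hypothetical miniature project standing in for the Wine source tree.
workdir = tempfile.mkdtemp()
for i in range(4):
    with open(os.path.join(workdir, f"mod{i}.c"), "w") as f:
        f.write(f"int mod{i}(void) {{ return {i}; }}\n")
with open(os.path.join(workdir, "main.c"), "w") as f:
    f.write("int main(void) { return 0; }\n")

# Object targets are independent, so make is free to build them in parallel.
makefile = (
    "OBJS = main.o mod0.o mod1.o mod2.o mod3.o\n"
    "app: $(OBJS)\n"
    "\tcc -o app $(OBJS)\n"
    "%.o: %.c\n"
    "\tcc -c $< -o $@\n"
)
with open(os.path.join(workdir, "Makefile"), "w") as f:
    f.write(makefile)

# "make -j 3" permits up to three jobs at once, as in our dual-core runs.
subprocess.run(["make", "-j", "3"], cwd=workdir, check=True)
```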

As with most multi-threaded workloads, it’s hard for the Dual-Cores to look good here, when they’re being pummeled by the Quad-Cores.

Image Suite

Even though multi-core processors are not new, it’s tricky to find a photo application that handles them properly. Lightroom is one, Photoshop is another. Because it’s difficult to write scripts for the more popular image manipulation applications, we are going to test the single-core benefit of ImageMagick and UFRaw, two command-line-based applications for Linux.

ImageMagick is a popular choice for those who run websites, as it does what it does well, and that’s altering images on the fly. Many websites and forums use ImageMagick in the background, which is why its performance is included here. UFRaw, on the other hand, is strictly a RAW manipulation tool, available in both command-line and GUI versions. The command-line version is ideal for converting many images at a time, which is why we use it here.

For our test here, our script first calls on UFRaw to convert 100 .NEF 10 megapixel camera files using our settings to JPEGs 1000×669 in resolution. ImageMagick is then called up to watermark all 100 new JPEGs and also to create thumbnails of each. This entire process is similar to how we convert/watermark our photos here. An example snippet is below.

ufraw-batch --exif --wb=auto --exposure=0.60 --size=1000,670 --gamma=0.40 --linearity=0.04 --compression=90 --out-type=jpeg --out-path=../files/ *.nef;
composite -gravity SouthEast -geometry 254x55+3+3 whitewatermark.png 001.jpg ~/Output/001.jpg;

Not surprisingly, given that these tools are single-threaded, the performance here was on par with our other 3.0GHz CPUs.

Tar Archiving

To help expand our Linux performance testing, we are now including Tar as a benchmark, similarly to how we use 7-Zip for our Windows benchmarking. For our Tar tests, we are using the same 4GB /Archive/ folder found in our 7-Zip test, which is loaded to the brim with miscellaneous files and sub-folders.

Because both GZip and Bzip2 are popular solutions for Linux users, we are using both in our tests here. Default options are used for both compressors, with the simple syntax: tar zcf Archive.tar Archive/ (GZip) or tar jcf Archive.tar Archive/ (Bzip2).
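Python’s tarfile module drives the same two compressors, which makes for an easy way to sketch the test. The sample folder below is made up for illustration, not our actual 4GB archive:

```python
import os
import tarfile
import tempfile

# Build a small stand-in for the 4GB Archive/ folder.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "Archive")
os.makedirs(src)
with open(os.path.join(src, "notes.txt"), "w") as f:
    f.write("miscellaneous files and sub-folders\n" * 100)

# "w:gz" corresponds to tar zcf, "w:bz2" to tar jcf.
for mode, ext in (("w:gz", "tar.gz"), ("w:bz2", "tar.bz2")):
    out = os.path.join(workdir, "Archive." + ext)
    with tarfile.open(out, mode) as tar:
        tar.add(src, arcname="Archive")
    print(ext, os.path.getsize(out))
```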

Sadly, multiple cores don’t make much of a difference here; raw frequency is what matters.

Gaming: Call of Duty 4, Crysis

The newest additions to our gaming arsenal are Call of Duty 4 and Crysis, two titles that are absolutely mind-blowing in both graphics and gameplay. It’s not too often that gorgeous games also play well, but Infinity Ward and Crytek really know what they are doing. For the precise settings used throughout testing, please refer to our testing methodology page.

Call of Duty 4

We have used Call of Duty 2 in our testing since its release, so it’s great to finally change the scenery a bit now that the fourth installment is available. I admit that I am not terribly fascinated with war-based games, but CoD4 does well to excite during benchmarking. It might be one title I will actually go back and play through, and that says a lot!

The level chosen for testing is The Bog, which starts you out among allies on a destroyed bridge in the heat of battle. This is a level to use if you want to push your computer to its limits. It’s one of the most visually appealing levels I’ve seen (though dark), and has intense action that will stress both the processor and GPU.

Although CPUs are normally marketed to promote better gameplay, these graphs prove that most of the time, it’s more of a GPU bottleneck than a CPU one. In this case, our $220 CPU didn’t hold anything back from our gameplay. The beefy 1600FSB QX9770 was the only CPU to show a real performance increase.


Crysis

Do games this hyped really need an introduction? Crysis is one of the first games we’ve seen in a while that actually does a great job of pushing the highest-end computers to their breaking point. This is far from a joke. I would love to see a $10,000 e-peen PC run this game like butter at 2560×1600. Maybe next year, but I’d be hard-pressed to see that happen right now.

Because we just added the game to our fleet, the level used is the first one in the game. Rather than beginning right at the start, when you jump out of the plane, we created a save on the beach, which is where we begin each time. The manual play-through ends after about four minutes, just past the second area that requires the super-jump.

We can officially claim that the processor is not going to be the bottleneck in a game like Crysis. How long will it take before we can run this game on a mid-range card and have it perform well?

Gaming: HL2: Episode 2, F.E.A.R.

Half-Life 2: Episode Two

Yet another game that needs no introduction, Half-Life 2: Episode Two was a proper sequel to Episode One, although the duration in which people had to wait between the two was a little questionable. Luckily for fans though, Episode Two proved to be more of what we love. It was a win/win. Introduced with this version were achievements as well, which let you know how much of a fan you really are.

We are using the Silo level for our testing, a level even people who haven’t played the game will know about, thanks to Valve’s inclusion of it in their Episode Two trailers over the past year. During our gameplay, we shoot down a total of three striders (their locations are identical with each run, since we are running a saved game file) and a barn is blown to smithereens.

Given the fact that the Source engine tends to favor raw frequency over multiple cores, our results here were expected.


F.E.A.R.

Like Call of Duty 2, F.E.A.R. first hit our PCs in the fall of 2005. When it did, it proved to almost everyone just how badly our computers needed upgrading. It was one of the first games to truly benefit from having 2GB of RAM installed, and of course, a massive graphics card. Even today, running F.E.A.R. at a high resolution is a visual treat.

The third level is our destination today, which begins beside two allies who send me off through various buildings, kicking some ass en route. I am unsure where the final destination is, as I’ve never explored that far, but throughout our five-minute gameplay we encounter four enemies, outdoor and indoor areas, and even a strange horror sequence.

Like Half-Life 2, F.E.A.R. favors a higher frequency over additional cores. Not surprising, since the game’s development occurred before Dual-Cores even became popular.


It’s no secret… Core 2 Duos have been some of the best-overclocking chips of the past year and a half. Unlike their beastly Quad-Core brothers, Dual-Cores run cool and allow for much more overclocking headroom. That, once again, has been further proven with the E8400.

One would imagine that the step to 45nm would decrease the TDP, but that’s not the case here. The Dual-Core products retain the same 65W TDP as the 65nm models. However, that doesn’t mean much, because it’s already been proven that the 45nm processors use less power and run cooler to boot. You can read more about that in our review of the QX9770.

So given the TDP, one would assume that the required voltage would be similar as well. Personally, whenever I benchmark a new CPU, I keep the CPU voltage at 1.3v, as it’s proven to be a stable setting for all CPUs tested. The motherboard, more often than not, will set it even higher than that. So, with the E8400, 1.3v might still be the most common voltage setting, but as we found out during testing, it’s able to run stable on far less.

What’s considered stable?

For an overclocked setting to be deemed stable, it has to be just that: stable. If an error shows its ugly face after three hours of stress testing, it’s not stable. Some might argue that such an error will likely never affect regular computer usage, and that’s true. However, calling an overclock stable when an error arises… doesn’t sit well with me.

The stress-tester of choice is SP2004, with an instance run on each core. Using the Small FFT mode, the cores are kept at 100% usage for as long as the test is run. As a result, if errors are possible, they will most likely be detected within fifteen minutes. I’ve rarely run into an error after an hour of running the test.

For a modest setting to be considered stable, I require a three-hour run to pass without errors. For more serious overclocks, I stress for eight hours – more, if time permits. Once SP2004 is finished, 3DMark 06’s main suite is looped five times, to ensure that gaming is stable as well. After this, miscellaneous benchmarks are run to further solidify the fact that a given setting is indeed stable.

Before I conclude on an overclock, the machine is shut down and five minutes pass before booting it back up. If it boots fine and still proves to be stable, the CPU settings are reset to stock. Afterwards, the overclocked setting in question is applied again and the computer rebooted, to make sure that the overclock was not a one-time affair. If it passes all of these tests, I consider the overclock to be stable.

Overclocking Results

Please note that while I used the ASUS P5E3 Deluxe motherboard for our benchmarking, all overclocking was done using ASUS’ Maximus Extreme board, as it’s more suitable for high-end overclocking. The lone downside of the board is the fact that the lowest CPU voltage setting is 1.1v, but that’s what we tested out first. For cooling, a Zalman 9700 was used.

The CPU will most likely be run at 1.25v – 1.3v on most machines if the Auto setting is used, but as discovered, even 1.1v was deemed stable after a three-hour period. Everything tested at this setting was stable… gaming and benchmarks included, not just SP2004.

E8400 – 3.0GHz – 1.1v CPU Stable
(Click to view three-hour stress)

Our second goal, of course, was to see how far the CPU could be pushed while retaining stock CPU voltage. Bear in mind though, the Northbridge voltage did have to be increased in order to handle the higher FSB. The CPU voltage stayed at 1.3v.

How does 3.825GHz sound? I was impressed with the headroom here, and was unsure when I was finally going to hit my limit, but this was it. In order to keep 425FSB stable, I increased the Northbridge voltage from 1.45v to 1.55v (it’s minor).

E8400 – 3.825GHz – 1.3v CPU Stable (1.55v Northbridge)
(Click to view four-hour stress)

Sadly, I somehow lost the proof-of-stability screenshot when moving between computers, so I don’t have one here for this setting. I am re-running all the stress tests and will update the review later today with the screenshot. (01/30/08 6:25PM Update: The image can now be clicked. A problem arose with the original follow-up stress due to the NB voltage accidentally being set too high (1.65v). When set back to 1.55v, it was stable again.)

After finding out that 3.8GHz was stable using stock CPU voltages, I decided to go full out and find the maximum possible stable overclock for the chip. Remarkably, that turned out to be 4.23GHz. As evidenced in the screenshot below (click to view full), that overclock ran for eight-straight hours without a single error.

E8400 – 4.23GHz – 1.475v CPU Stable (1.71v Northbridge)
(Click to view eight-hour stress)

I’ve never had a Dual-Core overclock so high and still remain stable, so I was skeptical. I put the CPU through all its paces… and nothing could stop it. As you can see in the screenshot below, it ran a collection of benchmarks without issue. The important thing to note here is that this is a true overclock that increases performance. All results scale well, and heat didn’t become an issue at all, despite the chip running at an average of 68°C on each core.

E8400 – 4.23GHz – 1.475v CPU Stable (1.71v Northbridge)
(Click to view misc. benchmarks)

If you take a look at the Sandra results, you will notice something interesting. Our E8400 at 4.2GHz effectively surpassed the Q6600 in terms of raw performance. Despite having half the cores of the Q6600, the huge overclock made up for the difference… and then some.

Another thing that strikes me is the Super Pi result. While not that relevant today, when Conroe was first launched, it was deemed extraordinary when enthusiast overclockers broke through the 10s mark with the new processors. Now, even we casual overclockers can break through that milestone… on air. That is beyond impressive.

One downside here is that I believe this CPU still had more room left. I became limited by the motherboard’s FSB, which may or may not have had anything to do with the CPU. Running 470FSB was fine, but moving up to even 475FSB would spawn errors in SP2004 within five minutes. While 4.2GHz is undeniably impressive, I can’t help but feel it could have been pushed a bit higher.
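For reference, every clock quoted on this page follows the simple FSB × multiplier relationship (the E8400 uses a locked 9x multiplier). A quick sketch verifies the arithmetic – the function here is purely illustrative, not part of the test setup:

```python
# Core clock = FSB (MHz) x multiplier. The E8400's multiplier is locked at 9x.
E8400_MULTIPLIER = 9

def core_clock_mhz(fsb_mhz: int, multiplier: int = E8400_MULTIPLIER) -> int:
    """Return the resulting core clock in MHz for a given FSB."""
    return fsb_mhz * multiplier

print(core_clock_mhz(333))  # stock: 2997 MHz, i.e. ~3.0GHz
print(core_clock_mhz(425))  # stock-voltage overclock: 3825 MHz
print(core_clock_mhz(470))  # maximum stable overclock: 4230 MHz
```

Since the multiplier is locked, all of the headroom has to come from the FSB, which is exactly why the motherboard’s FSB ceiling ends up being the limit here.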

Final Thoughts

In the intro, I mentioned that Intel’s Core 2 Duo launch in mid-2006 succeeded in getting people excited about processors again, and it’s safe to say that the 45nm launch will accomplish something similar. One thing that has changed since that launch, and is well reflected in this one, is pricing.

Case in point: in summer 2006, the E6600 sold for $316 in quantities of 1,000, which would end up being $350 once sold by an e-tailer. Fast forward to now, and we are seeing a far superior product in terms of overall efficiency and speed, and it costs 40% less. Of course, such is the natural progression of things, but it’s certainly a better time than ever to PaPP (ponder a processor purchase!).
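As a rough back-of-the-envelope check on that claim, here is a sketch using only the street prices quoted above (illustrative figures, not official price-list numbers):

```python
# Back-of-the-envelope price-drop check, using the street prices quoted
# in the text (illustrative figures only, not Intel's official pricing).
e6600_street_2006 = 350  # USD, E6600 at e-tail, summer 2006
e8400_street_2008 = 220  # USD, E8400 average street price at launch

drop_pct = (e6600_street_2006 - e8400_street_2008) / e6600_street_2006 * 100
print(f"Roughly {drop_pct:.0f}% cheaper")  # street pricing lands near the ~40% figure
```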

If you are in the market for a computer upgrade or want to make the move to 45nm, the E8400 is a superb choice and will no doubt become the most popular 45nm Dual-Core model. It delivers top-rate performance, improved thermals, greater power efficiency and, best of all, sells for an easy-to-swallow $220, on average.

Nothing changes when overclocking is brought into the picture. Although the higher-binned E8500 might yield better extreme overclocks, the E8400 delivers incredible potential and costs a full $100 less. We are dealing with a processor that can handle an overclock so large, it eclipses the Quad-Core Q6600. It’s hard to be disgruntled with a processor of such potential.

The capabilities here impressed me more than once. First, the chip managed to remain stable at stock speeds with a low 1.1v. Lower power consumption, a cooler processor… nothing to complain about. Even more mind-blowing was the top-end 4.23GHz overclock, which proved completely stable throughout all of our stress-testing and benchmarking… nothing could stop it – believe me, I tried. Admittedly, the voltages were semi-high, at 1.475v CPU and 1.71v Northbridge, but even so, the CPU and motherboard ran well within temperature limits. The fact that this was done on air makes it all the more impressive.

For more modest overclocking, our chip also managed to retain stability at 3.825GHz, all at stock CPU voltage. In that case, though, the Northbridge voltage was increased to 1.55v to help support the higher FSB – a setting that will not require additional cooling. The motherboard’s Auto setting chose 1.45v on its own, so this increase is exceptionally minor.

The question now is whether you want the E8400, or would rather wait for the Q9xxx Quad-Cores, which will be launched sometime this quarter. It’s a good question, but the answer again falls on what your needs are. If you regularly use applications that take advantage of additional cores (there are more than you think), then a Quad-Core makes sense. However, if you want raw horsepower more than additional cores, the E8400 will serve you well. Just expect to lose a few nights of your life to overclocking the beast.

Discuss in our forums!

If you have a comment you wish to make on this review, feel free to head on into our forums! There is no need to register in order to reply to such threads.

Copyright © 2005-2019 Techgage Networks Inc. - All Rights Reserved.