
Intel Core 2 Quad Q9450 2.66GHz

Date: April 29, 2008
Author(s): Rob Williams

The wait for an affordable 45nm Quad-Core is over, and the Q9450 promises to become the ultimate choice among the new offerings. It’s not much slower than the QX9650, offers 12MB of cache and, as expected, has some fantastic overclocking ability. How does 3.44GHz stable sound?



Introduction

When Intel launched their first 45nm processor last November, the QX9650, people might have been excited, but not everyone wanted to shell out a premium for the fastest piece of hardware available. So, most sat around and waited in hopes of seeing more affordable Quad-Cores hit the market.

January came, and at CES we found out that the new CPUs were still not ready for launch. This was a blow to those who were already holding off on an upgrade or a new build. But fast-forward almost four months, and finding a 45nm Quad-Core is easier than ever.

It just might not be the Q9450.

Because the new CPUs are in such high demand, it’s hard to find the top two mid-range models in stock, anywhere. For those looking for an entry-level point into the 45nm Quad-Core scheme of things, you’ll be pleased to know that the Q9300 is readily available at most popular e-tailers.

But we’re here today to take a look at the Q9450, the mid-range offering of the mid-range offerings. Clocked at a healthy 2.66GHz, it looks to be an ideal chip for those looking to piece together a fast computer without breaking the bank. For those curious about overclocking, don’t worry, we’ve got you covered.

Closer Look at the Core 2 Quad Q9450

As mentioned above, the Q9450 is one popular chip right now, and because of that, prices tend to be inflated, which is unfortunate. The official price from Intel is $316, but many e-tailers are selling it for well over $400, when it should be closer to $350 – $360. The situation is even worse for the Q9550, which seems to be even more rare.

If you want a new Quad-Core and happen to want it now, then the Q9300 would make for a great choice. I haven’t touched one personally, but I know what kind of performance it pushes out, and given that it has good overclocking ability, it’s hard to go wrong. Plus, because it’s not suffering any sort of shortage, prices are ideal, at around $285.

Processor Name              | Cores | Clock   | Cache   | FSB     | TDP  | 1Ku Price | Available
Intel Core 2 Extreme QX9775 | 4     | 3.20GHz | 2 x 6MB | 1600MHz | 150W | $1,499    | Now
Intel Core 2 Extreme QX9770 | 4     | 3.20GHz | 2 x 6MB | 1600MHz | 136W | $1,399    | Now
Intel Core 2 Extreme QX9650 | 4     | 3.00GHz | 2 x 6MB | 1333MHz | 130W | $999      | Now
Intel Core 2 Quad Q9550     | 4     | 2.83GHz | 2 x 6MB | 1333MHz | 95W  | $530      | Now
Intel Core 2 Quad Q9450     | 4     | 2.66GHz | 2 x 6MB | 1333MHz | 95W  | $316      | Now
Intel Core 2 Quad Q9300     | 4     | 2.50GHz | 2 x 3MB | 1333MHz | 95W  | $266      | Now
Intel Core 2 Duo E8500      | 2     | 3.16GHz | 6MB     | 1333MHz | 65W  | $266      | Now
Intel Core 2 Duo E8400      | 2     | 3.00GHz | 6MB     | 1333MHz | 65W  | $183      | Now
Intel Core 2 Duo E8200      | 2     | 2.66GHz | 6MB     | 1333MHz | 65W  | $163      | Now
Intel Core 2 Duo E8190      | 2     | 2.66GHz | 6MB     | 1333MHz | 65W  | $163      | Now
Intel Core 2 Duo E7200      | 2     | 2.53GHz | 3MB     | 1066MHz | 65W  | ~$133     | May 2008

Where the Q9450 sits well, though, is with its robust 12MB cache. The Q9300, on the other hand, cuts that in half. The benefit of all that extra cache is difficult to judge from a simple specs standpoint, but it’s a topic I’d like to delve into in a future article.

So, let’s get right to some benchmarking! On the following page, we explain in-depth how our testing methodology works, then we’ll jump into our SYSmark and PCMark tests, followed by many more.


System Configuration & Methodology

At Techgage, we strive to make sure our results are as accurate as possible. Our testing is rigorous, and sometimes exhaustive, but we feel the effort is worth it. In an attempt to leave no question unanswered, this page contains not only our testbed specifications, but also a fully-detailed look at how we conduct our testing.

If there is a bit of information that we’ve omitted, or you wish to offer recommendations or suggest changes, please feel free to shoot us an e-mail or post in our forums.

When preparing our testbeds for performance testing, we follow strict guidelines: no hardware is changed during a processor review except, of course, for the CPU itself.

For our testing, we use both Microsoft Windows Vista Ultimate 64-bit and Gentoo Linux 2007.0 32-bit. We chose to stick with 64-bit Windows because, throughout the past year of use, we have found it to be much more stable than its 32-bit counterpart.

Once our operating systems are set up, nothing changes unless we intend to re-benchmark our entire selection of processors in the refreshed environments. Given the sheer amount of time that takes, it doesn’t happen too often.

Gaming

In an attempt to deliver accurate results, all games are played through manually, with the average FPS recorded with the help of FRAPS 2.9.4. In our own testing, we have found that manually benchmarking games is the best way to deliver accurate results, since timedemos rely too heavily on the CPU.

In order to deliver the best results, each title we choose is explored to find the best possible level for our benchmarking. Once a level is chosen, we play through to find the best route, and in future runs we stick to that route as closely as possible. We are not robots, so we cannot make each run identical, but they are never far off from one another. As our results show, scaling is good, so we are confident that our methodology is sound.

Settings screenshots are available for each game at the following resolutions:

Crysis: 1680×1050, 2560×1600
Call of Duty 4: 1680×1050, 2560×1600
Half-Life 2: Episode Two: 1680×1050, 2560×1600
Unreal Tournament III: 1680×1050, 2560×1600


All of our benchmarks will be explained on their respective pages.


Suites: SYSmark 2007 Preview, PCMark Vantage

There is no better way to evaluate a system and its components than to run a suite of real-world benchmarks. To begin our testing, we will use two popular benchmarking suites that emulate real-world scenarios and stress the machine the way it should be… by emulating tasks that people actually perform on a day to day basis.

Both SYSmark and PCMark are hands-free, using scripts to execute real-world scenarios such as video editing and image manipulation. Each suite outputs easy-to-understand scores once the tests are completed, giving us a no-nonsense measure of which areas our computer excels in.

SYSmark 2007 Preview

SYSmark, from BAPCo, is a comprehensive benchmarking application that emulates real-world scenarios by installing popular applications that many people use every day, such as Microsoft Office, Adobe Photoshop, Sony Vegas and many others.

SYSmark grades the overall performance of your system based on different criteria, but mostly on how quickly it completes certain tasks and handles multi-tasking. Once the suite completes, five scores are delivered, one being the overall. We dedicate an OS and hard drive to this test in order to keep the environment as clean as possible.

To walk away with a great result in this test, the CPU needs either many cores or a very high frequency. This is proven by the fact that our Quad-Core Q6600 bested the Dual-Core E8400 by only eight points. Overall, though, these are good results. The Q9450 came in right behind both the QX6850 and QX9650.

PCMark Vantage

Futuremark’s PCMark Vantage has similar goals to Bapco’s SYSmark 2007 Preview. However, whereas SYSmark emulates real application use, PCMark is synthetic in all regards. Also similar to SYSmark, PCMark’s goal is to test your system with a handful of scenarios that you would find yourself in regularly, such as music conversion and photo management.

Although Vantage contains eight different suites in total, we focus on five. The omitted ones, we find, are either unimportant or redundant. Our goal was a readable graph, and the more tests added, the larger it would become. We left out “Productivity”, for example, since that’s essentially SYSmark’s entire focus.

Like SYSmark, PCMark delivers simple scores once completed, one for each main category and an overall “PCMark Suite” score, which is what most folks will use for comparisons.

Similar to SYSmark, none of PCMark’s tests utilize the SSE4 instruction set, so clock for clock, differences in scores between 65nm and 45nm are minimal. Raw frequency plays a huge role here, and all of our results scale as we would expect.

Here, although the Q9450 has only a minor frequency boost over the Q6600, the improvements are significant, and it still comes close to the QX9650 at 3.0GHz.


Multi-Media: DivX (VirtualDub), Nero Recode

DivX 6.7

One area where Intel’s 45nm processors excel is with multi-media encoders that utilize the SSE4 instruction set. Beginning with DivX 6.6.0, the set is fully supported and will make a huge difference when using the “Experimental Full Search” algorithm to encode.

When using DivX 6.6.0+, you will notice that “Experimental Full Search” is disabled by default. This, as we found out, is a sensible default, since enabling it does take longer overall. If you are a media enthusiast who cares a lot about quality and doesn’t mind the extra wait, then Experimental Full Search is the route to take. The end result may vary depending on certain factors, such as the original video codec, original video quality and video length.

For our testing, we are using a 0.99GB high-quality DivX .AVI of Half-Life 2: Episode Two gameplay. The video is just under 4 minutes in length and is in 720p resolution, which equates to a video bit rate of ~45Mbps, not dissimilar to standard 720p movies. We converted the video two different ways.

First, we encoded the video at the same resolution but a lower quality, so as to achieve a far more acceptable file size (~150MB). Second, we encoded the same video to a 480×272 resolution, similar to what some mobile devices use.

Results are as expected here. Thanks to the inclusion of SSE4, our Q9450 stormed past the “faster” QX6850 with our 720p encoding.

Nero Recode

Where video conversion is concerned, one of the applications I’ve grown to enjoy over the years is Nero Recode. Though its export options are extremely limited, they offer high image quality and reasonable file sizes. For our testing, we are using Nero Recode 2, which is included with Nero 8 Ultra Edition.

Unlike DivX, Nero Recode does not support the SSE4 instruction set, which I consider to be unfortunate, since real differences can be seen, as evidenced above. In a meeting with Nero last fall, I was told that while the application lacks SSE4 support, it does have what it takes to fully take advantage of Multi-Core processors, and to them, that matters more.

For this test, we first ripped our copy of the Lamb of God concert DVD, Killadelphia. The original DVD rip weighs in at 7.7GB, but we are using Nero to recompress it to 4.5GB so that it will fit on a normal-sized DVD as a backup. Our “mobile” test consists of converting the main concert footage to the same resolution an Apple iPod uses (480×272), which results in a 700MB file.

For whatever reason, our Q9450 didn’t perform that well here in relation to the other Quad-Cores, and I’m not ready to immediately blame the chip since the mobile recode does scale. We are currently evaluating the importance of Recode as a benchmark, since the results seem to fluctuate too often.


Multi-Media: TMPGEnc XPress, ProShow Gold

Pegasys TMPGEnc XPress

TMPGEnc XPress from Pegasys is a robust video conversion tool, allowing you to input virtually any file type and export to a variety of formats, including standard DVD MPEG, XDVD, SVCD, HDV, DivX and a lot more. It offers basic editing functionality, such as cropping and filters, and proves to be a great tool overall. It’s a little more expensive than most video conversion tools, but not many others include so much format support.

One of the biggest reasons for including TMPGEnc XPress in our testing is that it takes advantage of a wide range of the benefits delivered by the newest CPUs. It fully supports multi-threading and also every SSE instruction set to date, including SSE4.

For our test, we take a 1.2 GB source video file and export it to an HDV 1440x1080p resolution. Depending on the CPU used, the application will use either SSE3 or SSE4 for encoding.

This test is a perfect example of just how useful SSE4 is in a few select applications, and is a clear reason to give developers a push to include support in their own applications where it makes sense. Once again, our SSE4-capable Q9450 showed the QX6850 who’s boss.

Photodex ProShow Gold

While TMPGEnc XPress’ purpose is to convert video formats, ProShow from Photodex helps turn your collection of photos into a fantastic-looking slide show. I can’t call myself a slide show buff, but this tool is unquestionably the definitive one. It offers many editing features and can export in a variety of formats, including a standard video file, DVD video and even HD video.

Like TMPGEnc and many other video encoders, ProShow can take full advantage of a multi-core processor. It doesn’t support SSE4, however, though hopefully it will in the future, as that would improve encoding times considerably. Still, when a slide show application handles a multi-core processor effectively, it has to make you wonder why there is such a delay in seeing a wider range of such applications on the market.

ProShow is quite reliable in that its results scale well with what we’d expect to see. It’s all about frequency here, thanks to the lack of SSE4 support.


Multi-Media: Autodesk 3DS Max, Adobe Lightroom

3DS Max 9

As an industry-leading 3D graphics application, Autodesk’s 3DS Max is one of our more important benchmarks. If there are people who will benefit from faster CPUs with lots of cores, it’s the designers of 3D models, environments and animations. Some of these projects are so comprehensive that they can take days to render. At this time, the application does not support SSE4, and likely never will, since the instructions aren’t relevant to its workload.

For our test, we take a dragon model included with the application, Dragon_Character_Rig.max, and render it at 1080p resolution (1920×1080). For a second test, we render all 60 frames of the same model to a 490×270 .AVI.

Our results here are interesting. When Intel first launched their 45nm processors, they boasted the fact that clock for clock, 45nm should prove faster than 65nm. That’s observed here when comparing the QX6850 to the QX9650. In the case of our Q9450, it once again manages to outpace the QX6850 despite its slower frequency.

Adobe Lightroom 1.4

Years ago, you’d have to fork over a roll of Benjamins to get a piece of great technology, but that’s not the case anymore. For a modest fee, you can set yourself up with some absolutely killer hardware, and one area where that’s definitely the case is digital cameras. It’s cheaper than ever to own a digital SLR, which is why they are growing in popularity so quickly. As a result, RAW photo editing is also becoming more popular, hence our next benchmark.

Adobe Lightroom is an excellent RAW photo editor/organizer that’s easy to use and looks fantastic. For our test, we take 100 RAW files (Nikon .NEF) at 10-megapixel resolution and export them as 1000×669 JPEGs… a size that could easily be passed around online or kept elsewhere on your machine as a low-resolution backup.

Similar to 3DS Max, Lightroom can show improvement moving from 65nm to 45nm, and our Q9450 results prove it, beating the QX6850 by four seconds and the Q6600 by 44 seconds.


Multi-Media: Cinebench, POV-Ray

Cinebench R10

Like 3DS Max, Cinema 4D is another popular cross-platform 3D graphics application, used by novices and experts alike. Its creator, Maxon, is well aware that its users are interested in powerful computers to speed up rendering times, which is one reason it released Cinebench to the public.

Cinebench R10 is based on the Cinema 4D engine; the test consists of rendering a high-resolution model of a motorcycle and delivers a score at the end. Like most other 3D applications on the market, Cinebench will take advantage of as many cores as you can throw at it.

Cinebench R10 is another application known to see improvements on 45nm, but like most of our benchmarks, it doesn’t take advantage of SSE4. Here, the Q9450 might have general enhancements over equivalent 65nm Quad-Cores, but the faster frequency of the QX6850 puts it on top. Our Q9450 did manage to outperform the Q6600 by over 2,000 points, however.

POV-Ray 3.7

Similar to Cinebench, the “Persistence of Vision Ray Tracer” is, as you’d expect, a ray-tracing application that also happens to be cross-platform. It takes your environment and models and applies a ray-tracing algorithm, based on a script you either write yourself or borrow from others. It’s a free application that has become a standard in the ray-tracing community, and some of the results it can produce are completely mind-blowing.

The official version of POV-Ray is 3.6, but the 3.7 beta unlocks the ability to take full advantage of a multi-core processor, which is why we use it in our testing. Applying ray tracing algorithms can be extremely system intensive, so this is one area where multi-core processors will be of true benefit.

For our test, we run the built-in benchmark, which delivers a simple score (pixels per second) at the end. The higher, the better; if one score is twice another, it literally means the scene rendered twice as fast.
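As an aside, the Unix builds expose the same benchmark from the command line. A minimal sketch, assuming a POV-Ray binary on the PATH (the -benchmark switch comes from the Unix builds’ documentation; treat the exact invocation as an assumption if your build differs):

# render the standard built-in benchmark scene and report the result
povray -benchmark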

Clearly, Dual-Core processors are not the ideal choice with 3D rendering, and I doubt that needs to be argued. Similar to a few of our earlier benchmarks, raw frequency overpowers the benefits of 45nm.


Et cetera: Microsoft Excel 2007, Sandra XII

Microsoft Excel 2007

Most, if not all, businesses in existence have to crack open a spreadsheet at some point. Though simple in concept, spreadsheets are an ideal way to either track information or compute large calculations all in real-time. This is important when you run a business that deals with a large amount of expenses.

Although how quickly an Excel file finishes its calculations may seem unimportant, we include results here since they heavily test the mathematical capabilities of each processor. Because Excel 2007 is completely multi-threaded (it can even take advantage of an 8-core Skulltrail), it makes for a great benchmark to show the scaling between all of our CPUs.

I’ll let Intel explain the two files we use:

Monte Carlo: This workload calculates the European Put and Call option valuation for Black-Scholes option pricing using Monte Carlo simulation. It simulates the calculations performed when a spreadsheet with input parameters is updated and must recalculate the option valuation. In this scenario we execute approximately 300,000 iterations of Monte Carlo simulation. In addition, the workload uses Excel lookup functions to compare the put price from the model with the historical market price for 50,000 rows to understand the convergence. The input file is a 70.1 MB spreadsheet.

Calculations: This workload executes approximately 28,000 sets of calculations using the most common calculations and functions found in Excel. These include common arithmetic operations like addition, subtraction, division, rounding and square root. It also includes common statistical analysis functions such as Max, Min, Median and Average. The calculations are performed after a spreadsheet with a large dataset is updated with new values and must re-calculate many data points. The input file is a 6.2 MB spreadsheet.

It’s hard to believe that Excel, of all applications, best shows the benefits of higher frequencies and added cores. The interesting thing is that doubling the cores actually comes close to delivering a 100% improvement clock for clock – a total rarity.

SiSoftware Sandra XII

Sandra has been in my virtual toolbox for quite some time, simply because it includes many different types of synthetic benchmarks and makes for a great all-in-one. The two tests we focus on are Arithmetic and Multi-Media, however, as they are both CPU-specific. Like Excel 2007, these two tests stress each CPU to find its maximum mathematical calculations and operations per second.

In the Arithmetic test, the application stresses the CPU to find the maximum ALU instructions per second and floating point operations per second, in millions. In the Multi-Media test, a similar stress is executed to find the maximum int and float instructions per second.

Where good scaling is concerned, Sandra is the application to use. Our Quad-Cores show almost a full 100% increase clock for clock. The Q9450 was no exception, and it again outperformed the QX6850 in the Multi-Media FP test.


Gaming: Crysis, Call of Duty 4

Each graph for our benchmarking results is labeled with the resolution the game was played at, omitting secondary settings such as Anti-Aliasing, Anisotropic Filtering, texture quality, et cetera. To view all specific settings that we used, please refer to our testing methodology page, where we have screenshots for each game.

Crysis

It’s not often that a game comes along that truly pushes our hardware to the utmost limit. Crysis is one of those few games, and that will be the case for at least the next year. Don’t believe me? Boot up your top-end machine, max out your resolution and set the graphics to “Very High”. I guarantee tears will be shed within a few seconds of loading a level.

The level we chose here is Onslaught, also known as level five. We begin in a tunnel, but what’s important is that we are in control of a tank. What could be more fun? Our run-through consists of leaving the tunnel and reaching the other side of the battlefield, killing six or seven enemy tanks along the way.

It goes without saying that any level in Crysis would make for a great benchmark, but this one in particular is gorgeous. Using the “Medium” settings, the game looks spectacular and is playable on all of our graphics cards, so we stick with it. Throughout the level there is plenty of foliage and trees, along with large view distances. The explosions from the tanks are also a visual treat, making this one level I don’t mind playing over and over, and over.

Settings: Due to the intensity of the game, no AA is used at any resolution, and the secondary settings are all left at Medium.

In our past CPU reviews, I had chosen what I thought was a great level to benchmark with (the opening level), but after taking all of the CPUs through our new favorite, I now realize the first level in the game is a bad choice. Onslaught showcases huge draw distances and much more on-screen action, so here we can actually see slight benefits from faster processors.

Not only is there improvement with a higher frequency, but also with added cores. The differences aren’t too stark, but there are differences nonetheless. But as the graphs prove, better CPUs make the most difference at lower resolutions, which 1680×1050 definitely is. Moving to the ultra-high-definition resolution of 2560×1600 proves that it doesn’t matter what CPU you use.

Call of Duty 4

While Crysis has the ability to bring any system to its knees at reasonable graphics settings, Call of Duty 4 is a title that looks great no matter what settings you choose, all while running well. It’s also one of the few games on the market that benefits from having more than one core in your machine.

The level chosen here is The Bog, for the simple fact that it’s incredibly intensive on the system. Though it takes place at night, there is more gunfire, explosions and specular lighting than you can shake an assault rifle at.

Our run consists of proceeding through the level to a point where we are about to leave a building we entered a minute earlier, after killing off a slew of enemies. The entire run-through takes about four minutes on average.

Settings: High details are used throughout all tests, although 4x AA is used for our 1920×1200 setting. That AA is removed at 2560×1600. As the graphs below show, both of those settings perform quite similarly.

Though variations are seen here at 1680×1050, CPU differences begin to matter less as the resolution climbs. At that point, the game becomes far more GPU-bound than CPU-bound, which is quite interesting in itself.


Gaming: Half-Life 2, Unreal Tournament III


Half-Life 2: Episode Two

If there is one game in our line-up that most everyone has played at some point, it would be Half-Life 2. The most recent release is Episode Two, a game that took far too long to see the light of day. But despite that, it proved to be worth the wait as it delivered more of what fans loved.

We are using the Silo level for our testing, a level even most people who haven’t played the game know about, thanks to Valve’s inclusion of it in the Episode Two trailers during the year before its release. During our gameplay, we shoot down a total of three Striders (their locations are identical with each run, since we are running from a saved game file) and a barn is blown to smithereens.

Overall it’s a great level, but the Striders’ minions can prove a pain in the rear at times, most notably when they headbutt you. Nothing a little flying log won’t solve, however! This level’s graphics consist mostly of open fields and trees, although there are a few explosions in the process as well, such as when you blow the Striders apart with the help of the Magnusson Device.

Settings: High graphics settings are used throughout all three resolutions, with 4x AA and 8x AF.

There is a definite pattern going on here. At lower resolutions, CPUs tend to matter more, but for high-end gamers, there is less of a difference. Interesting how that works, eh?

Unreal Tournament III

The Unreal series has always been one that’s pushed graphics to the next level. Surprisingly, though, as the graphics improve, the game still remains playable on a reasonable machine, with good FPS. How often is that the case?

“Gateway” is our level of choice for a few different reasons. The first and most notable is that it’s a great level, chock-full of eye candy. It consists of three different areas that can be accessed through portals, or “gateways”. The area we begin in is a snow-filled wonderland, similar to Lost Planet’s winter levels, with a futuristic city and a waterfall area also accessible.

Settings: All in-game settings are maxed out, with physics and smooth frame rate disabled.

Once again, differences are indeed seen here, but they are so minute it almost gives me a headache. UT III is one game where the CPU really doesn’t matter, which is actually quite reassuring.


Linux: GCC, Archiving, Image Suite

GCC Compiler

When thinking about faster processors or processors with more cores, multi-media projects immediately come to mind as being the prime targets for having the greatest benefit. However, anyone who regularly uses Linux knows that a faster processor can greatly improve application compiling with GCC. Programmers themselves would see the greatest benefit here, but end-users who find themselves compiling large applications often would also reap the rewards.

Even if you don’t use Linux, the results found here can benefit all programmers, as long as your builds run jobs in parallel. GCC works happily alongside make’s parallel jobs, so the results found here should represent the average increase you would see in similar scenarios.

For our testing, we are using Gentoo 2007.0 with the 2.6.24-r3 Gentoo-patched kernel. The system is command-line-based, with no desktop environment installed, which helps keep background processes to an absolute minimum.

Our target is a copy of Wine 0.9.59 (with fontforge support), compiled with GCC 4.1.2. For single-core testing, “time make” was used, while dual- and quad-core compilations used “time make -j 3” and “time make -j 5”, respectively, as sketched below.
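For reference, here is a sketch of those timed runs from the shell, assuming the Wine 0.9.59 source tree has already been configured. The -j flag sets the number of parallel make jobs, conventionally cores plus one:

# single-core baseline
time make
# dual-core run: two cores plus one spare job
time make -j 3
# quad-core run: four cores plus one spare job
time make -j 5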

45nm benefits aren’t just for Windows users, as seen with our two Quad-Cores at 3.0GHz, but frequency is definitely the main factor here.

Image Suite

Even though multi-core processors are not new, it’s tricky to find a photo application that handles them properly. Lightroom is one, Photoshop is another. Given how difficult it is to write scripts for the more popular image manipulation applications, we are going to test the single-core performance of ImageMagick and UFRaw, two command-line-based applications for Linux.

ImageMagick is a popular choice for those who run websites, as it does one thing well: altering images on the fly. Many websites and forums use ImageMagick in the background, which is why its performance is included here. UFRaw, on the other hand, is strictly a RAW manipulation tool that includes both a command-line and a GUI version. The command-line version is ideal for converting many images at a time, which is why we use it here.

For our test, our script first calls on UFRaw to convert 100 10-megapixel .NEF camera files, using our settings, to JPEGs 1000×669 in resolution. ImageMagick is then called up to watermark all 100 new JPEGs and also to create thumbnails of each. This entire process is similar to how we convert and watermark our own photos here. An example snippet is below.


ufraw-batch --exif --wb=auto --exposure=0.60 --size=1000,670 --gamma=0.40 --linearity=0.04 --compression=90 --out-type=jpeg --out-path=../files/ *.nef;
composite -gravity SouthEast -geometry 254x55+3+3 whitewatermark.png 001.jpg ~/Output/001.jpg;
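The thumbnail step isn’t shown in the snippet; a minimal sketch of how it might look with ImageMagick’s convert (the output path and the 200-pixel width are illustrative guesses, not our actual settings):

# create a 200px-wide thumbnail, with height scaled to match
convert ~/Output/001.jpg -thumbnail 200x ~/Output/thumbs/001.jpg;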

Nothing too surprising here; overall, it scales much as we expected.

Tar Archiving

To help expand our Linux performance testing, we are now including Tar as a benchmark. For the test, we take a 4GB folder with numerous files within and compress it.

Because both GZip and BZip2 are popular solutions for Linux users, we are using both in our tests. Default options are used for each compressor, with the simple syntax: tar z/jcf Archive.tar Archive/.
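Spelled out, the two timed runs look like this (a sketch, with conventional file extensions added; Archive/ stands in for our 4GB test folder):

# GZip compression
time tar zcf Archive.tar.gz Archive/
# BZip2 compression
time tar jcf Archive.tar.bz2 Archive/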

Is it just me, or is GZip slow? Regardless, Tar compression lives by a simple concept: the faster the CPU, the faster the process. If only multi-threading were possible!


Power Consumption, Temperatures

Before we delve into these two tests, we have to admit that these results are in no way definitive. We conduct them with rather simple means, and variances beyond what our equipment can detect might occur. That said, we still consider the results reliable, even if, for various reasons, they won’t be 100% spot-on.

To test for power consumption, we use the trusty Kill-A-Watt monitor. For a reminder of our machine specs, please refer to our testing methodology page. Our Kill-A-Watt is plugged directly into the wall, and our PC directly into it. Nothing else shares the socket in order to keep fluctuations to a minimum.

For our idle results, we boot up the machine and leave it at the desktop for five minutes and grab the result that’s on the Kill-A-Watt. If for some reason the wattage is fluctuating, we’ll wait until it stabilizes. Vista can sometimes run random processes which can make it difficult to get a reliable result. For our load result, we run Cinebench R10 and grab the highest average wattage during the peak of the benchmark.

The Q9450 is faster than the Q6600, but uses less power. That’s what we like to see.

To gather our temperatures, we use Everest 4.0 and record for five minutes after boot, while the computer sits idle. Afterwards, we load up enough instances of SP2004 to keep the CPU at 100% usage and again record using Everest for 15 minutes. After that, we grab both the idle and load average temperature. Please note that we grab the temperature for the entire CPU, not an individual core.

As I’ve mentioned in the past, there is no truly reliable way to capture the real temperature of the processor, although “Real Temp” might be the best application for finding the most accurate reading. The absolute best way would be to use a hardware temperature diode right at the die, but that’s impossible (unless of course you don’t want to use a CPU cooler).

Again, similar to our power consumption tests, the faster Q9450 runs cooler than the last-gen Q6600, at both idle and load.


Overclocking the Q9450

Whenever I take a Quad-Core for an overclocking spin, I’m unsure what to expect. Dual-Cores are far easier to overclock, and even more so to overclock stably. Quad-Cores are a little more difficult, since it’s rare to find a chip with four identical cores that scale exactly the same. If three cores happen to overclock 50% without issue but the fourth doesn’t, the chip will not be stable at full load with that 50% overclock in place.

Before I jump into my results with the Q9450, however, I need to reiterate what we consider a stable overclock. For our more serious overclocks (meaning overclocks we ourselves would run 24/7), we put the chip through 8 hours of SP2004 torture, if time permits. For moderate overclocks, or those in the middle of our testing, we stress the CPU for at least an hour.

If that passes, I run a loop of 3DMark 06 at least three times, and after that, I’ll hop into a quick round of HL2 to see if it retains stability. Half-Life 2 tends to be a great stability test for overclocking, because if anything is wrong, it will lock up at various points during gameplay.

One thing I found out quickly with the Q9450 is that it loves voltage, but even without a voltage increase, we were still able to achieve some great-looking overclocks. How does 3.2GHz stable sound?
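Since the Q9450’s multiplier is locked at 8x, every overclock here comes from raising the stock 333MHz front-side bus, so the arithmetic is simple. The FSB figures below are the ones implied by our final clock speeds:

333MHz FSB x 8 = 2.66GHz (stock)
400MHz FSB x 8 = 3.20GHz
430MHz FSB x 8 = 3.44GHz
455MHz FSB x 8 = 3.64GHz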



3.20GHz – Stock Voltages (1.30v CPU)

Next up is an overclock that proves my voltage theory. Bumping the voltage up a mere 0.025v allowed me to increase the frequency to 3.37GHz while retaining stability.



3.37GHz – 1.325v CPU

Up another 0.025v, and we hit 3.44GHz. I should reiterate that these overclocks really did scale this way: without that extra 0.025v, the chip would fail within five minutes of running SP2004. It’s one picky chip.



3.44GHz – 1.35v CPU

To go even further, though, I needed to be a little more generous with the voltage. At 1.40v, 3.52GHz could be attained. Without excellent cooling, I don’t recommend pushing 1.40v through your CPU on a 24/7 machine.



3.52GHz – 1.40v CPU

The top overclock reached was 3.64GHz at 1.475v. Again, I don’t recommend this setting for regular use, as it’s rather high, regardless of the cooling.



3.64GHz – 1.475v CPU

After this point, there were no real gains to be had, at least in the way of stability. Higher overclocks could be reached, but none remained stable for long. I went as high as 1.55v and even bumped up the northbridge and other voltages, but nothing would push this CPU further.

But there’s still no reason to complain. Without raising the voltage at all, 3.2GHz was stable, and bumping it up to a still very reasonable 1.35v gave us another 240MHz. It’s always a good feeling to push a $350 CPU to $1,000+ CPU heights with ease.


Final Thoughts

As we expected, the Q9450 is a fantastic processor all-around. It offers a nice clock speed, improved characteristics over the previous 65nm Quad-Cores and is priced right. The problem, of course, is that despite being out for over a month, it’s a chore to find one in stock, anywhere.

If you’re looking for a new Quad-Core right now, it’s hard to outright recommend this one, because not one e-tailer I checked at press time had it in stock. The Q9300, on the other hand, can be found all over the place.

Where that battle is concerned, the Q9300 is still a great choice if the Q9450 cannot be found. It’s a tad slower, at 2.50GHz, and has its L2 cache halved, but it’s still a worthy upgrade over the previous 65nm generation of processors.

But, assuming that the Q9450s will come back into stock soon, I recommend it highly. As it stands, it’s a great offering for the price, especially considering that it’s far less expensive than the QX9650, but is only 340MHz slower. That’s even without taking overclocking into consideration.

Where that’s concerned, there are no complaints to be had, either. As seen on the last page, 3.2GHz (QX9770 territory) could be achieved by increasing the FSB to 400MHz. On our particular motherboard, that didn’t even require a voltage increase on the northbridge. The CPU didn’t need a boost either, so it’s a completely free overclock all around. Pick up some DDR3-1600 to match that 3.2GHz, and you’ve got one fast machine (understatement).

Even further though, bumping the voltage up to a modest 1.35v allowed us to hit 3.44GHz stable, and even that didn’t require a northbridge increase. In fact, 1.35v is a voltage a lot of people seem to use even at stock speeds, since it’s still within Intel’s warranty limits. A 780MHz boost free of charge on a Quad-Core? That sounds like $350 well spent.

Overall, this is a great processor and one I recommend to anyone willing to spend $350 for their upgrade. The downside is still the fact that it’s so difficult to find in stock, but here’s to hoping that will soon change.

Discuss in our forums!

If you have a comment you wish to make on this review, feel free to head on into our forums! There is no need to register in order to reply to such threads.
