Date: October 9, 2009
Author(s): Rob Williams
Last month, AMD became the first company to bring a $99 quad-core processor to market: the Athlon II X4 620. The question, of course, is whether or not it delivers. At 2.60GHz, it looks to offer ample performance, but the lack of an L3 cache is sure to show in some of our tests. Luckily, the chip's overclocking ability helps negate that issue.
In the fall of 2006, Intel launched its Core 2 Extreme QX6700 processor, becoming the first company to deliver a consumer-targeted quad-core. Sure, the chip retailed for $999, and it was built using a "non-native" design, but it wasn't until almost a full year later that AMD launched a quad-core of its own to compete. While Intel rightfully holds onto its bragging rights from that fall, AMD, as of this month, has its own accomplishment to boast about.
With its Athlon II X4 620, AMD becomes the first company to release a desktop quad-core for under $100… $99 to be precise. Think about that for a moment. Less than three years ago, we saw the first-ever desktop quad-core for $999, and today, we have one for $99. Isn't the rapid progress of technology grand?
So… a $99 quad-core. Who’s it designed for? That’s the same question I asked myself when I first learned of the chip, and even now, I’m not quite sure who to recommend it to. To me, it seems like the X4 620 comes off as being a product with two contradicting goals. On one hand, it wants to offer great multi-threading capabilities, while on the other, it wants to be a budget offering, and it is just that, thanks to its lack of L3 cache and modest clock speed.
When I think of a multi-core processor, namely a quad or higher, I picture 3D rendering, video creation, scientific computing and more. But those scenarios don’t only benefit from a quad-core, but a fast quad-core. With the lack of any L3 cache, a lot of the high-performance multi-media goals are going to be thrown out the window. So if not for that, could the X4 620 be designed for mainstream-performance multi-tasking? It seems so.
The X4 620 might be a budget offering, but don't worry, it's not based on ancient technology. Rather, it uses the Propus core, which is based on Deneb. The primary difference is the total lack of an L3 cache. It seems minor, but Phenoms make heavy use of that cache in many different scenarios, so removing it entirely may strip away some of the lustre that a $99 quad-core should seemingly have.
Of course, it's a little too early to be drawing conclusions now, so instead of doing that, let's take a quick look at AMD's current processor line-up, consisting mostly of Phenom IIs and Athlon IIs:
AMD Phenom II X4 965 BE
AMD Phenom II X4 955 BE
AMD Phenom II X4 945
AMD Phenom II X4 905e
AMD Phenom II X3 720 BE
AMD Phenom II X3 705e
AMD Phenom X4 9650
AMD Phenom II X2 550
AMD Athlon II X4 630
AMD Athlon II X4 620
AMD Athlon II X2 250
AMD Athlon II X2 245
AMD Athlon II X2 240
As you might notice, the X4 620 isn’t the only Athlon II quad-core. Rather, AMD has also launched an X4 630, which bumps the clock speed by 200MHz. This comes at a premium of $23, however, and without jumping too far ahead in this article, I can confidently say that picking up the X4 620 and overclocking it to 2.80GHz is more than feasible (just wait until you see what we managed for a stable overclock!).
The current CPU-Z release doesn't have the proper Athlon II X4 logo hard-coded in, so we're given a generic AMD logo in its place. At stock speeds, you'll notice that the CPU voltage is higher than you'd expect, at 1.375v. It seems a wee bit high, but in all of our tests, the temperatures were well within reason (<50°C at full load). As this is a non-Black Edition chip, our 13x multiplier is locked. Again though, there's no reason for alarm, as that's not going to hold back this chip's overclocking ability.
Although the “Package” section in CPU-Z for AMD’s current CPUs tends to be wrong most of the time (it almost always says AM2+), it’s correct in this instance. The X4 620 will work in both AM2+ and AM3 motherboards, so if you want to save as much money as possible, picking up an older board is definitely an option (just make certain that the board has a recent BIOS capable of handling the “unknown” CPU).
At $99, Intel has nothing in the quad-core department to compete with. Its Core 2 Quad Q8200 retails for around $149, although it should perform much better than the X4 620. On the company's dual-core side, there's still not much to compete with, although the Core 2 Duo E7400 comes very close, at around $110. Will AMD's unique market position with its $99 quad make it a winner amongst "high-performance" budget computing? Let's find out… right after we take a look at our test system and methodology.
At Techgage, we strive to make sure our results are as accurate as possible. Our testing is rigorous and time-consuming, but we feel the effort is worth it. In an attempt to leave no question unanswered, this page contains not only our testbed specifications, but also a fully-detailed look at how we conduct our testing. For an exhaustive look at our methodologies, even down to the Windows Vista installation, please refer to this article.
The table below lists our testing machine's hardware, which remains unchanged throughout all of our CPU testing, aside from the processor itself. Each processor used for comparison is also listed here. Each one of the URLs in this table can be clicked to view the respective review of that product, or if a review doesn't exist, it will bring you to the product on the manufacturer's website.
Please note that for the particular CPU we’re looking at today, we’re not using the below-listed Gigabyte AM3 board, but rather the ASUS M4A785TD-M EVO. This was at AMD’s request, as this 785G mATX board is a perfect match for the new budget quad-core. It currently retails for $95, and is very feature-rich, so I can’t disagree.
AMD AM2+/AM3 Test System
Gigabyte MA790GP-DS4H – 790GX-based, F3 BIOS (01/13/09)
Corsair XMS3 DHX 2x2GB – DDR2-1066 5-5-5-15-2T, 2.10v
Intel LGA1156 Test System
Intel Core i7-870 – Quad-Core, 2.93GHz, ~1.25v
Intel Core i5-750 – Quad-Core, 2.66GHz, ~1.25v
Gigabyte P55-UD5 – P55-based, F3 BIOS (08/01/09)
Corsair XMS3 DHX 2x2GB – DDR3-1333 7-7-7-20-2T, 1.65v
ATI Radeon HD 4870 512MB (Catalyst 8.11)
Intel LGA1366 Test System
ASUS Rampage II Extreme – X58-based, 0705 BIOS (11/21/08)
Intel LGA775 Test System
ASUS Rampage Extreme – X48-based, 0501 BIOS (08/28/08)
Intel Core 2 Quad Q9650 – Quad-Core, 3.00GHz, 1.30v (Sim)
Intel Core 2 Quad Q9550 – Quad-Core, 2.83GHz, 1.30v (Sim)
Intel Core 2 Quad Q9400 – Quad-Core, 2.66GHz, 1.30v
Intel Core 2 Quad Q8200 – Quad-Core, 2.33GHz, 1.30v
Intel Core 2 Duo E8600 – Dual-Core, 3.33GHz, 1.30v
Intel Core 2 Duo E8500 – Dual-Core, 3.16GHz, 1.30v (Sim)
Intel Core 2 Duo E8400 – Dual-Core, 3.00GHz, 1.30v
Intel Pentium Dual-Core E5200 – Dual-Core, 2.50GHz, 1.30v
(Sim) represents models that were tested using a faster, but under-clocked processor. For example, for the Q9550, we used the QX9770, since the specs are identical all-around, except for the clock speeds. Those were adjusted appropriately, effectively giving us a Q9550 to test with.
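The math behind a "(Sim)" model is straightforward, since a CPU's core clock is just its multiplier times the bus clock. Here's a quick illustrative sketch of that arithmetic; the 1333MT/s quad-pumped bus figure is standard for these Core 2 chips, though the exact BIOS settings we used aren't reproduced here.

```python
# Sketch of the arithmetic behind a "(Sim)" processor: an unlocked,
# higher-end chip is dialed down to a slower model's exact clock.

def effective_clock_mhz(multiplier: float, fsb_mhz: float) -> float:
    """Core clock is simply the CPU multiplier times the bus clock."""
    return multiplier * fsb_mhz

FSB = 1333 / 4  # ~333MHz base clock (the 1333MT/s bus is quad-pumped)

# A Q9550 runs an 8.5x multiplier on that bus:
q9550 = effective_clock_mhz(8.5, FSB)
print(f"Simulated Q9550: {q9550:.0f}MHz")  # ~2833MHz, i.e. 2.83GHz
```

Since the unlocked chip's cache and architecture match the target model, dialing in the right multiplier and bus speed effectively recreates it.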
When preparing our testbeds for any type of performance testing, we follow these guidelines:
To aid in keeping results accurate and repeatable, we prevent certain services in Windows Vista from starting up at boot. These services have a tendency to start up in the background without notice, potentially causing slightly inaccurate results. Disabling "Windows Search", for example, turns off the OS' indexing, which can at times utilize the hard drive and memory more than we'd like.
To help test out the real performance benefits of a given processor, we run a large collection of both real-world and synthetic benchmarks, including 3ds Max, Adobe Lightroom, TMPGEnc Xpress, Sandra 2009 and many more.
Our ultimate goal is always to find out which processor excels in a given scenario and why. Running all of the applications in our carefully-chosen suite can help better give us answers to those questions. Aside from application data, we also run two common games to see how performance scales there, including Call of Duty 4 and Half-Life 2: Episode Two.
In an attempt to offer “real-world” results, we do not utilize timedemos in any of our reviews. Each game in our test suite is benchmarked manually, with the minimum and average frames-per-second (FPS) captured with the help of FRAPS 2.9.5.
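For the curious, the minimum and average figures fall out of the captured frame times in a simple way. This is only an illustrative sketch of the arithmetic, not FRAPS' actual implementation:

```python
# Illustrative sketch (not FRAPS itself): deriving average and minimum
# FPS from a list of per-frame render times, in milliseconds.

def fps_stats(frame_times_ms):
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s   # frames over elapsed time
    min_fps = 1000.0 / max(frame_times_ms)    # slowest single frame
    return avg_fps, min_fps

# Three frames taking 10ms, 20ms and 40ms:
avg, low = fps_stats([10, 20, 40])
print(f"avg {avg:.1f} FPS, min {low:.1f} FPS")  # avg 42.9 FPS, min 25.0 FPS
```

The minimum matters because a single long frame reads as a visible stutter, even when the average looks healthy.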
To deliver the best overall results, each title we use is exhaustively explored in order to find the best possible level in terms of intensiveness and replayability. Once a level is chosen, we play through repeatedly to find the best possible route and then in our official benchmarking, we stick to that route as close as possible. Since we are not robots and the game can throw in minor twists with each run, no run can be identical to the pixel.
Each game and setting combination is tested twice, and if there is a discrepancy between the initial results, the testing is repeated until we see results we are confident with.
The two games we currently use for our processor reviews are listed below, with direct screenshots of each game's settings screens and explanations of why we chose what we did.
Autodesk’s 3ds Max is without question an industry standard when it comes to 3D modeling and animation, with DreamWorks, BioWare and Blizzard Entertainment being a few of its notable users. It’s a multi-threaded application that’s designed to be right at home on multi-core and multi-processor workstations or render farms, so it easily tasks even the biggest system we can currently throw at it.
For our testing, we use two project files that are designed to last long enough to expose any weakness in our setup, and that also allow us to find a result that's easily comparable between both motherboards and processors. The first project is a dog model included on recent 3ds Max DVDs, which we infused with some Techgage flavor.
Our second project is a Bathroom scene that makes heavy use of ray tracing. Like the dog model, this one is also included on the application's sample files DVD. The dog is rendered at 1100×825, while the Bathroom is rendered at 1080p (1920×1080).
What a nice way to kick things off… with AMD's $99 quad coming out ahead of Intel's $150 quad. It's also interesting to note that the X4 620 pulled ahead of the X3 720 Black Edition as well. This might be a recurring theme throughout the review, as although the X3 720 has a faster clock speed, it lacks a fourth core.
Like 3ds Max, Cinema 4D is another popular cross-platform 3D graphics application, used by newcomers and experts alike. Its creator, Maxon, is well aware that its users are interested in huge computers to speed up rendering times, which is one reason why it released Cinebench to the public.
Cinebench R10 is based on the Cinema 4D engine and the test consists of rendering a high-resolution model of a motorcycle and gives a score at the end. Like most other 3D applications on the market, Cinebench will take advantage of as many cores as you can throw at it.
Compared to the Core 2 Quad Q8200, the Athlon II X4 620’s dominance didn’t last too long. But, it’s hard to discredit the lowly quad in this case. It proved just 2% slower, but costs 33% less.
Similar to Cinebench, the "Persistence of Vision Ray Tracer" is, as you'd expect, a ray tracing application that also happens to be cross-platform. It allows you to take your environment and models and apply a ray tracing algorithm, based on a script you either write yourself or borrow from others. It's a free application that has become a standard in the ray tracing community, and some of the results it can produce are completely mind-blowing.
The official version of POV-Ray is 3.6, but the 3.7 beta unlocks the ability to take full advantage of a multi-core processor, which is why we use it in our testing. Applying ray tracing algorithms can be extremely system intensive, so this is one area where multi-core processors will be of true benefit.
For our test, we run the built-in benchmark, which delivers a simple score (pixels per second) at the end. The higher, the better, and the scale is linear: a score twice as high literally means the scene rendered twice as fast.
In the case of POV-Ray, raw frequency plays a huge role in overall performance, and that’s proven here when once again comparing the X4 620 to the Q8200. We’re off to a great start, so let’s move right into our other multi-media tests, including Adobe Lightroom and TMPGEnc Xpress.
Photo manipulation benchmarks are more relevant than ever, given the proliferation of high-end digital photography hardware. For this benchmark, we test the system’s handling of RAW photo data using Adobe Lightroom, an excellent RAW photo editor and organizer that’s easy to use and looks fantastic.
For our testing, we take 100 RAW files (in Nikon’s .NEF file format) which have a 10-megapixel resolution, and export them as JPEG files in 1000×669 resolution, similar to most of the photos we use here on the website. Such a result could also be easily distributed online or saved as a low-resolution backup. This test involves not only scaling of the image itself, but encoding in a different image format. The test is timed indirectly using a stopwatch, and times are accurate to within +/- 0.25 seconds.
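Since the timing here is done by hand, the reported figures are only as fine-grained as the stopwatch allows. A small sketch of that idea, with `quantize_to_quarter_second` being a hypothetical helper of ours, not anything in Lightroom:

```python
import time

# Sketch of our indirect timing approach: time the whole batch, then
# report to the nearest quarter-second, matching the +/- 0.25s accuracy
# a hand-operated stopwatch gives us.

def quantize_to_quarter_second(elapsed_s: float) -> float:
    return round(elapsed_s / 0.25) * 0.25

start = time.perf_counter()
# ... batch-export the 100 RAW files to 1000x669 JPEGs here ...
elapsed = time.perf_counter() - start

print(quantize_to_quarter_second(78.37))  # a 78.37s run reports as 78.25
```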
With the above chart, it becomes fairly clear that the 3D rendering tests on the previous page don’t make much use of an L3 cache, because as soon as a scenario is introduced which can, the results begin to work against the X4 620. But to be fair, nothing else comes close to the price of the X4 620, either (the Pentium E5200 is about $60 and is our only other sub-$100 chip).
When it comes to video transcoding, one of the best offerings on the market is TMPGEnc Xpress. Although a bit pricey, the software offers an incredible amount of flexibility and customization, not to mention superb format support. From the get go, you can output to DivX, DVD, Video-CD, Super Video-CD, HDV, QuickTime, MPEG, and more. It even goes as far as to include support for Blu-ray video!
There are a few reasons why we choose to use TMPGEnc for our tests. The first relates to the reasons laid out above. The sheer ease of use and flexibility is appreciated. Beyond that, the application does us a huge favor by tracking the encoding time, so that we can actually look away while an encode is taking place and not be afraid that we’ll miss the final encoding time. Believe it or not, not all transcoding applications work like this.
For our test, we take a 0.99GB high-quality DivX H.264 AVI video of Half-Life 2: Episode Two gameplay with stereo audio and transcode it to the same resolution of 720p (1280×720), but lower the bit rate in order to attain a modest file size. This test also utilizes the SSE instruction sets, either SSE2 or SSE4, depending on what the chip supports.
The lack of L3 cache becomes apparent again, as both the X4 620 and X3 720 just about come out even in their performance. The triple-core chip still comes out on top, though, thanks in part to its higher clock speed.
While TMPGEnc Xpress' purpose is to convert video formats, ProShow from Photodex helps turn your collection of photos into a fantastic-looking slide show. I can't call myself a slide show buff, but this tool is about as definitive as they come. It offers many editing options and can export to a variety of formats, including a standard video file, DVD video and even HD video.
Like TMPGEnc and many other video encoders, ProShow can take full advantage of a multi-core processor. It doesn't support SSE4, however; hopefully it will in the future, as that would improve encoding times considerably. Still, when a slide show application handles a multi-core processor effectively, it has to make you wonder why there's such a delay in seeing a wider range of such applications on the market.
Once again, the X4 620 topples the X3 720. In some cases, extra cores are put to better use than L3 cache or faster frequencies. It's hard to tell without testing, though, and it can vary between applications.
This test stresses the CPU's ability to handle multi-media instructions and data, using both MMX and SSE2/3/4 as the instruction sets of choice. The results are divided by integer, floating point and double precision, three specific numbering formats used commonly in multi-media work.
Excuse the lack of highlighting for our tested model in the graph above… the graph already used green, and I didn't want to overdo it! The result here is interesting. In the Int x16 test, the X4 620 performs quite well, likely thanks to its four cores, but it falls short in the other two tests. Overall, though, Sandra puts the X4 620 and Q8200 pretty much on par.
With each new processor launch, one thing that's bound to prove faster is mathematical computation, which, when all is said and done, plays a massive role in a lot of our computing today. The faster an equation can be completed, the faster a math-heavy process can finish.
Sandra includes applications designed to specifically test the mathematical performance of processors, with the main one being the arithmetic test.
Where raw arithmetic is concerned, Intel’s Core i family is king, with the lone exception of a non-HyperThreaded chip and the Whetstone computation. Our X4 620 performed quite well here, once again beating out the X3 720 and Q8200.
Crypto is a major part of computing, whether you know it or not, and certain processes can prove slower than others, depending on their algorithms. User passwords on your home PC are encrypted, as are user passwords on web servers (like in our forums). Past that, crypto is used in other areas as well, such as creating strong locks on files or assigning a hash to a particular file (like MD5).
In Sandra's Cryptography test, the results are output as MB/s, higher being better. Although this is somewhat of an odd metric to go by, generally speaking, the higher the number, the faster the CPU tears through the respective algorithm, which comes down to how fast a password is encrypted, decrypted, signed, et cetera.
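To make the MB/s metric concrete, here's a rough sketch of what such a score measures: how quickly the CPU can push data through a hash algorithm. Sandra's own test covers more algorithms (AES among them) and is far more rigorous; this is only illustrative.

```python
import hashlib
import time

# Rough illustration of a "MB/s" crypto score: megabytes of data pushed
# through a hash algorithm (SHA-256 here) per second of CPU time.

def hash_throughput_mb_s(algorithm: str = "sha256", mb: int = 64) -> float:
    data = b"\x00" * (1024 * 1024)   # a 1MB buffer
    h = hashlib.new(algorithm)
    start = time.perf_counter()
    for _ in range(mb):
        h.update(data)               # hash `mb` megabytes in total
    elapsed = time.perf_counter() - start
    return mb / elapsed              # megabytes hashed per second

print(f"SHA-256: {hash_throughput_mb_s():.0f} MB/s")
```

A faster CPU simply completes the same hashing work in less wall-clock time, which is exactly what the higher MB/s figure reflects.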
If you are huge into running security algorithms such as AES and SHA, then the X4 620 actually proves to be quite a bargain.
Most, if not all, businesses in existence have to crack open a spreadsheet at some point. Though simple in concept, spreadsheets are an ideal way to either track information or compute large calculations all in real-time. This is important when you run a business that deals with a large amount of expenses.
How fast a calculation completes in an Excel file may seem like a minor concern, but we include results here since they heavily test the mathematical capabilities of each processor. Because Excel 2007 is completely multi-threaded (it can even take advantage of an 8-core Skulltrail), it makes for a great benchmark to show the scaling between all of our CPUs.
I’ll let Intel explain the two files we use:
Monte Carlo – This workload calculates the European Put and Call option valuation for Black-Scholes option pricing using Monte Carlo simulation. It simulates the calculations performed when a spreadsheet with input parameters is updated and must recalculate the option valuation. In this scenario we execute approximately 300,000 iterations of Monte Carlo simulation. In addition, the workload uses Excel lookup functions to compare the put price from the model with the historical market price for 50,000 rows to understand the convergence. The input file is a 70.1 MB spreadsheet.
Calculations – This workload executes approximately 28,000 sets of calculations using the most common calculations and functions found in Excel*. These include common arithmetic operations like addition, subtraction, division, rounding and square root. It also includes common statistical analysis functions such as Max, Min, Median and Average. The calculations are performed after a spreadsheet with a large dataset is updated with new values and must re-calculate many data points. The input file is a 6.2 MB spreadsheet.
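For a sense of what the Monte Carlo workload above is actually computing, here's a minimal sketch of European call valuation under Black-Scholes assumptions via simulation. The parameters are illustrative, not Intel's actual spreadsheet inputs, and real pricing code would use variance-reduction tricks this sketch skips.

```python
import math
import random

# Minimal Monte Carlo pricing of a European call under Black-Scholes
# assumptions: simulate the terminal stock price many times, average
# the payoffs, and discount back to today.

def monte_carlo_call(s0, k, r, sigma, t, n_paths, seed=42):
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    payoff_sum = 0.0
    for _ in range(n_paths):
        s_t = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff_sum += max(s_t - k, 0.0)              # call payoff at expiry
    return math.exp(-r * t) * payoff_sum / n_paths   # discount to today

# At-the-money call: spot 100, strike 100, 5% rate, 20% vol, 1 year.
price = monte_carlo_call(100, 100, 0.05, 0.2, 1.0, n_paths=100_000)
print(f"Estimated call value: {price:.2f}")  # near the closed-form ~10.45
```

Each spreadsheet recalculation triggers hundreds of thousands of these independent draws, which is why the workload threads so well across cores.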
For many computational operations, raw frequency is very, very important, but so is having more cores. In the case of our Excel test, our X4 620 didn’t fare too well, despite the four cores. Even just the slightly higher frequency of the X3 720 made a huge difference here.
Generally speaking, the faster the processor, the higher the system-wide bandwidth and the lower the latency. As is always the case, faster is better when it comes to processors, as we’ll see below. But with Core i7, the game changes up a bit.
Whereas previous memory controllers utilized a dual-channel operation, Intel threw that out the window to introduce triple-channel, which we talked a lot about at August’s IDF. Further, since Intel integrates the IMC onto the die of the new CPUs, benefits are going to be seen all-around.
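A quick back-of-envelope shows why the third channel matters: each DDR3 channel is 64 bits (8 bytes) wide, so theoretical peak bandwidth is simply transfer rate times eight, times the channel count. (These are theoretical ceilings; real-world throughput always lands below them.)

```python
# Theoretical peak memory bandwidth: each DDR3 channel is 64 bits
# (8 bytes) wide, so peak MB/s = transfer rate (MT/s) x 8, per channel.

def peak_bandwidth_gb_s(mt_per_s: int, channels: int) -> float:
    return mt_per_s * 8 * channels / 1000.0   # MT/s -> MB/s -> GB/s

dual = peak_bandwidth_gb_s(1333, 2)    # DDR3-1333, dual-channel
triple = peak_bandwidth_gb_s(1333, 3)  # DDR3-1333, triple-channel (Core i7)
print(f"dual: {dual:.1f} GB/s, triple: {triple:.1f} GB/s")  # 21.3 vs 32.0
```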
Before jumping into the results, we already had an idea of what to expect, and just as we anticipated, the results are nothing short of staggering.
Not only does raw frequency affect memory bandwidth; a robust cache seems to as well. The X4 620 fell quite far behind the Phenom II chips here, in both bandwidth and latency.
How fast can one core swap data with another? It might not seem that important, but it definitely is if you are dealing with a true multi-threaded application. The faster data can be swapped around, the faster it’s going to be finished, so overall, inter-core speeds are important in every regard.
Even without looking at the data, we know that Core i7 is going to excel here, for a few different reasons. The main one is that this is Intel's first native quad-core. Rather than having two dual-core dies placed beside each other, i7 was built with four cores on a single die, and that in itself improves things. Past that, the ultra-fast QPI bus likely also has something to do with the speed increases.
Given the previous memory result, this one isn't too surprising. The X4 620 is the first recently-reviewed chip to fall under 3.0GB/s of multi-core bandwidth. Oddly enough, its core-to-core latency matches that of the faster X3 720.
While some popular game franchises are struggling to keep themselves healthy, Call of Duty doesn't have much to worry about. This is Treyarch's third go at a game in the series, and its first to be featured on the PC. Any worries leading up to this title were for naught, though, as Treyarch delivered on all promises.
To help keep things fresh, CoD: World at War focuses on battles not exhaustively explored in previous WWII-inspired games. These include battles which take place in the Pacific region, Russia and Berlin, and variety is definitely something this game pulls off well, so it's likely to keep you on your toes right up until the end.
For our testing, we use a level called “Relentless”, as it’s easily one of the most intensive levels in the game. It features tanks, a large forest environment and even a few explosions. This level depicts the Battle of Peleliu, where American soldiers advance to capture an airstrip from the Japanese. It’s a level that’s both exciting to play and one that can bring even high-end systems to their knees.
Luckily for hardcore CoD players, the game’s performance doesn’t change with a faster CPU, which is rather impressive. Here, the game ran just as well on our lowly E5200 as it did on our i7-975.
The original Half-Life 2 might have first seen the light of day close to four years ago, but it’s still arguably one of the greatest-looking games ever seen on the PC. Follow-up versions, including Episode One and Episode Two, do well to put the Source Engine upgrades to full use. While playing, it’s hard to believe that the game is based on a four+ year old engine, but it still looks great and runs well on almost any GPU purchased over the past few years.
Like Call of Duty 4, Half-Life 2: Episode Two runs well on modest hardware, but a recent mid-range graphics card is recommended if you wish to play at higher than 1680×1050 or would like to top out the available options, including anti-aliasing and very high texture settings.
This game benefits from both the CPU and GPU, and the sky's the limit. In order to fully max out the available settings and run the highest resolution possible, you need a very fast GPU or GPUs along with a fast processor. Though the in-game options go much higher, we run our tests with 4xAA and 8xAF to allow the game to remain playable on smaller mid-range cards.
Unlike CoD, HL2: Episode Two does love extra CPU power, and that's evidenced above, but only at 1680×1050, a resolution that has proven to be a chore because the average FPS can fluctuate a great deal. What's important to note here is that at our top setting of 2560×1600, the differences are almost zero.
For the X4 620, the 2560×1600 result was expected, but the 1680×1050 was not. I’m willing to bet that most of this is the fault of the game, however, as it doesn’t quite add up, and the game is rather problematic (I had similar score issues with our Lynnfield article last month).
As PC enthusiasts, we tend to be drawn to games that offer spectacular graphics… titles that help reaffirm your belief that shelling out lots of cash for that high-end monitor and PC was well worth it. But it’s rare when a game comes along that is so visually-demanding, it’s unable to run fully maxed out on even the highest-end systems on the market. In the case of the original Crysis, it’s easy to see that’s what Crytek was going for.
Funny enough, even though Crysis was released close to two years ago, the game today still has difficulty running at 2560×1600 with full detail settings – and that's without even factoring in anti-aliasing! Luckily, Warhead is better optimized and will run smoother on almost any GPU, despite looking just as gorgeous as its predecessor, as you can see in the screenshot below.
The game includes four basic profiles to help you adjust the settings based on how good your system is. These include Entry, Mainstream, Gamer and Enthusiast – the latter of which is for the biggest of systems out there, unless you have a sweet graphics card and are only running 1680×1050. We run our tests at the Gamer setting as it’s very demanding on any current GPU and is a proper baseline of the level of detail that hardcore gamers would demand from the game.
Our previous games didn't show real differences between CPUs, and Crysis Warhead is no different. You can rest assured that no matter your PC, this game is going to run like molasses!
Although we generally shun automated gaming benchmarks, we do like to run at least one to see how our GPUs scale when used in a ‘timedemo’-type scenario. Futuremark’s 3DMark Vantage is without question the best such test on the market, and it’s a joy to use, and watch. The folks at Futuremark are experts in what they do, and they really know how to push that hardware of yours to its limit.
The company first started out as MadOnion and released a GPU-benchmarking tool called XLR8R, which was soon replaced with 3DMark 99. Since that time, we’ve seen seven different versions of the software, including two major updates (3DMark 99 Max, 3DMark 2001 SE). With each new release, the graphics get better, the capabilities get better and the sudden hit of ambition to get down and dirty with overclocking comes at you fast.
Similar to a real game, 3DMark Vantage offers many configuration options, although many (including us) prefer to stick to the profiles which include Performance, High and Extreme. Depending on which one you choose, the graphic options are tweaked accordingly, as well as the resolution. As you’d expect, the better the profile, the more intensive the test.
Performance is the stock mode that most use when benchmarking, but it only uses a resolution of 1280×1024, which isn’t representative of today’s gamers. Extreme is more appropriate, as it runs at 1920×1200 and does well to push any single or multi-GPU configuration currently on the market – and will do so for some time to come.
The results here are just as we expected: generally, the better the CPU, the higher the score. The overall 3DMark score doesn't vary much, however, as the benchmark doesn't weigh the CPU score that heavily, which, after taking a look at the three games we tested, is a good thing.
Before discussing results, let’s take a minute to briefly discuss what I consider to be a worthwhile overclock. As I’ve mentioned in past content, I’m not as interested in finding the highest overclock possible as much as I am interested in finding the highest stable overclock. To me, if an overclock crashes the computer after a few minutes of running a stress-test, it has little value except for competition.
How we declare an overclock stable is simple… we stress it as hard as possible for a certain period of time, both with CPU-related tests and also GPU-related, to conclude on what we’ll be confident is 100% stability throughout all possible computing scenarios.
For the sake of CPU stress-testing, we use IntelBurnTest, for reasons I've laid out in a recent forum thread. Compared to other popular CPU stress-testers, IBT's tests are far more gruelling; proof of that is the fact that it manages to heat the CPU up to 20°C hotter than competing applications, like SP2004. Also, despite its name, IntelBurnTest is just as effective on AMD processors. Generally, if the CPU survives the first half-hour of this stress, there's a good chance that it's mostly stable, but I strive for a 12-hour stress as long as time permits.
If the CPU stress passes without error, then GPU stress-testing begins, in order to assure a system-wide stable overclock. To test for this, 3DMark Vantage’s Extreme test is used, with the increased resolution of 2560×1600, looped nine times. If this passes, some time is dedicated to real-world game testing, to make sure that gaming is just as stable as it would be if the CPU were at stock. If both these CPU and GPU tests pass without issue, we can confidently declare a stable overclock.
After I finished benchmarking the X4 620, I contemplated whether I should even bother attempting to overclock the chip. The reason is simple: this isn't a chip designed for overclocking, and its target audience isn't likely to try. I also assumed that it simply wasn't going to overclock well, given the luck I've had in recent months with Phenom IIs (due to heat). But I decided it made sense to at least give it a try, because, who knows, right?
Boy, am I glad I decided to see what this puppy was made of! It exceeded my overclocking expectations by a large margin, I can honestly say, and while I was overclocking it, it felt like I was overclocking a recent Intel chip, because it was just that easy (no offense to AMD’s chips… I’ve just had horrible luck with them). I first cranked the chip up to 3.0GHz, and it was absolutely stable. So I decided to push it further… to 3.2GHz, then 3.3GHz, and sure enough… still stable.
When all was said and done, I got the X4 620 up to 3.53GHz, stable. That in itself is sweet, but even sweeter is the fact that this overclock required absolutely no user-managed voltage increase. As you can see in the below shot, the board itself increased the voltage by about 0.1v. I won't lie… that voltage is rather high, but given that the stock voltage is 1.375v and the overclocked voltage is 1.475v, it's not exactly a major jump.
I should also reiterate the fact that this overclock was achieved with the ASUS M4A785TD-M EVO mATX motherboard. Not a bad clock speed for $200 (board + chip), huh?
How does our overclock translate into real-world results?
AMD Athlon II X4 620 2.60GHz (Overclock: 3.53GHz)
[Benchmark graphs: Autodesk 3ds Max 2009; Adobe Lightroom 2.0 – Convert 100 RAW to JPEG; HD Video Encode; Mobile Video Encode; HD Video Encode; DVD Video Encode; Big Number Crunch]
The results really do speak for themselves. Our 35% increase to the processor’s clock speed resulted in a near-identical gain in almost all of our benchmarks, the few exceptions being with TMPGEnc Xpress. Considering that this overclock was “free”, in that it required no effort and is 100% stable, the performance here is incredible.
It should be noted that while I hit 3.53GHz, I quit while I was ahead, because I felt that to be an ideal clock speed given the effort. Any further, and I’m sure I’d have to start manually cranking the voltage to unsafe levels. Either way you look at it, this is a 35% overclock that took no effort to pull off, and it was done on a $95 motherboard, so chances are good that you’ll be able to achieve an equal, if not better, overclock in your own testing.
It goes without saying that power efficiency is at the forefront of many consumers’ minds today, and for good reason. Whether you are trying to save money or the environment – or both – it’s good to know just how much effort certain vendors are putting into their products to help them excel in this area. Both AMD and Intel have worked hard to develop efficient chips, and that’s evident with each new launch. The CPUs are getting faster, and use less power, and hopefully things will stay that way.
To help see what kind of wattage a given processor draws on average, we use a Kill-A-Watt that’s plugged into a power bar, which is in turn plugged into one of the wall sockets, with the test system plugged directly into the Kill-A-Watt. The monitor and other components are plugged into the other socket and are not connected to the Kill-A-Watt. For our system specifications, please refer to our methodology page.
To test, the computer is first booted up and left to sit at idle for five minutes, at which point the current wattage is recorded if stable. To test for full CPU load, IntelBurnTest is run with maximum memory stress for a total of five minutes. During that run, the highest wattage reached on the meter is captured and becomes our “Max Load”. For Core i7, we use eight instances of SP2004 instead of IntelBurnTest, as the latter is not yet fully compatible with the newer processors.
The X4 620 might not be the best performer out there, but it sure does a good job of beating all of our other quad-cores in the way of power consumption. 217W at max load! For a quad-core!
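To put that 217W figure in the money-saving terms mentioned above, here’s a quick back-of-the-envelope estimate. The usage pattern and electricity rate are hypothetical assumptions, not figures from our testing, so substitute your own:

```python
# Back-of-the-envelope electricity cost for the test system.
# Assumptions (not from the review): 4 hours/day at full load,
# $0.10 per kWh -- substitute your own usage and local rate.
MAX_LOAD_WATTS = 217      # measured full-load system draw
HOURS_PER_DAY = 4
RATE_PER_KWH = 0.10

kwh_per_year = MAX_LOAD_WATTS / 1000 * HOURS_PER_DAY * 365
cost_per_year = kwh_per_year * RATE_PER_KWH

print(f"~{kwh_per_year:.0f} kWh/year, roughly ${cost_per_year:.2f}/year")
```

Even a 20W – 30W difference between chips adds up over a year of heavy use, which is why the X4 620’s frugal draw is worth noting.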
Before getting into our overall conclusions, I must give kudos to AMD for being the first company out the door with a sub-$100 quad-core processor. Any way you look at it, reaching this point is impressive, because once again, quad-cores haven’t even been available for a full three years yet, so to have such an affordable offering is fantastic. Couple this chip with a sub-$100 motherboard like we did, and you have a sweet base to a great multi-tasking rig.
The overall results of the X4 620 are a mixed bag, and it’s hard to draw an accurate conclusion of just how great this $99 wonder is. In some of our tests, the chip surpassed the performance of the X3 720, which has a higher clock speed, and also Intel’s Q8200, which in general has a faster architecture and twice the amount of cache. But in others, it fell behind Intel’s 3.0GHz dual-core E8400. Granted, that chip has a faster clock speed, but the X4 620 has twice the cores.
What it all comes down to is the fact that the lack of L3 cache hurts, and it wouldn’t at all surprise me if AMD followed up soon with a “high-end” Athlon II X4 that included at least 3MB of L3 cache. Because as it stands now, AMD has a $99 quad-core, with the next step up being the $175 Phenom II X4 905e. It almost seems like we’re in need of something to fill that void, and a chip similar to what we tested today, with additional L3 cache, would seem to do a good job of that, without compromising the company’s Phenom II line-up.
From a value standpoint, the X4 620 is an incredible offering. It’s a freakin’ quad-core for under one-hundred bucks! Just don’t make the mistake of picking it up and expecting well-rounded performance across all of your applications and scenarios, because as we’ve seen, that’s just not going to happen. It excels in some cases, and falls behind in others, namely in gaming and also our multi-media tests, such as image manipulation and video encoding.
What makes the X4 620 even better is its overclockability. As we saw on the previous page, we gave our chip a clock speed boost of 35%, and it required absolutely minimal effort… a simple hike of the system bus speed. The board automatically increased the voltage, so that was one less factor we had to worry about. And once again, this was done on a $95 motherboard, not a $200 offering built for overclocking.
Hopefully by now you have a good idea of whether or not the X4 620 is for you. If you don’t often stress your CPU with a single task, but rather want the most out of multi-tasking, then the chip is a superb value. If you want the fastest performance for your multi-media jobs, then another chip with a faster clock speed, and additional cache, will make all the difference in the world.
Have a comment you wish to make on this article? Recommendations? Criticism? Feel free to head over to our related thread and put your words to our virtual paper! There is no requirement to register in order to respond to these threads, but it sure doesn’t hurt!
Copyright © 2005-2019 Techgage Networks Inc. - All Rights Reserved.