Date: February 2, 2011
Author(s): Rob Williams
AMD and NVIDIA each released a $250 GPU last week, and both proved to deliver a major punch for modest cash. After testing, we found AMD to have a slight edge in overall performance, so to see if things change when overclocking is brought into the picture, we pushed both cards hard, and then pitted the results against our usual suite.
Catering to those with appetites for a ~$250 graphics card, both AMD and NVIDIA released appropriate models last week. AMD was responsible for the less interesting of the two, simply knocking its Radeon HD 6950 2GB down to 1GB, while NVIDIA released what it hopes will become a legendary offering, the GeForce GTX 560 Ti.
Although we’ve posted articles taking a look at each card in some depth, one area we haven’t tackled is overclocking, so that’s the purpose of this article. When NVIDIA first briefed us on its GTX 560 Ti, it touted the major overclocking potential that the card offered. From AMD, “overclocking” wasn’t uttered even once.
In our review of NVIDIA’s card, we had to give props to AMD for having the slightly more appealing offering, because while it costs about $20 more, it proved to be a bit faster, and boasts improved efficiency overall. But with the GTX 560 Ti’s potential for overclocking, we thought we’d pit both cards against each other in that regard, to see if our opinions would change on which is better.
So, I sat down with both cards, installed the respective overclocking tools, and then got to work. For the AMD card, I used Sapphire’s TriXX, as it offers a great deal of flexibility. For the same reason, we chose MSI’s Afterburner for the NVIDIA card. Voltages were not touched during overclocking, as neither program would allow us that privilege.
To deem an overclock as “stable”, it must first pass 30 minutes of 3DMark 11’s Extreme test, followed by real gameplay through both Dirt 2 and Metro 2033. If we can’t force the card to crash via these means, we consider it stable.
First up, AMD’s Radeon HD 6950 1GB. Like its 2GB bigger brother, the 1GB model features core clock speeds of 800MHz and memory clocks of 1250MHz.
The overclock I managed to reach with this card, 950MHz core and 1300MHz memory, impressed me quite a bit. I haven’t had the best of luck overclocking AMD’s previous HD 6000 series cards, so I didn’t expect much here, but as you can see, this is no minor overclock… the core alone saw an 18% boost.
What about NVIDIA? Strangely enough, even though this was the card I had expected to reach unbelievable heights, it couldn’t. During our press briefing, the company floated the idea of overclocks as high as 970MHz, but we were unable to even get close. Instead, we had to settle for 930MHz core (from 822MHz), 1860MHz shader (from 1645MHz) and 1020MHz memory (from 1002MHz).
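For reference, the relative gains from both overclocks work out as follows. This is just a quick back-of-the-envelope sketch; `oc_gain` is a hypothetical helper for illustration, not part of either vendor’s tools:

```python
def oc_gain(stock_mhz: float, oc_mhz: float) -> float:
    """Return an overclock's gain over stock as a percentage."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

# Radeon HD 6950 1GB: 800 -> 950MHz core, 1250 -> 1300MHz memory
print(f"HD 6950 core:      {oc_gain(800, 950):.1f}%")    # 18.8%
print(f"HD 6950 memory:    {oc_gain(1250, 1300):.1f}%")  # 4.0%

# GeForce GTX 560 Ti: 822 -> 930MHz core, 1645 -> 1860MHz shader,
# 1002 -> 1020MHz memory
print(f"GTX 560 Ti core:   {oc_gain(822, 930):.1f}%")    # 13.1%
print(f"GTX 560 Ti shader: {oc_gain(1645, 1860):.1f}%")  # 13.1%
print(f"GTX 560 Ti memory: {oc_gain(1002, 1020):.1f}%")  # 1.8%
```

As the numbers show, AMD’s card gained noticeably more on the core than NVIDIA’s, while both cards saw only modest memory bumps.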
Overall, I’m not super-pleased with this overclock, as I was expecting much better. Even with a core clock of 940MHz (just 10MHz above our “stable” setting), Dirt 2 would artifact rather starkly during a single race, and the game crashed on me twice.
In looking around the Web, I’ve seen others achieve higher overclocks than this, some as high as 1,000MHz, which leads me to believe that most people should be able to reach a clock much higher than 930MHz. Unfortunately, I had only one GPU sample to test, so I’m hoping to get more in the near future to see if I can reach anything higher.
As much as I wish we could have pushed the clocks a bit higher, we were unable to, so what you see above is what we had to stick with. With that said, let’s take a look at the actual performance gains each card delivered throughout our usual suite.
At Techgage, we strive to make sure our results are as accurate as possible. Our testing is rigorous and time-consuming, but we feel the effort is worth it. In an attempt to leave no question unanswered, this page contains not only our testbed specifications, but also a detailed look at how we conduct our testing.
The below table lists our testing machine’s hardware, which remains unchanged throughout all GPU testing, minus the graphics card. Each card used for comparison is also listed here, along with the driver version used. Each one of the URLs in this table can be clicked to view the respective category on our site for that product.
Intel Core i7-975 Extreme Edition – Quad-Core @ 4.05GHz – 1.40v
Gigabyte GA-EX58-EXTREME – F13j BIOS (08/02/2010)
Corsair DOMINATOR – 12GB DDR3-1333 7-7-7-24-1T, 1.60v
Radeon HD 6970 2GB CrossFireX (Reference) – Catalyst 10.12 Beta
Radeon HD 6950 2GB CrossFireX (Reference) – Catalyst 10.12 Beta
Radeon HD 6970 2GB (Reference) – Catalyst 10.12 Beta
Radeon HD 6950 2GB (Reference) – Catalyst 11.1
Radeon HD 6950 1GB (Reference) – Catalyst 11.1
Radeon HD 6870 1GB (Reference CrossFireX) – Catalyst 10.10
Radeon HD 6850 1GB (Reference CrossFireX) – Catalyst 10.10
Radeon HD 6870 1GB (Reference) – Catalyst Oct 5, 2010 Beta
Radeon HD 6850 1GB (Reference) – Catalyst Oct 5, 2010 Beta
Radeon HD 5870 1GB (Sapphire) – Catalyst 10.8
Radeon HD 5850 1GB (ASUS) – Catalyst 10.8
Radeon HD 5830 1GB (Reference) – Catalyst 10.8
Radeon HD 5770 1GB (Sapphire FleX) – Catalyst 10.9
Radeon HD 5770 1GB (Reference) – Catalyst 10.8
Radeon HD 5750 1GB (Sapphire) – Catalyst 10.8
GeForce GTX 580 1536MB (Reference) – GeForce 262.99
GeForce GTX 570 1280MB (Reference) – GeForce 263.09
GeForce GTX 560 Ti 1024MB (Reference) – GeForce 266.56
GeForce GTX 480 1536MB (Reference) – GeForce 260.63
GeForce GTX 470 1280MB (EVGA) – GeForce 260.63
GeForce GTX 460 1GB (EVGA) – GeForce 260.63
GeForce GTS 450 1GB (ASUS) – GeForce 260.63
Gateway XHD3000 30″
When preparing our testbeds for any type of performance testing, we follow these guidelines:
To aid with the goal of keeping results accurate and repeatable, we prevent certain Windows 7 services from starting up at boot. These services have a tendency to start up in the background without notice, potentially causing inaccurate test results. For example, disabling “Windows Search” turns off the OS’ indexing, which can at times utilize the hard drive and memory more than we’d like.
The most important services we disable are:
The full list of Windows services we ensure are disabled is large, but for those interested in perusing it, please look here. Most of the services we disable are mild offenders, but we go to such lengths to keep the PC as highly optimized as possible.
At this time, we benchmark with three resolutions that represent three popular monitor sizes available today: 20″ (1680×1050), 24″ (1920×1080) and 30″ (2560×1600). Each of these resolutions offers enough variance in raw pixel output to warrant testing with it, and each properly represents a different market segment: mainstream, mid-range and high-end.
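To put that variance into numbers, here’s a quick sketch of the raw pixel counts involved; the `resolutions` mapping is purely illustrative:

```python
# Pixel counts for the three tested resolutions, illustrating the
# variance in raw pixel output between market segments.
resolutions = {
    "20-inch (1680x1050)": 1680 * 1050,  # mainstream
    "24-inch (1920x1080)": 1920 * 1080,  # mid-range
    "30-inch (2560x1600)": 2560 * 1600,  # high-end
}

for name, pixels in resolutions.items():
    print(f"{name}: {pixels:,} pixels")

# The 30-inch resolution pushes nearly double the pixels of the 24-inch one.
print(f"30-inch vs 24-inch: {(2560 * 1600) / (1920 * 1080):.2f}x")
```

That near-2x jump from 1920×1080 to 2560×1600 is why card standings can shuffle at the highest resolution.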
Because we value results generated by real-world testing, we don’t utilize timedemos. The possible exceptions are Futuremark’s 3DMark 11 and Unigine’s Heaven 2.1. Though neither of these is a game, both act as robust timedemos. We choose to use them as they’re a standard where GPU reviews are concerned.
All of our results are captured with the help of Beepa’s FRAPS 3.2.3, while stress-testing and temperature-monitoring is handled by OCCT 3.1.0 and GPU-Z, respectively.
For those interested in the exact settings we use for each game, direct screenshots can be seen below:
It’s not that often that faithful PC gamers get a proper racing game for their platform of choice, but Dirt 2 is one of them. While it is a “console port”, there’s virtually nothing in the game that makes that point stand out. The game as a whole takes good advantage of our PC’s hardware, and it’s as challenging as it is good-looking.
Manual Run-through: The race we chose to use in Dirt 2 is the first one available in the game, as it’s easily accessible and features a lot of GPU-pounding effects that the game has become known for, such as realistic dust and water effects, a large on-looking crowd of people and fine details on and off the track. Each run-through lasts the entire two laps, which comes out to about 2.5 minutes.
The results seen here scale identically to the results from the non-overclocked versions of these cards. NVIDIA’s GTX 560 Ti keeps ahead of AMD’s HD 6950 1GB at 1680×1050 and 1920×1080, but the tables turn at 2560×1600. With AMD’s overclock, the card reaches the same performance as the HD 6970, while NVIDIA stays a bit behind the GTX 570.
Just Cause 2 might not belong to a well-established series of games, but with its launch, it looks like that might not be the case for long. The game offers not only superb graphics, but an enormous world to explore, and for people like me, a countless number of hidden items to find around it. During the game, you’ll be scaling skyscrapers, racing through jungles and fighting atop snow-drenched mountains. What’s not to like?
Manual Run-through: The level chosen here is part of the second mission in the game, “Casino Bust”. Our run-through begins at the second half of the level, which requires us to situate ourselves on top of a car and have our driver, Karl Blaine, speed us through part of the island to safety. This is a great mission for benchmarking, as we get to see a lot of the landmass, even if some of it is at a distance.
With our top overclock, AMD’s HD 6950 1GB manages to surpass the performance of the HD 6970 2GB by just a smidgen, while for NVIDIA, the GTX 560 Ti performs quite nicely compared to the GTX 570.
For fans of the original Mafia game, having to wait an incredible eight years for a sequel must’ve been tough. But as we found out in our review, the wait might be forgotten, as the game is quite good. It doesn’t feature nearly as much depth as, say, Grand Theft Auto IV, but it does a masterful job of bringing you back to the 1940s and letting you experience the Mafia lifestyle.
Manual Run-through: Because this game doesn’t allow us to save a game in the middle of a level, we chose to use chapter 7, “In Loving Memory…”, to do our runthrough. That chapter begins us on a street corner with many people around, and from there, we run to our garage, get in our car, and speed out to the street. Our path ultimately leads us to the park, and takes close to two minutes to accomplish.
Both AMD’s and NVIDIA’s cards performed great here, with AMD’s still managing to surpass the more expensive HD 6970, and NVIDIA’s keeping comfortably close behind the GTX 570.
One of the more popular Internet memes of the past couple of years has been “Can it run Crysis?”, but as soon as Metro 2033 launched, that’s a meme that should have died. Metro 2033 is without question one of the beefiest games on the market, and though it supports DirectX 11, that’s almost a feature worth ignoring, because the lengths you’ll need to go to in order to see playable framerates aren’t likely to be worth it.
Manual Run-through: The level we use for testing is part of chapter 4, called “Child”, where we must follow a linear path through multiple corridors until we reach our end point, which takes a total of about 90 seconds. Please note that due to the reason mentioned above, we test this game in DX10 mode, as DX11 simply isn’t that realistic from a performance standpoint.
The larger frame buffer of the 2GB HD 6970 kept it ahead of our overclocked 1GB card in this game, while NVIDIA’s card stayed super close to the GTX 570 at all three resolutions.
Of all the games we test, it might be this one that needs no introduction. Back in 1998, Blizzard unleashed what was soon to be one of the most successful RTS titles on the planet, and even as of today, the original is still heavily played all around the world – even in actual competitions. StarCraft II of course had a lot of hype to live up to, and it did, thanks to its intense gameplay and superb graphics.
Manual Run-through: The portion of the game we use for testing is part of the Zero Hour mission, which has us holding fort until we’re able to evacuate. Our saved game starts us in the middle of the mission, and from the get-go, we build a couple of buildings and concurrently move our main units up and around the map. Total playtime lasts about two minutes.
The results here are quite divided between the two cards, but both still deliver superb performance.
Although we generally shun automated gaming benchmarks, we do like to run at least one to see how our GPUs scale in a “timedemo”-type scenario. Futuremark’s 3DMark 11 is without question the best such test on the market, and it’s a joy to use and watch. The folks at Futuremark are experts in what they do, and they really know how to push that hardware of yours to its limit.
Similar to a real game, 3DMark 11 offers many configuration options, although many users (including us) prefer to stick to the preset profiles, Performance and Extreme. Depending on which one you choose, the graphics options are tweaked accordingly, as is the resolution. As you’d expect, the higher the profile, the more intensive the test. The benchmark doesn’t natively support 2560×1600, so to benchmark at that resolution, we choose the Extreme profile and simply change the resolution.
According to 3DMark, the overclocked HD 6950 1GB is faster all-around than the HD 6970 2GB. For NVIDIA, the overclocked GTX 560 Ti doesn’t quite manage to reach the heights of the GTX 570, but it does push far ahead of its stock-clocked variant.
While Futuremark is a well-established name where PC benchmarking is concerned, Unigine is only beginning to gain exposure. The company’s main focus isn’t benchmarks, but rather its cross-platform game engine, which it licenses out to other developers, and also its own games, such as a gorgeous post-apocalyptic oil strategy game. The company’s benchmarks are simply a by-product of its game engine.
The biggest reason that the company’s “Heaven” benchmark grew in popularity rather quickly is that both AMD and NVIDIA promoted it for its heavy use of tessellation, a key DirectX 11 feature. Like 3DMark Vantage, the benchmark here is overkill by design, so results here aren’t going to directly correlate with real gameplay. Rather, they showcase which card models can better handle both DX11 and its GPU-bogging features.
Continuing the performance we’ve been seeing so far, both cards offer much improved performance over their stock variants, which is no surprise given our rather large clock bumps.
To test our graphics cards for both temperatures and power consumption, we utilize OCCT for the stress-testing, GPU-Z for the temperature monitoring, and a Kill-a-Watt for power monitoring. The Kill-a-Watt is plugged into its own socket, with only the PC connected to it.
As per our guidelines when benchmarking with Windows, once the room temperature is stable (and reasonable), the test machine is booted up and left to sit at the desktop until things are completely idle. Because we are running such a highly optimized PC, this normally takes only one or two minutes. Once things are good to go, the idle wattage is noted, GPU-Z is started up to begin monitoring card temperatures, and OCCT is set up to begin stress-testing.
To push the cards we test to their absolute limit, we use OCCT in full-screen 2560×1600 mode and allow it to run for 15 minutes, which includes a one-minute lull at the start and a four-minute lull at the end. After about five minutes, we begin monitoring our Kill-a-Watt to record the maximum wattage.
In the case of dual-GPU configurations, we measure the temperature of the top graphics card, as in our tests, it’s usually the one to get the hottest. This could depend on GPU cooler design, however.
Note: Due to power-related changes NVIDIA made with the GTX 580 & GTX 570, we couldn’t run OCCT on those GPUs. Instead, we had to use a run of the less-strenuous Heaven benchmark.
For whatever reason, our overclocked GTX 560 Ti was a bit more forgiving during temperature testing than the stock-clocked version was, as it reaches the top of the chart – a good thing in this case. AMD’s overclocked card experienced just a minor gain in heat.
Power-wise, NVIDIA’s card drew 21W more at load, and AMD’s drew 26W. Given the performance boosts seen, these bumps in power draw might not matter too much to you.
The sole purpose of this article was to see if the conclusions I had reached in both of our published articles on these cards could be swayed when overclocking was brought into the equation, but when all is said and done, the answer is simple… “no”.
At stock speeds, AMD’s card consistently gives us better performance than NVIDIA’s, and because we were able to overclock AMD’s card to such a great degree, it makes sense that it was again able to keep ahead of NVIDIA’s in the vast majority of cases.
There is, of course, a problem. The overclock we reached on NVIDIA’s card doesn’t quite compare to the overclocks we’ve been seeing around the Web; in fact, at 930MHz, I think we’ve been beaten out by everyone. I’ve even seen 1,000MHz+ clocks from sites that take stability as seriously as I do ([H]ard|OCP), so I feel confident in saying our sample is less-than-ideal from an overclocking standpoint.
The fact that we couldn’t reach a higher clock had nothing to do with voltages or the like, because as far as I’m aware, there are no tools out there currently that allow that adjustment. Out of the box, most cards should be far more overclockable than our sample, and for you, that’s a great thing. For us, it doesn’t help prove any points.
That being said, we’re not able to comment on the true result of AMD vs. NVIDIA here, and overall, that might be a good thing to avoid regardless. We were not able to reach 1,000MHz on our GTX 560 Ti, and chances are you won’t be able to either. By the same token, you might not be able to go out and purchase an HD 6950 1GB and have it overclock to 950MHz like ours did. That’s just the name of the overclocking game.
Compared to the respective stock-clocked versions of each card, though, the gains we saw were rather stark. I tend to be against GPU overclocking, because the risk of instability is rarely worth a minor gain in performance, but here we’d often see 10 FPS gains at 1680×1050 and 5 FPS gains at higher resolutions, which is rather notable. So if you don’t mind overclocking your cards, the performance gains awaiting you are huge.
From a non-overclocking AMD vs. NVIDIA standpoint, which to purchase? To gain perspective there, we’d recommend reading through our previously published articles. The general idea though, is that at stock-clocks, the extra features are what sell each brand. NVIDIA has PhysX and CUDA, while AMD has improved power efficiency and better multi-monitor support. It’s up to you to decide which of these features are more important to you.
Have a comment you wish to make on this article? Recommendations? Criticism? Feel free to head over to our related thread and put your words to our virtual paper! There is no requirement to register in order to respond to these threads, but it sure doesn’t hurt!
Copyright © 2005-2020 Techgage Networks Inc. - All Rights Reserved.