by Rob Williams on January 31, 2011 in Graphics & Displays
AMD last week released a 1GB variant of its popular Radeon HD 6950 graphics card, and to see how performance would be throttled with the GDDR cut, we benchmarked both versions with the latest Catalyst 11.1 driver. Do the 1GB card and its $20 savings prove too hard to ignore, or is the 2GB version still the one to scoop up?
When AMD released the 1GB version of its Radeon HD 6950, it accomplished two goals. First, it gave AMD fans another choice for mid-range to high-end graphics, at about $20 less than the 2GB version. Second, it delivered a more competitive answer to NVIDIA’s GeForce GTX 560 Ti. On both counts, it succeeded.
As we discovered in our launch article for NVIDIA’s latest and greatest mid-range offering, AMD edged just ahead and currently has the more attractive card. It might cost $20 more, but it’s a bit faster and offers better power efficiency. Of course, there are trade-offs when choosing between AMD and NVIDIA, and depending on what you’re looking for your GPU to do, either card is a great choice.
So what about AMD’s 1GB vs. 2GB battle? The results, as we saw throughout our five tested games and two synthetic benchmarks, were a bit interesting. Using the same driver, the 1GB card outperformed the 2GB card in 90% of the tests. Simple logic says that makes little sense, and while it’s hard to come up with a reason, it’s not the first time I’ve seen this happen.
I used to believe that 2GB on a graphics card was overkill, but then I saw results in games that could actually use more than 1GB of memory, such as Grand Theft Auto IV. Unfortunately, just because a game can use more than 1GB of memory doesn’t mean it’ll perform better – it just means it could look better while offering the same or similar performance.
It’s clear that none of the games we tested here, at our chosen settings, can use more than 1GB of GDDR, but that doesn’t mean there’s no use for a card with a larger buffer. If we tested a game like Metro 2033 in DX11 mode, we might see some gains, but truthfully, that game doesn’t offer great performance on almost any card at decent resolutions in DX11, so even if there were improvements, they wouldn’t be major.
For the $20 savings, it’s clear that the 1GB HD 6950 is a winner, but does that mean you shouldn’t at least consider the 2GB model? If your gaming needs don’t require the beefiest resolutions and highest detail levels, I’d say the 1GB card is a good choice, but for the best kind of future-proofing, I’d still recommend the 2GB card. If you’re the type to upgrade your GPU every year, then it doesn’t matter, but if you want the card to last a couple of years, the extra $20 could be worth it. At the end of the day, the performance differences between the two cards, despite slightly favoring the 1GB model, are so minimal that they almost don’t exist.
1GB and $20 savings, or $20 extra for improved future-proofing? It really comes down to what you’re looking for, and what you’re expecting from your PC for the next couple of years.
If anyone out there has games or settings that can effectively push more than 1GB of GDDR, we’d love to hear about them. We had assumed that if any test could do it, 3DMark 11 would, but no cigar. It could also be that the 11.1 driver isn’t “optimized” enough for the 2GB card, but that seems a bit silly to say. Technically, both cards are identical… one just happens to have more memory than the other.
The findings we stumbled upon are interesting, though, and if you ever need an example to show people who believe that more memory automatically makes a GPU better, this is a great one.
Discuss this article in our forums!
Have a comment you wish to make on this article? Recommendations? Criticism? Feel free to head over to our related thread and put your words to our virtual paper! There is no requirement to register in order to respond to these threads, but it sure doesn’t hurt!