by Rob Williams on January 31, 2011 in Graphics & Displays
AMD last week released a 1GB variant of its popular Radeon HD 6950 graphics card, and to see how performance is affected by the smaller frame buffer, we benchmarked both versions with the latest Catalyst 11.1 driver. Do the 1GB card and its $20 savings prove too hard to ignore, or is the 2GB model still the one to scoop up?
To test our graphics cards for both temperatures and power consumption, we utilize OCCT for the stress-testing, GPU-Z for the temperature monitoring, and a Kill-a-Watt for power monitoring. The Kill-a-Watt is plugged into its own socket, with only the PC connected to it.
As per our guidelines when benchmarking with Windows, once the room temperature is stable (and reasonable), the test machine is booted up and left to sit at the desktop until things are completely idle. Because we are running such a highly optimized PC, this normally takes only one or two minutes. Once things are good to go, the idle wattage is noted, GPU-Z is started up to begin monitoring card temperatures, and OCCT is set up to begin stress-testing.
To push the cards we test to their absolute limit, we use OCCT in full-screen 2560×1600 mode and allow it to run for 15 minutes, which includes a one-minute lull at the start and a four-minute lull at the end. After about five minutes, we begin watching our Kill-a-Watt to record the maximum wattage.
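For those who want to replicate this at home, peak temperatures don't have to be eyeballed: GPU-Z can log its sensor readings to a file, and a short script can pull the maximum out afterward. Below is a minimal Python sketch along those lines; the log file name and the "GPU Temperature [°C]" column header are assumptions that may differ depending on your GPU-Z version and card, so adjust them to match your own log.

import csv

def peak_gpu_temperature(log_path, column="GPU Temperature [°C]"):
    # Scan a comma-separated GPU-Z sensor log and return the highest
    # temperature found in the assumed temperature column.
    peak = None
    with open(log_path, newline="", encoding="utf-8", errors="ignore") as f:
        reader = csv.DictReader(f)
        for row in reader:
            # GPU-Z logs often pad headers/values with spaces, so strip both.
            cleaned = {k.strip(): (v or "").strip() for k, v in row.items() if k}
            value = cleaned.get(column, "")
            if not value:
                continue
            try:
                temp = float(value)
            except ValueError:
                continue  # skip non-numeric rows (headers, blanks)
            peak = temp if peak is None else max(peak, temp)
    return peak

if __name__ == "__main__":
    # Hypothetical log file name; point this at your own GPU-Z sensor log.
    print("Peak load temperature:", peak_gpu_temperature("gpu-z-sensor-log.txt"))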
In the case of dual-GPU configurations, we measure the temperature of the top graphics card, as in our tests it's usually the one that gets the hottest, though this can depend on the GPU cooler design.
Note: Due to power-related changes NVIDIA made with the GTX 580 & GTX 570, we couldn't run OCCT on those cards. Instead, we had to use a run of the less-strenuous Heaven benchmark.
Overall, the differences between the 1GB and 2GB cards in both temperatures and power consumption are minimal, and what little difference there is could be attributed to run-to-run variance.