To test our graphics cards for both temperature and power consumption, we use OCCT for stress-testing, GPU-Z for temperature monitoring, and a Kill-a-Watt for power monitoring. The Kill-a-Watt is plugged into its own socket, with only the PC connected to it.
As per our Windows benchmarking guidelines, once the room temperature is stable (and reasonable), the test machine is booted up and left to sit at the desktop until things are completely idle. Because we run such a highly optimized PC, this normally takes only one or two minutes. Once things are good to go, the idle wattage is noted, GPU-Z is started to begin monitoring card temperatures, and OCCT is set up to begin stress-testing.
To push the cards we test to their absolute limit, we run OCCT in full-screen 2560×1600 mode for 15 minutes, which includes a one-minute lull at the start and a four-minute lull at the end. After about five minutes, we begin watching the Kill-a-Watt to record the maximum wattage.
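For anyone who would rather script the "watch for the peak" step than eyeball a meter for 15 minutes, a routine like the one below polls a reading at a fixed interval and keeps the maximum. This is a hypothetical sketch, not part of our actual methodology (we record GPU-Z and the Kill-a-Watt by hand); the `sample` callable and the simulated wattage values are stand-ins for whatever monitoring source you have available.

```python
import time
from itertools import cycle

def record_peak(sample, duration_s=900, interval_s=5):
    """Poll sample() every interval_s seconds for duration_s seconds
    and return the highest reading seen, mimicking a 15-minute
    stress run where you watch for the maximum value."""
    samples = int(duration_s / interval_s)
    peak = float("-inf")
    for _ in range(samples):
        peak = max(peak, sample())
        time.sleep(interval_s)
    return peak

# Demo with simulated wattage readings (hypothetical values).
# In practice, sample() might shell out to a CLI monitoring tool.
readings = cycle([410, 496, 525, 512])
peak = record_peak(lambda: next(readings), duration_s=2, interval_s=0.5)
print(peak)  # → 525
```

The defaults match the 15-minute run described above; a 5-second interval is a judgment call, fine enough to catch short load spikes without flooding a log.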
In the case of dual-GPU configurations, we measure the temperature of the top graphics card, as in our tests, it’s usually the one to get the hottest. This could depend on GPU cooler design, however.
If it seems odd that the Radeon HD 6850 runs far hotter than the HD 6870, you’re certainly not alone. Prior to our launch article, AMD sent out BIOS updates for the HD 6850 in particular, as the cards shipped without proper power-management controls in place. Though we updated both of our cards, they still run much hotter than the HD 6870. It wouldn’t surprise me if another BIOS update is needed, and I’m confident that shipping cards won’t exhibit this same behavior.
The HD 6870 in CrossFireX outperformed the GTX 480 in all of our tests, but it also proves to draw far more power… over 500W for our entire PC. Given that we’re running an overclocked Core i7 quad-core, though, 525W for such a high-end rig seems quite respectable!