To test our graphics cards for both temperatures and power consumption, we use OCCT for stress-testing, GPU-Z for temperature monitoring, and a Kill-a-Watt for power monitoring. The Kill-a-Watt is plugged into its own socket, with only the PC connected to it.
As per our guidelines when benchmarking with Windows, once the room temperature is stable (and reasonable), the test machine is booted up and left to sit at the desktop until things are completely idle. Because we run such a highly optimized PC, this normally takes only a minute or two. Once things are good to go, the idle wattage is noted, GPU-Z is started to begin monitoring card temperatures, and OCCT is set up to begin stress-testing.
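For those who'd rather not eyeball things to judge when the desktop has settled, a quick script can do the waiting. The sketch below is purely illustrative and not part of our actual procedure; it assumes Python with the third-party psutil package installed, and the 5% threshold and 60-second quiet window are arbitrary choices:

    # Illustrative sketch: wait until CPU usage stays low for a full
    # quiet window before noting the idle wattage. Assumes psutil is
    # installed; threshold and window values are arbitrary.
    import psutil

    IDLE_THRESHOLD = 5.0   # percent CPU use we still consider "idle"
    QUIET_SECONDS = 60     # how long usage must stay below the threshold

    quiet = 0
    while quiet < QUIET_SECONDS:
        usage = psutil.cpu_percent(interval=1)  # blocks ~1 s per sample
        quiet = quiet + 1 if usage < IDLE_THRESHOLD else 0

    print("System looks idle - note the Kill-a-Watt reading now.")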
To push the cards we test to their absolute limit, we use OCCT in full-screen 2560×1600 mode and allow it to run for 15 minutes, which includes a one-minute lull at the start and a four-minute lull at the end. After about five minutes, we begin monitoring our Kill-a-Watt to record the maximum wattage.
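Because the peak temperature can flash by while you're watching the wattage, it can help to let GPU-Z log its sensors to a file and dig the maximum out afterwards. Below is a minimal sketch of that idea, assuming GPU-Z's "Log to file" option was enabled during the run; the default log name and the exact temperature column header vary between cards and versions, so treat both as placeholders:

    # Illustrative sketch: pull the peak GPU temperature out of a
    # GPU-Z sensor log after a stress run. The file name and column
    # header below are assumptions; adjust them to your own log.
    import csv

    LOG_FILE = "GPU-Z Sensor Log.txt"  # GPU-Z's default log name

    with open(LOG_FILE, newline="", encoding="utf-8", errors="replace") as f:
        reader = csv.reader(f)
        header = [col.strip() for col in next(reader)]
        # Find the first column whose header mentions the GPU temperature.
        temp_col = next(i for i, col in enumerate(header)
                        if "GPU Temperature" in col)
        temps = []
        for row in reader:
            try:
                temps.append(float(row[temp_col]))
            except (IndexError, ValueError):
                continue  # skip malformed or truncated rows

    print(f"Peak GPU temperature during the run: {max(temps):.1f} °C")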
In the case of dual-GPU configurations, we measure the temperature of the top graphics card, as in our tests it's usually the one that runs the hottest, although this can depend on the GPU cooler's design.
Note: Due to changes AMD and NVIDIA made to the power schemes of their respective current-gen cards, we were unable to run OCCT on them. Instead, we had to settle for a less-strenuous run of 3DMark Vantage. We will be retesting all of our cards using this updated method the next time we overhaul our suite.
The latest public version of GPU-Z was unable to read the temperature sensors on the HD 6790, so we'll publish those results at a later date. Where power consumption is concerned, the HD 6790 does suck down a bit more power than the HD 5770, but draws less than NVIDIA's GTX 550 Ti at both idle and load.