To test our graphics cards for both temperatures and power consumption, we use OCCT for stress-testing, GPU-Z for temperature monitoring, and a Kill-a-Watt for power monitoring. The Kill-a-Watt is plugged into its own socket, with only the PC connected to it.
As per our Windows benchmarking guidelines, once the room temperature is stable (and reasonable), the test machine is booted up and left to sit at the Windows desktop until it is completely idle. Once things are good to go, we note the idle wattage, start GPU-Z to begin monitoring card temperatures, and set up OCCT to begin the stress test.
To push the cards we test to their absolute limit, we run OCCT in full-screen 2560×1600 mode for 30 minutes, which includes a one-minute lull at the start and a three-minute lull at the end. After about 10 minutes, we begin watching our Kill-a-Watt to record the maximum wattage.
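Since GPU-Z can log its sensor readings to a file, the post-run summary can be automated with a short script. The sketch below shows the idea; the column name and log layout used here are assumptions for illustration, not GPU-Z's exact output format:

```python
# Minimal sketch: pulling the peak temperature out of a GPU-Z-style
# CSV sensor log after a 30-minute OCCT run. The column header
# "GPU Temperature [C]" is an assumed name, not guaranteed to match
# the real log file.
import csv
import io

def max_gpu_temp(log_text, column="GPU Temperature [C]"):
    """Return the highest temperature recorded in a CSV sensor log."""
    reader = csv.DictReader(io.StringIO(log_text))
    return max(float(row[column]) for row in reader)

# Hypothetical excerpt from a sensor log
sample_log = """Date,GPU Temperature [C]
2010-01-01 12:00:00,39
2010-01-01 12:10:00,71
2010-01-01 12:20:00,74
"""

print(max_gpu_temp(sample_log))  # → 74.0
```

The same approach extends to any other logged sensor column, which saves eyeballing the chart for the peak value.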
I had to do a double-take when I entered our results for the Toxic card into the chart above, because purely by coincidence, our temperatures for the Toxic and the reference HD 5850 were identical, at both idle and load. I say "coincidence" because while both cards are the same model, the Toxic has a slightly different cooler and is pre-overclocked, so hitting the exact same numbers is neat.
It may seem unimpressive that despite Sapphire's interesting cooler modifications, the temperatures are unchanged, but bear in mind that the card is overclocked, so at least they aren't worse.
The overclock may not have shown up in our temperature tests above, but it certainly does in power consumption, with a 27W increase at load and a 7W increase at idle.