NVIDIA GeForce GTX 570

by Rob Williams on December 10, 2010 in Graphics & Displays

To keep the mid to high-end GPU market interesting, NVIDIA has just launched its GeForce GTX 570, a replacement for the GTX 470. It’s priced a bit higher, at $349, but packs extra performance, improved power efficiency and lower temperatures. Is that enough to make it a winner in today’s tight market? We’re here to find out.

Page 10 – Power & Temperatures

To test our graphics cards for both temperatures and power consumption, we use OCCT for stress-testing, GPU-Z for temperature monitoring, and a Kill-a-Watt for power monitoring. The Kill-a-Watt is plugged into its own socket, with only the PC connected to it.

As per our guidelines when benchmarking with Windows, when the room temperature is stable (and reasonable), the test machine is booted up and left to sit at the desktop until things are completely idle. Because we are running such a highly optimized PC, this normally takes only one or two minutes. Once things are good to go, the idle wattage is noted, GPU-Z is started up to begin monitoring card temperatures, and OCCT is set up to begin stress-testing.

To push the cards we test to their absolute limit, we use OCCT in full-screen 2560×1600 mode and allow it to run for 15 minutes, which includes a one-minute lull at the start and a four-minute lull at the end. After about five minutes, we begin monitoring our Kill-a-Watt to record the maximum wattage.
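
For those who would rather not eyeball GPU-Z for the full 15 minutes, the snippet below is a minimal Python sketch of one way to pull the peak temperature out of a GPU-Z sensor log after the run. It assumes GPU-Z's "Log to file" option was enabled and that the log has a column whose header contains "GPU Temperature"; the exact header text and default file name vary between GPU-Z versions and cards, so treat those as assumptions.

```python
import csv

def peak_gpu_temp(log_path):
    """Return the highest GPU temperature found in a GPU-Z sensor log."""
    with open(log_path, newline="", encoding="utf-8", errors="ignore") as f:
        reader = csv.reader(f)
        header = [h.strip() for h in next(reader)]
        # Find the temperature column; exact header text varies by card/version.
        temp_col = next(i for i, h in enumerate(header) if "GPU Temperature" in h)
        temps = []
        for row in reader:
            try:
                temps.append(float(row[temp_col].strip()))
            except (ValueError, IndexError):
                continue  # skip malformed or truncated rows
    return max(temps)

if __name__ == "__main__":
    # "GPU-Z Sensor Log.txt" is assumed here as the log name; adjust as needed.
    print(f"Peak GPU temperature: {peak_gpu_temp('GPU-Z Sensor Log.txt')} °C")
```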

In the case of dual-GPU configurations, we measure the temperature of the top graphics card, as in our tests it’s usually the one that gets the hottest. This could depend on GPU cooler design, however.

Note: Due to power-related changes NVIDIA made with the GTX 580 & GTX 570, we couldn’t rely on OCCT for these GPUs. Rather, we had to use a run of the less-strenuous Heaven benchmark.

Because NVIDIA changed how power throttling works on the GTX 500 series, running a program like OCCT or FurMark is no longer appropriate; both the temperature and power consumption results will be incorrect. As an example, while running OCCT on the GTX 580, the card recognized that it was being stressed too hard and throttled its voltages, resulting in an overall system draw of 319W. By comparison, a Heaven run resulted in 418W, as seen above.
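
To put that sanity check in concrete terms, here’s a minimal Python sketch of the rule we’re applying, using the wattages quoted above; the function name is ours, purely for illustration.

```python
# Sketch of the sanity check described above: a synthetic stress tool should
# never draw *less* at the wall than a game-style benchmark. If it does, the
# card has throttled and the synthetic result should be discarded.
def synthetic_result_valid(synthetic_watts, game_watts):
    return synthetic_watts >= game_watts

occt_draw, heaven_draw = 319, 418  # GTX 580 system draw figures quoted above
if not synthetic_result_valid(occt_draw, heaven_draw):
    print("OCCT draw fell below the Heaven draw - throttling likely; "
          "use the Heaven run for power and temperature numbers instead.")
```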

Given that the GTX 570’s TDP is 219W, 4W higher than the GTX 470’s, we can see the “improvements” NVIDIA made to its power efficiency. The GTX 470 scored 393W at full load, while the GTX 570 hit 253W. We’re still evaluating methods to produce more reliable power results; NVIDIA certainly turned the tables on our methods here, that’s for sure.
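
For quick reference, the raw deltas from those figures work out as below. This is just illustrative arithmetic on system-level wall draw (a minimal Python sketch using only numbers quoted in this article), not an apples-to-apples card comparison.

```python
# Figures quoted above; the Kill-a-Watt measures total system draw at the wall.
gtx470_tdp_w, gtx570_tdp_w = 215, 219      # NVIDIA-rated TDPs
gtx470_load_w, gtx570_load_w = 393, 253    # peak system draw at full load

print(f"TDP delta: {gtx570_tdp_w - gtx470_tdp_w} W")                 # 4 W
print(f"Full-load system delta: {gtx470_load_w - gtx570_load_w} W")  # 140 W
# The GTX 570 figure was taken with the lighter Heaven run (see the note
# above), so part of that 140W gap reflects the change in workload rather
# than the silicon itself.
```

That mismatch between workloads is exactly why we’re rethinking how we generate power numbers for the GTX 500 series going forward.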
