NVIDIA GeForce GTX 580
by Rob Williams on November 10, 2010 in NVIDIA-Based GPU

NVIDIA launched its first Fermi-based GPU earlier this year in the form of the GeForce GTX 480, and it was met with a mixed reception. Until now, it’s been the fastest single-GPU offering on the market, but certain downsides kept it from being the first choice of many. Does NVIDIA’s first proper follow-up fix all that was wrong?

Power & Temperatures

To test our graphics cards for both temperatures and power consumption, we utilize OCCT for the stress-testing, GPU-Z for temperature monitoring, and a Kill-a-Watt for power monitoring. The Kill-a-Watt is plugged into its own socket, with only the PC connected to it.

As per our guidelines when benchmarking with Windows, once the room temperature is stable (and reasonable), the test machine is booted up and left to sit at the desktop until things are completely idle. Because we are running such a highly optimized PC, this normally takes one or two minutes. Once things are good to go, the idle wattage is noted, GPU-Z is started up to begin monitoring card temperatures, and OCCT is set up to begin stress-testing.

To push the cards we test to their absolute limit, we use OCCT in full-screen 2560×1600 mode and allow it to run for 15 minutes, which includes a one-minute lull at the start and a four-minute lull at the end. After about five minutes, we begin to monitor our Kill-a-Watt to record the max wattage.
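For clarity, the timing described above can be broken down with some simple arithmetic (all values come from the methodology as stated; the variable names are our own):

```python
# Stress-test timing from the methodology above (all values in minutes).
TOTAL_RUN = 15    # full OCCT run length
START_LULL = 1    # idle lull at the start of the run
END_LULL = 4      # idle lull at the end of the run

# Window during which the GPU is actually under full load
active_stress = TOTAL_RUN - START_LULL - END_LULL
print(active_stress)  # 10
```

So out of the 15-minute run, the card spends roughly ten minutes under full load, and the five-minute mark at which we begin watching the Kill-a-Watt sits comfortably inside that window.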

In the case of dual-GPU configurations, we measure the temperature of the top graphics card, as in our tests, it’s usually the one to get the hottest. This could depend on GPU cooler design, however.

Note: Due to power-related changes NVIDIA made with the GTX 580, we couldn’t run OCCT on that GPU. Rather, we had to use a run of the less-strenuous Heaven benchmark.

Because NVIDIA changed the way power throttling occurs on the GTX 580, running a program like OCCT or Furmark isn’t appropriate, because both the temperatures and power consumption results are going to be incorrect. As an example, while running OCCT, the card recognized that it was being stressed too hard and throttled its voltages, resulting in an overall system draw of 319W. By comparison, a Heaven run resulted in 418W as seen above.
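To put a number on how much the throttling skews the measurement, here is the difference between the two system-draw readings quoted above (the figures are from our testing; the variable names are our own):

```python
# System power draw readings from the two runs described above (watts).
occt_throttled = 319  # OCCT run: the GTX 580 throttled its voltages
heaven = 418          # Heaven benchmark run: no throttling triggered

# How far the throttled OCCT reading understates real load draw
difference = heaven - occt_throttled
print(difference)  # 99
```

A gap of nearly 100W at the wall is why an OCCT or Furmark reading on this card can't be compared against the other cards in our charts.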

The results are not exactly comparable to those from the cards that ran OCCT, but even so, the numbers we see here are quite good, and NVIDIA has obviously done good work with both this card's cooler and its power efficiency. The card is faster than the GTX 480, yet more efficient. That's what we like to see.
