Need a new mid-range GPU for under $200? NVIDIA’s 9800 GTX+ is a good model to keep in mind, and ASUS’ Dark Knight card in particular proves to be one well worth considering. The card offers great gaming performance for the cash, even handling certain games at 2560×1600 with ease, has a sweet-looking cooler, and best of all, is priced right.
Before tackling our overclocking results, let’s first clear up what we consider to be a real overclock and how we go about achieving it. If you’ve read our processor reviews, you might already be aware that I personally don’t care for an unstable overclock. It might look good on paper, but if it’s not stable, then it won’t be used. Very few people purchase a new GPU for the sole purpose of finding the maximum overclock, which is why we focus on finding what’s stable and usable.
To help find the max stable overclock on an ATI card, we stick to the Catalyst Control Center included with the official driver. Sadly, ATI’s limits are quite conservative, so it’s rare that at least the Core clock can’t be maxed out entirely, as long as temperatures are kept in check.
Once we find what we feel could be a stable overclock, the card is stressed with 3DMark Vantage’s “Extreme” test, looped three times. Although previous versions of 3DMark offered the ability to loop a test indefinitely, Vantage for some reason doesn’t. That’s a shame, as it would otherwise be the ideal GPU stress test.
If no artifacts or performance issues arise, we continue testing the card in multiple games from our test suite, at the maximum resolutions and settings the card is capable of handling. If no issues crop up during our real-world gameplay, we consider the overclock stable and then proceed with testing.
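Since Vantage won’t loop on its own, repeated passes have to be scripted externally. Here’s a minimal sketch of that pass/fail loop; `benchmark_cmd` is a stand-in for whatever scriptable stress tool you have on hand (Vantage itself is GUI-driven, so this is purely illustrative):

```python
import subprocess

# "benchmark_cmd" is a placeholder command line -- substitute a stress tool
# you can actually launch from a script (3DMark Vantage is GUI-driven).
def stress_pass(benchmark_cmd, passes=3):
    for i in range(1, passes + 1):
        # A non-zero exit code stands in for a crash or driver reset.
        result = subprocess.run(benchmark_cmd, shell=True)
        if result.returncode != 0:
            return f"pass {i} failed -- back the clocks off"
    return "all passes completed -- proceed to real-world game testing"
```

The key point the loop captures is the methodology above: any single failed pass means the overclock isn’t usable, and only a clean sweep earns the card a spot in real-world game testing.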
The default clocks on the 9800 GTX+ are 738MHz Core, 1836MHz Shader and 1100MHz Memory. Spending a little time with RivaTuner, I was able to achieve 840MHz Core, 1975MHz Shader and 1100MHz Memory as the max stable overclock. Going any higher on either the Core or the Shader resulted in near-immediate crashing when running 3DMark Vantage.
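That works out to roughly a 14% bump on the Core and 8% on the Shader, with the memory left at stock; a quick sketch of the arithmetic, using the clock figures quoted above:

```python
# Overclock headroom on the 9800 GTX+, per the clocks quoted above (MHz).
default = {"Core": 738, "Shader": 1836, "Memory": 1100}
stable  = {"Core": 840, "Shader": 1975, "Memory": 1100}

for domain, base in default.items():
    gain = (stable[domain] - base) / base * 100
    print(f"{domain}: {base} -> {stable[domain]} MHz (+{gain:.1f}%)")
```

In relative terms, the Core had nearly twice the headroom of the Shader (+13.8% vs. +7.6%) before stability gave out.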
As expected, the differences are minimal. We did see a noticeable (though not meaningful in real-world terms) difference with CoD 4, a 4 FPS boost, but that’s about as big as it gets.
Regardless of whether or not you plan to overclock, reasonable system temperatures are always welcome. Not only will your machine be more reliable with cooler temps, it also won’t add any unneeded heat to the room you’re in (unless it happens to be wintertime and you keep the windows open, in which case it might be a good thing).
To test a GPU for idle and load temps, we do a couple of things. First, with the test system turned off for at least ten minutes, we measure the room temperature using a Type-K thermometer sensitive to 0.1°F. The result is placed beside the GPU’s name in the graph below. Since we don’t test in a temperature-controlled environment, the room temp can vary by a few degrees, which is why we include the information here.
Once the room temp is captured, the test system is booted up and left idle for ten minutes, at which point GPU-Z is loaded to grab the current GPU Core temperature. Then, a full run of 3DMark Vantage is performed to help warm the card up, followed by another run of the same benchmark using the Extreme mode (1920×1200). Once the test completes, we refer to the GPU-Z log file to find the maximum temperature hit. Please note that this is not an average; even if the highest point was hit only once, it’s what we keep as the result.
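Pulling that peak out of the log is simple to script. A sketch of the idea, assuming a comma-separated GPU-Z sensor log; the column header below is an assumption, so check your own log file for the exact name:

```python
import csv

def max_gpu_temp(log_path, column="GPU Temperature [°C]"):
    """Return the single highest reading in the named column, not an average."""
    peak = None
    with open(log_path, newline="", encoding="utf-8", errors="ignore") as f:
        # GPU-Z pads its comma-separated columns with spaces, hence
        # skipinitialspace; adjust if your log is formatted differently.
        reader = csv.DictReader(f, skipinitialspace=True)
        for row in reader:
            try:
                temp = float(row[column])
            except (KeyError, TypeError, ValueError):
                continue  # skip malformed or truncated rows
            if peak is None or temp > peak:
                peak = temp
    return peak
```

Because it tracks the maximum rather than averaging, a spike that appears only once in the log is still what gets reported, matching the methodology above.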
After-market coolers generally perform much better than stock versions, and that proved to be the case here. Although we don’t have a reference 9800 GTX+, compared to the 9800 GTX, ASUS’ cooler shaved 6°C off the load temperature and 13°C off idle… all while the card ran faster in the process.