
ASUS GeForce EN9800GT Matrix

Date: November 21, 2008
Author(s): Rob Williams

Picking out a new graphics card is easier now than ever, as there seems to be a model to cover every single price-range, and not just from one GPU manufacturer, either. Today's card is one that represents the ~$125 price point and is designed as a step-up from the 9600 GT, with ASUS applying their usual TLC to help add even more appeal.



Introduction, Closer Look

If there’s one thing that’s more true today than ever, it’s that you can get some incredible gaming performance for not too much money. Taking a look at a GPU like a GeForce 9600 GT can prove that. At ~$80, it’s affordable for almost anyone, yet can handle some games with modest settings just fine at super-high resolutions, like 2560×1600. There’s definite value in GPUs today, and it’s great to see.

The next step-up from the previously-mentioned card is of course the 9800 GT, which features slower core and shader frequencies but almost doubles the number of stream processors, effectively giving us a card that's close to being equivalent to the original 9800 GTX, save for that card's higher frequencies. Given these specs, it's hard to predict a general percentage of performance increase over the 9600 GT, but for the extra $20 the card costs, the value could be far greater.

As it stands today, there are four GPUs that cover this particular price-range. NVIDIA has their 9600 GT and 9800 GT, while ATI has their HD 4830 and HD 4670. Sadly, we don’t have either of the latter cards, but it’s likely safe to say that the performance differences between each card at its respective price-range shouldn’t be too great.

The card we're taking a look at today comes from ASUS, and is directly battling against ATI's HD 4830. Both cards hover around the $100 area, with some of the NVIDIA cards going even lower depending on the brand and mail-in rebate. Some at one particular e-tailer were selling for $85 after MIR… which offers further proof of just how much value can be had in today's GPUs.

Closer Look

ASUS is a company that’s likely familiar to everyone, for varying reasons. The most important one is that generally speaking, most of their products are solid. We’ve had very few in our labs over the course of the past few years that have really let us down, and that’s rare to see from anyone.

The second reason might be the fact that the company produces more products than we can even keep track of. Because of this, the company has to keep conjuring up new ideas, throughout their entire catalogue. One of their product lines that has seen the greatest attention has been their graphics cards, and we’ve taken a good look at many of those in the past.

Model | Core MHz | Shader MHz | Mem MHz | Memory | Memory Bus | Stream Proc.
GTX 280 | 602 | 1296 | 1107 | 1GB | 512-bit | 240
GTX 260/216 | 576 | 1242 | 999 | 896MB | 448-bit | 216
GTX 260 | 576 | 1242 | 999 | 896MB | 448-bit | 192
9800 GX2 | 600 | 1500 | 1000 | 1GB | 512-bit | 256
9800 GTX+ | 738 | 1836 | 1100 | 512MB | 256-bit | 128
9800 GTX | 675 | 1688 | 1100 | 512MB | 256-bit | 128
9800 GT | 600 | 1500 | 900 | 512MB | 256-bit | 112
9600 GT | 650 | 1625 | 900 | 512MB | 256-bit | 64
9600 GSO | 550 | 1375 | 800 | 384MB | 192-bit | 96

I first saw this "Matrix" card this past summer at Computex, and my first thought was, "Hmm, that's unattractive." Over the course of the past few months, though, I've come to realize that… I was right. It's not an attractive card, but like the 9800 GTX+ Dark Knight card we took a look at last month, the goal isn't looks, but rather performance and cooling.

I might not personally find the card that attractive, but it's all a matter of opinion, and you might disagree. What I do like is the "cool" looking fin array towards the end of the card. The leaf-blower fan inside sweeps air both through these fins and towards the back. This is an interesting design, and one I like quite a bit. It allows more air to be pushed away from the card faster than designs that only exhaust it out the back (like most current reference coolers).

Connectors at the back include a TV-Out, HDMI and of course, DVI. For those sticking with VGA, an adapter has been included. Also interesting is the addition of an S/PDIF output.

Taking a look at the side-view shot of the card, we can see there is a fair amount of airflow room available underneath the cooler itself. To the immediate right of the row of capacitors underneath the fan are three power phases. This is a feature we've seen only from Palit for quite some time, so it's good to see others catching on. In some cases, additional power phases may do little for stock speeds, but they should greatly improve the chance of a higher successful overclock.

With a look at the card out of the way, let’s take care of the testing methodology on the next page and then kick right into our results, starting with one of the most system-intensive games ever, Crysis Warhead.

Test System & Methodology

At Techgage, we strive to make sure our results are as accurate as possible. Our testing is rigorous and time-consuming, but we feel the effort is worth it. In an attempt to leave no question unanswered, this page contains not only our testbed specifications, but also a fully-detailed look at how we conduct our testing.

If there is a bit of information that we’ve omitted, or you wish to offer thoughts or suggest changes, please feel free to shoot us an e-mail or post in our forums.

Test System

The below table lists our testing machine’s hardware, which remains unchanged throughout all GPU testing, minus the graphics card. Each card used for comparison is also listed here, along with the driver version used. Each one of the URLs in this table can be clicked to view the respective review of that product, or if a review doesn’t exist, it will bring you to the product on the manufacturer’s website.

Component | Model
Processor | Intel Core 2 Extreme QX9770 – Quad-Core, 3.6GHz (Overclocked), 1.35v
Motherboard | ASUS Rampage Extreme – X48-based, 0501 BIOS (08/28/08)
Memory | Corsair XMS3 DHX 2x2GB – DDR3-1333 7-7-7-15-1T, 1.91v
ATI Graphics | Palit Radeon HD 4870 X2 2GB (Catalyst 8.9)
 | Palit HD 4870 512MB (Catalyst 8.9)
 | ASUS EAH4850 512MB (Catalyst 8.9)
NVIDIA Graphics | Palit GTX 280 1GB (GeForce 178.13)
 | XFX GTX 260/216 896MB (GeForce 178.24)
 | Palit 9800 GX2 1GB (GeForce 178.13)
 | ASUS EN9800GTX+ 512MB Dark Knight (GeForce 178.13)
 | ASUS EN9800GTX 512MB (GeForce 178.13)
 | ASUS EN9800GT 512MB Matrix (GeForce 178.24)
 | Gigabyte 9600 GT 512MB (GeForce 178.13)
Audio | On-Board Audio
Storage | Seagate Barracuda 500GB 7200.11 x 2
Power Supply | Corsair HX1000W
Chassis | SilverStone TJ10 Full-Tower
Display | Gateway XHD3000 30″
Cooling | Zalman CNPS9700 Air CPU Cooler
Et cetera | Windows Vista Ultimate 64-bit

When preparing our testbeds for any type of performance testing, we follow these guidelines:

To aid in our goal of keeping results accurate and repeatable, we prevent certain services in Windows Vista from starting up at boot. This is due to the fact that these services have a tendency to start up in the background without notice, potentially causing slightly inaccurate results. Disabling "Windows Search", for example, turns off the OS' indexing, which can at times utilize the hard drive and memory more than we'd like.

Game Benchmarks

For graphics card reviews featuring a mid-range card or higher, we test at three popular resolutions that span the mid-range to high-end ground, corresponding to monitor sizes of 20″ (1680×1050), 24″ (1920×1200) and 30″ (2560×1600).

In an attempt to offer “real-world” results, we do not utilize timedemos in our graphic card reviews, with the exception of Futuremark’s automated 3DMark Vantage. Each game in our test suite is benchmarked manually, with the minimum and average frames-per-second (FPS) captured with the help of FRAPS 2.9.5.
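
Since FRAPS leaves the summarizing to us, a small script can do the math on a captured log. The sketch below is a minimal example assuming a hypothetical plain-text log with one FPS sample per line; the parsing would need to be adapted to whatever format the capture tool actually writes.

```python
# Minimal sketch: summarize a per-second FPS log into average and minimum FPS.
# Assumes a hypothetical plain-text log with one FPS value per line; adapt the
# parsing to the format your capture tool (FRAPS, in our case) actually writes.

def summarize_fps(log_path):
    samples = []
    with open(log_path) as log:
        for line in log:
            line = line.strip()
            if line:
                samples.append(float(line))
    if not samples:
        raise ValueError("no FPS samples found in " + log_path)
    return sum(samples) / len(samples), min(samples)

avg_fps, min_fps = summarize_fps("crysis_warhead_run1.txt")
print(f"Average: {avg_fps:.3f} FPS, Minimum: {min_fps:.1f} FPS")
```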

To deliver the best overall results, each title we use is exhaustively explored in order to find the best possible level in terms of intensiveness and replayability. Once a level is chosen, we play through it repeatedly to find the best possible route, and then in our official benchmarking, we stick to that route as closely as possible. Since we are not robots and the game can throw in minor twists with each run, no two runs can be identical to the pixel.

Each game and setting combination is tested twice, and if there is a discrepancy between the initial results, the testing is repeated until we see results we are confident with.
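
In effect, that consistency check boils down to comparing two averages against a tolerance, something like the sketch below (the 5% figure is purely illustrative, not a hard threshold from our methodology).

```python
# Illustrative sketch of the "test twice, retest on discrepancy" rule.
# The 5% tolerance is an assumption for illustration only.

def runs_agree(fps_a, fps_b, tolerance=0.05):
    """Return True if two average-FPS results fall within the given tolerance."""
    return abs(fps_a - fps_b) / max(fps_a, fps_b) <= tolerance

print(runs_agree(57.4, 58.1))  # True  -> results accepted
print(runs_agree(57.4, 48.0))  # False -> run the test again
```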

The six games we currently use for our GPU reviews are listed below, with direct screenshots of the game’s setting screens and explanations of why we chose what we did.

Crysis Warhead

(Settings screenshots: 1680×1050, 1920×1200, 2560×1600)

Crysis and its Warhead successor are two of the most demanding games on the PC today, and as a result, Anti-Aliasing is not our focus in testing. The noticeable differences come when the advanced options are increased, and to keep things simple, Crytek offers four profiles to choose from: Entry, Mainstream, Gamer and Enthusiast.

We run all three of our resolutions at the “Gamer” setting as it’s playable enough on all current mid-range (and higher) GPUs to complete a full run-through. The game unquestionably runs better on “Mainstream”, but “Gamer” helps push even the highest-end GPUs to their breaking-point.

Call of Duty 4

(Settings screenshots: 1680×1050, 1920×1200, 2560×1600)

The Call of Duty series of war-shooters is without question one of the most gorgeous on the PC (and consoles), but what's great is that the games are also highly optimized, so no one has to max out their machine's specs in order to play them. Since that's the case, the in-game options are maxed out in all regards, except for Anisotropic Filtering, which is set to the center of the slider bar.

Half-Life 2: Episode Two

(Settings screenshots: 1680×1050, 1920×1200, 2560×1600)

It might have been four years ago that we were able to play the first installment of the Half-Life 2 series, but it's held up well thanks to its new releases and engine upgrades. This is one title that thrives on both a fast CPU and GPU, and though it's demanding at times, most any recent computer should be able to play the game with close to maxed-out detail settings, aside from the Anti-Aliasing.

In the case of very-recent mid-range cards, the game will run fine all the way up to 2560×1600 with maxed-out detail, minus Anti-Aliasing. All of our tested resolutions use identical settings, with 4xAA and 8xAF.

Unreal Tournament III

(Settings screenshots: 1680×1050, 1920×1200, 2560×1600)

For as long as the Unreal Tournament series has been around, people have been benchmarking it. So it's a little strange that UT III offers some of the most simplistic in-game graphics settings ever, with "Texture Detail" and "World Detail" being the most important. These two options scale between 1 and 5, and we of course use 5, as it can be handled by almost any current mid-range GPU as long as the resolution is kept in check.

To help make both the game look better and our test more demanding, we force an Anti-Aliasing setting with the help of either ATI’s or NVIDIA’s Control Center. Both allow us to force a 4xAA setting, which is where we keep it.

S.T.A.L.K.E.R.: Clear Sky

(Settings screenshots: 1680×1050, 1920×1200, 2560×1600)

The original S.T.A.L.K.E.R. was one of the most demanding games we’ve ever tested with, and its Clear Sky sequel is no different. Unlike most other games, S.T.A.L.K.E.R. not only offers quality setting profiles (Medium, High, etc) but also render settings. New to Clear Sky though is the “Enhanced Full Dynamic Lighting DX10” render setting, which enables all of the DX10 goodies that fans were waiting for.

We test the game at this setting for two reasons. First, it’s incredibly demanding on even the highest-end GPU, and second, FRAPS will not properly show the FPS counter or record an average FPS with any other render setting – I’m unsure why.

Need for Speed: ProStreet

(Settings screenshots: 1680×1050, 1920×1200)

The Need for Speed series has been a personal favorite ever since I first laid my hands on the third iteration, and I cannot see myself getting bored with any of the games soon. Sadly, Electronic Arts does not allow a 2560×1600 resolution in ProStreet, so we are limited to testing at 1680×1050 and 1920×1200 only. In-game settings are completely maxed out, with 4xAA and V-Sync Off.

Crysis Warhead

As PC enthusiasts, we tend to be drawn to games that offer spectacular graphics… titles that help reaffirm your belief that shelling out lots of cash for that high-end monitor and PC was well worth it. But it’s rare when a game comes along that is so visually-demanding, it’s unable to run fully maxed out on even the highest-end systems on the market. In the case of the original Crysis, it’s easy to see that’s what Crytek was going for.

Funnily enough, even though Crysis was released close to a year ago, the game today still has difficulty running at 2560×1600 with full detail settings – and that's even when overlooking the use of anti-aliasing! Luckily, Warhead is better optimized and will run smoother on almost any GPU, despite looking just as gorgeous as its predecessor, as you can see in the screenshot below.

The game includes four basic profiles to help you adjust the settings based on how good your system is. These include Entry, Mainstream, Gamer and Enthusiast – the latter of which is for the biggest of systems out there, unless you have a sweet graphics card and are only running 1680×1050. We run our tests at the Gamer setting as it’s very demanding on any current GPU and is a proper baseline of the level of detail that hardcore gamers would demand from the game.

As we'd expect from this card, the performance gain over the 9600 GT is quite good, but as this is still a lower-end model, the "Gamer" setting is simply not playable.

Graphics Card | Best Playable | Avg. FPS
Palit HD 4870 X2 2GB | 2560×1600, Gamer, 0xAA | 31.382 FPS
Palit 9800 GX2 1GB | 2560×1600, Mainstream, 0xAA | 50.550 FPS
Palit GTX 280 1GB | 2560×1600, Mainstream, 0xAA | 46.038 FPS
XFX GTX 260/216 896MB | 2560×1600, Mainstream, 0xAA | 45.940 FPS
ASUS 9800 GTX+ 512MB | 2560×1600, Mainstream, 0xAA | 34.319 FPS
Palit HD 4870 512MB | 2560×1600, Mainstream, 0xAA | 32.973 FPS
ASUS 9800 GTX 512MB | 2560×1600, Mainstream, 0xAA | 30.840 FPS
ASUS HD 4850 512MB | 2560×1600, Mainstream, 0xAA | 26.530 FPS
ASUS 9800 GT 512MB | 2560×1600, Mainstream, 0xAA | 26.123 FPS
Gigabyte 9600 GT 512MB | 1920×1200, Mainstream, 0xAA | 31.979 FPS

Surprisingly, though, unlike the 9600 GT, the 9800 GT managed to handle the game using the Mainstream setting at 2560×1600 just fine. The frame rate is a little low (30 FPS is ideal), but it's still fully playable.

Call of Duty 4

Crysis Warhead might have the ability to bring any system to its knees even with what we consider to be reasonable settings, but Call of Duty 4 manages to look great regardless of your hardware, as long as it’s reasonably current. It’s also one of the few games on the market that will actually benefit from having a multi-core processor, although Quad-Cores offer no performance gain over a Dual-Core of the same frequency.

For our testing, we use a level called The Bog. The reason is simple… it looks great, plays well and happens to be incredibly demanding on the system. It takes place at night, but there is more gunfire, explosions, smoke, specular lighting and flying corpses than you can shake an assault rifle at.

Because the game runs well on all current mid-range GPUs at reasonable graphic settings, we max out what’s available to us, which includes enabling 4xAA and 8xAF, along with choosing the highest available options for everything else.

There are no surprises here so far. The 9800 GT performs a fair amount better than the 9600 GT at 1680×1050 and 1920×1200, while it consistently falls behind the slightly more expensive HD 4850.

Graphics Card | Best Playable | Avg. FPS
Palit HD 4870 X2 2GB | 2560×1600, Max Detail, 8xAA | 113.024 FPS
Palit GTX 280 1GB | 2560×1600, Max Detail, 4xAA | 85.440 FPS
XFX GTX 260/216 896MB | 2560×1600, Max Detail, 4xAA | 83.300 FPS
Palit 9800 GX2 1GB | 2560×1600, Max Detail, 4xAA | 76.192 FPS
Palit HD 4870 512MB | 2560×1600, Max Detail, 4xAA | 64.825 FPS
ASUS 9800 GTX+ 512MB | 2560×1600, Max Detail, 0xAA | 74.392 FPS
ASUS 9800 GTX 512MB | 2560×1600, Max Detail, 0xAA | 70.363 FPS
ASUS HD 4850 512MB | 2560×1600, Max Detail, 0xAA | 69.745 FPS
ASUS 9800 GT 512MB | 2560×1600, Max Detail, 0xAA | 57.431 FPS
Gigabyte 9600 GT 512MB | 2560×1600, Max Detail, 0xAA | 48.180 FPS

Since it was possible to play this game at 2560×1600 with the 9600 GT, it was no surprise to see similar performance from our 9800 GT, which gave us close to a 10 FPS performance boost.

Half-Life 2: Episode Two

The original Half-Life 2 might have first seen the light of day close to four years ago, but it’s still arguably one of the greatest-looking games ever seen on the PC. Follow-up versions, including Episode One and Episode Two, do well to put the Source Engine upgrades to full use. While playing, it’s hard to believe that the game is based on a four+ year old engine, but it still looks great and runs well on almost any GPU purchased over the past few years.

Like Call of Duty 4, Half-Life 2: Episode Two runs well on modest hardware, but a recent mid-range graphics card is recommended if you wish to play at higher than 1680×1050 or would like to top out the available options, including anti-aliasing and very high texture settings.

This game benefits from both the CPU and GPU, and the sky's the limit. In order to fully top out the available settings and run the highest resolution possible, you need a very fast GPU (or GPUs) along with a fast processor. Though the in-game options go much higher, we run our tests with 4xAA and 8xAF to allow the game to remain playable on the smaller mid-range cards.

The card continues to perform well here, although at these exact settings, 1920×1200 would be the max playable setting. The full resolution of 2560×1600 and 4xAA is simply too much for this card to bear.

Graphics Card | Best Playable | Avg. FPS
Palit HD 4870 X2 2GB | 2560×1600, Max Detail, 8xAA, 16xAF | 81.418 FPS
XFX GTX 260/216 896MB | 2560×1600, Max Detail, 8xAA, 16xAF | 62.184 FPS
Palit GTX 280 1GB | 2560×1600, Max Detail, 8xAA, 16xAF | 61.437 FPS
Palit HD 4870 512MB | 2560×1600, Max Detail, 8xAA, 16xAF | 56.572 FPS
Palit 9800 GX2 1GB | 2560×1600, Max Detail, 4xAA, 8xAF | 89.596 FPS
ASUS 9800 GTX+ 512MB | 2560×1600, Max Detail, 4xAA, 8xAF | 54.977 FPS
ASUS 9800 GTX 512MB | 2560×1600, Max Detail, 4xAA, 8xAF | 51.272 FPS
ASUS HD 4850 512MB | 2560×1600, Max Detail, 4xAA, 8xAF | 48.142 FPS
ASUS 9800 GT 512MB | 2560×1600, Max Detail, 0xAA, 8xAF | 66.833 FPS
Gigabyte 9600 GT 512MB | 2560×1600, Max Detail, 0xAA, 8xAF | 52.297 FPS

As with our 9600 GT, the strain of anti-aliasing was just too much, so to find the best playable setting, we had to disable it. Once that was done, we saw some rather impressive performance of well over 60 FPS.

Unreal Tournament III

As odd as it may seem, every single game we currently use for our graphics card benchmarking is a sequel or an entry in a series of games, including this one. The original Unreal Tournament launched in late 1999, and since then, it has become a staple of GPU benchmarking. Similar to Call of Duty, the UT series of games is one that manages to deliver spectacular graphics, but doesn't require a bleeding-edge machine to see them.

UTIII offers a variety of modes and levels, and has some of the most interesting and lush environments ever seen in a video game. If I could choose where I wanted to die, it would most likely be in the Gateway level, which you can see in the screenshot below. This level is one of the most interesting in the game as it’s essentially three levels in one, linked together with portals – and it’s hard to beat the feeling of scoring a portal frag.

The game might be one of the best-looking currently on the PC, but it doesn’t offer robust in-game settings like some others in our suite. Because of this, we are forced to enable anti-aliasing in the control panel of the current graphics card. Both ATI’s and NVIDIA’s drivers allow us to choose 4xAA, so that’s what we stick with throughout all of our testing.

Like our past few tests, this is another game where anti-aliasing will bog down performance tremendously. With it enabled, the game was very playable at 1680×1050, but anything higher would mean certain death, constantly.

Graphics Card | Best Playable | Avg. FPS
Palit HD 4870 X2 2GB | 2560×1600, Max Detail, 4xAA | 55.479 FPS
Palit 9800 GX2 1GB | 2560×1600, Max Detail, 0xAA | 78.909 FPS
XFX GTX 260/216 896MB | 2560×1600, Max Detail, 0xAA | 72.954 FPS
Palit GTX 280 1GB | 2560×1600, Max Detail, 0xAA | 72.148 FPS
Palit HD 4870 512MB | 2560×1600, Max Detail, 0xAA | 57.617 FPS
ASUS 9800 GTX 512MB | 2560×1600, Max Detail, 0xAA | 48.874 FPS
ASUS 9800 GTX+ 512MB | 2560×1600, Max Detail, 0xAA | 47.707 FPS
ASUS HD 4850 512MB | 2560×1600, Max Detail, 0xAA | 42.228 FPS
ASUS 9800 GT 512MB | 1920×1200, Max Detail, 0xAA | 57.405 FPS
Gigabyte 9600 GT 512MB | 1920×1200, Max Detail, 0xAA | 43.781 FPS

This is one game where 2560×1600 just won't happen without decreasing the detail levels. Doing that does make a rather noticeable difference, so in this case, we found the best-playable setting to be 1920×1200 with max detail, which gave us a stellar 57 FPS.

S.T.A.L.K.E.R.: Clear Sky

When it comes to first-person shooters, post-apocalyptic adventures are a dime a dozen. But when S.T.A.L.K.E.R. was first released in the spring of 2007, it dared to be different. How? By basing the game on a real-world tragedy: the Chernobyl nuclear disaster, which occurred back in 1986 near the city of Pripyat in Ukraine. Despite the disaster happening so long ago, people are still unable to live in the surrounding area, and will be unable to for at least another 150 years.

In addition to the game's real-world ties, S.T.A.L.K.E.R. happened to be one of the grittiest, most realistic (aside from the problematic AI) and most expansive games we'd seen on the PC in a while. Having the ability to roam as you like is a huge benefit and really helped make the game feel real. Clear Sky further delivers on what made the original so great, but at the same time, adds support for DX10.

It might be difficult to judge from the screenshot, but Clear Sky (like the original) is one of the most demanding games on the PC today, especially if you wish to play using DX10. To help push all of our GPUs to their breaking-point, we stick to that mode while using the “High” quality setting.

It's all too common to hear someone say, "But, will it run Crysis?", but in all reality, it should be S.T.A.L.K.E.R. we're talking about. With DX10 lighting, the game is a true bog on any rig. It looks good, but when it's not that playable without an HD 4870 X2 and a 1680×1050 resolution, all we can do is laugh.

Graphics Card | Best Playable
Palit HD 4870 X2 2GB | 2560×1600, Enhanced Full Dynamic Lighting, Medium
Palit GTX 280 1GB | 2560×1600, Enhanced Full Dynamic Lighting, Medium
XFX GTX 260/216 896MB | 2560×1600, Enhanced Full Dynamic Lighting, Medium
Palit 9800 GX2 1GB | 2560×1600, Full Dynamic Lighting, High
Palit HD 4870 512MB | 2560×1600, Full Dynamic Lighting, High
ASUS HD 4850 512MB | 2560×1600, Full Dynamic Lighting, Medium
ASUS 9800 GTX+ 512MB | 1920×1200, Full Dynamic Lighting, High
ASUS 9800 GTX 512MB | 1920×1200, Full Dynamic Lighting, High
ASUS 9800 GT 512MB | 1920×1200, Full Dynamic Lighting, High
Gigabyte 9600 GT 512MB | 1920×1200, Full Dynamic Lighting, Medium

Like all the other lower-end cards in our roundup, the 9800 GT can only play this game at a maximum of 1920×1200 and with a render setting of Full Dynamic Lighting. It does manage to move up from the 9600 GT's Medium quality setting to High, though.

Need for Speed: ProStreet

Where the racing genre is concerned, there are few games like Need for Speed. The first title launched in 1994, and since then, the series has done well to stick to its roots by offering an exciting racing experience that doesn't hinge on being a simulator, like Gran Turismo or Forza. Instead, it delivers close to an arcade-like experience, which seems to be preferred by most people. EA has also kept the series incredibly regular, having released sixteen different versions in a fourteen-year span. That's impressive.

What wasn't impressive was ProStreet, however, as it took the franchise and turned it upside down. Sometimes reinventing a series is a good thing, but where this game is concerned, EA should have left things as they were. The developers realized they goofed, though, and the upcoming Undercover game (slated for a Nov. 17 release) looks to bring the series back on track. On its release, we'll replace ProStreet with Undercover in our testing.

ProStreet offers a wide-range of graphics options, allowing you to intricately tweak the game to work on your machine, regardless of what hardware you have. However, even when using maxed out detail settings, the game is still playable enough to complete a reliable benchmarking run, so we take that route. We also enable anisotropic filtering and 4x anti-aliasing.

The 9800 GT's standing hasn't budged at all throughout our tests, so you know exactly what kind of performance to expect in most any game: slightly faster than a 9600 GT and slightly slower than an HD 4850. Pretty simple.

Graphics Card | Best Playable | Avg. FPS
Palit 9800 GX2 1GB | 1920×1200, Max Detail, 4xAA | 111.112 FPS
XFX GTX 260/216 896MB | 1920×1200, Max Detail, 4xAA | 94.916 FPS
Palit GTX 280 1GB | 1920×1200, Max Detail, 4xAA | 93.939 FPS
Palit HD 4870 512MB | 1920×1200, Max Detail, 4xAA | 81.253 FPS
ASUS 9800 GTX+ 512MB | 1920×1200, Max Detail, 4xAA | 70.844 FPS
ASUS 9800 GTX 512MB | 1920×1200, Max Detail, 4xAA | 66.830 FPS
ASUS HD 4850 512MB | 1920×1200, Max Detail, 4xAA | 64.861 FPS
ASUS 9800 GT 512MB | 1920×1200, Max Detail, 4xAA | 55.853 FPS
Gigabyte 9600 GT 512MB | 1920×1200, Max Detail, 4xAA | 52.189 FPS

This game proves to be the only one in our roundup that’s playable at max detail with every card in our collection. The faster the card, the smoother the gameplay. Though for most people, it might actually be quite difficult to even tell the difference between the lowest and the highest results in real-world gameplay.

Futuremark 3DMark Vantage

Although we generally shun automated gaming benchmarks, we do like to run at least one to see how our GPUs scale when used in a ‘timedemo’-type scenario. Futuremark’s 3DMark Vantage is without question the best such test on the market, and it’s a joy to use, and watch. The folks at Futuremark are experts in what they do, and they really know how to push that hardware of yours to its limit.

The company first started out as MadOnion and released a GPU-benchmarking tool called XLR8R, which was soon replaced with 3DMark 99. Since that time, we’ve seen seven different versions of the software, including two major updates (3DMark 99 Max, 3DMark 2001 SE). With each new release, the graphics get better, the capabilities get better and the sudden hit of ambition to get down and dirty with overclocking comes at you fast.

Similar to a real game, 3DMark Vantage offers many configuration options, although many (including us) prefer to stick to the profiles which include Performance, High and Extreme. Depending on which one you choose, the graphic options are tweaked accordingly, as well as the resolution. As you’d expect, the better the profile, the more intensive the test.

Performance is the stock mode that most use when benchmarking, but it only uses a resolution of 1280×1024, which isn’t representative of today’s gamers. Extreme is more appropriate, as it runs at 1920×1200 and does well to push any single or multi-GPU configuration currently on the market – and will do so for some time to come.

According to 3DMark, the 9800 GT has far better capabilities than the 9600 GT, although it wasn’t exactly represented that way in our real-world tests. Still, performance here scales just as we’d expect.

Overclocking, Temperatures

Before tackling our overclocking results, let’s first clear up what we consider to be a real overclock and how we go about achieving it. If you’ve read our processor reviews, you might already be aware that I personally don’t care for an unstable overclock. It might look good on paper, but if it’s not stable, then it won’t be used. Very few people purchase a new GPU for the sole purpose of finding the maximum overclock, which is why we focus on finding what’s stable and usable.

To find the max stable overclock on an NVIDIA card, we use the latest available version of RivaTuner, which allows us to push clocks to heights that are in no way stable – a good thing, since the tool itself won't hold us back.

Once we find what we feel could be a stable overclock, the card is put through the stress of dealing with 3DMark Vantage’s “Extreme” test, looped three times. Although previous versions of 3DMark offered the ability to loop the test infinitely, Vantage for some reason doesn’t. It’s too bad, as it would be the ideal GPU-stress test.

If no artifacts or performance issues arise, we continue to test the card in multiple games from our test suite, at their maximum available resolutions and settings that the card is capable of handling. If no issues arise during our real-world gameplay, we can consider the overclock to be stable and then proceed with testing.

Overclocking ASUS’ GeForce 9800 GT 512MB Matrix

One of ASUS’ selling points on this card is their ‘iTracker’ software, which is essentially a tuning application that lets you increase temperature thresholds, clocks and even voltages. The overall design is very clunky, but once you use it for a few minutes and understand where things are located and how to increase the clocks, it’s easier to deal with. I’m not that impressed by the actual GUI, either, but that’s a personal opinion and may vary from person to person. They are definitely targeting the gamer, and that’s understandable.

One part of the clunkiness is that it's difficult to set your own values. Once you finally figure out how to increase them, it's almost impossible to land exactly where you'd like. The slider bars don't change a value by 1, but rather by 3 or 5 or thereabouts, so as far as I can tell, it's impossible to get the slider to sit right on certain numbers. The GPU voltage, for example, I wanted to set to 1300 (1.30v), but it would hit either 1298 or 1303. Not a huge deal, but a little frustrating.

Overall though, the tool works, and it works well. I still prefer the ease-of-use with RivaTuner, but that tool doesn’t allow adjustments of voltages, whereas iTracker does. Increasing the voltages does help, but I really wouldn’t recommend anyone spending too much time with it.

That all said, while the default clocks on this card are 612MHz Core, 1500MHz Shader and 900MHz Memory, our max stable overclock was 720MHz Core, 1750MHz Shader and 900MHz Memory. The memory is easily the most difficult thing to overclock here, and it would even crash at 925MHz, so I didn’t bother tweaking it too much.
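
For perspective, the arithmetic on those clocks works out to roughly a 17.6% core and 16.7% shader increase, with the memory left untouched; a quick sketch of the math:

```python
# Rough percentage gains from the stock vs. max stable clocks quoted above.
stock = {"core": 612, "shader": 1500, "memory": 900}
overclock = {"core": 720, "shader": 1750, "memory": 900}

for domain in stock:
    gain = (overclock[domain] - stock[domain]) / stock[domain] * 100
    print(f"{domain}: +{gain:.1f}%")  # core: +17.6%, shader: +16.7%, memory: +0.0%
```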

With the new clocks, performance is definitely increased, but not by a great margin. It begs the question of why overclocking this card would even be important, and the answer is simple… it isn’t. There is really no reason to worry about overclocking whatsoever. It will just heat the card up more and offer minimal gains.

GPU Temperatures

Regardless of whether or not you plan to overclock, having reasonable system temperatures is always welcomed. Not only will your machine be more reliable with cooler temps, it will likewise not add any unneeded heat to the room you are in (unless it happens to be wintertime and you keep the windows open, then it might be a good thing).

To test a GPU for idle and load temps, we do a couple of things. First, with the test system turned off for a period of at least ten minutes, we measure the room temperature using a Type-K thermometer sensitive to 0.1°F. The result from this is placed beside the GPU's name in the graph below. Since we don't test in a temperature-controlled environment, the room temp can vary by a few degrees, which is why we include the information here.

Once the room temp is captured, the test system is booted up and left idle for ten minutes, at which point GPU-Z is loaded up to grab the current GPU Core temperature. Then, a full run of 3DMark Vantage is performed to help warm the card up, followed by another run of the same benchmark using the Extreme mode (1920×1200). Once the test is completed, we refer to the GPU-Z log file to find the maximum temperature hit. Please note that this is not an average. Even if the highest point was only hit once, it's what we keep as a result.
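
Pulling the peak temperature out of that log can be scripted rather than eyeballed. Below is a minimal sketch that assumes a comma-separated sensor log with a temperature column; the actual column header in a GPU-Z log may differ, so adjust the name accordingly.

```python
import csv

# Minimal sketch: find the peak GPU core temperature in a sensor log.
# Assumes a hypothetical comma-separated log whose temperature column contains
# "GPU Temperature" in its header; adjust to match the real log's header.

def max_gpu_temp(log_path, column_hint="GPU Temperature"):
    peak = None
    with open(log_path, newline="") as log:
        for row in csv.DictReader(log):
            for key, value in row.items():
                if not key or column_hint not in key:
                    continue
                try:
                    temp = float(value)
                except (TypeError, ValueError):
                    continue
                peak = temp if peak is None else max(peak, temp)
    return peak

print(max_gpu_temp("gpu_sensor_log.csv"))
```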

What’s nice to see is that the faster 9800 GT runs cooler than the 9600 GT, thanks to ASUS’ robust cooler. At full load, the card is definitely audible (not a huge whine, but still noticeable), but the temps were definitely kept in check.

Power Consumption, Final Thoughts

In the age where anyone can appreciate good power efficiency, it’s almost upsetting to see how much wattage any graphics card manages to pull from our walls. Even the lowest-end models don’t seem too impressive when compared to the power efficiency of a CPU, but that’s how it is, at least right now. It’s interesting to see how different GPUs compare in this regard, as some might perform better than others, but use less power, like we normally see with a shift to a smaller process node.

To help see what kind of wattage a given GPU eats on average, we use a Kill-A-Watt that's plugged into a power bar that's in turn plugged into one of the wall sockets, with the test system plugged directly into that. The monitor and other components are plugged into the other socket and are not connected to the Kill-A-Watt. For our system specifications, please refer to our methodology page.

Like our temperature testing, the computer is booted up and left idle for ten minutes, at which point the current wattage reading is recorded. To test for full load wattage, 3DMark Vantage is again loaded up and run at the "Extreme" setting. The space flight test is used here, with the result recorded during a specific sequence of that run which seems to stress the GPU the most.

With default clocks, the power consumption scales with the 9600 GT, and falls right behind the HD 4850. The included iTracker software allows you to go into a power-saving mode that will decrease the power even further when the GPU isn’t being used for 3D graphics. With stock speeds, our ‘idle’ wattage was 183W, while with the power-saving enabled, it was 165W.

Final Thoughts

From a value standpoint, the 9800 GT is a great card. Most such cards hover around the $100 area, which makes sense given that many 9600 GTs can be had for around $80. The performance differences between the two aren't as major as you'd expect given the much larger number of cores, but it's a good boost nonetheless.

Beyond the 9800 GT comes the HD 4850 for an additional $30, and for an average of $20 more than that, we get the even faster 9800 GTX+. There are so many models on the market right now that the FPS/$ ratio is very good, and no matter what card you pick up, you're generally going to get what you pay for.

In the case of this particular card, I’d have to say it’s not worth the cost-of-entry. It currently retails for $149.99, and for almost the same price ($160), you could score a much faster 9800 GTX+. If you happen to take advantage of mail-in rebates, then this card will end up costing you $129.99, which makes it a little bit easier to stomach. But again, there are many more 9800 GT’s available that go even lower. Some, as I mentioned, even go as low as $100.

What ASUS’ card does offer is extremely low noise when at idle. It’s so quiet… you’d swear it was using a passive cooler. Still, given the cost, it almost would have been nice to see an actual passive cooler, because at least then, this card would have a very important redeeming feature.

All in all, the 9800 GT is a good card for the money, and delivers great value. But for even better value, this particular model should be avoided. ASUS themselves offer other 9800 GTs that cost less than this (one is $109.99 after MIR at NewEgg, currently), which, aside from the cooler, have almost identical specs to this one.

Discuss this article in our forums!

Have a comment you wish to make on this article? Recommendations? Criticism? Feel free to head over to our related thread and put your words to our virtual paper! There is no requirement to register in order to respond to these threads, but it sure doesn’t hurt!
