
EVGA GeForce GTS 250 Superclocked

Date: March 3, 2009
Author(s): Rob Williams

The first mid-range offering of NVIDIA’s GeForce 200 series is here, in the form of the GTS 250. As a follow-up to the company’s 9800 GTX+, we already have a good idea of what to expect. But, various improvements aim to make things interesting, such as a redesigned PCB, smaller form-factor, single PCI-E connector, improved temperatures and refreshed pricing.



Introduction

It’s been well over eight months since NVIDIA first launched their GeForce 200 series, and until now, we haven’t seen much from the company in the way of mid-range offerings. The closest we’ve come has been with the GTX 260, but at over $200 (much more at launch), it’s still more than some people will want to spend. We can stop waiting, though, as the GTS 250 makes its entrance now.

The lead-up to this card’s launch has been a little different than most, as rumors and speculation about what the card is all about have been soaring around the Internet for the past few weeks. Some of the points made have been valid, others false, but if anything’s certain, it’s that NVIDIA definitely caught the attention of enthusiasts with this one.

To kick things off, let’s figure out what this card is, and what it isn’t. As the rumors correctly pointed out, the GTS 250 utilizes NVIDIA’s G92b core, the exact same chip used in the 9800 GTX+. It’s for that reason that some have been quick to jab at NVIDIA, since the origin of this GPU is the 65nm G92, first found in the 8800 GT. Realizing that, it becomes a little easier to understand why some are up in arms.

For the most part, I don’t really blame anyone. I’d like to see a brand-new mid-range core as much as the next guy. But for what it’s worth, NVIDIA isn’t trying to hide anything, and they’ve fully disclosed exactly what the GTS 250 is. The important thing to note is that they’re not trying to fool anyone into thinking this is something it’s not, although some may disagree.

Closer Look at NVIDIA’s GeForce GTS 250

What’s going on, then? During a press briefing last week, NVIDIA discussed their desire to “simplify” their product line-up. One example used to explain the situation was the card that the GTS 250 replaces: at a quick glance, a regular consumer may assume that a 9800 GTX+ is much more powerful than, say, a GTX 285, when their performance is in fact on opposite ends of the spectrum. Hence, the need to simplify.

Throughout the coming year, the company will go over their entire current line-up and rename models appropriately, so that when consumers go into a retailer or hop onto an e-tailer, they’ll feel confident in their purchasing decision (although, that would also assume they visited our site first!). Current naming schemes are a real problem, and not only with GPUs. AMD’s own processor model numbers are about to hit the “10K” mark, and Intel hit it not long ago, hence the revised naming scheme for their Core i7 line-up. So, any advance made to simplify product naming is fine by me.

To reiterate, the GTS 250 = 9800 GTX+ in almost all regards, except for a few that I’ll talk about now. First, thanks to a board redesign, NVIDIA has shortened the card by an entire inch, settling in at 9″. The GTX 285, by comparison, is 10.5″, so the smaller body will be appreciated by those with a smaller chassis. Also thanks to this redesign, one power connector was removed, so all you need to power the GTS 250 is a single 6-pin PCI-E connector. Despite these changes, though, the TDP is 5W higher on the GTS 250, at 150W.

Another advantage to the GTS 250 is the increase in memory size, to 1GB. The 9800 GTX+ was limited to 512MB, so without even testing, we can be assured that the newer card will fare a bit better in high-resolution gaming (1920×1200+). That said, there will also be a 512MB version of the GTS 250 available, which is ideal for those who want to pair it with their already-existing 9800 GTX+ for SLI. Since the architecture is the same, SLI will work fine as long as the memory densities are the same.

Finally, pricing for the GTS 250 1GB is $149, while the GTS 250 512MB will debut at $129. The latter is quite similar to current 9800 GTX+ pricing, although some of those can be had for even less after mail-in rebates. The competition the GTS 250 goes after is ATI’s Radeon HD 4850 1GB, a card that, at a recent check, retails for at least $30 more (before mail-in rebates).

| Model | Core MHz | Shader MHz | Mem MHz | Memory | Memory Bus | Stream Proc. |
|-------|----------|------------|---------|--------|------------|--------------|
| GTX 295 | 576 | 1242 | 1000 | 1792MB | 448-bit | 480 |
| GTX 285 | 648 | 1476 | 1242 | 1GB | 512-bit | 240 |
| GTX 280 | 602 | 1296 | 1107 | 1GB | 512-bit | 240 |
| GTX 260/216 | 576 | 1242 | 999 | 896MB | 448-bit | 216 |
| GTX 260 | 576 | 1242 | 999 | 896MB | 448-bit | 192 |
| GTS 250 | 738 | 1836 | 1100 | 1GB | 256-bit | 128 |
| 9800 GX2 | 600 | 1500 | 1000 | 1GB | 512-bit | 256 |
| 9800 GTX+ | 738 | 1836 | 1100 | 512MB | 256-bit | 128 |
| 9800 GTX | 675 | 1688 | 1100 | 512MB | 256-bit | 128 |
| 9800 GT | 600 | 1500 | 900 | 512MB | 256-bit | 112 |
| 9600 GT | 650 | 1625 | 900 | 512MB | 256-bit | 64 |
| 9600 GSO | 550 | 1375 | 800 | 384MB | 192-bit | 96 |

Below, you can see the card’s status report courtesy of the latest version of GPU-Z. Everything here is accurate, aside from the GPU code name and manufacturing process. As already mentioned, this card uses the G92b core, which is built on a 55nm process, not a 65nm one as it appears in the shot. Also seen here are the 128 Shader (or CUDA) processors and 16 ROP units. Overall, the card may be “outdated” by some standards, but it’s still a solid offering for the price, which is the most important thing.

The GTS 250 also features a rather familiar design, similar for the most part to the GTX 200 cards before it, but smaller. One noticeable difference is that the glossiness of the higher-end cards is gone, replaced with a matte surface (which I actually prefer). Also, despite the card’s budget pricing, you are able to hook three of these guys together for Tri-SLI. It would be an odd route to take (as opposed to purchasing a larger card to begin with), but having the option doesn’t hurt.

Taking a look at the opposite side of the card, we can see the lone 6-pin PCI-E connector, and also get a good view of the fan. The cooler design is simple, but pretty efficient for a stock offering. There’s no doubt that GPU vendors will follow up this launch with their own cooler designs, however, which tend to be much better for overclocking and temperatures in general.

You can expect all of the launch GTS 250s to look identical, save for the vendor sticker, but as with our EVGA card here, pre-overclocked models are also sure to be plentiful.

Before we dive into our testing results, one thing I wanted to point out is that while NVIDIA believes the GTS 250’s main competition is ATI’s Radeon HD 4850 1GB, we didn’t have one on hand to use for comparison. Also, due to time constraints, we were unable to put the 512MB variant through our most recent test suite. That matters little at this point, though, since the 512MB card is all but identical to the 9800 GTX+ 512MB, which we do have results for.

So let’s get to it! On the next page, we have our test setup and methodology explained in some depth, and following that, we’ll get right into our Call of Duty: World at War results.

Test System & Methodology

At Techgage, we strive to make sure our results are as accurate as possible. Our testing is rigorous and time-consuming, but we feel the effort is worth it. In an attempt to leave no question unanswered, this page contains not only our testbed specifications, but also a fully-detailed look at how we conduct our testing.

If there is a bit of information that we’ve omitted, or you wish to offer thoughts or suggest changes, please feel free to shoot us an e-mail or post in our forums.

Test System

The below table lists our testing machine’s hardware, which remains unchanged throughout all GPU testing, minus the graphics card. Each card used for comparison is also listed here, along with the driver version used. Each one of the URLs in this table can be clicked to view the respective review of that product, or if a review doesn’t exist, it will bring you to the product on the manufacturer’s website.

| Component | Model |
|-----------|-------|
| Processor | Intel Core i7-965 Extreme Edition – Quad-Core, 3.2GHz, 1.25v |
| Motherboard | ASUS Rampage II Extreme – X58-based, 0903 BIOS (12/31/08) |
| Memory | OCZ Gold PC3-12800 – DDR3-1333 7-7-7-24-1T, 1.60v |
| ATI Graphics | Palit Radeon HD 4870 X2 2GB (Catalyst 8.12 Hotfix); Diamond Radeon HD 4870 1GB (Catalyst 8.12 Hotfix) |
| NVIDIA Graphics | |
| Audio | On-Board Audio |
| Storage | Seagate Barracuda 500GB 7200.11 x 2 |
| Power Supply | Corsair HX1000W |
| Chassis | SilverStone TJ10 Full-Tower |
| Display | Gateway XHD3000 30″ |
| Cooling | Thermalright TRUE Black 120 |
| Et cetera | Windows Vista Ultimate 64-bit |

When preparing our testbeds for any type of performance testing, we follow these guidelines:

To aid our goal of accurate and repeatable results, we prevent certain Windows Vista services from starting up at boot. These services have a tendency to start up in the background without notice, potentially skewing results. Disabling “Windows Search”, for example, turns off the OS’s indexing, which can at times utilize the hard drive and memory more than we’d like; one way such a change could be scripted is sketched below.
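A minimal sketch of that idea, assuming the underlying service name “WSearch” (which is what “Windows Search” runs as on Vista) and an elevated prompt; this is an illustration, not our exact procedure:

```python
import subprocess

# Minimal sketch: prevent the Windows Search indexer from starting at
# boot, then stop it for the current session. "WSearch" is the service
# behind "Windows Search" on Vista; run from an elevated prompt.
subprocess.run(["sc", "config", "WSearch", "start=", "disabled"], check=True)
subprocess.run(["net", "stop", "WSearch"], check=True)

# To restore the default behavior after testing:
#   sc config WSearch start= auto
```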

Game Benchmarks

For graphics card reviews that pit us with a mid-range card or higher, we test at three popular resolutions that span the mid-range to high-end ground, corresponding to monitor sizes of 20″ (1680×1050), 24″ (1920×1200) and 30″ (2560×1600).

In an attempt to offer “real-world” results, we do not utilize timedemos in our graphics card reviews, with the exception of Futuremark’s automated 3DMark Vantage. Each game in our test suite is benchmarked manually, with the minimum and average frames-per-second (FPS) captured with the help of FRAPS 2.9.8.
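For the curious, the math behind those two figures is simple. The sketch below is our own illustration (not FRAPS’s internals); it assumes a log file whose second column holds a cumulative millisecond timestamp per rendered frame, and the file name is hypothetical:

```python
import csv

def fps_stats(frametimes_csv):
    """Average and per-second minimum FPS from a frame-time log."""
    with open(frametimes_csv, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        times = [float(row[1]) for row in reader if row]  # cumulative ms

    # Average FPS: frames elapsed divided by seconds elapsed.
    avg_fps = (len(times) - 1) / ((times[-1] - times[0]) / 1000.0)

    # Minimum FPS: count frames falling into each one-second bucket and
    # take the worst bucket, as a per-second FPS counter would report.
    buckets = {}
    for t in times:
        buckets[int(t // 1000)] = buckets.get(int(t // 1000), 0) + 1
    seconds = sorted(buckets)
    interior = [buckets[s] for s in seconds[1:-1]]  # drop partial edge seconds
    min_fps = min(interior or buckets.values())
    return avg_fps, min_fps

print(fps_stats("relentless_run1.csv"))  # hypothetical file name
```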

To deliver the best overall results, each title we use is exhaustively explored in order to find the best possible level in terms of intensiveness and replayability. Once a level is chosen, we play through it repeatedly to find the best possible route, and then in our official benchmarking, we stick to that route as closely as possible. Since we are not robots and the game can throw in minor twists with each run, no two runs can be identical down to the pixel.

Each game and setting combination is tested twice, and if there is a discrepancy between the initial results, the testing is repeated until we see results we are confident in.
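That sanity check boils down to a relative-difference test along these lines; the 5% tolerance here is an assumption for illustration, not a published threshold:

```python
def needs_retest(run_a_fps, run_b_fps, tolerance=0.05):
    """True when two runs disagree by more than `tolerance` of their mean."""
    mean = (run_a_fps + run_b_fps) / 2.0
    return abs(run_a_fps - run_b_fps) / mean > tolerance

# e.g. 34.5 vs 35.2 FPS agree within 5%, so no third run would be needed.
assert not needs_retest(34.5, 35.2)
```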

The six games we currently use for our GPU reviews are listed below, and the reasoning behind each choice is explained alongside its results.

Crysis Warhead
Call of Duty: World at War
Far Cry 2
Left 4 Dead
Mirror’s Edge
Need for Speed: Undercover

[Settings screens for each title at 1680×1050, 1920×1200 and 2560×1600]

Call of Duty: World at War

While some popular game franchises are struggling to keep themselves healthy, Call of Duty doesn’t have much to worry about. This is Treyarch’s third go at a game in the series, and their first to be featured on the PC. All the worries leading up to this title were for naught, though, as Treyarch delivered on its promises.

To help keep things fresh, CoD: World at War focuses on battles not exhaustively explored in previous WWII-inspired games, including those that took place in the Pacific region, Russia and Berlin. Variety is definitely something this game pulls off well, so it’s likely to keep you on your toes right up until the end.

For our testing, we use a level called “Relentless”, as it’s easily one of the most intensive in the game. It features tanks, a large forest environment and even a few explosions. The level depicts the Battle of Peleliu, where American soldiers advance to capture an airstrip from the Japanese. It’s both exciting to play and capable of bringing even high-end systems to their knees.

It’s not much of a surprise, but the GTS 250 ranked right alongside the 9800 GTX+. Oddly enough, despite having less GDDR3 on hand, the 9800 GTX+ performed a bit better in each configuration. That shouldn’t be taken as fact, though, as different runs may yield slightly different results.

| Graphics Card | Best Playable | Avg. FPS |
|---------------|---------------|----------|
| NVIDIA GTX 295 1792MB x 2 | 2560×1600 – Max Detail, 8xAA | 90.283 FPS |
| NVIDIA GTX 285 1GB x 2 | 2560×1600 – Max Detail, 8xAA | 63.401 FPS |
| Zotac GTX 295 1792MB | 2560×1600 – Max Detail, 8xAA | 52.461 FPS |
| Palit HD 4870 X2 2GB | 2560×1600 – Max Detail, 8xAA | 37.825 FPS |
| EVGA GTX 285 1GB SSC Edition | 2560×1600 – Max Detail, 4xAA | 45.866 FPS |
| Zotac GTX 285 1GB AMP! | 2560×1600 – Max Detail, 4xAA | 43.711 FPS |
| NVIDIA GTX 285 1GB | 2560×1600 – Max Detail, 4xAA | 41.510 FPS |
| Palit GTX 280 1GB | 2560×1600 – Max Detail, 4xAA | 38.192 FPS |
| XFX GTX 260/216 896MB | 2560×1600 – Max Detail, 4xAA | 32.723 FPS |
| EVGA GeForce GTS 250 1GB SC | 2560×1600 – Max Detail, 0xAA | 35.165 FPS |
| ASUS GeForce 9800 GTX+ 512MB | 2560×1600 – Max Detail, 0xAA | 34.596 FPS |
| NVIDIA GeForce GTS 250 1GB | 2560×1600 – Max Detail, 0xAA | 34.192 FPS |
| Diamond HD 4870 1GB | 2560×1600 – Max Detail, 0xAA | 30.372 FPS |
| Sapphire HD 4830 512MB | 1920×1200 – Max Detail, 0xAA | 40.157 FPS |
| Sapphire HD 4670 512MB | 1920×1200 – Max Detail, 0xAA | 28.101 FPS |

Although we saw ample performance in our above graphs, in order to remain fully playable at 2560×1600, we had to remove all traces of anti-aliasing. With that configuration, the game ran quite well, on both the GTS 250 and 9800 GTX+.

Crysis Warhead

As PC enthusiasts, we tend to be drawn to games that offer spectacular graphics… titles that help reaffirm your belief that shelling out lots of cash for that high-end monitor and PC was well worth it. But it’s rare for a game to come along that is so visually demanding, it’s unable to run fully maxed out on even the highest-end systems on the market. In the case of the original Crysis, it’s easy to see that’s what Crytek was going for.

Funny enough, even though Crysis was released well over a year ago, the game today still has difficulty running at 2560×1600 with full detail settings – and that’s without even touching anti-aliasing! Luckily, Warhead is better optimized and will run smoother on almost any GPU, despite looking just as gorgeous as its predecessor, as you can see in the screenshot below.

The game includes four basic profiles to help you adjust the settings to your system: Entry, Mainstream, Gamer and Enthusiast – the latter being for the beefiest systems out there, unless you have a sweet graphics card and are only running 1680×1050. We run our tests at the Gamer setting, as it’s very demanding on any current GPU and is a proper baseline for the level of detail that hardcore gamers would demand from the game.

Once again, the GTS 250 performed as we expected it to, but neither it nor the 9800 GTX+ could handle the Gamer profile well at resolutions above 1680×1050. For those, we had to drop down to Mainstream, as seen below:

| Graphics Card | Best Playable | Avg. FPS |
|---------------|---------------|----------|
| NVIDIA GTX 295 1792MB x 2 | 2560×1600 – Enthusiast, 0xAA | 42.507 FPS |
| NVIDIA GTX 285 1GB x 2 | 2560×1600 – Gamer, 0xAA | 45.835 FPS |
| Zotac GTX 295 1792MB | 2560×1600 – Gamer, 0xAA | 37.97 FPS |
| EVGA GTX 285 1GB SSC Edition | 2560×1600 – Mainstream, 0xAA | 54.551 FPS |
| Zotac GTX 285 1GB AMP! | 2560×1600 – Mainstream, 0xAA | 53.308 FPS |
| NVIDIA GTX 285 1GB | 2560×1600 – Mainstream, 0xAA | 51.283 FPS |
| Palit GTX 280 1GB | 2560×1600 – Mainstream, 0xAA | 46.912 FPS |
| XFX GTX 260/216 896MB | 2560×1600 – Mainstream, 0xAA | 40.750 FPS |
| EVGA GeForce GTS 250 1GB SC | 2560×1600 – Mainstream, 0xAA | 35.305 FPS |
| ASUS GeForce 9800 GTX+ 512MB | 2560×1600 – Mainstream, 0xAA | 34.735 FPS |
| NVIDIA GeForce GTS 250 1GB | 2560×1600 – Mainstream, 0xAA | 34.327 FPS |
| Diamond HD 4870 1GB | 2560×1600 – Mainstream, 0xAA | 33.849 FPS |
| Palit HD 4870 X2 2GB | 2560×1600 – Mainstream, 0xAA | 30.670 FPS |
| Sapphire HD 4830 512MB | 1920×1200 – Mainstream, 0xAA | 37.051 FPS |
| Sapphire HD 4670 512MB | 1920×1200 – Mainstream, 0xAA | 25.175 FPS |

With the Mainstream profile selected, we get slightly better performance than what we saw at 1680×1050 with the Gamer profile. Luckily enough, the Mainstream profile still looks fantastic, and for it to run so well on both the GTS 250 and 9800 GTX+ is great. Cheap gaming doesn’t have to feel cheap, that’s for sure.

Far Cry 2

Sequels are common, and three of our six games here prove it. What’s different with Far Cry 2 is that while the other sequels generally give you what you’d expect to see when you first load them up, this game does the absolute opposite. We knew for months that Far Cry 2 wasn’t going to be a direct continuation of the original, but for the most part, this game could have gone by any other name and no one would have made the connection. Luckily for Ubisoft, though, the game can still be great fun.

Like the original, this game is a first-person shooter that offers open-ended gameplay, similar to S.T.A.L.K.E.R. You roam the huge 50 km² map of a central African state, mostly by vehicle, as walking even 2% of it in any direction gets very tedious after a while. The game is a perfect GPU benchmark simply because its graphics are well above average, with huge draw distances, realistic nature and even a slew of animals to pass by (and kill, if you’re evil enough).

Our run-through takes place in the Shwasana region, and consists of leaving a small hut and walking towards four people prepared to kill me for no apparent reason (except that this is a game). After the opponents are eliminated, a walk along the dirt road continues for another twenty seconds, until we reach a small hut with supplies.

If I found out one thing while benchmarking the GTS 250, it’s that Far Cry 2 is brutal on lower-end cards. Like Crysis, our 1680×1050 setting was, for the most part, smooth, but anything higher brought things to a crawl. Despite the average FPS being higher than 30 at 1920×1200, it was simply unplayable, with not-so-rare lag spikes.

| Graphics Card | Best Playable | Avg. FPS |
|---------------|---------------|----------|
| NVIDIA GTX 285 1GB x 2 | 2560×1600 – Max Detail, 8xAA | 46.502 FPS |
| NVIDIA GTX 295 1792MB x 2 | 2560×1600 – Max Detail, 4xAA | 88.608 FPS |
| Zotac GTX 295 1792MB | 2560×1600 – Max Detail, 4xAA | 55.951 FPS |
| Palit HD 4870 X2 2GB | 2560×1600 – Max Detail, 4xAA | 43.600 FPS |
| Diamond HD 4870 1GB | 2560×1600 – Max Detail, 4xAA | 41.777 FPS |
| EVGA GTX 285 1GB SSC Edition | 2560×1600 – Max Detail, 4xAA | 41.712 FPS |
| Zotac GTX 285 1GB AMP! | 2560×1600 – Max Detail, 4xAA | 40.375 FPS |
| NVIDIA GTX 285 1GB | 2560×1600 – Max Detail, 4xAA | 37.785 FPS |
| Palit GTX 280 1GB | 2560×1600 – Max Detail, 0xAA | 43.460 FPS |
| XFX GTX 260/216 896MB | 2560×1600 – Max Detail, 0xAA | 38.527 FPS |
| ASUS GeForce 9800 GTX+ 512MB | 2560×1600 – Max Detail, 0xAA | 34.735 FPS |
| EVGA GeForce GTS 250 1GB SC | 2560×1600 – Max Detail, 0xAA | 32.659 FPS |
| NVIDIA GeForce GTS 250 1GB | 2560×1600 – Max Detail, 0xAA | 31.521 FPS |
| Sapphire HD 4830 512MB | 1920×1200 – Max Detail, 0xAA | 38.323 FPS |
| Sapphire HD 4670 512MB | 1920×1200 – Max Detail, 0xAA | 28.819 FPS |

Not surprisingly, we were forced to drop AA in order to achieve better performance, and drop it we did. With it gone, we could run the game at 2560×1600 just fine. There were occasions where the game would stick for well under a second, but they were spread out and not all that annoying.

Left 4 Dead

Not too many game publishers can brag about a track record like Valve’s. None of their major games has ever launched to anything but praise, which goes to show that not rushing a game out the door to please investors can make a huge difference. Take Half-Life 2, Team Fortress 2 and Portal, for example.

Left 4 Dead is one game I didn’t take seriously up until its launch. After playing it, though, my opinion changed drastically, and even as I type this, I feel like saving the document and going to play. But, I’m also scared of zombies, so continue writing I shall. Like Dead Space, this is a survival shooter, but unlike that game, this title focuses completely on co-op. For the most part, the game is dull in single-player, but team up with three of your friends and let the laughs and excitement begin.

The portion of the level we use for testing is contained within the No Mercy campaign, the ultimate goal of which is to make it to the top of a hospital in order to be picked up and brought off to safety. Our run-through takes place in the final part of the campaign, which leads up towards the rooftops. If one thing can be said about this title, it’s that causing a Boomer to explode (as seen in the above screenshot) proves to be one of the most satisfying things to do in any game I’ve played in a while.

Valve releasing games that both look great and run well on most machines is nothing new, and Left 4 Dead is the latest in their collection able to brag about such a thing. That said, both the GTS 250 and 9800 GTX+ could handle the game well with all of our configurations.

| Graphics Card | Best Playable | Avg. FPS |
|---------------|---------------|----------|
| NVIDIA GTX 295 1792MB x 2 | 2560×1600 – Max Detail, 8xMSAA | 117.701 FPS |
| Palit HD 4870 X2 2GB | 2560×1600 – Max Detail, 8xMSAA | 117.039 FPS |
| NVIDIA GTX 285 1GB x 2 | 2560×1600 – Max Detail, 8xMSAA | 109.491 FPS |
| Zotac GTX 295 1792MB | 2560×1600 – Max Detail, 8xMSAA | 102.422 FPS |
| EVGA GTX 285 1GB SSC Edition | 2560×1600 – Max Detail, 8xMSAA | 86.831 FPS |
| Zotac GTX 285 1GB AMP! | 2560×1600 – Max Detail, 8xMSAA | 73.075 FPS |
| NVIDIA GTX 285 1GB | 2560×1600 – Max Detail, 8xMSAA | 72.072 FPS |
| Palit GTX 280 1GB | 2560×1600 – Max Detail, 8xMSAA | 66.775 FPS |
| Diamond HD 4870 1GB | 2560×1600 – Max Detail, 8xMSAA | 66.294 FPS |
| XFX GTX 260/216 896MB | 2560×1600 – Max Detail, 8xMSAA | 56.608 FPS |
| EVGA GeForce GTS 250 1GB SC | 2560×1600 – Max Detail, 8xMSAA | 47.621 FPS |
| NVIDIA GeForce GTS 250 1GB | 2560×1600 – Max Detail, 8xMSAA | 47.142 FPS |
| ASUS GeForce 9800 GTX+ 512MB | 2560×1600 – Max Detail, 4xMSAA | 62.571 FPS |
| Sapphire HD 4830 512MB | 2560×1600 – Max Detail, 4xMSAA | 48.612 FPS |
| Sapphire HD 4670 512MB | 2560×1600 – Max Detail, 0xAA | 39.770 FPS |

Like the vast majority of our cards here, the GTS 250 could still handle the game with 8xMSAA enabled. This produced no visible lag during my tests, and was rather impressive overall. The 512MB limitation of the 9800 GTX+ became a little clearer here, as its performance with 8xMSAA was not so good. The resulting FPS was similar to the GTS 250’s, but actually playing the game at that setting told a far different story, with frequent lag… it was simply undesirable to play.

Mirror’s Edge

What was the last first-person game on the PC to truly blow you away, or offer a unique gameplay experience? New first-person shooters come out quite often, and while some show off new features and gameplay twists, few of them truly reinvigorate the genre like we’d hope. Mirror’s Edge is a title that strived to do just that, and for the most part, I’d have to say DICE did a great job.

First and foremost, Mirror’s Edge isn’t so much a first-person shooter as it is a first-person adventure game, because for the most part, combat isn’t the main focus. In some of the few levels I played through, a full ten minutes could pass without seeing a single person, which is actually somewhat refreshing. The game focuses on figuring out the best way to get from point A to point B, heavily utilizing the parkour style of travel.

Most levels in Mirror’s Edge offer a similar level of system intensity, so I based our choice on one that was fun to play through and allowed an easily-replicable run-through. It takes place in chapter six, “Pirandello Kruger”, at Checkpoint A. We begin in a large building, behind a window, looking out at the city. Our run-through takes us out of this building, down to the street, and up to the top of the building shown to the right in the above screenshot.

The GTS 250 continued to perform well here, with respectable numbers at all resolutions. The 512MB limitation of the 9800 GTX+ became clear once again, with a loss of 7 FPS at 2560×1600 compared to the GTS 250.

Mirror’s Edge – PhysX Testing

If there’s one title that’s been burned into editors’ brains over the course of the past few months, it’s this one. NVIDIA has been quite proactive in making sure we know how great the game is, and with its heavy use of PhysX, it’s not hard to understand why they believe that. Luckily, as I mentioned above, the game is actually quite fun, and unique, so I think it deserves to be pushed a little.

Since Mirror’s Edge is really the first commercial game to feature PhysX use throughout, I thought it’d be appropriate to test each card with the technology enabled, since it’s generally something people will want. Bear in mind, though, that ATI cards are automatic losers here, simply because they are unable to accelerate PhysX on the GPU like NVIDIA’s cards can. For that reason, they’re unable to handle the PhysX computation reliably at any resolution, regardless of the CPU. Using an old-school dedicated PhysX card would get around this problem, however.

I was quite surprised here. Despite having already pushed the GTS 250 to what I thought was its furthest limit in Mirror’s Edge at 2560×1600, enabling PhysX still kept the game playable. Gameplay wasn’t as smooth with it enabled, despite only a few frames being dropped (it felt like more), but the performance I did see was acceptable.

| Graphics Card | Best Playable | Avg. FPS |
|---------------|---------------|----------|
| NVIDIA GTX 295 1792MB x 2 | 2560×1600 – Max Detail, 8xAA | 118.680 FPS |
| NVIDIA GTX 285 1GB x 2 | 2560×1600 – Max Detail, 8xAA | 88.346 FPS |
| Zotac GTX 295 1792MB | 2560×1600 – Max Detail, 8xAA | 70.562 FPS |
| EVGA GTX 285 1GB SSC Edition | 2560×1600 – Max Detail, 8xAA | 52.316 FPS |
| Zotac GTX 285 1GB AMP! | 2560×1600 – Max Detail, 8xAA | 51.733 FPS |
| NVIDIA GTX 285 1GB | 2560×1600 – Max Detail, 8xAA | 48.385 FPS |
| Palit GTX 280 1GB | 2560×1600 – Max Detail, 8xAA | 44.806 FPS |
| Diamond HD 4870 1GB | 2560×1600 – Max Detail, 8xAA | 41.452 FPS |
| XFX GTX 260/216 896MB | 2560×1600 – Max Detail, 8xAA | 38.122 FPS |
| Palit HD 4870 X2 2GB | 2560×1600 – Max Detail, 8xAA | 35.297 FPS |
| EVGA GeForce GTS 250 1GB SC | 2560×1600 – Max Detail, 4xAA | 36.956 FPS |
| NVIDIA GeForce GTS 250 1GB | 2560×1600 – Max Detail, 4xAA | 35.756 FPS |
| Sapphire HD 4830 512MB | 2560×1600 – Max Detail, 4xAA | 32.589 FPS |
| ASUS GeForce 9800 GTX+ 512MB | 2560×1600 – Max Detail, 0xAA | 46.250 FPS |
| Sapphire HD 4670 512MB | 1920×1200 – Max Detail, 0xAA | 39.204 FPS |

Ultimately, the GTS 250’s best playable settings mimic the above graph, with 4xAA enabled and PhysX disabled. The 9800 GTX+ had to drop anti-aliasing entirely, at which point it became much more playable.

Need for Speed: Undercover

The Need for Speed series is one that remains close to my heart, as I’ve played through each title since the second. Although the series has taken some strange turns recently, it still manages to deliver a great arcade-like experience that can be enjoyed by NFS die-hards and casual gamers alike. Sadly, more serious racing fans have had to look elsewhere lately, so hopefully the next NFS incarnation will finally perfect what those fans are really looking for.

While ProStreet diverted from the usual “open-world” design, Undercover returns to it. Also returning are police cars, a favorite of most fans. I’m a firm believer that most NFS titles should include police chases, and for the most part, they’re executed well in Undercover. Not much in this world proves more frustrating than running over a spike strip after a clean 30-minute run, though.

For all of our tests, the available graphics settings are maxed out, with 4xAA as our chosen anti-aliasing setting.

This particular game loves NVIDIA cards, as evidenced by the fact that this is the first set of graphs showing the GTS 250 and 9800 GTX+ performing better than higher-end ATI cards. There’s great performance across the board, though, from all cards.

As mentioned in previous GPU content, due to a bug in NVIDIA’s drivers, 2560×1600 is not a selectable resolution on our Gateway XHD3000 display in a very small number of games, NFS Undercover being one of them. I was recently passed an alpha driver that enables that resolution, but it doesn’t include a profile for the GTS 250, so I was unable to test the card there. Once NVIDIA finalizes the revised driver, I’ll update the entire graph with fresh results.

Futuremark 3DMark Vantage

Although we generally shun automated gaming benchmarks, we do like to run at least one to see how our GPUs scale in a ‘timedemo’-type scenario. Futuremark’s 3DMark Vantage is without question the best such test on the market, and it’s a joy to use and watch. The folks at Futuremark are experts at what they do, and they really know how to push that hardware of yours to its limit.

The company started out as MadOnion with a GPU-benchmarking tool called XLR8R, which was soon replaced with 3DMark 99. Since then, we’ve seen seven different versions of the software, including two major updates (3DMark 99 Max, 3DMark 2001 SE). With each new release, the graphics get better, the capabilities grow, and the sudden urge to get down and dirty with overclocking comes at you fast.

Similar to a real game, 3DMark Vantage offers many configuration options, although many (including us) prefer to stick to the preset profiles: Performance, High and Extreme. Depending on which one you choose, the graphics options and resolution are tweaked accordingly. As you’d expect, the higher the profile, the more intensive the test.

Performance is the stock mode most use when benchmarking, but it runs at only 1280×1024, which isn’t representative of today’s gamers. Extreme is more appropriate, as it runs at 1920×1200 and does well to push any single or multi-GPU configuration currently on the market – and will do so for some time to come.

Given the known scalability of 3DMark Vantage, there’s absolutely nothing unexpected here.

Final Thoughts

So, was NVIDIA’s GTS 250 worth the wait? That’s of course a comedic question, because as mentioned earlier, the company’s latest card is little different from the 9800 GTX+, which has been available since last summer. That’s important to point out, because if you already own a decent GPU that you purchased within the past year, the GTS 250 is likely not for you. It would be much more worthwhile to go with something a little beefier, such as the GTX 260/216.

But while the GTS 250 doesn’t bring anything entirely new to the table, it’s not a waste of time, either. After all, the revamped PCB allows for a smaller form-factor, one power connector has been cut, temperatures are lower and memory is doubled – not to mention the revised price. It’s a solid addition to NVIDIA’s line-up, but it’s still too bad that we didn’t see a truly current-gen product, such as a scaled-down GTX 260/216.

As mentioned in our last few graphics card reviews, we’re still in the process of re-benchmarking our collection of current GPUs, so we’re missing a few notable ones here. I regret not being able to at least include the HD 4850 512MB for comparison’s sake, but time constraints got the better of me. I’m confident that the HD 4850 is quite comparable to the GTS 250, though, just as it was to the original 9800 GTX+.

The fact is, the 9800 GTX+ is still a great card, and as seen throughout our test results, the only real gain from having 1GB of memory on board came at higher resolutions. But that point is rather moot, because anyone with the desire to game at 2560×1600 isn’t likely considering a $150 GPU. At least I’d hope not!

Before we wrap up, let’s discuss power and temperatures. First, the power. Our system with EVGA’s pre-overclocked GTS 250 idled at a comfortable 169W and hit a max load of 324W. With the 9800 GTX+, by comparison, it idled at 195W and hit a max load of 313W. Odd results, as each card had the advantage in one measure, but at least with the GTS 250, we’re down to one power connector.

Where temperatures were concerned, things improved even further with the GTS 250. While both cards hit a load temperature of 76°C, the GTS 250 idled 7°C cooler. That might not seem too impressive, but bear in mind that the 9800 GTX+ was using a robust third-party cooler (ASUS Dark Knight), so for the GTS 250 to hit the same load temperature with a reference design is great.

Aside from the card itself, NVIDIA of course touts many other features on the side, including their PhysX technology, their CUDA platform and their 3D Vision gaming peripheral. Whether you consider it a good or bad thing, NVIDIA has really been on the ball with cool technologies lately, and I personally find quite a bit of value in each of these. PhysX will become more appreciated as time goes on, but CUDA has some cool uses now (such as video encoding), and will also (hopefully) improve a lot more over the course of this year.

You can expect us to take a continued look at each of NVIDIA’s extra technologies as time passes. In particular, I’d like to evaluate CUDA-based applications a bit more, since I haven’t spent much time with them in the past. I tried to gather some opinions on Badaboom for this article, but I quickly realized that much more time will be needed before I can draw solid conclusions.

That aside, although article embargoes lift today, the cards will not be released until next week (March 10). At that time, the 512MB version of the card will be available for $129, while the 1GB model we took a look at here today will retail for $149.

Discuss this article in our forums!

Have a comment you wish to make on this article? Recommendations? Criticism? Feel free to head over to our related thread and put your words to our virtual paper! There is no requirement to register in order to respond to these threads, but it sure doesn’t hurt!
