
NVIDIA GeForce GTX 480 – GF100 Has Landed

Date: March 29, 2010
Author(s): Rob Williams

We’ve learned a lot about NVIDIA’s GF100 (Fermi) architecture over the past year, and after what seemed like an eternal wait, the company has officially announced the first two cards in the series: the GeForce GTX 470 and GTX 480. To start, we’re taking a look at the latter, so read on to see if GF100 was worth the wait.



Introduction

The past six months have been rather difficult for NVIDIA and consumers alike, as both sides have been eagerly awaiting the launch of the company’s first GF100-based (Fermi) graphics cards. The long wait is over, though, and AMD’s Radeon HD 5000 series finally has some competition. Given the extra time NVIDIA spent on GF100, can we expect its latest releases to give the HD 5000 series a run for its money?

That question will of course be answered over the course of this article. The first cards to be launched as part of the GF100 series are the GeForce GTX 470 and GTX 480. The latter comes in at $499, placing it $100 above the HD 5870. The GTX 470 is still considered a higher-end offering, but is more modestly priced at $349. Availability is expected to begin the week of April 12, with “tens of thousands” of cards being shipped out to retailers by then.

The road to GF100 has of course been a rough one, with NVIDIA being hit with one issue after another. The biggest hit has been the fact that ATI beat the company to the punch by a full six months, something that no doubt drives some of the company’s execs up the wall. Another rather significant issue has been yields, which by the looks of things, actually could still be an issue (something I’ll talk about later).

When NVIDIA first announced its GF100 architecture last fall at the company’s own GPU Technology Conference, there was an undeniable focus on having the GPU act as a CPU in many different regards. Given that this conference is more developer-focused, NVIDIA set out to prove the effect that CUDA could have not only in gaming, but in applications as well.

Since that time, the company has seemingly stepped back from pushing CUDA quite so hard, and in the past couple of weeks has shifted its focus back to the pure gaming aspect. During a briefing almost two weeks ago, the company made almost no mention of CUDA or GPGPU, but rather talked mostly about the GTX 480’s excellent gaming performance – especially where DirectX 11’s tessellation feature is concerned.

NVIDIA's GeForce GTX 480

We took a thorough look at GF100 a couple of months ago, and I recommend reading through that article if you want to see all of what Fermi brings to the table. I won’t rehash what was said there, but to put it simply, NVIDIA touts GF100 as being the ultimate gaming architecture, and one that can increase computational performance by as much as 3.5x (this can include CUDA and physics). The company also stresses its dominance where tessellation is concerned, as mentioned above.

As you would expect with a product such as GF100, which is based on a revamped and rethought architecture, the improvements NVIDIA brings are going to make its aging GT 200 architecture look out-of-date. We have far better capabilities all-around, and will see healthy competition between it and AMD’s own offerings.

One of the easiest ways to compare one generation to the next is to take a look at all of the company’s current models in a table, so we’ve provided that below. You can see that the GTX 480 (which NVIDIA compares mostly to the GTX 285) has twice the number of CUDA cores, and also features higher clock speeds. Its memory bus has been decreased from 512-bit to 384-bit, but the overall memory capacity has received a boost from 1GB to 1.5GB.

| Model | Core MHz | Shader MHz | Mem MHz | Memory | Bus Width | Processors |
|---|---|---|---|---|---|---|
| GeForce GTX 480 | 700 | 1401 | 924 | 1536MB | 384-bit | 480 |
| GeForce GTX 470 | 607 | 1215 | 837 | 1280MB | 320-bit | 448 |
| GeForce GTX 295 | 576 | 1242 | 1000 | 1792MB | 448-bit | 480 |
| GeForce GTX 285 | 648 | 1476 | 1242 | 1GB | 512-bit | 240 |
| GeForce GTX 275 | 633 | 1404 | 1134 | 896MB | 448-bit | 240 |
| GeForce GTX 260 | 576 | 1242 | 999 | 896MB | 448-bit | 216 |
| GeForce GTS 250 | 738 | 1836 | 1100 | 1GB | 256-bit | 128 |
| GeForce GT 240 | 550 | 1340 | 1700 | 512MB – 1GB | 128-bit | 96 |
| GeForce GT 220 | 625 | 1360 | 790 | 1GB | 128-bit | 48 |
| GeForce 210 | 589 | 1402 | 790 | 512MB | 64-bit | 16 |

By specs alone, it’s not hard to understand that the GTX 480 should blow the GTX 285 out of the water, and even possibly the HD 5870. To see the differences between the GTX 480 and GTX 470 in more detail, I’ve grabbed a chart NVIDIA provided to us in our press kit, since it sums it all up nicely.

NVIDIA's GeForce GTX 480

The GTX 470 is scaled down just as you would expect a $349 card to be next to a $499 one. It includes fewer CUDA cores, ROP units and texture units, and also has its clock speeds reduced. It also includes less memory, at 1.25GB. Like the lower-end cards of the last generation (210, GT 220 and GT 240), GF100 cards are built on a 40nm process.

As the process shrinks, so does the physical size of the die, at least as long as the transistor count doesn’t drastically increase from one generation to the next. That isn’t the case here. In fact, I almost wonder if GF100 sets a record, with its 3.2 billion (!) transistors resulting in a die size of 529mm². Compare that to the 2.15 billion (334mm²) of AMD’s Radeon HD 5870, or the 1.17 billion (258mm²) of Intel’s six-core Core i7-980X Extreme Edition!

NVIDIA's GeForce GTX 480

Unfortunately, the screws on the back of my particular sample refused to cooperate, so to avoid stripping them, I am falling back on the makeshift graphic above to show the size difference between the HD 5870 die and the die on the GTX 480. As you can see, one is certainly larger than the other… 58% larger in area, to be precise.
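
For the curious, the math behind that figure is a simple ratio of the die areas quoted above; here’s a throwaway Python sanity check:

```python
# Die areas in mm^2, as quoted above.
gf100 = 529.0    # NVIDIA GF100 (GTX 480)
cypress = 334.0  # AMD Cypress (HD 5870)

ratio = gf100 / cypress
print(f"GF100 is {ratio:.2f}x the area of Cypress, "
      f"or {(ratio - 1) * 100:.0f}% larger")  # -> 1.58x, ~58% larger
```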

Before we get into performance testing, let’s first take a deeper look at the card itself, including its design and features.

Closer Look at the GTX 480

Over the past couple of years, we’ve begun to see both ATI and NVIDIA take more care in designing their respective GPU coolers, and today, both deliver some good ones. In the past, cost used to be a major concern, but today, GPUs tend to cost just a wee bit less than they did, say, five years ago, so both companies are willing to sink a bit of R&D into making sure their reference designs are suitable for all usage models.

For the GTX 470, NVIDIA stuck with the general design from the GT 200 series, but for the GTX 480, the cooler was built from the ground up with ultra-effective cooling in mind. And it’s no surprise, given that the chip consists of over 3 billion transistors and has an enormous surface area… it’s going to need efficient cooling.

Not only was general cooling performance taken into consideration, but looks as well. The first time I saw the card on paper, I wasn’t that impressed, but that changed once I received the actual card. In fact, I’ve come to rather like it, and it sure screams “high-end” like no other. The entire first batch of GTX 480s will use this cooler, as they’re all being sent out straight from NVIDIA. It might take a couple of months before we see companies such as EVGA selling the card with a custom cooler.

NVIDIA GeForce GTX 480

The photo below exhibits a first for a reference design… heat pipes. In total, there are five, with one hidden inside the shroud. I had hoped to tear this card apart in order to show off the cooler inside and out, but thanks to the stubborn screws on the back, I was unable to. The fact that NVIDIA includes heat pipes is important to note, though, as it suggests that we can expect this card to run hot.

NVIDIA GeForce GTX 480

In what’s been more common for dual-GPU cards, the GTX 480 adopts a PCI-E 8-pin + 6-pin configuration. That’s to be expected given the rated 250W TDP, as a 6-pin + 6-pin setup plus the slot itself tops out at 225W. And as we’ll see later, NVIDIA was a bit generous in labeling the card as only 250W.

NVIDIA GeForce GTX 480

At an NVIDIA-held editor’s day this past January, the company said that for multi-monitor support (as in, three or more), more than one GPU would be required. This is in stark contrast to AMD’s current solution which can power 3 displays just fine off of a single GPU (and even six off of a single GPU with the Radeon HD 5870 Eyefinity 6 edition).

I didn’t have a major problem with that, because if I were to power a game across more than one monitor, I’d likely crave more power than a single GPU could offer anyway. But the issue I now see is that the card doesn’t support anything but DVI right out of the box, unless you count mini-HDMI, which requires the use of an adapter.

NVIDIA GeForce GTX 480

Strangely enough, I didn’t even notice the lack of ports while taking pictures or during installation; it wasn’t until I decided to test out HDMI that I clued in. I haven’t a clue why NVIDIA went with a near-useless mini-HDMI port in lieu of a full-sized one, when the latter should have been possible. In a recent interview, NVIDIA mentioned that GF100 is an architecture built from the ground up, and if that’s true, then the deliberate omission of DisplayPort and a full-sized HDMI port is rather upsetting.

It’s important to note that AMD offers 2x DVI, 1x DisplayPort and 1x HDMI on all of its mainstream and higher cards. The only reason it doesn’t offer all four on the lower-end cards is due to the lack of space, since those cards can be built with a single-slot design.

NVIDIA downplays the lack of connectors available here, but the issue just shouldn’t exist. The good thing in all of this is that vendors such as EVGA and others will likely include important adapters to solve this problem, such as DVI to HDMI and possibly even mini-HDMI to HDMI.

NVIDIA’s CUDA & PhysX Tech Demos

During NVIDIA’s editor’s day, held this past January, the company showed off numerous tech demos that highlighted the benefits of CUDA, PhysX, and also tessellation. Some of these demos were quite fun to look at, and I imagined they’d be just as fun to play with. I admit, as soon as I received the GTX 480, I didn’t load up a game… but rather each one of these demos first.

Tech demos are nothing more than that, but they do well to show what a product is capable of, and in the case of these, I’d love to see the results in upcoming games. Let’s first start with a demo called Design Garage. No, you can’t go and create a vehicle with this, but what you can do is load one of the vehicle models that are shipped with the tool, and also an environment, and then render it. Sounds boring, huh? Well, it isn’t.

What makes the tool fun is that you can change what seems like a countless number of variables, and the result is a scene and vehicle that simply look fantastic. This isn’t for the weak of heart, though. If you choose the ray tracing mode, the entire scene is rendered quickly. But if you step things up to the much more robust path rendering mode, the scene can essentially be rendered indefinitely… you stop it when you’re happy with the result.

The image below shows off what the scene looks like as soon as you begin the path renderer:

NVIDIA GeForce GTX 480

You can turn off the path renderer and opt for ray tracing instead, which results in the image below on the left. Path rendering, represented in the right image, produces a far better result… there’s just no comparison. I only let the render run for 20 minutes, so the scene would have only improved had I run it for an hour, or even overnight.
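
If you’re wondering why the image keeps improving the longer you leave it, the intuition is standard Monte Carlo convergence (our own illustration, not something from the demo itself): noise falls with the square root of the samples accumulated, so the early minutes buy big gains and later hours buy progressively smaller ones.

```python
# Relative image noise versus render time for a progressive renderer,
# assuming the usual 1/sqrt(N) Monte Carlo convergence (N ~ render time).
# Numbers are illustrative only, normalized to the 1-minute mark.
for minutes in (1, 5, 20, 60, 480):
    noise = (1 / minutes) ** 0.5
    print(f"{minutes:3d} min: ~{noise:.2f}x the noise of a 1-minute render")
```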

NVIDIA GeForce GTX 480 NVIDIA GeForce GTX 480

So what does this prove? It might seem like a rather pointless demo at first, but NVIDIA talked about some great ideas at the aforementioned editor’s day. One example was a car dealership. Imagine being able to walk in, have the salesperson punch in your desired color, features, and other options, and a minute or two later be able to look at a high-resolution, high-quality render of what you chose.

Another example that I find to be more interesting is the one where a feature like this is implemented into a game. In some racing games, you’re able to customize and build cars to your liking, choosing even the most minute detail. Since you spend so much time on the tweaking, wouldn’t it be cool to toss the car into an environment, customize your lighting, and then render a high-detail image that you could use as a desktop wallpaper, or to pass around to your friends for bragging rights?

If you drive the above Bugatti Veyron with the window open or top down, your hair is likely to blow all over the place (assuming you have hair, and long hair at that), so this next tech demo loosely complements Design Garage. Here, a model has a full head of hair, and it’s completely physics-accelerated on the GPU (using PhysX).

The demo seems simple, but it highlights (no pun intended) the cool effect that advanced physics can have in a game. The hair moves incredibly realistically, right down to the strand level. Hair has always been complicated to render in a game, and this tech demo features the best hair I’ve ever seen. It would be quite interesting to see realistic hair like this in an actual game, that’s for sure.

NVIDIA GeForce GTX 480

Hair might be complicated to render in a game, but even more complicated is water… at least, realistic water. To show off the benefits of tessellation, NVIDIA created a demo to help people understand the dramatic improvements that this DirectX 11 feature can provide. In the left image below, you can see decent-looking water. It’s rather flat, and not entirely interesting. But look at the image on the right, with tessellation cranked up. Again, there’s just no comparison… it looks fantastic.

NVIDIA GeForce GTX 480 NVIDIA GeForce GTX 480

Although the differences between the two images above are obvious, the wireframe images below make it even clearer. In the left image, we likely have a few thousand triangles in total, while on the right, we’ve no doubt entered the millions. And boy, does it ever make a difference!
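
To put rough numbers to that jump, consider how tessellation factors scale triangle counts: uniformly subdividing each edge of a triangle into f segments produces f² smaller triangles, so counts grow with the square of the factor. The base mesh size below is our own guess, purely for illustration:

```python
# Triangle-count growth under uniform tessellation: subdividing each edge of
# a triangle into f segments yields f^2 triangles per input triangle.
base_triangles = 5_000  # hypothetical flat water mesh (left image)
for factor in (1, 4, 16, 32, 64):  # DirectX 11 allows factors up to 64
    print(f"factor {factor:2d}: ~{base_triangles * factor ** 2:,} triangles")
# factor 32 already puts us at ~5,120,000 triangles... into the millions
```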

NVIDIA GeForce GTX 480 NVIDIA GeForce GTX 480

Finally, the last tech demo is one that I was really waiting to try out: Supersonic Sled. In it, a poor soul is placed in one of the wackiest-looking vehicles ever devised. There’s virtually no protection, and on the back… a massive rocket. Sounds fun, right? Well, it is. In this demo, you can adjust the amount of debris that falls from the three main elements in the demo (shack, arch and bridge) and also the level of tessellation.

The demo shows off both PhysX and tessellation to a great degree, but it’s not so much the graphics or detail that makes this demo fun, but just the fact that you can catapult your victim at blazing speeds down a track and then hundreds of feet into the air. Ahh. Is it sick that I found this demo refreshing?

NVIDIA GeForce GTX 480 NVIDIA GeForce GTX 480

NVIDIA GeForce GTX 480 NVIDIA GeForce GTX 480

All of these demos will become available to anyone with a GT 200 or GF100-based graphics card. I assume NVIDIA will release them to the public as soon as the GF100 cards hit retail.

Test System & Methodology

At Techgage, we strive to make sure our results are as accurate as possible. Our testing is rigorous and time-consuming, but we feel the effort is worth it. In an attempt to leave no question unanswered, this page contains not only our testbed specifications, but also a fully-detailed look at how we conduct our testing. For an exhaustive look at our methodologies, even down to the Windows Vista installation, please refer to this article.

Test Machine

The below table lists our testing machine’s hardware, which remains unchanged throughout all GPU testing, minus the graphics card. Each card used for comparison is also listed here, along with the driver version used. Each one of the URLs in this table can be clicked to view the respective review of that product, or if a review doesn’t exist, it will bring you to the product on the manufacturer’s website.

| Component | Model |
|---|---|
| Processor | Intel Core i7-975 Extreme Edition – Quad-Core, 3.33GHz, 1.33v |
| Motherboard | Gigabyte GA-EX58-EXTREME – X58-based, F7 BIOS (05/11/09) |
| Memory | Corsair DOMINATOR – DDR3-1333 7-7-7-24-1T, 1.60v |
| ATI Graphics | Radeon HD 5870 1GB (Reference) – Catalyst 10.3 |
| | Radeon HD 5850 1GB (Sapphire Toxic) – Catalyst 10.2 |
| | Radeon HD 5850 1GB (ASUS) – Catalyst 9.10 |
| | Radeon HD 5830 1GB (Reference) – Beta Catalyst (02/10/10) |
| | Radeon HD 5770 1GB (Reference) – Beta Catalyst (10/06/09) |
| | Radeon HD 5750 1GB (Sapphire) – Catalyst 9.11 |
| | Radeon HD 5670 512MB (Reference) – Beta Catalyst (12/16/09) |
| | Radeon HD 5570 1GB (Sapphire) – Beta Catalyst (12/11/09) |
| NVIDIA Graphics | GeForce GTX 480 1536MB (Reference) – GeForce 197.17 |
| | GeForce GTX 295 1792MB (Reference) – GeForce 186.18 |
| | GeForce GTX 285 1GB (EVGA) – GeForce 186.18 |
| | GeForce GTX 275 896MB (Reference) – GeForce 186.18 |
| | GeForce GTX 260 896MB (XFX) – GeForce 186.18 |
| | GeForce GTS 250 1GB (EVGA) – GeForce 186.18 |
| | GeForce GT 240 512MB (ASUS) – GeForce 196.21 |
| Audio | On-Board Audio |
| Storage | Seagate Barracuda 500GB 7200.11 |
| Power Supply | Corsair HX1000W |
| Chassis | SilverStone TJ10 Full-Tower |
| Display | Gateway XHD3000 30″ |
| Cooling | Thermalright TRUE Black 120 |
| Et cetera | Windows Vista Ultimate 64-bit |

When preparing our testbeds for any type of performance testing, we follow these guidelines:

To aid the goal of keeping results accurate and repeatable, we prevent certain services in Windows Vista from starting up at boot. This is due to the fact that these services have a tendency to start up in the background without notice, potentially causing slightly inaccurate results. Disabling “Windows Search”, for example, turns off the OS’ indexing, which can at times utilize the hard drive and memory more than we’d like.

For more robust information on how we tweak Windows, please refer once again to this article.

Game Titles

At this time, we benchmark all of our games using three popular resolutions: 1680×1050, 1920×1080 and 2560×1600. 1680×1050 was chosen as it’s one of the most popular resolutions for gamers sporting ~20″ displays. 1920×1080 might stand out, since we’ve always used 1920×1200 in the past, but we didn’t make this change without some serious thought. After taking a look at the current landscape for desktop monitors around ~24″, we noticed that 1920×1200 is definitely on the way out, as more and more models are shipping with native 1080p panels. It’s for this reason that we made the switch. Finally, for high-end gamers, we also benchmark at 2560×1600, a resolution with just about 2x the pixels of 1080p.
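
Since pixel counts are what actually drive GPU load, here’s how the three resolutions stack up (a quick Python check):

```python
# Pixel counts of our three test resolutions, relative to 1080p.
base = 1920 * 1080
for w, h in [(1680, 1050), (1920, 1080), (2560, 1600)]:
    print(f"{w}x{h}: {w * h:,} pixels ({w * h / base:.2f}x 1080p)")
# 1680x1050: 0.85x, 1920x1080: 1.00x, 2560x1600: 1.98x... hence "about 2x"
```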

For graphics cards that include less than 1GB of GDDR, we omit Grand Theft Auto IV from our testing, as our chosen detail settings require at least 800MB of available graphics memory. Also, if the card we’re benchmarking doesn’t offer the performance to handle 2560×1600 across most of our titles reliably, only 1680×1050 and 1920×1080 will be utilized.

Because we value results generated by real-world testing, we don’t utilize timedemos whatsoever. The possible exception might be Futuremark’s 3DMark Vantage. Though it’s not a game, it essentially acts as a robust timedemo. We choose to use it as it’s a standard where GPU reviews are concerned, and we don’t want to rid our readers of results they expect to see.

All of our results are captured with the help of Beepa’s FRAPS 2.98, while stress-testing and temperature-monitoring are handled by OCCT 3.1.0 and GPU-Z, respectively.
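
For clarity’s sake, the minimum and average FPS figures in our tables boil down to simple math over the captured frame data. Below is a minimal sketch; the frame times are made up, and note that FRAPS itself computes minimums over one-second intervals rather than per frame, so this is a simplification:

```python
# Derive minimum and average FPS from raw per-frame render times (ms).
def fps_stats(frame_times_ms):
    total_seconds = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_seconds
    min_fps = 1000.0 / max(frame_times_ms)  # worst single frame
    return min_fps, avg_fps

# Hypothetical capture: mostly ~17ms frames with one 41ms hitch.
low, avg = fps_stats([16.7, 15.9, 22.4, 18.0, 41.2, 17.3])
print(f"min {low:.1f} FPS, avg {avg:.1f} FPS")
```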

Call of Duty: Modern Warfare 2

Call of Juarez: Bound in Blood

Crysis Warhead

F.E.A.R. 2: Project Origin

Grand Theft Auto IV

Race Driver: GRID

World in Conflict: Soviet Assault

Call of Duty: Modern Warfare 2

When the original Call of Duty game launched in 2003, Infinity Ward was an unknown. Naturally… it was the company’s first title. But since then, the series and company alike have become household names. Not only has the series delivered consistently incredible gameplay, it’s pushed the graphics envelope with each successive release, and where Modern Warfare is concerned, it’s also had a rich storyline.

The first two titles might have been built on the already-outdated Quake III engine, but since then, the games have shipped with improved graphics engines, capable of pushing the highest-end PCs out there. Modern Warfare 2 is the first exception, as it’s more of a console port than a true PC title. Therefore, the game doesn’t push PC hardware as much as we’d like to see, but despite that, it still looks great, and lacks little in the graphics department. You can read our review of the game here.

Manual Run-through: The level chosen is the 10th mission in the game, “The Gulag”. Our teams fly in helicopters up to an old prison with the intention of getting closer to finding the game’s villain, Vladimir Makarov. Our saved game begins at the point when the level name comes on the screen, right before we reach the prison, and it ends about one minute after landing, following the normal progression of the level. The entire run takes around two-and-a-half minutes.

NVIDIA’s GTX 480 kicks off to a good start here, but the HD 5870 sure isn’t far behind. As has become a relative theme, the “bigger” differences are seen at lower resolutions, only because the raw numbers are much higher. At 2560×1600, the difference is almost nil.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---|---|---|---|
| NVIDIA GTX 480 1.5GB (Reference) | 2560×1600 – Max Detail, 4xAA | 50 | 81.669 |
| ATI HD 5870 1GB (Reference) | 2560×1600 – Max Detail, 4xAA | 44 | 81.351 |
| ATI HD 5770 1GB CrossFireX | 2560×1600 – Max Detail, 4xAA | 40 | 81.311 |
| ATI HD 5850 1GB (ASUS) | 2560×1600 – Max Detail, 4xAA | 37 | 68.563 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA | 41 | 66.527 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 4xAA | 37 | 61.937 |
| ATI HD 5830 1GB (Reference) | 2560×1600 – Max Detail, 4xAA | 30 | 53.569 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail, 4xAA | 33 | 53.314 |
| ATI HD 5770 1GB (Reference) | 2560×1600 – Max Detail, 0xAA | 36 | 60.337 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail, 0xAA | 30 | 53.253 |
| ATI HD 5750 1GB (Sapphire) | 2560×1600 – Max Detail, 0xAA | 28 | 50.727 |
| ATI HD 5670 512MB (Reference) | 1920×1080 – Max Detail, 4xAA | 24 | 43.96 |
| NVIDIA GT 240 512MB (ASUS) | 1920×1080 – Max Detail, 0xAA | 30 | 53.139 |
| ATI HD 5570 1GB (Sapphire) | 1920×1080 – Max Detail, 0xAA | 27 | 45.841 |

Modern Warfare 2 looks quite good on the PC, but given that it’s a console port, it doesn’t begin to stress our graphics cards half as much as we’d like. So, not surprisingly, NVIDIA’s latest card handles the game at maxed-out settings and resolution.

Call of Juarez: Bound in Blood

When the original Call of Juarez was released, it brought forth something unique… a western-styled first-person shooter. That’s simply not something we see too often, so for fans of the genre, its release was a real treat. Although it didn’t really offer the best gameplay we’ve seen from a recent FPS title, its storyline and unique style made it well worth testing.

After we retired the original title from our suite, we anxiously awaited the sequel, Bound in Blood, in hopes that the series could be re-introduced into our testing. Thankfully, it could be, thanks in part to its fantastic graphics, built around the Chrome Engine 4, and gameplay that improves on the original’s. It was also well-received by game reviewers, which is always a good sign.

Manual Run-through: The level chosen here is Chapter I, and our starting point is about 15 minutes into the mission, where we stand atop a hill that overlooks a large river. We make our way across the hill and ultimately through a large trench, and we stop our benchmarking run shortly after we blow up a gas-filled barrel.

Bound in Blood tends to favor ATI cards for whatever reason, and that’s well evidenced here. The GTX 480 falls well behind at our 1680×1050 and 1920×1080 resolutions, but comes out even at 2560×1600. It doesn’t need to be said, but beyond 100 FPS, the differences just aren’t going to be noticeable, so for what it’s worth, both cards are ideal for this game.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---|---|---|---|
| ATI HD 5870 1GB (Reference) | 2560×1600 – Max Detail | 58 | 82.863 |
| NVIDIA GTX 480 1.5GB (Reference) | 2560×1600 – Max Detail | 58 | 82.711 |
| ATI HD 5770 1GB CrossFireX | 2560×1600 – Max Detail | 59 | 87.583 |
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail | 37 | 80.339 |
| ATI HD 5850 1GB (ASUS) | 2560×1600 – Max Detail | 51 | 69.165 |
| ATI HD 5830 1GB (Reference) | 2560×1600 – Max Detail | 35 | 54.675 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail | 45 | 54.428 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail | 41 | 51.393 |
| ATI HD 5770 1GB (Reference) | 2560×1600 – Max Detail | 28 | 45.028 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail | 35 | 44.023 |
| ATI HD 5750 1GB (Sapphire) | 2560×1600 – Max Detail | 27 | 38.686 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail | 25 | 33.751 |
| ATI HD 5670 512MB (Reference) | 1920×1080 – Max Detail | 38 | 47.23 |
| NVIDIA GT 240 512MB (ASUS) | 1920×1080 – Max Detail | 29 | 39.446 |
| ATI HD 5570 1GB (Sapphire) | 1920×1080 – Max Detail | 24 | 32.931 |

Bound in Blood suffers the same issue that many other PC games do today… the engine doesn’t take full advantage of our PCs. So once again, topped-out settings are fine on almost our entire line-up of cards.

Crysis Warhead

Like Call of Duty, Crysis is another series that doesn’t need much of an introduction. Thanks to the fact that almost any comments section for a PC performance-related article asks, “Can it run Crysis?”, even those who don’t play computer games no doubt know what Crysis is. When Crytek first released Far Cry, it delivered an incredible game engine with huge capabilities, and Crysis simply took things to the next level.

Although the sequel, Warhead, has been available for well over a year, it still manages to push the highest-end systems to their breaking point. It wasn’t until this past January that we finally found a graphics solution to handle the game at 2560×1600 at its Enthusiast level, but even that was without AA! Something tells me Crysis will remain the de facto standard for GPU benchmarking for a while yet.

Manual Run-through: Whenever we have a new game in-hand for benchmarking, we make every attempt to explore each level of the game to find out which is the most brutal towards our hardware. Ironically, after spending hours exploring this game’s levels, we found the first level in the game, “Ambush”, to be the hardest on the GPU, so we stuck with it for our testing. Our run starts from the beginning of the level and stops shortly after we reach the first bridge.

The GTX 480 continues to taunt the HD 5870, beating it out at each one of our three resolutions. Surprisingly (or maybe not), the older dual-GPU GTX 295 outpaces the GTX 480 at 2560×1600.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---|---|---|---|
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Gamer, 0xAA | 19 | 40.381 |
| NVIDIA GTX 480 1.5GB (Reference) | 2560×1600 – Gamer, 0xAA | 23 | 37.135 |
| ATI HD 5870 1GB (Reference) | 2560×1600 – Gamer, 0xAA | 15 | 34.41 |
| ATI HD 5850 1GB (ASUS) | 2560×1600 – Mainstream, 0xAA | 28 | 52.105 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Mainstream, 0xAA | 27 | 50.073 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Mainstream, 0xAA | 24 | 47.758 |
| ATI HD 5830 1GB (Reference) | 2560×1600 – Mainstream, 0xAA | 23 | 41.621 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Mainstream, 0xAA | 21 | 40.501 |
| ATI HD 5770 1GB (Reference) | 2560×1600 – Mainstream, 0xAA | 20 | 35.256 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Mainstream, 0xAA | 18 | 34.475 |
| ATI HD 5750 1GB (Sapphire) | 1920×1080 – Mainstream, 0xAA | 21 | 47.545 |
| ATI HD 5670 512MB (Reference) | 1920×1080 – Mainstream, 0xAA | 20 | 35.103 |
| NVIDIA GT 240 512MB (ASUS) | 1920×1080 – Mainstream, 0xAA | 19 | 33.623 |
| ATI HD 5570 1GB (Sapphire) | 1920×1080 – Mainstream, 0xAA | 17 | 29.732 |

Given that Crysis Warhead is such a glutton when it comes to eating your PC’s hardware, I almost feel like there’s no such thing as a “best playable” when it comes to this title. Even if you have a top-of-the-line rig, you can rarely use the settings you’d imagine the PC could handle. So for this title, I’m a bit more lenient, and find that 30 FPS is very playable, which both cards manage to surpass. NVIDIA’s GTX 480 has a more desirable minimum FPS, though.

F.E.A.R. 2: Project Origin

Five out of the seven current games we use for testing are either sequels, or titles in an established series. F.E.A.R. 2 is one of the former, following up on the very popular First Encounter Assault Recon, released in fall of 2005. This horror-based first-person shooter brought to the table fantastic graphics, ultra-smooth gameplay, the ability to blow massive chunks out of anything, and also a very fun multi-player mode.

Three-and-a-half years later, we saw the introduction of the game’s sequel, Project Origin. As we had hoped, this title improved on the original where gameplay and graphics were concerned, and it was a no-brainer to include it in our testing. The game is gorgeous, and there’s much destruction to be had (who doesn’t love blowing expensive vases to pieces?). The game is also rather heavily scripted, which aids in producing repeatable results in our benchmarking.

Manual Run-through: The level used for our testing here is the first in the game, about ten minutes in. The scene begins with a travel up an elevator, with a robust city landscape behind us. Our run-through begins with a quick look at this cityscape, and then we proceed through the level until the point when we reach the far door as seen in the above screenshot.

Similar to Call of Juarez, F.E.A.R. 2 favors ATI cards, and it’s evident here. Not to sound like a broken record, but with averages like these, all except the $100-and-under cards can handle this game at most resolutions and detail settings.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---|---|---|---|
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 45 | 95.767 |
| ATI HD 5870 1GB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 62 | 91.733 |
| NVIDIA GTX 480 1.5GB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 52 | 82.357 |
| ATI HD 5770 1GB CrossFireX | 2560×1600 – Max Detail, 4xAA, 16xAF | 57 | 87.194 |
| ATI HD 5850 1GB (ASUS) | 2560×1600 – Max Detail, 4xAA, 16xAF | 51 | 73.647 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA, 16xAF | 39 | 62.014 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 37 | 57.266 |
| ATI HD 5830 1GB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 40 | 57.093 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail, 4xAA, 16xAF | 29 | 48.110 |
| ATI HD 5770 1GB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 31 | 47.411 |
| ATI HD 5750 1GB (Sapphire) | 2560×1600 – Max Detail, 0xAA, 16xAF | 27 | 39.563 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA, 16xAF | 24 | 36.331 |
| ATI HD 5670 512MB (Reference) | 1920×1080 – Max Detail, 4xAA, 16xAF | 31 | 46.87 |
| NVIDIA GT 240 512MB (ASUS) | 1920×1080 – Max Detail, 0xAA, 4xAF | 30 | 45.039 |
| ATI HD 5570 1GB (Sapphire) | 1920×1080 – Max Detail, 0xAA, 4xAF | 22 | 40.430 |

With an average FPS of over 80, the GTX 480 can handle this game in all shapes and forms. The best part is that it’s genuinely a great-looking title as well.

Grand Theft Auto IV

If you look up the definition of “controversy”, Grand Theft Auto should be listed. If it’s not, then that should be a crime, because throughout GTA’s many titles, there’s been more of it than you can shake your fist at. At the series’ beginning, the games were rather simple, and didn’t stir up too much passion in their detractors. But once GTA III and its successors came along, the developers embraced all the controversy that came their way, and why not? It helped spur incredible sales numbers.

Grand Theft Auto IV is yet another continuation of the series, though it follows no storyline from the previous titles. Liberty City, loosely based on New York City, is absolutely huge, with much to explore. So much so that you could literally spend hours just wandering around, ignoring the game’s missions, if you wanted to. It also happens to be incredibly stressful on today’s computer hardware, similar to Crysis.

Manual Run-through: After the first minor mission in the game, you reach an apartment. Our benchmarking run starts from within this room. From here, we run out the door, down the stairs and into an awaiting car. We then follow a specific path through the city, driving for about three minutes total.

I found the results here to be quite interesting. The HD 5870 had an obvious lead at both 1680×1050 and 1920×1080, but the GTX 480 soared past it at 2560×1600. If I had to guess, the extra 0.5GB the GTX 480 has to juggle around helped with that difference.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---|---|---|---|
| NVIDIA GTX 480 1.5GB (Reference) | 2560×1600 – H/H/VH/H/VH Detail | 35 | 56.840 |
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – H/H/VH/H/VH Detail | 27 | 52.590 |
| ATI HD 5770 1GB CrossFireX | 2560×1600 – H/H/VH/H/VH Detail | 30 | 51.813 |
| ATI HD 5870 1GB (Reference) | 2560×1600 – H/H/VH/H/VH Detail | 31 | 47.194 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – High Detail | 32 | 45.573 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – High Detail | 30 | 44.703 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – High Detail | 24 | 38.492 |
| ATI HD 5850 1GB (ASUS) | 1920×1080 – High Detail | 39 | 58.886 |
| ATI HD 5830 1GB (Reference) | 1920×1080 – High Detail | 31 | 51.213 |
| ATI HD 5770 1GB (Reference) | 1920×1080 – High Detail | 33 | 47.719 |
| NVIDIA GTS 250 1GB (EVGA) | 1920×1080 – High Detail | 21 | 34.257 |
| ATI HD 5750 1GB (Sapphire) | 1920×1080 – High Detail | 27 | 39.904 |

Similar to Crysis Warhead, GTA IV is a game that never seems to run ideally even with a sweet PC and modest detail settings. Nature of the beast, I guess. Thanks to the raw brawn that both the GTX 480 and HD 5870 have, some detail settings could be increased resulting only in a very minor drop in performance.

Race Driver: GRID

If you primarily play games on a console, your choices for quality racing titles are plentiful. On the PC, that’s not so much the case. While there are a good number, there aren’t many to choose from within any given type of racing game, from sim to arcade. So when Race Driver: GRID first saw release, many gamers were excited, and for good reason. It’s not a sim in the truest sense of the word, but it’s certainly not arcade, either. It’s somewhere in between.

The game happens to be great fun, though, and similar to console games like Project Gotham Racing, you need a lot of skill to succeed at the game’s default difficulty level. And like most great racing games, GRID looks absolutely stellar, with each of the game’s locations looking very similar to its real-world counterpart. All in all, no racing fan should ignore this one.

Manual Run-through: For our testing here, we choose the city where both Snoop Dogg and Sublime hit their fame, the LBC, also known as Long Beach City. We choose this level because it’s not overly difficult, and also because it’s simply nice to look at. Our run consists of an entire 2-lap race, with the cars behind us for almost the entire race.

GRID is another game that tends to favor ATI cards to some small degree, so it’s little surprise to see the HD 5870 outpace the GTX 480. But again, the real-world difference is minimal, so it’s as though there’s no difference at all.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---|---|---|---|
| ATI HD 5870 1GB (Reference) | 2560×1600 – Max Detail, 4xAA | 83 | 103.622 |
| ATI HD 5770 1GB CrossFireX | 2560×1600 – Max Detail, 4xAA | 81 | 104.32 |
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail, 4xAA | 84 | 103.958 |
| NVIDIA GTX 480 1.5GB (Reference) | 2560×1600 – Max Detail, 4xAA | 81 | 98.578 |
| ATI HD 5850 1GB (ASUS) | 2560×1600 – Max Detail, 4xAA | 68 | 84.732 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA | 54 | 66.042 |
| ATI HD 5830 1GB (Reference) | 2560×1600 – Max Detail, 4xAA | 53 | 65.584 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 4xAA | 52 | 63.617 |
| ATI HD 5770 1GB (Reference) | 2560×1600 – Max Detail, 4xAA | 45 | 56.980 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail, 4xAA | 45 | 54.809 |
| ATI HD 5750 1GB (Sapphire) | 2560×1600 – Max Detail, 4xAA | 39 | 47.05 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA | 35 | 43.663 |
| ATI HD 5670 512MB (Reference) | 1920×1080 – Max Detail, 4xAA | 36 | 47.36 |
| ATI HD 5570 1GB (Sapphire) | 1920×1080 – Max Detail, 0xAA | 33 | 41.143 |
| NVIDIA GT 240 512MB (ASUS) | 1920×1080 – Max Detail, 0xAA | 33 | 51.071 |

Once again, GRID is one of those games that runs well on almost anything; therefore, 2560×1600 at maxed settings was no issue at all. Naturally, it becomes our best playable setting.

World in Conflict: Soviet Assault

I admit that I’m not a huge fan of RTS titles, but World in Conflict intrigued me from the get-go. After all, so many war-based games continue to follow the same storylines we already know, and WiC was different. It counters the real-world political and economic collapse of the Soviet Union in the late ’80s with an alternate storyline, one in which the USSR proceeds to war in order to remain in power.

Many RTS games, with their advanced AI, tend to lean on the CPU in order to deliver smooth gameplay, but WiC stresses both the CPU and GPU, and the graphics prove it. Throughout the game’s missions, you’ll see gorgeous vistas and explore areas from deserts and snow-packed lands, to fields and cities. Overall, it’s a real treat for the eyes – especially since you’re able to zoom down to the ground and see the action up close.

Manual Run-through: The level we use for testing is the 7th campaign of the game, called Insurgents. Our saved game plants us towards the beginning of the mission with two squads of five, and two snipers. The run consists of bringing our men to action, and hovering the camera around throughout the duration. The entire run lasts between three and four minutes.

Up to this point, the GTX 480 hasn’t been able to definitively prove that it’s the better card when compared to the HD 5870, but Soviet Assault does a great job in helping its case. At all three resolutions, the GTX 480 clearly handles the game better than ATI’s single-GPU best.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---|---|---|---|
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail, 8xAA, 16xAF | 40 | 55.819 |
| NVIDIA GTX 480 1.5GB (Reference) | 2560×1600 – Max Detail, 8xAA, 16xAF | 39 | 53.714 |
| ATI HD 5870 1GB (Reference) | 2560×1600 – Max Detail, 8xAA, 16xAF | 38 | 45.200 |
| ATI HD 5770 1GB CrossFireX | 2560×1600 – Max Detail, 4xAA, 16xAF | 38 | 49.335 |
| ATI HD 5850 1GB (ASUS) | 2560×1600 – Max Detail, 4xAA, 16xAF | 29 | 40.581 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 0xAA, 16xAF | 34 | 49.514 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 0xAA, 16xAF | 36 | 46.186 |
| ATI HD 5830 1GB (Reference) | 2560×1600 – Max Detail, 0xAA, 16xAF | 31 | 42.543 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail, 0xAA, 16xAF | 23 | 39.365 |
| ATI HD 5770 1GB (Reference) | 2560×1600 – Max Detail, 0xAA, 16xAF | 28 | 37.389 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail, 0xAA, 4xAF | 24 | 32.453 |
| ATI HD 5750 1GB (Sapphire) | 2560×1600 – Max Detail, 0xAA, 4xAF | 23 | 31.769 |
| NVIDIA GT 240 512MB (ASUS) | 1920×1080 – Max Detail, 0xAA, 4xAF | 22 | 33.788 |
| ATI HD 5670 512MB (Reference) | 1920×1080 – Max Detail, 0xAA, 16xAF | 21 | 31.872 |
| ATI HD 5570 1GB (Sapphire) | 1920×1080 – Medium Detail, 0xAA, 4xAF | 51 | 79.790 |

Soviet Assault is fussy with 8xAA on many GPUs, but on both ATI’s and NVIDIA’s top-end single-GPU cards, it’s no problem. Therefore, those settings become our best playable here.

Futuremark 3DMark Vantage

Although we generally shun automated gaming benchmarks, we do like to run at least one to see how our GPUs scale when used in a ‘timedemo’-type scenario. Futuremark’s 3DMark Vantage is without question the best such test on the market, and it’s a joy to use, and watch. The folks at Futuremark are experts in what they do, and they really know how to push that hardware of yours to its limit.

The company first started out as MadOnion and released a GPU-benchmarking tool called XLR8R, which was soon replaced with 3DMark 99. Since that time, we’ve seen seven different versions of the software, including two major updates (3DMark 99 Max, 3DMark 2001 SE). With each new release, the graphics get better, the capabilities get better and the sudden hit of ambition to get down and dirty with overclocking comes at you fast.

Similar to a real game, 3DMark Vantage offers many configuration options, although many (including us) prefer to stick to the profiles which include Performance, High and Extreme. Depending on which one you choose, the graphic options are tweaked accordingly, as well as the resolution. As you’d expect, the better the profile, the more intensive the test.

Performance is the stock mode that most use when benchmarking, but it only uses a resolution of 1280×1024, which isn’t representative of today’s gamers. Extreme is more appropriate, as it runs at 1920×1200 and does well to push any single or multi-GPU configuration currently on the market – and will do so for some time to come.

You are reading these charts right. Despite the GTX 480 being the technically superior card, the HD 5870 manages to outperform it at three out of four resolutions – 1920×1200 being the exception.

Extra Games: HD 5870 vs. GTX 480

Though we run seven games in our current GPU-testing gauntlet, some are becoming a bit more aged than others, so it’s a little difficult to make sure we’re testing the higher-end cards as thoroughly as we can. To make sure we’re being fair to both ATI and NVIDIA, I chose five current PC games to put each card through. For good measure, we’ve also tossed in some Unigine Heaven goodness, to test each card under a heavy tessellation workload.

Unlike with the rest of our game fleet, I’m not going to exhaustively list the detail settings used for each title, but I’ll give a brief description. Each title was tested at both 1680×1050 and 2560×1600, and where possible, with the game’s detail maxed out. Fortunately, some of today’s games are too hardcore at maxed settings (it means we have room to breathe in the future), so I had to lower the detail in some regards for the sake of testing.

For AvP, BioShock 2, Dark Void and Just Cause 2, the detail settings were maxed, with 4xAA. Metro 2033 was the main game that had to use lower-than-top-end settings, as its detail is simply incredible. So much so, that I used different settings for each resolution. For 1680, I used a Very High detail setting with 4xAA, and for 2560, I dropped the detail to High and disabled both AA and the two available DirectX 11 features (DOF and Tessellation). Unigine’s benchmark was run with default settings, but with AF increased to 16x.

I apologize for these graphs being a little confusing at first; it was difficult to present all of this information without creating four or more separate graphs! Purple and green are the most important colors, as they represent the averages: purple for NVIDIA, and green for ATI (and no, those colors weren’t chosen to confuse things further on purpose).

Aside from Dark Void and Just Cause 2, the GTX 480 comes out on top in each test. The largest gain is seen in Aliens vs. Predator… a full 25% boost in performance at 2560×1600. This is one of the few games on the market that uses DirectX 11’s tessellation, so I assume the performance gain is attributable to that (being late on this article as is, I didn’t re-test the game without tessellation, but will at a later date).

Speaking of tessellation, this feature is one that NVIDIA has been touting hard since the beginning of the year, and a couple of weeks ago, it sent along a graph that explained the reason why. The company ran a 60-second benchmark using Unigine’s Heaven, and showed the stark performance lead that its card held. Naturally, I wanted to run the same test in our lab, but rather than do a 60-second run, I allowed the entire benchmark to complete, which totals just over 4 minutes.

Thanks to the heavy focus NVIDIA put on accelerating tessellation in GF100, it’s not too surprising to see the HD 5870 fall well behind here, but the importance of this benchmark is hard to settle on. Tessellation as it stands isn’t a major force in gaming today, and there’s no real sign to predict whether it will be soon. NVIDIA states that it will be in the year ahead.

Power & Temperatures

To test our graphics cards for both temperatures and power consumption, we utilize OCCT for stress-testing, GPU-Z for temperature monitoring, and a Kill-a-Watt for power monitoring. The Kill-a-Watt is plugged into its own socket, with only the PC connected to it.

As per our guidelines for benchmarking with Windows, once the room temperature is stable (and reasonable), the test machine is booted up and left to sit at the Windows desktop until things are completely idle. Once things are good to go, the idle wattage is noted, GPU-Z is started up to begin monitoring card temperatures, and OCCT is set up to begin stress-testing.

To push the cards we test to their absolute limit, we use OCCT in full-screen 2560×1600 mode, and allow it to run for 30 minutes, which includes a one minute lull at the start, and a three minute lull at the end. After about 10 minutes, we begin to monitor our Kill-a-Watt to record the max wattage.
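
For those who want to replicate this at home, pulling the peak temperature out of a sensor log is trivial. GPU-Z can log its readings to a file; the file path and column header below are assumptions on our part, so match them to your own export:

```python
import csv

# Report the peak GPU temperature from a GPU-Z-style CSV sensor log.
# Column name and file path are assumptions; adjust to your own log.
def peak_temperature(log_path, column="GPU Temperature [C]"):
    with open(log_path, newline="") as f:
        return max(float(row[column]) for row in csv.DictReader(f))

print(f"Peak load temperature: {peak_temperature('gpuz_log.csv'):.0f}°C")
```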

The entire fleet of NVIDIA’s last-generation cards fell behind ATI where temperatures and power were concerned, and not much changes with GF100… although the situation has actually gotten worse. The GTX 480 almost managed to break a TG lab record with its top-end temperature of 96°C, falling just behind the GTX 295’s 98°C. With such unimpressive temperatures, can we at least bet on modest power consumption?

I think it’s safe to say, “No.”

While AMD (in both its CPU and GPU divisions) and Intel are striving to make their products as power-efficient as possible, NVIDIA seems to be going in the opposite direction, focusing on higher performance despite the increased power draw.

We’re not talking about a small power increase here. In fact, the difference between NVIDIA’s current and previous generations, and especially against ATI’s current cards, is just staggering. Compared to the GTX 285, the GTX 480 draws an additional 49W at full load. Given the performance gain, that’s somewhat reasonable. But compared to the HD 5870, the GTX 480 sucks down an additional 107W at full load.
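
To put that 107W gap into everyday terms, here’s a rough cost estimate. The usage pattern and electricity rate are assumptions of ours, so plug in your own numbers:

```python
# Yearly cost of the GTX 480's extra load power versus the HD 5870.
delta_watts = 107       # measured difference at full load (above)
hours_per_day = 4       # assumed gaming time
rate_per_kwh = 0.10     # assumed electricity rate, USD

kwh_per_year = delta_watts / 1000 * hours_per_day * 365
print(f"{kwh_per_year:.0f} kWh/year, ~${kwh_per_year * rate_per_kwh:.0f}/year")
# -> ~156 kWh and roughly $16 per year under these assumptions
```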

Ever since AMD’s HD 5000 launch, I’ve commended the company for developing its graphics cards with such excellent power efficiency, but looking at the power draw of the GTX 480 truly emphasizes one of two things: either AMD has done a tremendous job of designing its architecture for the ultimate balance of power efficiency and performance, or NVIDIA has done the stark opposite.

I’ll talk more about this on the following page, along with the rest of my final thoughts.

Final Thoughts

NVIDIA’s Fermi architecture has been brewing for a while, but from a consumer perspective, we’ve been waiting a long six months to see the results come to fruition. Has the wait been worth it? I have to say that no, it wasn’t. It’s not that GF100 or the GTX 480 are “bad”, but they don’t come close to the leap we hoped for. For those who’ve held off on purchasing a high-end graphics card, there’s just no reward for your patience.

To dispel the notion that I might be anti-GF100, let’s clear up the obvious. The GTX 480 is the fastest single-GPU card on the planet. The HD 5870 managed to beat it in select tests, but most of those were aging titles (the exceptions being Dark Void and Just Cause 2). I’m confident that if 100 titles were tested head-to-head, NVIDIA would come out on top, based on what I’ve learned throughout all my testing.

One area where we didn’t perform testing is 8xAA, a mode that NVIDIA stresses will run better on GF100. I’ve never made it a real point to test at that setting, as in the past, most games would slow to a crawl, or show absolutely minimal gain in image quality for a much larger impact on performance. Personally, I still feel that 8xAA is for little more than ego-stroking. This is a theory I’d like to test in the near future with a couple of select titles, since I wasn’t able to devote the time to it for this article.

NVIDIA also has the strong hand with tessellation performance. We saw that with AvP, Metro 2033, and Unigine’s Heaven benchmark, the GTX 480 was able to far surpass the HD 5870. It’s hard to say at this point just how important this is, since tessellation-supported games are few. But what I do know is that tessellation can make a noticeable difference in games, so I’m hoping to see the feature gain greater adoption from game developers all over. If that happens, and ATI doesn’t follow up with some form of solution to accelerate tessellation on its own cards, NVIDIA’s GF100 will have gained one important selling point.

It’s clear that the GTX 480 holds the performance crown, and NVIDIA’s to be lauded for that accomplishment, but that’s where my praise for Fermi ends. The GTX 480, as we’ve seen, is a card that has multiple caveats, all of which are rather important to note.

NVIDIA GeForce GTX 480

On the previous page, we saw the GTX 480 break a couple of records we wish it hadn’t. The card became our second-hottest ever, surpassed only by the dual-GPU GTX 295. For a lot of people, this isn’t a major issue if the card can handle it, but with a card that idles at close to 60°C and never dips below 90°C during gameplay, you can expect your room to get toasty in the summer. Though in the winter, the slogan “The Way Your House is Meant to be Heated” would have a nice ring to it…

The temperatures alone don’t hurt my view of the GTX 480; it’s the side-effects that come with them that do. The GTX 480, without question, has the loudest GPU fan I’ve ever heard in my life. So much so, that I even mentioned it as a concern to NVIDIA. The reply I was given was that “the acoustics should be no worse than, say, a GTX 295”, but I don’t remember ever being worried that my PC was going to lift off the ground when I tested with that card.

What you get when you build a monolithic chip out of 3.2 billion transistors, with questionable power efficiency to boot, is the worst power consumption we’ve seen from a GPU in recent memory. Not even dual-GPU cards can compete. I’m no devout environmentalist, but I do care enough about our earth to make simple decisions that can drastically decrease the amount of power I’m using. The GTX 480 draws 107W more than the HD 5870 at full load… that is not a small difference.

The rest of the issues are minor, but the fact that there are more issues is an issue in itself. When AMD delivered the HD 5000 series, I didn’t immediately draw up a list of downsides, but I can’t help but do it with the GTX 480. Take something as simple as the video outputs, for example. While AMD gives us dual DVI ports, DisplayPort and HDMI, NVIDIA gives us dual DVI ports and a mini-HDMI port… the latter of which will require a $20 – $25 cable to utilize (unless there are displays that use mini-HDMI that I’m not aware of).

The unfortunate thing is that all of my complaints surely have everything to do with GF100’s architecture. It could be that NVIDIA wasn’t able to give us a full selection of video outputs due to the fact that there isn’t much free space on the card, but again, if GF100 was truly built from the ground up, this shouldn’t have been an issue.

Further proof of issues with the architecture was highlighted by our friend Nate from Legit Reviews, who discovered that while using a dual-monitor setup with the GTX 480, the card draws 80W more power even without gaming, and runs at a constant 90°C. That’s certainly a major trade-off, and a strange issue, given that neither the competition nor NVIDIA’s own last-generation cards ever had that problem.

I hate to harp on all the downsides of the GTX 480, I really do, but I’d be doing a major disservice to our readers by downplaying any of them, because to me, no issue here is truly minor. To put everything into perspective, let’s sum things up where the GTX 480 stands:

The GTX 480 is 25% more expensive than the HD 5870, but the performance gain certainly doesn’t match that premium (except where tessellation is concerned). Still, the upside remains that there is a performance gain. On the downside, NVIDIA’s latest and greatest consumes far more power, runs far hotter, is nearly as loud as a mini-vacuum, has weak video-out options, and last but not least, comes a full six months after AMD’s HD 5000 series.
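
That 25% figure comes straight from the launch pricing mentioned in the introduction ($499 versus $399):

```python
# Price premium of the GTX 480 over the HD 5870 at launch pricing.
gtx_480, hd_5870 = 499, 399
premium = (gtx_480 / hd_5870 - 1) * 100
print(f"{premium:.0f}% premium")  # -> 25%
```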

To NVIDIA’s advantage, the GTX 480 and GF100 in general offer more than just good gaming performance. With features like CUDA and PhysX, the company pretty much owns that landscape currently. AMD is certainly involved as well, but not nearly to the extent that NVIDIA is. The fact that NVIDIA gets its hands dirty where game development is concerned is one reason the company should be commended. As far as I can tell, AMD has nowhere near the level of game-developer involvement that NVIDIA does.

The fact that GF100 supports CUDA, PhysX, and improves tessellation performance, all add up to the company’s trade-off theory. NVIDIA believes that the extra power consumption and higher temperatures are a proper trade-off for the card’s unique features, and that’s fair. I don’t quite agree, but that’s only my opinion. I am sure that by this point in the article, you have already formed your own opinion as well. If it varies from mine, that’s fine.

Like many others, I had high hopes for NVIDIA’s GF100 architecture despite all the negative press that surrounded it since last fall. Even though it seemed unlikely that Fermi would have given AMD a true run for its money, there was always that hope. But it became clear in months past that GF100 wasn’t going to deliver as we hoped it would. The company itself proved this by offering no performance previews outside of the Unigine Heaven benchmark, where the card obviously has the upper-hand.

What’s NVIDIA to do to make the GTX 470 and GTX 480 more attractive? It’s hard to say, because of all the issues I mention, I’m certain most are tied to the architecture itself. We’re not going to see a die-shrink anytime soon, and rumor has it that NVIDIA is struggling with its 40nm yields as is. Another rumor has it that that’s the reason the launch GTX 480 saw a drop from 512 CUDA cores to 480. Heat could have been another factor, but either way, both of these issues still trace back to the architecture.

Price changes would be a good start. At $500, the GTX 480 isn’t all too attractive given its disadvantages. If it was priced at $400, it would give enthusiasts a more tempting option, and possibly even cause AMD to lower its prices. But at $500, AMD has no reason at all to lower its prices, and it knows it. Another solution is to just wait, and let NVIDIA’s engineers fix issues that exist and release a revision. That sounds outlandish, but when the card as it stands draws so much power and runs so hot, it’s almost time to take drastic action.

Another thing I could see happening is the release of mainstream cards that wouldn’t suffer many of the same issues of the GTX 480 due to their scaled-back feature-sets (namely, number of CUDA cores). If NVIDIA could deliver cards to directly compete with ATI’s entire mainstream line-up, such as the HD 5830, HD 5770 and even the HD 5750, we’d be in for some interesting times. And while NVIDIA delivers cards to that market, it could tweak its enthusiast parts to fix the issues mentioned above and then the company could get back on track.

So who should purchase this card? Fans of NVIDIA, those who love PhysX and CUDA, and also those who want the best performance and can put up with the higher power draw and loud fan. As mentioned earlier, the GTX 480 isn’t a “bad” card, but with its issues, it’s not all too attractive, either.

Discuss this article in our forums!

Have a comment you wish to make on this article? Recommendations? Criticism? Feel free to head over to our related thread and put your words to our virtual paper! There is no requirement to register in order to respond to these threads, but it sure doesn’t hurt!
