NVIDIA’s Fermi architecture has been brewing for a while, but from a consumer perspective, we’ve been waiting a long six months to see the results come to fruition. Has the wait been worth it? I have to say that it hasn’t. It’s not that GF100 or the GTX 480 are “bad”, but they don’t come close to the leap we hoped for. For those who’ve held off on purchasing a high-end graphics card, there’s just no reward for your patience.
To dispel the notion that I might be anti-GF100, let’s clear up the obvious. The GTX 480 is the fastest single-GPU card on the planet. The HD 5870 managed to beat it in select tests, but most of those were aging titles (the exceptions being Dark Void and Just Cause 2). I’m confident that if 100 titles were tested head-to-head, NVIDIA would come out on top, based on what I’ve learned throughout all my testing.
One area where we didn’t perform testing is 8xAA, a mode NVIDIA stresses will run better on GF100. I’ve never made it a real point to test at that setting, as in the past, most games would slow to a crawl or show minimal gain in image quality for a much larger hit to performance. Personally, I still feel that 8xAA is little more than ego-stroking. It’s a theory I’d like to test in the near future with a couple of select titles, since I wasn’t able to devote the time to it for this article.
NVIDIA also holds the strong hand in tessellation performance. We saw with AvP, Metro 2033, and Unigine’s Heaven benchmark that the GTX 480 was able to far surpass the HD 5870. It’s hard to say at this point just how important this is, since games that support tessellation are few. But what I do know is that tessellation can make a noticeable difference in games, so I’m hoping to see the feature gain greater adoption from game developers all over. If that happens, and ATI doesn’t follow up with some form of solution to accelerate tessellation on its own cards, NVIDIA’s GF100 will have gained one important selling point.
It’s clear that the GTX 480 holds the performance crown, and NVIDIA’s to be lauded for that accomplishment, but that’s where my praise for Fermi ends. The GTX 480, as we’ve seen, is a card that has multiple caveats, all of which are rather important to note.
On the previous page, we saw the GTX 480 break a couple of records we wish it hadn’t. The card became the second-hottest we’ve ever tested, surpassed only by the dual-GPU GTX 295. For a lot of people, heat isn’t a major issue if the card can handle it, but with a card that idles close to 60°C and never dips below 90°C during gameplay, you can expect your room to be toasty in the summer. Though in the winter, the slogan “The Way Your House is Meant to be Heated” would have a nice ring to it…
The temperatures alone don’t hurt my view of the GTX 480; it’s the side-effects that come with them that do. The GTX 480, without question, has the loudest GPU fan I’ve ever heard in my life. So much so that I even mentioned it as a concern to NVIDIA. The reply I was given was that “The acoustics should be no worse than say, a GTX 295.”, but I don’t remember ever worrying that my PC was going to lift off the ground when I tested with that card.
What you get when you build a monolithic chip with 3.2 billion transistors, and questionable power efficiency to match, is the worst power consumption we’ve seen from a GPU in recent memory. Not even dual-GPU cards can compete. I’m no devout environmentalist, but I do care enough about our earth to make simple decisions that can drastically decrease the amount of power I’m using. The GTX 480 draws 107W more than the HD 5870 at full load… that is not a small difference.
The rest of the issues are minor, but the fact that there are more issues is an issue in itself. When AMD delivered the HD 5000 series, I didn’t immediately draw up a list of downsides, but I can’t help but do it with the GTX 480. Take something as simple as the video outputs, for example. While AMD gives us dual DVI ports, DisplayPort and HDMI, NVIDIA gives us dual DVI ports and a mini-HDMI port… the latter of which will require a $20–$25 cable to utilize (unless there are displays that use mini-HDMI that I’m not aware of).
The unfortunate thing is that all of my complaints surely trace back to GF100’s architecture. It could be that NVIDIA wasn’t able to give us a full selection of video outputs because there isn’t much free space on the card, but again, if GF100 was truly built from the ground up, this shouldn’t have been an issue.
Further proof of issues with the architecture was highlighted by our friend Nate from Legit Reviews, who discovered that in a dual-monitor setup, the GTX 480 draws 80W more power even without gaming, and runs at a constant 90°C. That’s a major trade-off, and a strange issue, given that neither the competition nor NVIDIA’s own last-generation cards ever had that problem.
I hate to harp on all the downsides of the GTX 480, I really do, but I’d be doing a major disservice to our readers by downplaying any of them, because to me, no issue here is truly minor. To put everything into perspective, let’s sum up where the GTX 480 stands:
The GTX 480 is 25% more expensive than the HD 5870, but the performance gain certainly doesn’t match that premium (except where tessellation is concerned). Still, that performance gain remains the upside. On the downside, NVIDIA’s latest and greatest consumes far more power, runs far hotter, approaches a mini-vacuum in noise level, has weak video-out options, and last but not least, comes a full six months after AMD’s HD 5000 series.
To NVIDIA’s advantage, the GTX 480, and GF100 in general, offers more than just good gaming performance. With features like CUDA and PhysX, the company pretty much owns the landscape currently. AMD’s certainly involved as well, but not nearly to the extent that NVIDIA is. The fact that NVIDIA gets its hands dirty where game development is concerned is one reason the company should be commended. As far as I can tell, AMD has nowhere near the level of game-developer involvement that NVIDIA does.
GF100’s support for CUDA and PhysX, along with its improved tessellation performance, all feed into the company’s trade-off argument. NVIDIA believes that the extra power consumption and higher temperatures are a fair trade for the card’s unique features, and that’s its prerogative. I don’t quite agree, but that’s only my opinion. I’m sure that by this point in the article, you’ve formed your own opinion as well. If it varies from mine, that’s fine.
Like many others, I had high hopes for NVIDIA’s GF100 architecture despite all the negative press that has surrounded it since last fall. Even though it seemed unlikely that Fermi would give AMD a true run for its money, there was always that hope. But it became clear in recent months that GF100 wasn’t going to deliver as we hoped. The company itself proved this by offering no performance previews outside of the Unigine Heaven benchmark, where the card obviously has the upper hand.
What can NVIDIA do to make the GTX 470 and GTX 480 more attractive? It’s hard to say, because of all the issues I’ve mentioned, I’m certain most are tied to the architecture itself. A die shrink isn’t coming anytime soon, and rumor has it that NVIDIA is struggling with its 40nm yields as is. Another rumor holds that yields are the reason the launch GTX 480 saw a drop from 512 CUDA cores to 480. Heat could have been another factor, but either way, both of these issues would still be attributed to the architecture.
Price changes would be a good start. At $500, the GTX 480 isn’t all that attractive given its disadvantages. If it were priced at $400, it would give enthusiasts a more tempting option, and possibly even cause AMD to lower its prices. But at $500, AMD has no reason at all to lower its prices, and it knows it. Another solution is to simply wait, and let NVIDIA’s engineers fix the existing issues and release a revision. That sounds outlandish, but when the card as it stands draws so much power and runs so hot, it’s almost time for drastic action.
Another thing I could see happening is the release of mainstream cards that wouldn’t suffer from many of the same issues as the GTX 480, thanks to their scaled-back feature-sets (namely, fewer CUDA cores). If NVIDIA could deliver cards to directly compete with ATI’s entire mainstream line-up, such as the HD 5830, HD 5770 and even the HD 5750, we’d be in for some interesting times. And while NVIDIA delivers cards to that market, it could tweak its enthusiast parts to fix the issues mentioned above and get back on track.
So who should purchase this card? Fans of NVIDIA, those who love PhysX and CUDA, and those who want the best performance and can put up with the higher power draw and loud fan. As mentioned earlier, the GTX 480 isn’t a “bad” card, but with its issues, it’s not all that attractive, either.