We’ve learned a lot about NVIDIA’s GF100 (Fermi) architecture over the past year, and after what seemed like an eternal wait, the company has officially announced the first two cards in the series: the GeForce GTX 470 and GTX 480. To start, we’re taking a look at the latter, so read on to see if GF100 was worth the wait.
Over the past couple of years, we’ve seen both ATI and NVIDIA take more care in designing their respective GPU coolers, and today, both deliver some good ones. Cost used to be a major concern, but GPUs today tend to cost a wee bit more than they did, say, five years ago, so both companies are willing to sink a bit of R&D into making sure their reference designs are suitable for all usage models.
For the GTX 470, NVIDIA stuck with the general design from the GTX 200 series, but for the GTX 480, the cooler was built from the ground up with highly effective cooling in mind. That’s no surprise: a chip made up of over 3 billion transistors, with an enormous surface area to match, is going to need it.
Not only was cooling taken into consideration, but looks as well. The first time I saw the card on paper, I wasn’t that impressed, but seeing the actual card changed my mind. In fact, I’ve come to rather like it, and it screams “high-end” like no other. The entire first batch of GTX 480s will use this cooler, as they’re all being shipped out straight from NVIDIA. It might take a couple of months before we see companies such as EVGA selling the card with a custom cooler.
The photo below exhibits a first for a reference design… heat pipes. In total, there are five, with one hidden inside the shroud. I had hoped to tear this card apart in order to show off the cooler inside and out, but thanks to the stubborn screws on the back, I was unable to. The fact that NVIDIA includes heat pipes is important to note, though, as it suggests we can expect this card to run hot.
In a configuration more common to dual-GPU cards, the GTX 480 adopts a PCI-E 8-pin + 6-pin power setup. Given the 250W TDP NVIDIA has rated the card at, it seems a 6-pin + 6-pin configuration could have sufficed, but as we’ll see later, NVIDIA was a bit generous in labeling the card as only 250W.
At an NVIDIA-held editor’s day this past January, the company said that for multi-monitor support (as in, three or more displays), more than one GPU would be required. This is in stark contrast to AMD’s current solution, which can drive three displays just fine off of a single GPU (and even six off of a single GPU with the Radeon HD 5870 Eyefinity 6 edition).
I didn’t have a major problem with that, because if I were to run a game across more than one monitor, I’d likely crave more power than a single GPU could provide. But the issue I now see is that the card doesn’t support anything but DVI right out of the box, unless you count mini-HDMI, which requires the use of an adapter.
Strangely enough, I didn’t even notice the lack of ports while taking pictures or during installation, so it wasn’t until I decided to test out HDMI that I clued in. I have no idea why NVIDIA went with a near-useless mini-HDMI port in lieu of a full-sized one, when the latter should have been possible. In a recent interview, the company mentioned that GF100 is an architecture built from the ground up, and if that’s true, then the deliberate omission of DisplayPort and a full-sized HDMI port is rather upsetting.
It’s important to note that AMD offers 2x DVI, 1x DisplayPort and 1x HDMI on all of its mainstream and higher cards. The only reason it doesn’t offer all four on the lower-end cards is due to the lack of space, since those cards can be built with a single-slot design.
NVIDIA downplays the lack of connectors here, but the issue simply shouldn’t exist. The silver lining is that vendors such as EVGA will likely include the important adapters to work around the problem, such as DVI to HDMI and possibly even mini-HDMI to HDMI.