ATI Radeon HD 4890 & NVIDIA GeForce GTX 275

by Rob Williams on April 3, 2009 in Graphics & Displays

It’s not often we get to take two brand-new GPUs and pit them against each other in one launch article, but that’s what we’re doing with ATI’s HD 4890 and NVIDIA’s GTX 275. Both cards are priced at $249, and both also happen to offer great performance and insane overclocking ability. With those and other factors in mind, who comes out on top?

Page 3 – NVIDIA’s GeForce GTX 275

Like ATI, NVIDIA has had a busy few weeks, so before we look at the company’s recent announcements, let’s first take a look at the important part of the article, the GTX 275. Introduced to slot in between the GTX 260/216 and the GTX 285, the GTX 275 offers fast clocks at a typical mid-range price of ~$249. This card is of course designed to compete directly with ATI’s HD 4890, so instead of speculating about which will come out on top, we’ll let the results on the following pages speak for themselves.

Interestingly, the GTX 275 doesn’t fall too far below the much more expensive GTX 285. While the GTX 285 has a Core clock of 648MHz and a Shader clock of 1476MHz, the GTX 275 has its clocks dropped to 633MHz Core and 1404MHz Shader. What does this say? It looks like NVIDIA really wanted to compete, and didn’t mind taking away from GTX 285 sales to do it.

There’s potential here for an obvious winner: gamers. Although the clocks on the GTX 275 are noticeably lower than the GTX 285’s, NVIDIA’s cards have historically overclocked well, and I’d be surprised if any launch GTX 275 couldn’t hit GTX 285 speeds. That makes the GTX 275 just a little more desirable – not to mention a great value.
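To put that overclocking headroom in perspective, here is a quick back-of-the-envelope sketch of how small the bump from stock GTX 275 clocks to GTX 285 clocks actually is, using the reference clocks quoted above:

```python
# Percentage overclock needed to take a stock GTX 275 to GTX 285 clocks.
# Values are NVIDIA reference clocks in MHz: (GTX 275, GTX 285).
specs = {"core": (633, 648), "shader": (1404, 1476)}

for name, (gtx275, gtx285) in specs.items():
    pct = (gtx285 - gtx275) / gtx275 * 100
    print(f"{name}: +{pct:.1f}%")  # roughly +2.4% core, +5.1% shader
```

A bump of a few percent is well within what launch boards of this era typically managed, which is why reaching GTX 285 speeds seems like a safe bet.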

As with the HD 4890, NVIDIA has said the card will retail for $249, and so far pricing is coming close, with Newegg’s listings showing $259.99 (although none are in stock). Like NVIDIA’s other single-GPU cards, this one uses two PCI-E 6-pin connectors, and it has a maximum board power (TDP) of 219W. The memory configuration is also slightly tweaked from the GTX 285: instead of a 512-bit memory bus and 1GB of GDDR3, we see a 448-bit bus and 896MB of GDDR3.

Model              Core MHz   Shader MHz   Mem MHz   Memory   Bus Width   Processors
GeForce GTX 295    576        1242         1000      1792MB   448-bit     480
GeForce GTX 285    648        1476         1242      1GB      512-bit     240
GeForce GTX 275    633        1404         1134      896MB    448-bit     240
GeForce GTX 280    602        1296         1107      1GB      512-bit     240
GeForce GTX 260    576        1242         999       896MB    448-bit     216
GeForce GTS 250    738        1836         1100      1GB      256-bit     128
GeForce 9800 GX2   600        1500         1000      1GB      512-bit     256
GeForce 9800 GTX+  738        1836         1100      512MB    256-bit     128
GeForce 9800 GTX   675        1688         1100      512MB    256-bit     128
GeForce 9800 GT    600        1500         900       512MB    256-bit     112
GeForce 9600 GT    650        1625         900       512MB    256-bit     64
GeForce 9600 GSO   550        1375         800       384MB    192-bit     96
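The bus-width and memory-clock columns above translate directly into memory bandwidth, which is where the GTX 275’s trimmed 448-bit bus gives up ground to the GTX 285. A rough sketch of the arithmetic (assuming GDDR3’s effective double data rate – these are theoretical peaks, not NVIDIA’s official figures):

```python
# Theoretical peak GDDR3 bandwidth: bytes per transfer (bus width / 8)
# times memory clock, doubled for DDR, expressed in GB/s.
def gddr3_bandwidth_gbs(bus_bits, mem_mhz):
    return bus_bits / 8 * mem_mhz * 2 / 1000

print(round(gddr3_bandwidth_gbs(448, 1134), 1))  # GTX 275, ~127 GB/s
print(round(gddr3_bandwidth_gbs(512, 1242), 1))  # GTX 285, ~159 GB/s
```

So the GTX 275 gives up roughly a fifth of the GTX 285’s memory bandwidth – a bigger cut, proportionally, than its core and shader clock deficits.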

I’ve pretty well said everything that can be said about the card, so let’s move right along into what else NVIDIA has been up to. As mentioned in our news last week, the company unveiled their APEX technology during the Game Developers Conference, and though I’m not a developer, much less a game developer, this is one piece of technology that does seem to hold real potential, at least as far as PhysX is concerned.

Although our mention in that post was quite brief, APEX is actually pretty cool in both its design and its goals. NVIDIA recognized the fact that most game titles are built with far more artists than coders (no surprise), so APEX makes it easier for those artists to become a larger part of the development process. As the example in the news post showed, a developer could essentially take a tree, make it explode, and tweak it as necessary to match their vision. It’s an interesting tool, so we’ll just have to sit back and wait to see how successful it will be.

During NVIDIA’s own conference call, Ambient Occlusion was again a topic of definite interest. Given that both ATI and NVIDIA spoke so highly of it, it’s bound to catch on. Although both manufacturers’ cards are said to offer near-identical results, NVIDIA has the added benefit of being able to add support to games that don’t natively offer it. Their prime example was Half-Life 2, and as a five-year-old game, it’s no surprise that it’s not supported. Thanks to an added control inside the NVIDIA Control Panel, though, you can enable it in any title and hope for the best.

Some examples of forced Ambient Occlusion are shown in these screenshots:

The topic of PhysX is one that’s never far from NVIDIA’s mouth, and the call last week was further proof of that. Exciting things are happening with the technology, and developers continue to pile on and claim support. Not many actual game titles were announced, but the company did toot its own horn about the 100+ games currently on the market that support it (including console titles).

This year, a few noteworthy games are due to come out that will support the technology, from MKZ, a first-person shooter, to Star Tales, a game I’m not too familiar with, but one that does a good job of making me wonder about the potential of breast physics. In addition to future titles supporting the tech, NVIDIA also pointed out that patches exist for games like Sacred 2 to open up support (and the results look quite good, actually).

Beyond PhysX, they talked a bit more about CUDA and what applications are now available. Aside from Badaboom, which has become a standard in any discussion of CUDA, they also mentioned MotionDSP’s vReveal, a program we talked about briefly in our news last week. This application aims to take your shoddy-looking videos and improve them, CSI-style. It holds real promise, but as others have learned, it also has a few serious limitations.

I could write more about what NVIDIA discussed and showed off, but really, this article is about two graphics cards, so I’ll wrap this up. The last notable thing NVIDIA unveiled was their third power pack, which is available right now. In this one, you’ll be able to download the Sacred 2 PhysX patch, a benchmark demo of Star Tales, a PhysX screensaver (complete with source code), a trial of vReveal and a GPU-accelerated SETI@home client.

Both ATI and NVIDIA have been up to a lot over the course of the past few weeks, and boy, has it made more work for editors! So finally, let’s get to the really good stuff… our test results. On the next page, we have our test rig and methodology, and afterwards, we’ll get right into Call of Duty: World at War.


Rob Williams

Rob founded Techgage in 2005 to be an “Advocate of the consumer”, focusing on fair reviews and keeping people apprised of news in the tech world. He caters to enthusiasts and businesses alike, from desktop gaming to professional workstations, and all the supporting software.
