AMD Radeon R9 290X & NVIDIA GeForce GTX 780 Ti Review

by Rob Williams on March 3, 2014 in Graphics & Displays

Nearly every battle in the GPU Wars is hotly contested. From performance to appraisals of value, AMD and NVIDIA engage in apparent mortal combat with each generation of GPU. The current generation, though, sees a clear-cut winner in most categories. So did Team Red win, or did Team Green? Read on to find out!

Page 1 – Introduction

At the moment, both AMD’s and NVIDIA’s top enthusiast offerings hover around the $700 price-point, and as the title of this article suggests, I’m going to be taking a look at both of them in some depth here.

But before I go further, there are a couple of things that need tackling. First, the GTX 780 Ti is not NVIDIA’s highest-end offering; that distinction belongs to the TITAN Black. However, that card isn’t targeted at the same sort of consumer as the 780 Ti; those who opt for the TITAN Black will be using multiple monitors (or a 4K monitor) and perhaps be able to take advantage of the vastly improved double-precision performance the card offers. So, ignoring the TITAN Black entirely, both AMD’s and NVIDIA’s top-end cards cost about $700; and thus begins the theme for this article.

There’s a reason I didn’t use the word “versus” in the title. The suggested retail price for AMD’s R9 290X is $549, but that’s been inflated to ~$700 in response to coin-mining enthusiasts snatching up AMD’s GCN-based cards as if one lucky card contained a map to the fountain of youth. Etailers don’t want to run out of cards (or, perhaps more accurately, they want to price gouge given the insatiable desire for the cards), and so actual gamers that happen to be AMD fans are the ones feeling the pain.

The fact of the matter is, though, both cards I’ll be taking a look at here can be had right now for about $700. So while I’m being careful not to call this a “versus” article, it can be taken as such. Just bear in mind that if SRPs had been kept intact, this would be a $549 vs. $700 head-to-head.

AMD Radeon R9 290X and NVIDIA GeForce GTX 780 Ti - Glamour Pose

From the red team, we have the Radeon R9 290X. On paper, this card’s biggest advantage over its main competitor is a 4GB framebuffer – something that those running big resolutions are going to like the look of. Further, being part of the Volcanic Islands series, the R9 290X supports TrueAudio and the Mantle API. Admittedly, both of these technologies are in their infancy, but both have good potential to make a splash once game developers jump on them down the road.

Carried-over features include support for Eyefinity (multi-monitor), CrossFire (multi-GPU), HD3D (3D gaming), PowerTune (advanced power control), and ZeroCore (ultra-low power consumption when idle). Unique to the R9 290/X is the ability to use CrossFire without connecting the cards with a bridge.

Another feature that’s easy to miss unless you’re told about it is the dual-BIOS switch that toggles between “Uber” and “Quiet” modes. In Uber mode, the fan profile allows the card to ramp up to very noticeable noise levels when stressed, whereas the default Quiet mode relaxes the fan values to keep noise down, with the caveat of the card running hotter. I’ll talk more about the temperature situation on this card later.

It’s not too often that a GPU vendor will release a new model that completely disrupts the position of another one in the same lineup, but AMD did just that when it released the R9 290. At $399 (SRP), this card costs $150 less than the 290X. The difference between them? The R9 290X has 10% more cores and a 53MHz higher core clock. In this respect, the R9 290 is a no-brainer… that’s a lot of savings for so little loss. Due to the same inflation the R9 290X has suffered, the R9 290 can be had for about $150 above SRP, with some models available for about $550.
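For the curious, those figures fall straight out of the specs listed in the table below; a quick sketch:

```python
# Spec gap between the R9 290X and R9 290, using the lineup table's figures.
r9_290x = {"cores": 2816, "core_mhz": 1000, "srp_usd": 549}
r9_290  = {"cores": 2560, "core_mhz": 947,  "srp_usd": 399}

core_gap_pct  = (r9_290x["cores"] / r9_290["cores"] - 1) * 100
clock_gap_mhz = r9_290x["core_mhz"] - r9_290["core_mhz"]
price_gap_usd = r9_290x["srp_usd"] - r9_290["srp_usd"]

print(f"Core count advantage: {core_gap_pct:.0f}%")    # 10%
print(f"Core clock advantage: {clock_gap_mhz} MHz")    # 53 MHz
print(f"SRP difference:       ${price_gap_usd}")       # $150
```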

| AMD Radeon Series | Cores | Core MHz | Memory | Mem MHz | Mem Bus | TDP |
|---|---|---|---|---|---|---|
| Radeon R9 290X | 2816 | 1000 | 4096MB | 5000 | 512-bit | 250W |
| Radeon R9 290 | 2560 | 947 | 4096MB | 5000 | 512-bit | 250W |
| Radeon R9 280X | 2048 | <1000 | 3072MB | 6000 | 384-bit | 250W |
| Radeon R9 270X | 1280 | <1050 | 2048MB | 5600 | 256-bit | 180W |
| Radeon R9 270 | 1280 | <925 | 2048MB | 5600 | 256-bit | 150W |
| Radeon R7 265 | 1024 | <925 | 2048MB | 5600 | 256-bit | 150W |
| Radeon R7 260X | 896 | <1100 | 2048MB | 6500 | 128-bit | 115W |
| Radeon R7 260 | 768 | <1000 | 1024MB | 6000 | 128-bit | 95W |
| Radeon R7 250X | 640 | <1000 | 1024MB | 4500 | 128-bit | 95W |
| Radeon R7 250 | 384 | <1050 | 1024MB | 4600 | 128-bit | 65W |

Due to reference PCIe power limits, both AMD and NVIDIA avoid releasing a card with a stated TDP higher than 250W, and so that’s where both the R9 290X and GTX 780 Ti sit. However, it doesn’t take much imagination to realize that the true TDPs would be higher if these vendors were a bit more honest – it’s nonsensical that three cards with wildly varying specs share the exact same TDP (not to mention that they don’t share the same power draw numbers when stressed).
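To put some rough numbers to that, here’s a quick sketch of the power budget a card of this class works with; the per-connector figures are the usual PCIe limits, and the one 6-pin plus one 8-pin layout is assumed for these reference boards:

```python
# Rough power budget for a reference card with one 6-pin and one 8-pin
# auxiliary connector (connector layout assumed here, not confirmed above).
PCIE_SLOT_W = 75    # power deliverable through the x16 slot
SIX_PIN_W   = 75    # 6-pin auxiliary connector
EIGHT_PIN_W = 150   # 8-pin auxiliary connector

connector_ceiling_w = PCIE_SLOT_W + SIX_PIN_W + EIGHT_PIN_W
stated_tdp_w = 250  # TDP both vendors list for their flagships

print(f"Connector ceiling: {connector_ceiling_w} W")                 # 300 W
print(f"Stated TDP:        {stated_tdp_w} W")
print(f"Paper headroom:    {connector_ceiling_w - stated_tdp_w} W")  # 50 W
```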

One thing that might be worth mentioning is that in both AMD’s and NVIDIA’s current lineups, it’s mostly the top-tier cards that feature a new or improved architecture, although all of the cards include updated features (which are in effect added through the drivers). On AMD’s side, the R9 290 and R9 290X are the only cards based on the brand-new Hawaii GPU. For NVIDIA, the GTX 780 and above use the big GK110 chip, while the GTX 770 and everything below it carries over the previous generation’s GK10X GPUs – the recently-released GTX 750 and 750 Ti, which debut the new Maxwell architecture, being the exception.

Speaking of NVIDIA, let’s see where the 780 Ti settles itself into the green team’s lineup:

| NVIDIA GeForce Series | Cores | Core MHz | Memory | Mem MHz | Mem Bus | TDP |
|---|---|---|---|---|---|---|
| GeForce GTX TITAN Black | 2880 | 889 | 6144MB | 7000 | 384-bit | 250W |
| GeForce GTX TITAN | 2688 | 837 | 6144MB | 6008 | 384-bit | 250W |
| GeForce GTX 780 Ti | 2880 | 875 | 3072MB | 7000 | 384-bit | 250W |
| GeForce GTX 780 | 2304 | 863 | 3072MB | 6008 | 384-bit | 250W |
| GeForce GTX 770 | 1536 | 1046 | 2048MB | 7010 | 256-bit | 230W |
| GeForce GTX 760 | 1152 | 980 | 2048MB | 6008 | 256-bit | 170W |
| GeForce GTX 750 Ti | 640 | 1020 | 2048MB | 5400 | 128-bit | 60W |
| GeForce GTX 750 | 512 | 1020 | 2048MB | 5000 | 128-bit | 55W |
| GeForce GTX 660 | 960 | 980 | 2048MB | 6000 | 192-bit | 140W |
| GeForce GTX 650 | 384 | 1058 | 1024MB | 5000 | 128-bit | 64W |
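One figure neither table lists is memory bandwidth, which falls out of the memory clock and bus width directly; a quick sketch using the two flagships’ specs:

```python
# Memory bandwidth follows directly from effective memory clock and bus width:
#   bandwidth (GB/s) = effective MHz * (bus width in bits / 8 bytes) / 1000
def mem_bandwidth_gbs(effective_mhz: int, bus_bits: int) -> float:
    return effective_mhz * (bus_bits / 8) / 1000

print(f"R9 290X:    {mem_bandwidth_gbs(5000, 512):.0f} GB/s")  # 320 GB/s
print(f"GTX 780 Ti: {mem_bandwidth_gbs(7000, 384):.0f} GB/s")  # 336 GB/s
```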

I mentioned above that the release of AMD’s R9 290 disrupted the overall appeal of the R9 290X, but NVIDIA’s lineup suffered the same sort of thing when the 780 Ti came along; after all, how tempting is a $1,000 GPU that gets outperformed by a $700 one? For most enthusiasts, a TITAN would have been hard to justify, but the model itself was of course still justified by those who needed such a massive framebuffer and/or improved double-precision performance. NVIDIA corrected this issue with the TITAN Black, which about matches the 780 Ti in gaming performance, and still offers the other perks that help set the card apart.

While the 700 series has introduced a bunch of tweaks under the hood, none of them result in clear features that a gamer might want to take advantage of. Fortunately for NVIDIA, it’s offered a rich featureset on its cards for some time; the carried-over features here include support for CUDA (compute enhancements; apps need to support it explicitly), PhysX (advanced physics in games that support the API), SLI (multi-GPU), 3D Vision (3D gaming), Surround (multi-monitor), Adaptive VSync (vertical sync with vastly reduced tearing), GPU Boost 2.0 (automatic boosts to the GPU clock if the card’s not hitting its peak temperature), and two technologies I’ve become a particular fan of: GameStream (the ability to stream a game from your PC to the SHIELD handheld), and ShadowPlay (game recording with virtually no performance hit).

Which Offers the Better Experience?

If I had to choose either the 780 Ti or 290X based on their featuresets alone, I’d have to tip my hat towards NVIDIA’s offering. Over the years, I’ve used both AMD’s and NVIDIA’s cards extensively, and I’ve come to prefer NVIDIA’s featureset, as well as its drivers. This is especially true on the multi-monitor front; I’ve found configuring multiple monitors on AMD’s cards to be an exercise in patience, whereas on NVIDIA, it’s an absolute breeze in comparison.

None of that is to say that NVIDIA’s multi-monitor implementation is perfect, as some oddities can still arise, but none of those oddities have matched the hassles I’ve experienced with AMD’s cards. In particular, while testing both the 290X and 780 Ti out in multi-monitor configurations, I experienced an issue with AMD where the driver refused to place my taskbar on the center display; instead, it kept it on the right one. This glitch required me to do a thorough uninstall of the driver.

There are other features NVIDIA offers that keep me tied to its cards, but some or all of them might not matter to you. In particular, I appreciate the ability on NVIDIA’s cards to configure games on a per-title basis via the driver (Note: I managed to overlook the fact that this can be done in AMD’s driver as well; thanks to Sean from the comments for pointing it out), something I couldn’t really live without at this point. Borderlands, for example, does not have a VSync option, which results in obvious tearing during gameplay. With NVIDIA’s Control Panel, I’m able to force VSync through the driver. Another example is my classic MMO Asheron’s Call: it offers no anti-aliasing options in-game, but with NVIDIA’s driver, I can force it.

AMD Radeon R9 290X and NVIDIA GeForce 780 Ti

Then, there’s ShadowPlay, the technology that allows you to record gameplay with a minimal hit to performance. As I’ve mentioned above and elsewhere before, this is a technology I’ve just about fallen in love with, because it works incredibly well and is very convenient. Because ShadowPlay can record continually, you’re able to save video to your hard drive even if you had no intention of recording in the first place (useful if something awesome happens and you’d like to relive the experience). It’s a feature I’ve taken advantage of three times in the past week. While some ShadowPlay competitors also offer continual recording, their performance hit is generally very noticeable (Fraps, for example, locks the framerate).

ShadowPlay is such a good feature that I feel AMD must have its own solution in the works – it kind of has to. With the unbelievable growth of online game streaming, gamers who partake in such activities are going to be attracted to the GPU that can vastly reduce the load on their CPU when recording gameplay. And don’t forget: ShadowPlay doesn’t lock the game’s framerate like Fraps does, so during gameplay, you’re not likely to even realize it’s recording.

I’ve harped quite a bit on NVIDIA’s pluses, so what about AMD’s? Admittedly, in terms of features that can be taken advantage of right now, there’s not a whole lot to talk about. Both AMD and NVIDIA offer multi-monitor and multi-GPU support, for example, and other things like 3D gaming, overclocking flexibility, and power-related technologies.

That being said, Mantle could become a literal game-changer down the road, though it’s not likely to matter much to R9 290X owners unless the card is paired with a very modest CPU, since Mantle’s biggest goal is easing the load on the CPU. TrueAudio is another piece of technology that some could come to prefer once games start supporting it.

Of course, we can’t ignore one of the reasons AMD’s cards are flying off the shelves lately: The GCN architecture proves far more efficient at cryptocurrency mining, and it performs better than NVIDIA’s at other similar tasks as well, such as Folding@home. If either of those things is important to you, AMD can’t be beat (for now, at least).

Ultimately, chances are you could go with either AMD or NVIDIA and not notice missing features. Personally, I like NVIDIA’s featureset enough to keep one of its cards in my personal rig, but you might not see value in the same features I do. From a stability standpoint, while I’ve experienced more hassles with AMD’s cards and drivers (even recently), I wouldn’t expect that the regular user would run into a roadblock. When you install a GPU every other day, you’re bound to run into more issues than is normal.

With all of that said, let’s take a quick look at our test system configuration and methodology, and then get into the important stuff: Gaming performance.

