
NVIDIA GeForce GTX 560 Ti – Titanium is Back

Date: January 26, 2011
Author(s): Rob Williams

After being hidden for the past eight years, NVIDIA has brought back the “Ti” name for its GeForce GTX 560 Ti. The GeForce4 Ti 4200, introduced in 2002, was a card that offered both great pricing and superb performance. NVIDIA looks to recreate the same sort of excitement with its GTX 560 Ti, but has it succeeded?



Introduction

In continuing its rollout of the GeForce GTX 500 series, NVIDIA has just unleashed its GTX 560 Ti, a mid-range graphics card targeting those who are looking for superb gaming performance but don’t care to take a hammer to their piggy bank.

At $249.99 USD, NVIDIA’s latest card competes directly with AMD’s brand-new Radeon HD 6950 1GB, and it’s our hope that by the time we’re through with this review, we’ll have a better understanding of which competitor comes out ahead at the ~$250 mark.

When NVIDIA first contacted us about the GTX 560 Ti, I was hit with nostalgia immediately. At around the same time I was beginning to take PC hardware a little more seriously, I purchased a GeForce4 Ti 4200… an under-$200 card that at the time just screamed. I had a ton of fun with that card, and certainly got my money’s worth. Once the GeForce 5 series came to be, though, the “Ti” name was shelved, not to be seen again.

Of course, that is until now. Why exactly did NVIDIA revive this eight-year-old gem? In talking to the company about it, it seemed like there was no strong reason at all… it was just something to do. Plus, it looks good in marketing. “Ti” stands for “Titanium”, a metal that’s lighter than steel but comparably strong, and NVIDIA pushes the GTX 560 Ti as a card that packs a massive punch in a modest physical package.

Closer Look

An interesting thing to mention about the GTX 560 Ti right off the bat is that this is not a replacement for the GTX 460. Strange, right? Well, it’s true, and as the price-points of both are quite far apart (~$70), it does tend to make sense. We’ll see how much sense after our performance benchmarks.

NVIDIA’s current line-up looks like this:

Model | Core MHz | Shader MHz | Mem MHz | Memory | Bus Width | Cores
GeForce GTX 580 | 772 | 1544 | 4008 | 1536MB | 384-bit | 512
GeForce GTX 570 | 732 | 1464 | 3800 | 1280MB | 320-bit | 480
GeForce GTX 560 Ti | 822 | 1645 | 4008 | 1024MB | 256-bit | 384
GeForce GTX 465 | 607 | 1215 | 3206 | 1024MB | 256-bit | 352
GeForce GTX 460 | 675 | 1350 | 3600 | 768MB / 1024MB | 192-bit / 256-bit | 336
GeForce GTS 450 | 783 | 1566 | 3608 | 1024MB | 128-bit | 192

Compared to NVIDIA’s recently-launched GeForce GTX 570, the GTX 560 Ti is scaled down in numerous ways, such as having fewer cores and a narrower memory bus, but at the same time the core clock has been bumped up. Overall, this should prove to be an interesting specimen.

The cooler on the GTX 560 Ti looks quite similar to that of the GTS 450 launched this past September, and overall that’s not a bad thing. The fan still strangely protrudes through the top of the shroud’s open space, but that’s not an issue unless you for some reason enjoy rubbing the exterior of your graphics card while gaming.

NVIDIA GeForce GTX 560 Ti

Like the other GTX 500 cards, this one features a mini-HDMI port along with dual DVI ports. For those interested in using more than two monitors for playing multi-monitor games or using NVIDIA 3D Surround, you’ll need to purchase a second GTX 560 Ti.

NVIDIA GeForce GTX 560 Ti

The rated TDP for the card is 170W, and NVIDIA recommends using a power supply rated at 500W or higher. Simple math will tell you that a 700W+ PSU would be suitable for two of these cards in SLI.
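If you’d like to sanity-check that math yourself, here’s a minimal sketch in Python. Only the 170W TDP and the 500W single-card recommendation come from NVIDIA; the assumption that the recommendation already budgets roughly 330W for the rest of the system (headroom included) is ours.

```python
# Minimal sketch: estimate a PSU rating for one or more GTX 560 Ti cards.
# The 170W TDP and 500W single-card figure are NVIDIA's; the ~330W non-GPU
# budget is an assumption derived from those two numbers.
CARD_TDP_W = 170
SINGLE_CARD_PSU_W = 500

def psu_recommendation(num_cards: int) -> int:
    non_gpu_budget_w = SINGLE_CARD_PSU_W - CARD_TDP_W  # ~330W, assumed
    return non_gpu_budget_w + CARD_TDP_W * num_cards

print(psu_recommendation(1))  # 500
print(psu_recommendation(2))  # 670 -> a 700W+ unit covers SLI comfortably
```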

NVIDIA GeForce GTX 560 Ti

NVIDIA’s latest card looks interesting from both a specs and a pricing perspective, and given that it features the Ti moniker that was once known for affordable excellence, this should be fun to benchmark. Let’s get on with it!

Test System & Methodology

At Techgage, we strive to make sure our results are as accurate as possible. Our testing is rigorous and time-consuming, but we feel the effort is worth it. In an attempt to leave no question unanswered, this page contains not only our testbed specifications, but also a detailed look at how we conduct our testing.

Test Machine

The table below lists our testing machine’s hardware, which remains unchanged throughout all GPU testing, minus the graphics card. Each card used for comparison is also listed here, along with the driver version used. Each one of the URLs in this table can be clicked to view the respective category on our site for that product.

Component | Model
Processor | Intel Core i7-975 Extreme Edition – Quad-Core @ 4.05GHz – 1.40v
Motherboard | Gigabyte GA-EX58-EXTREME – F13j BIOS (08/02/2010)
Memory | Corsair DOMINATOR – 12GB DDR3-1333 7-7-7-24-1T, 1.60v
ATI Graphics | Radeon HD 6970 2GB CrossFireX (Reference) – Catalyst 10.12 Beta
| Radeon HD 6950 2GB CrossFireX (Reference) – Catalyst 10.12 Beta
| Radeon HD 6970 2GB (Reference) – Catalyst 10.12 Beta
| Radeon HD 6950 2GB (Reference) – Catalyst 11.1
| Radeon HD 6950 1GB (Reference) – Catalyst 11.1
| Radeon HD 6870 1GB (Reference CrossFireX) – Catalyst 10.10
| Radeon HD 6850 1GB (Reference CrossFireX) – Catalyst 10.10
| Radeon HD 6870 1GB (Reference) – Catalyst Oct 5, 2010 Beta
| Radeon HD 6850 1GB (Reference) – Catalyst Oct 5, 2010 Beta
| Radeon HD 5870 1GB (Sapphire) – Catalyst 10.8
| Radeon HD 5850 1GB (ASUS) – Catalyst 10.8
| Radeon HD 5830 1GB (Reference) – Catalyst 10.8
| Radeon HD 5770 1GB (Sapphire FleX) – Catalyst 10.9
| Radeon HD 5770 1GB (Reference) – Catalyst 10.8
| Radeon HD 5750 1GB (Sapphire) – Catalyst 10.8
NVIDIA Graphics | GeForce GTX 580 1536MB (Reference) – GeForce 262.99
| GeForce GTX 570 1280MB (Reference) – GeForce 263.09
| GeForce GTX 560 Ti 1024MB (Reference) – GeForce 266.56
| GeForce GTX 480 1536MB (Reference) – GeForce 260.63
| GeForce GTX 470 1280MB (EVGA) – GeForce 260.63
| GeForce GTX 460 1GB (EVGA) – GeForce 260.63
| GeForce GTS 450 1GB (ASUS) – GeForce 260.63
Audio | ASUS Xonar D2X
Storage | Seagate Barracuda 500GB 7200.11
Power Supply | Corsair HX1000W
Chassis | Cooler Master HAF X Full-Tower
Display | Gateway XHD3000 30″
Cooling | Corsair H50 Self-Contained Liquid Cooler
Et cetera | Windows 7 Ultimate 64-bit

When preparing our testbeds for any type of performance testing, we follow these guidelines:

To aid in keeping results accurate and repeatable, we prevent certain services in Windows 7 from starting up at boot. These services have a tendency to start up in the background without notice, potentially causing inaccurate test results. For example, disabling “Windows Search” turns off the OS’ indexing, which can at times utilize the hard drive and memory more than we’d like.

The most important services we disable are:

The full list of Windows services we ensure are disabled is large, but for those interested in perusing it, please look here. Most of the services we disable are minor, but we go to this extent to keep the PC as highly optimized as possible.
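For those who would rather script this than click through the Services panel, here’s a rough sketch of the idea using Python on Windows 7 (run it from an elevated prompt). Only “WSearch”, the Windows Search service mentioned above, is taken from this page; any further entries would come from the full list linked above.

```python
# Rough sketch: prevent selected Windows services from starting at boot and
# stop them for the current session. Run from an elevated prompt.
import subprocess

SERVICES_TO_DISABLE = ["WSearch"]  # Windows Search; extend with entries from the full list

for service in SERVICES_TO_DISABLE:
    # Set the startup type to "disabled" ("start=" and its value are passed
    # as separate arguments, matching how sc.exe expects them).
    subprocess.run(["sc", "config", service, "start=", "disabled"], check=True)
    # Stop the service now; don't raise an error if it isn't running.
    subprocess.run(["net", "stop", service], check=False)
```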

Game Titles

At this time, we benchmark with three resolutions that represent three popular monitor sizes available today: 20″ (1680×1050), 24″ (1920×1080) and 30″ (2560×1600). Each of these resolutions offers enough of a variance in raw pixel output to warrant testing with it, and each properly represents a different market segment: mainstream, mid-range and high-end.
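To put some numbers behind the “raw pixel output” claim, the quick calculation below (derived only from the resolutions listed above) shows how much more work each step up asks of the GPU.

```python
# Raw pixel counts for our three test resolutions, relative to 1680x1050.
resolutions = {"20-inch": (1680, 1050), "24-inch": (1920, 1080), "30-inch": (2560, 1600)}

base = 1680 * 1050
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {w}x{h} = {pixels:,} pixels ({pixels / base:.2f}x the 20-inch load)")
# 1,764,000 / 2,073,600 (~1.18x) / 4,096,000 (~2.32x)
```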

Because we value results generated by real-world testing, we don’t utilize timedemos. The possible exceptions are Futuremark’s 3DMark 11 and Unigine’s Heaven 2.1. Though neither of these is a game, both act as robust timedemos. We choose to use them as they’re a standard where GPU reviews are concerned.

All of our results are captured with the help of Beepa’s FRAPS 3.2.3, while stress-testing and temperature-monitoring are handled by OCCT 3.1.0 and GPU-Z, respectively.
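As an aside, average framerates fall out of FRAPS’ logs with very little work. The sketch below assumes the per-run “frametimes” CSV (a header row followed by a frame index and a cumulative timestamp in milliseconds); the filename is just a placeholder.

```python
# Sketch: compute average FPS from a FRAPS frametimes log.
# Assumed format: header row, then "frame, cumulative time (ms)" per line.
import csv

def average_fps(frametimes_csv: str) -> float:
    with open(frametimes_csv, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        times_ms = [float(row[1]) for row in reader if len(row) >= 2]
    elapsed_s = (times_ms[-1] - times_ms[0]) / 1000.0
    return (len(times_ms) - 1) / elapsed_s  # frame intervals per second of capture

print(average_fps("dirt2_run1_frametimes.csv"))  # placeholder filename
```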

For those interested in the exact settings we use for each game, direct screenshots can be seen below:

Colin McRae: Dirt 2

Colin McRae: Dirt 2 - Settings

Colin McRae: Dirt 2 - Settings

Just Cause 2

Just Cause 2 - Settings

Mafia II

Mafia II - Settings

Metro 2033

Metro 2033 - Settings

StarCraft II

StarCraft II - Settings

Colin McRae: Dirt 2

It’s not that often that faithful PC gamers get a proper racing game for their platform of choice, but Dirt 2 is one of those. While it is a “console port”, there’s virtually nothing in the game that will make that point stand out. The game as a whole takes good advantage of our PC’s hardware, and it’s as challenging as it is good-looking.

Colin McRae: Dirt 2

Manual Run-through: The race we chose to use in Dirt 2 is the first one available in the game, as it’s easily accessible and features a lot of the GPU-pounding effects the game has become known for, such as realistic dust and water, a large onlooking crowd and fine details on and off the track. Each run-through lasts the entire two laps, which comes out to about 2.5 minutes.

Except at 2560×1600, NVIDIA’s GeForce GTX 560 Ti excels in this title compared to the Radeon HD 6950 1GB – which I admit is a bit surprising since Dirt 2 has always been a little weighted towards AMD’s offerings.

Just Cause 2

Just Cause 2 might not belong to a well-established series of games, but with its launch, it looks like that might not be the case for long. The game offers not only superb graphics, but an enormous world to explore, and for people like me, a countless number of hidden items to find around it. During the game, you’ll be scaling skyscrapers, racing through jungles and fighting atop snow-drenched mountains. What’s not to like?

Just Cause 2

Manual Run-through: The level chosen here is part of the second mission in the game, “Casino Bust”. Our run-through begins at the second half of the level, which requires us to situate ourselves on top of a car and have our driver, Karl Blaine, speed us through part of the island to safety. This is a great mission for benchmarking, as we get to see a lot of the landmass, even if some of it is at a distance.

Dirt 2 might be just a tad weighted towards AMD cards, but Just Cause 2 is heavily weighted towards them (not that it’s necessarily a bad thing), and it’s apparent here. NVIDIA’s GTX 560 Ti surpassed the HD 6950 1GB in Dirt 2, but the latter soars ahead here. The differences aren’t small, either. At 2560×1600, the HD 6950 1GB performs about 33% better.

Mafia II

For fans of the original Mafia game, having to wait an incredible eight years for a sequel must’ve been tough. But as we found out in our review, the wait might be forgotten, as the game is quite good. It doesn’t feature nearly as much depth as, say, Grand Theft Auto IV, but it does a masterful job of bringing you back to the 1940s and letting you experience the Mafia lifestyle.

Mafia II

Manual Run-through: Because this game doesn’t allow us to save a game in the middle of a level, we chose to use chapter 7, “In Loving Memory…”, to do our runthrough. That chapter begins us on a street corner with many people around, and from there, we run to our garage, get in our car, and speed out to the street. Our path ultimately leads us to the park, and takes close to two minutes to accomplish.

The previous couple of games tend, on average, to run better on AMD’s cards than on NVIDIA’s, but Mafia II is one of those that should run better on the green team (err, as in, NVIDIA). Here, that’s not so much the case. Though the GTX 560 Ti and HD 6950 1GB should be about equal in performance, AMD’s card edges ahead.

Metro 2033

One of the more popular Internet memes of the past couple of years has been “Can it run Crysis?”, but as soon as Metro 2033 launched, that’s a meme that should have died. Metro 2033 is without question one of the beefiest games on the market, and though it supports DirectX 11, that’s almost a feature worth ignoring, because the lengths you’ll need to go to in order to see playable framerates likely aren’t worth it.

Metro 2033

Manual Run-through: The level we use for testing is part of chapter 4, called “Child”, where we must follow a linear path through multiple corridors until we reach our end point, which takes a total of about 90 seconds. Please note that due to the reason mentioned above, we test this game in DX10 mode, as DX11 simply isn’t that realistic from a performance standpoint.

Similar to the previous test, AMD’s Radeon HD 6950 1GB inches past NVIDIA’s GTX 560 Ti to come out on top, across all resolutions.

StarCraft II

Of all the games we test, it might be this one that needs no introduction. Back in 1998, Blizzard unleashed what was soon to be one of the most successful RTS titles on the planet, and even as of today, the original is still heavily played all around the world – even in actual competitions. StarCraft II of course had a lot of hype to live up to, and it did, thanks to its intense gameplay and superb graphics.

StarCraft II

Manual Run-through: The portion of the game we use for testing is part of the Zero Hour mission, which has us holding fort until we’re able to evacuate. Our saved game starts us in the middle of the mission, and from the get-go, we build a couple of buildings and concurrently move our main units up and around the map. Total playtime lasts about two minutes.

At 2560×1600, this game runs fine on almost all of the graphics cards we’ve tested, at max detail. Due to this, we’re considering either removing StarCraft II from our suite soon, or forcing anti-aliasing to bring the framerates down. Your opinions on which route to take would be appreciated! Is StarCraft II still worthy of benchmarking?

Futuremark 3DMark 11

Although we generally shun automated gaming benchmarks, we do like to run at least one to see how our GPUs scale when used in a ‘timedemo’-type scenario. Futuremark’s 3DMark 11 is without question the best such test on the market, and it’s a joy to use and watch. The folks at Futuremark are experts in what they do, and they really know how to push that hardware of yours to its limit.

Futuremark 3DMark 11

Similar to a real game, 3DMark 11 offers many configuration options, although many people (including us) prefer to stick to the built-in profiles, Performance and Extreme. Depending on which one you choose, the graphics options and resolution are tweaked accordingly. As you’d expect, the better the profile, the more intensive the test. The benchmark doesn’t natively support 2560×1600, so to benchmark with that, we choose the Extreme profile and simply change the resolution.

Although I don’t much care for synthetic benchmarks, I hate to admit that 3DMark 11 tends to scale quite well with a GPU’s raw performance, and in this case, AMD’s card stays a fair bit ahead of NVIDIA’s GTX 560 Ti – about 12.5~16.6%.

Unigine Heaven 2.1

While Futuremark is a well-established name where PC benchmarking is concerned, Unigine is only just becoming known. The company’s main focus isn’t benchmarks, but rather its cross-platform game engine, which it licenses out to other developers, and also its own games, such as a gorgeous post-apocalyptic oil strategy game. The company’s benchmarks are simply a by-product of its game engine.

Unigine Heaven 2.1

The biggest reason the company’s “Heaven” benchmark grew in popularity rather quickly is that both AMD and NVIDIA promoted it for its heavy use of tessellation, a key DirectX 11 feature. Like 3DMark 11, the benchmark is overkill by design, so the results aren’t going to correlate directly with real gameplay. Rather, they showcase which card models can better handle DX11 and its GPU-bogging features.

Given the superb geometry performance that both NVIDIA’s GeForce GTX 400 and GTX 500 series offer, I was a bit surprised to see AMD’s Radeon HD 6950 1GB come out ahead once again. The differences aren’t large, but they are still notable.

Power & Temperatures

To test our graphics cards for both temperatures and power consumption, we utilize OCCT for the stress-testing, GPU-Z for the temperature monitoring, and a Kill-a-Watt for power monitoring. The Kill-a-Watt is plugged into its own socket, with only the PC connected to it.

As per our guidelines when benchmarking with Windows, once the room temperature is stable (and reasonable), the test machine is booted up and left to sit at the desktop until things are completely idle. Because we are running such a highly optimized PC, this normally takes one or two minutes. Once things are good to go, the idle wattage is noted, GPU-Z is started up to begin monitoring card temperatures, and OCCT is set up to begin stress-testing.

To push the cards we test to their absolute limit, we use OCCT in full-screen 2560×1600 mode and allow it to run for 15 minutes, which includes a one-minute lull at the start and a four-minute lull at the end. After about five minutes, we begin to monitor our Kill-a-Watt to record the maximum wattage.
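For reference, pulling the idle and peak figures out of a GPU-Z sensor log afterwards takes only a few lines. This sketch assumes the log was saved as comma-separated text with a column whose header contains “GPU Temperature”; the exact header text and the filename vary, so treat both as placeholders.

```python
# Sketch: extract the lowest (approx. idle) and highest (load) GPU temperature
# from a GPU-Z sensor log saved as comma-separated text. Column header and
# filename are assumptions.
import csv

def temp_extremes(log_path: str):
    with open(log_path, newline="") as f:
        reader = csv.reader(f)
        header = [h.strip() for h in next(reader)]
        col = next(i for i, h in enumerate(header) if "GPU Temperature" in h)
        temps = [float(row[col]) for row in reader
                 if len(row) > col and row[col].strip()]
    return min(temps), max(temps)

idle_c, load_c = temp_extremes("gtx560ti_occt_run.txt")  # placeholder filename
print(f"~Idle: {idle_c}°C, Peak under OCCT: {load_c}°C")
```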

In the case of dual-GPU configurations, we measure the temperature of the top graphics card, as in our tests, it’s usually the one to get the hottest. This could depend on GPU cooler design, however.

Note: Due to power-related changes NVIDIA made with the GTX 580 & GTX 570, we couldn’t run OCCT on those GPUs. Rather, we had to use a run of the less-strenuous Heaven benchmark.

Somewhat interestingly, NVIDIA’s GeForce GTX 560 Ti runs just a bit hotter than the GTX 570, but that could be thanks in part to its higher core clocks and smaller frame. It does offer one of the best idle temperatures we’ve seen, though, and at the top end, it didn’t exceed 76°C in our test. Overall, it looks to be a great option for the HTPC gamer.

Comparing the GTX 560 Ti to the Radeon HD 6950 1GB, NVIDIA’s card has a better idle, but AMD’s has a better load – despite being a bit faster.

Final Thoughts

When NVIDIA released its GeForce GTX 460 last summer, it became the “must have” card for most gamers. AMD’s offerings were far from slacking, but given the pricing and performance of NVIDIA’s card, it was an attractive offering, and one that NVIDIA has taken full advantage of. After all, since that original launch, we’ve seen two follow-ups (460 768MB and 460 SE).

Has NVIDIA’s GeForce GTX 560 Ti become the next “must have” mid-range card? That’s not so clear-cut, thanks to AMD’s clever move of releasing the Radeon HD 6950 1GB at exactly the same time. Both cards are available now, priced at $250 (GTX 560 Ti) and $260 (HD 6950 1GB)… so which to choose?

Through all of our testing, AMD’s Radeon HD 6950 1GB performed better than NVIDIA’s GeForce GTX 560 Ti… it’s that simple. Though we used just five games to test with (three of which are weighted towards AMD), both 3DMark 11 and Unigine’s Heaven 2.1 benchmark backed up the scaling that we saw.

In looking around the Web, I’ve seen some other sites reporting the opposite – that NVIDIA’s card is the faster offering – but after a double run of our suite (we re-tested both cards twice over to be sure), AMD’s HD 6950 1GB consistently came out ahead. According to 3DMark 11, the average gain was about 12%, so it’s nothing major, but for a 4% price premium, it does seem worth taking note of.
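Put another way, here’s a back-of-the-envelope look at performance per dollar using the figures above (the $250/$260 street prices and the ~12% average 3DMark 11 gain are from this review; treat the result as a rough indicator only).

```python
# Rough performance-per-dollar comparison using the figures quoted above.
gtx_560_ti  = {"price_usd": 250, "relative_perf": 1.00}
hd_6950_1gb = {"price_usd": 260, "relative_perf": 1.12}  # ~12% ahead in 3DMark 11

gain = (hd_6950_1gb["relative_perf"] / hd_6950_1gb["price_usd"]) / \
       (gtx_560_ti["relative_perf"] / gtx_560_ti["price_usd"]) - 1
print(f"HD 6950 1GB delivers ~{gain:.1%} more performance per dollar")  # ~7.7%
```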

NVIDIA GeForce GTX 560 Ti

Regardless of the performance, though, we have NVIDIA to thank for keeping the choices wide open for gamers; as it is, there are almost too many GPU models out there to keep track of, at all different price-points. It doesn’t really matter whether you have $150 or $400 to spend; there’s a great GPU waiting for you.

If I had to jump on a card today, I’d go with AMD’s Radeon HD 6950 1GB. I care a lot about performance, but I also care a lot about efficiency, and unfortunately for NVIDIA, AMD excels in both of those areas.

I do question the long-term availability of AMD’s cards, though, as at the time of writing, Newegg.com has a mere three models available, two being from XFX. On the NVIDIA side, there are eight models available, some being pre-overclocked. We do know for sure that NVIDIA’s GTX 560 Ti is a long-term card, but we’re not positive AMD’s card is. After all, it came out of nowhere, seemingly as a defensive move.

Still, as the landscape is today, you have a couple of choices. On AMD’s side, you get a slightly faster card for a minor premium, the rich Eyefinity multi-monitor support (not to mention no requirement to get a second card to do the job), and of course, better power efficiency and thermals compared to NVIDIA.

On NVIDIA’s side, the GTX 560 Ti is still a great card, and it perfectly fits the $250 price-point. Despite not beating out AMD, it offers great power efficiency and thermals, and unlike AMD, it can offer the user PhysX and CUDA support. Whether those are worth it is up to you, and depends entirely on what you’re looking to do with your PC and your gaming.

Given AMD’s conveniently-timed release, pricing on both AMD’s and NVIDIA’s cards might very well shift a bit as the weeks pass. NVIDIA is never quick to lower the pricing on any of its products, so it will likely see how things pan out and gauge whether or not it needs to drop the price of its card at all.

Any way you look at it, though, the choice right now is enormous, regardless of budget.

Discuss this article in our forums!

Have a comment you wish to make on this article? Recommendations? Criticism? Feel free to head over to our related thread and put your words to our virtual paper! There is no requirement to register in order to respond to these threads, but it sure doesn’t hurt!
