The New Single GPU King Of The Hill: A Look At NVIDIA’s GeForce GTX TITAN X

by Rob Williams on March 17, 2015 in Graphics & Displays

Continuing its TITAN legacy, NVIDIA’s GeForce GTX TITAN X gives the gaming world its latest ‘ultimate’ offering. With sights set on 4K gaming, TITAN X gives us a sleek black card that boasts 3,072 CUDA cores and an absurd 12GB framebuffer on a 384-bit bus. Words are not needed – let’s dive in.

It’s that time again, when we’re able to talk about a brand-new “world’s fastest” graphics card. This one shouldn’t come as much of a surprise, as we posted a super-quick look at it about two weeks ago.

The events that led up to this point are kind of interesting. Originally, NVIDIA wasn’t planning to unveil its GeForce GTX TITAN X until its GPU Technology Conference kicked off this week. A change of plans came when Epic Games had some aptly epic VR demos to show off at San Francisco’s Game Developers Conference earlier this month. NVIDIA’s CEO Jen-Hsun Huang then deemed that to be the better event at which to announce the card.

NVIDIA GeForce GTX TITAN X - Overview Shot

At the time of its announcement, I didn’t realize just how quickly the TITAN X would launch – I had even noted that it could be a month or two out. To my surprise, a sample arrived just two days later. Somehow, I received the card almost a week ahead of most other editors, but that advance didn’t mean I could get to benchmarking right away: NVIDIA didn’t give us a working driver until last Thursday, and with GTC trips coming up, I can’t imagine many editors were thrilled about it.

I admit that I would have appreciated more time, but as Mother Nature saw fit to delay my outgoing flight for GTC, I managed to get more testing done than I initially expected. That said, this is going to be a more abbreviated look than usual, and as such, I am focusing on what’s important: 4K results. In lieu of benchmarking at smaller resolutions, which TITAN X can eat like a snack, I tackled some other neat tests which I think are more valuable.

That’s enough preamble; let’s get into the heart of this thing.

TITAN X sticks to the tried-and-true aesthetic that we’ve seen on all of NVIDIA’s high-end cards since the GTX 700 series launched (aside from the paint job, that is). Unlike the GTX 980, TITAN X features a vapor chamber cooler – like previous TITANs – to improve cooling. Considering that this GPU packs in 8 billion transistors, that’s a very good thing.


As with the GTX 980, TITAN X is “Designed For Overclocking”, and so it features similar technical components, like polarized capacitors and a 6+2 phase design (GPU+VRAM). I haven’t been able to overclock TITAN X due to time, but NVIDIA says that it’s been able to reach 1.4GHz in its own testing (400MHz beyond the base clock).


One thing that might strike some people as odd is that TITAN X doesn’t include a backplate, like the GTX 980. It’s worth noting that neither the original TITAN nor the TITAN Black had one, either. NVIDIA says that the backplate has been removed to improve airflow, which can be especially important when using more than one card in an SLI configuration.

TITAN X features 6 GPCs, which give the card a total of 3,072 CUDA cores, as well as a 384-bit memory interface. It also has 192 texture units, 96 ROP units, and a 3MB L2 cache (all 50% higher than the GTX 980).

NVIDIA GeForce GTX TITAN X Block Diagram

The card’s GPU is clocked at 1,000MHz base and 1,075MHz boost, while its VRAM is once again spec’d at 7,000MHz effective. You can see other specs in the table below, straight from the horse’s mouth (note that the 3,505MHz quoted memory clock is the double data rate speed).

Graphics Processing Clusters: 6
Streaming Multiprocessors: 24
CUDA Cores (single precision): 3,072
Texture Units: 192
ROP Units: 96
Base Clock: 1,000MHz
Boost Clock: 1,075MHz
Memory Clock: 3,505MHz
Memory Data Rate: 7Gbps
L2 Cache Size: 3,072KB
Total Video Memory: 12,288MB GDDR5
Memory Interface: 384-bit
Total Memory Bandwidth: 336.5GB/s
Texture Rate (Bilinear): 192 GigaTexels/sec
Fabrication Process: 28nm
Transistor Count: 8 billion
Connectors: 3x DisplayPort, 1x HDMI, 1x DL-DVI
Form Factor: Dual Slot
Power Connectors: One 8-pin and one 6-pin
Recommended Power Supply: 600 Watts
Thermal Design Power (TDP): 250 Watts
Thermal Threshold: 91°C
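As a sanity check, two of the table’s derived figures can be reproduced from the base specs. This is my own arithmetic, not an NVIDIA formula; note the table’s 336.5GB/s figure comes from the exact 3,505MHz memory clock (7,010MT/s effective), so the 7Gbps rounding used here lands slightly under it.

```python
# Quick sketch: deriving memory bandwidth and texture rate from base specs.
bus_width_bits = 384
data_rate_gbps = 7.0            # effective GDDR5 data rate per pin (rounded)

# Memory bandwidth = bus width in bytes x per-pin data rate
bandwidth_gbs = (bus_width_bits / 8) * data_rate_gbps
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")   # ~336 GB/s

texture_units = 192
base_clock_ghz = 1.0
# Bilinear texture rate = texture units x core clock
texel_rate = texture_units * base_clock_ghz
print(f"Texture rate: {texel_rate:.0f} GigaTexels/sec")
```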

And here’s how TITAN X compares to the other launched GTX 900 series cards:

NVIDIA GeForce Series | Cores | Core MHz | Memory | Mem MHz | Mem Bus | TDP
GeForce GTX TITAN X | 3072 | 1000 | 12288MB | 7000 | 384-bit | 250W
GeForce GTX 980 | 2048 | 1126 | 4096MB | 7000 | 256-bit | 165W
GeForce GTX 970 | 1664 | 1050 | 4096MB | 7000 | 256-bit | 145W
GeForce GTX 960 | 1024 | 1126 | 2048MB | 7010 | 128-bit | 120W

Outside of the expected increase in cores, the most striking thing about TITAN X is its total VRAM – 12GB is massive. There’s just no other way to say it. As you’ll see later, actually utilizing all 12GB of that VRAM in gaming – at least at this point in time – is challenging. Very challenging.

It sure didn’t take GPU-Z too long to support TITAN X:


In case it isn’t clear by now, TITAN X is a monster, and NVIDIA makes no secret regarding the audience it’s targeted at. Here’s a quote pulled from the reviewer’s guide: “Titan X is an extremely high-end graphics card that probably won’t appeal to those who are price conscious.” It follows up by stating the obvious: “Titan X is for gamers who want the fastest, most powerful single GPU money can buy.”

It’s worth noting that while TITAN X is targeted squarely at 4K resolutions, NVIDIA’s designed it to support 5K. That said, 5K boasts 77% more pixels than 4K – a resolution that’s already brutal on our GPUs – so you’re best off ignoring resolutions like 5K entirely for the next few years.

NVIDIA’s reiterating the importance of some of its technologies with the launch of its TITAN X, including PhysX (which has just seen its source code opened up), Voxel Global Illumination (VXGI), MFAA anti-aliasing, and of course, excellent VR support. You can read up a bit more on all of these technologies in my GTX 980 launch article.

As I mentioned in the intro, NVIDIA decided to pull the TITAN X launch date ahead because of VR at GDC. Ultimately, the reason for this is that with the fastest single-GPU solution in the world, you can get the absolute best VR experience possible. Demos powered by TITAN X at the show included Thief in the Shadows, an Epic Games-led project, and Back to Dinosaur Island, one from Crytek. Unfortunately, I could not find great footage on YouTube of either of these, but as they’re VR, they’d probably be difficult to appreciate anyway.

With all of that covered, let’s get right into some benchmarks.

Quick Benchmarks

Remember when 1080p was a coveted resolution? Nowadays, 1440p seems to be the best resolution for any enthusiast to seek out, as monitors supporting it are more affordable than ever – and it’s nowhere near as taxing on GPU hardware as 4K. But speaking of 4K, that’s the resolution TITAN X has in its sights, and NVIDIA promises that you’ll be able to “Max Out” today’s hottest games when using it. As the slide below shows, though, that comes at the cost of delivering a silky-smooth 60 FPS.

NVIDIA GeForce GTX TITAN X - Expected Performance
NVIDIA’s TITAN X Performance Expectations

I’ve made it no secret in the past that I’m not a big fan of 4K, and sub-60 FPS is why. In order to both max out the details in your games and hit 60 FPS, you’re going to need more than one GPU. Honestly, though, TITAN X strikes me as a GPU that someone is going to buy more than one of anyway, because with that mammoth 12GB buffer, the GPU is going to become a limitation long before the VRAM does.

I prefer to stick to 1440p, as it’s a massive improvement over 1080p, but doesn’t strain a GPU nearly as hard as 4K does. Because of that, a card like TITAN X could be even more attractive; it’d allow you to max out game settings for some time to come, and still deliver great performance. If money is no object, and you are able to splurge on multiple TITAN X cards, then I sure wouldn’t blame you for going the 4K route.

Regardless of all that, because 4K is the target resolution of TITAN X (and for good reason), that’s what I stuck to during testing. After this article, I probably won’t return to benchmark the card until I revise our GPU test suite, as one is soon due. At that time I’ll test TITAN X in both 1440p and 4K (and possibly 1080p x 3), and have a better spread of current games.

Our test rig’s specs are as follows:

Graphics Card Test System
Processor: Intel Core i7-4960X – Six-Core @ 4.50GHz
Motherboard: ASUS P9X79-E WS
Memory: Kingston HyperX Beast 32GB (4x8GB) – DDR3-2133 11-12-11
Graphics: NVIDIA GeForce GTX 980 4GB – GeForce 347.84
          NVIDIA GeForce GTX 980 4GB (SLI) – GeForce 347.84
          NVIDIA GeForce GTX TITAN X 12GB – GeForce 347.84
Storage: Kingston HyperX 240GB SSD
Power Supply: Cooler Master Silent Pro Hybrid 1300W
Chassis: Cooler Master Storm Trooper Full-Tower
Cooling: Thermaltake WATER3.0 Extreme Liquid Cooler
Display: Acer XB280HK 4K G-SYNC
Et cetera: Windows 7 Professional 64-bit

Note: For the sake of time, I am not going to discuss each of the results individually here. Instead, I am just putting all of the results for titles in alphabetical order, and will recap them at the end. Also, please note that all of the gaming screenshots in this performance section were snapped using TITAN X. I left the Fraps framerate counter in the top-right corner so that you can get an idea of what performance is like for that given scene.



Assassin’s Creed IV: Black Flag

NVIDIA GeForce GTX TITAN X - Assassin's Creed Black Flag at 4K

NVIDIA GeForce GTX TITAN X - Assassin's Creed Black Flag

Crysis 3

NVIDIA GeForce GTX TITAN X - Crysis 3 at 4K

NVIDIA GeForce GTX TITAN X - Crysis 3




Metro Last Light

NVIDIA GeForce GTX TITAN X - Metro Last Light at 4K

NVIDIA GeForce GTX TITAN X - Metro Last Light

Middle-earth: Shadow of Mordor

NVIDIA GeForce GTX TITAN X - Middle-earth Shadow of Mordor at 4K

NVIDIA GeForce GTX TITAN X - Middle-earth Shadow of Mordor

Sleeping Dogs

NVIDIA GeForce GTX TITAN X - Sleeping Dogs at 4K

NVIDIA GeForce GTX TITAN X - Sleeping Dogs

Tom Clancy’s Splinter Cell: Blacklist

NVIDIA GeForce GTX TITAN X - Tom Clancy's Splinter Cell Blacklist at 4K

NVIDIA GeForce GTX TITAN X - Tom Clancy's Splinter Cell Blacklist

Overall, the performance gains on TITAN X range from 17% to 36%, with 30%+ being more common than anything under. In fact, the only game that really dragged TITAN X’s performance advantage down was Black Flag.

SLI’d GTX 980s are considerably faster than TITAN X, but that shouldn’t come as much of a surprise given that there’s a 126MHz clock advantage there as well as 100% more transistors, not just 50%.

When I posted my super-quick look at TITAN X last week, I predicted that the card would feature 3,072 cores and have a clock speed of 1,000MHz. I am not usually quite so good at guessing correctly, but in this case, I am glad I did, because I already expected this kind of performance. There are really no oddities here; TITAN X is much faster than a GTX 980, but of course can’t hold a candle to SLI’d 980s. Of course, it’s not that clear-cut of a comparison: the SLI’d 980s don’t have 12GB of VRAM.

Power Consumption

Making use of a Kill-a-Watt, GPU-Z, and 3DMark’s strenuous Fire Strike test at 4K, we’ve come up with some power draw results. Because I’ve found the load to be higher during the ‘Graphics Test 2’ portion of the run, that’s where I monitor for the peak.

NVIDIA GeForce GTX TITAN X - Power Consumption

Similar to the differences between SLI’d 980s and the TITAN X in performance, the power results are just where I’d expect them to be, as well. A 20% gain in power draw for an up to 36% gain in performance – not bad.
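The trade-off above can be put into perf-per-watt terms with some quick, illustrative arithmetic (the 20% power and 36% performance deltas are the figures from this article, not measurements of their own):

```python
# Rough perf-per-watt gain implied by the deltas quoted in the text.
power_delta = 1.20   # TITAN X drew ~20% more power than a GTX 980
perf_delta = 1.36    # ...while performing up to ~36% better
efficiency_gain = perf_delta / power_delta - 1
print(f"Perf-per-watt improvement over GTX 980: {efficiency_gain:.0%}")  # ~13%
```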

Pushing A 12GB Buffer Is Harder Than TITANium

When NVIDIA announced its TITAN X, the first thing that came to mind was, “How on earth is someone going to take advantage of 12GB?” Even 6GB is hard to breach, and games that can do that are rare. So what on earth will it take? As it turns out, a lot.

Because 3DMark can be customized to run its tests at different resolutions and settings, that’s just what I did in order to see how much VRAM I could push on TITAN X. Once again, time was short, so I stuck to the same Graphics Test 2 that I use for power testing.

But here’s where things get complicated. Simply running 3DMark’s Fire Strike Ultra at 4K wasn’t enough to push the VRAM hard, so I had to take advantage of the ability to render at even higher resolutions. Yes, higher than the already gargantuan 3840×2160. Without some explanation, these results could be misunderstood, so allow me to prevent any confusion.

The table below shows how 1080p and 4K compare to other resolutions, like 5K and even 8K (and yes, companies have demonstrated real displays using these resolutions). This matters because 5K isn’t a “bit” bigger than 4K as its name implies; it actually has 77% more pixels. 8K is even more overwhelming – it’s in effect 4K x 4. That gives us an effective 33 million pixels rendered, which is 16x as many as 1080p. This chart should help clear up just why it is that VRAM use starts to skyrocket at certain resolutions.

Resolution Comparisons
Resolution | Pixels | vs 1080p | vs 4K
1080p (1920×1080) | 2,073,600 | 100% | 25%
1440p (2560×1440) | 3,686,400 | 177% | 44%
4K (3840×2160) | 8,294,400 | 400% | 100%
5K (5120×2880) | 14,745,600 | 711% | 177%
8K (7680×4320) | 33,177,600 | 1600% | 400%
Example: 1440p has 177% the number of pixels of 1080p
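The percentages in the table come straight from raw pixel counts; here’s a quick reproduction of the math:

```python
# Pixel counts behind the resolution comparison table.
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
    "5K":    (5120, 2880),
    "8K":    (7680, 4320),
}
base = resolutions["1080p"][0] * resolutions["1080p"][1]  # 2,073,600
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:>10,} pixels ({pixels / base:.2f}x 1080p)")
```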

NVIDIA GeForce GTX TITAN X - 3DMark Memory Usage

In order to take advantage of the entire 12GB framebuffer, I had to run 3DMark at its highest settings at 8K resolution while applying the absurd anti-aliasing setting of 8x. Beyond what it set out to do, this chart really highlights just how effective FXAA is; it might not look as good as a high MSAA setting, but its GPU cost is much lower.
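A back-of-the-envelope estimate shows why 8x MSAA at 8K devours VRAM: each MSAA sample stores its own color and depth data. The per-sample sizes below are my own assumptions (32-bit RGBA color plus 32-bit depth/stencil, no compression), purely to illustrate the scale, not a claim about how 3DMark allocates memory.

```python
# Rough estimate of a single 8K render target with 8x MSAA.
width, height, msaa_samples = 7680, 4320, 8
bytes_per_sample = 4 + 4   # assumed: RGBA8 color + 32-bit depth/stencil
buffer_bytes = width * height * msaa_samples * bytes_per_sample
print(f"One 8K 8xMSAA render target: ~{buffer_bytes / 2**30:.1f} GB")  # ~2.0 GB
```

And that is just one render target; a modern engine juggles several, which is how total usage climbs toward the 12GB ceiling.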

Thought the normal Fire Strike run was strenuous? It barely broke through 1GB of VRAM, and even 1440p with higher details couldn’t breach 2GB. 4K sits pretty at 4GB, even with 8xMSAA.

What should be gleaned from this is that a 12GB framebuffer is as overkill as it gets. NVIDIA could have released TITAN X with 8GB and it would have been perfect – even from the future-proofing standpoint. If you reach a point where you need more than 6GB of VRAM at 4K, you’ll likely be out of GPU horsepower way before the lack of VRAM comes into play.

As a quick test, I ran Shadow of Mordor at 8K using maxed-out detail levels, and it peaked at 8.5GB. Highlighting my point above, the benchmark averaged 9 FPS at those settings – not even quad TITAN X would help that situation much.

Final Thoughts

After poring over both the performance results and feature-set of NVIDIA’s GeForce GTX TITAN X, it’s not too difficult to draw conclusions. During his annual opening keynote at NVIDIA’s GPU Technology Conference, CEO Jen-Hsun Huang told us that the card would cost $999 (no, we did not have advance notice!).

At that price-point, TITAN X has an 80% price premium over the GTX 980, performs ~36% better, and offers triple the VRAM. Unlike TITAN and TITAN Black, though, TITAN X does not have high double-precision performance – it’s an effective 1/32 of the single-precision performance, just like the GTX 980. That makes this a card with a major gaming focus, not one targeted at education and science. Jen-Hsun says that the TITAN Z remains a great choice for those needing double-precision.
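The 1/32 ratio can be made concrete with the common throughput formula of 2 FLOPs (one fused multiply-add) per CUDA core per clock; this is a standard rule-of-thumb calculation, not a figure from the article:

```python
# Illustrative single- vs double-precision throughput for TITAN X.
cuda_cores = 3072
base_clock_ghz = 1.0
sp_tflops = cuda_cores * 2 * base_clock_ghz / 1000  # 2 FLOPs (1 FMA)/core/clock
dp_tflops = sp_tflops / 32                          # DP rate is 1/32 of SP
print(f"Single precision: {sp_tflops:.3f} TFLOPS")  # 6.144 TFLOPS
print(f"Double precision: {dp_tflops:.3f} TFLOPS")  # 0.192 TFLOPS
```

At under 0.2 TFLOPS of double precision, it’s easy to see why NVIDIA points compute customers elsewhere.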

NVIDIA GeForce GTX TITAN X - Glamor Shot

It goes without saying that TITAN X becomes the fastest single-GPU solution on the planet. The only card faster would be AMD’s Radeon R9 295 X2, and that’s not nearly as elegant a solution – it’s a dual-GPU card, it’s longer, it draws more power, and it requires an external liquid cooler. On the flipside, it costs less than TITAN X (around $650). Countering that, it’s based on AMD’s last-generation architecture, while TITAN X is built on a new one that introduces lots of features (and far better efficiency).

As usual, with a card costing $999, you will have to weigh the pros and cons yourself. For those looking to build the fastest, beefiest gaming rig possible, TITAN X is the best choice. And where performance is concerned, the best choice rarely comes cheap.


Pros:
  • Fastest single-GPU solution on the planet.
  • 4K gaming with very good performance; 1440p gaming with extreme performance.
  • Up to 36% faster than the already really fast GTX 980.
  • Sports a cool black shroud.
  • Great power consumption.


Cons:
  • Its price. Top-tier performance doesn’t come cheap.


Rob Williams

Rob founded Techgage in 2005 to be an ‘Advocate of the consumer’, focusing on fair reviews and keeping people apprised of news in the tech world. The site caters to enthusiasts and businesses alike, covering everything from desktop gaming to professional workstations, and all the supporting software.
