Date: May 17, 2016
Author(s): Rob Williams
NVIDIA’s Pascal architecture brings a lot of goodness to the table, and its GeForce GTX 1080 encapsulates it all. This card isn’t just faster than the TITAN X, it can sometimes even beat out SLI’d GTX 980s. There’s a lot more than just performance boosts with this card, though, so let’s dive in and tackle all of what makes it so great.
It’s been 20 months since NVIDIA launched its first Maxwell-based graphics card, the GeForce GTX 980, and 25 months since the company revealed the first important details of Pascal. Now? Well, it’s no secret: the first Pascal-based GeForce card is here, and it comes to us in the form of the GTX 1080.
I’m not going to reveal too much off the top, but I will say that if you’re one of those who’s been holding out for Pascal to arrive before upgrading or building a new rig, your patience is going to be well rewarded. Simply put, the GTX 1080 is a monster, and we have the test results to prove it.
Leading up to the launch of the GTX 1080, I put my neck on the line multiple times to state that NVIDIA couldn’t possibly stick with the 1080 naming. After all, the company knew that people were going to draw similarities to 1080p resolution. Defiant as always, the company simply doesn’t care about such small things, and that’s fair enough. But how awful does GTX 1180 sound? I can’t see this naming scheme going on beyond this generation, but I’ve been proven wrong by this one, so you’re better off trusting a Magic 8 Ball.
I think that the GTX 1080 name could have been pulled off years ago when 1080p was actually a de facto resolution. Today, 1080p really is child’s play.
Simply put: if you buy a GTX 1080 and only have a 1080p monitor, it better be because you’re in the planning stages of an upgrade, or else NVIDIA’s latest and greatest is going to go largely to waste. If there’s an exception, it’d be those who prefer higher refresh rates over higher resolutions. How well the GTX 1080 performs in that particular use case, I’m not sure, but it’s something I’ll be investigating in the very near future.
When it becomes available on May 27, NVIDIA’s GeForce GTX 1080 is going to retail for an SRP of $599 USD, but given how hard the card will be to get hold of at first, don’t be surprised if that price is breached a little bit right off. The Founders Edition, which features NVIDIA’s own elaborate cooler, carries an SRP of $699 (for now, the only place to get one is Newegg, but that will change over the next couple of weeks). If you don’t care about the cooler, you’d be better off opting for one of the ~$599 models instead. The Founders Edition is squarely targeted at those who want a high level of craftsmanship in their PC; its cooler isn’t supposed to allow for greater overclocks than the older one – it’s just meant to look good.
The second Pascal card to launch will be the GTX 1070, which should hit retail on June 10th. Whereas the GTX 1080 is spec’d at 9 TFLOPs of single-precision performance, the GTX 1070 drops down to 6.5 TFLOPs. For a $379 offering (when available), that’s quite impressive – especially so when you consider the fact that the $1,000 TITAN X pushes 6.15 TFLOPs. Another way to look at it: that’s almost double the GTX 970’s 3.5 TFLOPs. The GTX 1070, like the 1080, will include a Founders Edition, which will retail for $449.
| NVIDIA GeForce Series | Cores | Core MHz | Memory | Mem MHz | Mem Bus | TDP |
|---|---|---|---|---|---|---|
| GeForce GTX 1080 | 2560 | 1607 | 8192MB | 10000 | 256-bit | 180W |
| GeForce GTX 1070 | ? | ? | ?MB | ? | ?-bit | ?W |
| GeForce GTX TITAN X | 3072 | 1000 | 12288MB | 7000 | 384-bit | 250W |
| GeForce GTX 980 Ti | 2816 | 1000 | 6144MB | 7000 | 384-bit | 250W |
| GeForce GTX 980 | 2048 | 1126 | 4096MB | 7000 | 256-bit | 165W |
| GeForce GTX 970 | 1664 | 1050 | 4096MB | 7000 | 256-bit | 145W |
| GeForce GTX 960 | 1024 | 1126 | 2048MB | 7010 | 128-bit | 120W |
| GeForce GTX 950 | 768 | 1024 | 2048MB | 6600 | 128-bit | 90W |
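The TFLOPs figures quoted throughout fall straight out of core counts and clocks. A quick sketch of the standard formula (cores × clock × 2, since a fused multiply-add counts as two floating-point operations); note that NVIDIA’s quoted figures assume the GTX 1080’s 1733MHz boost clock, not the 1607MHz base clock listed in the table:

```python
# Peak FP32 throughput = cores x clock x 2, since each CUDA core can
# retire one fused multiply-add (two floating-point ops) per cycle.
def fp32_tflops(cores: int, clock_mhz: float) -> float:
    return cores * clock_mhz * 2 / 1_000_000

print(round(fp32_tflops(2560, 1733), 2))  # GTX 1080 at boost: 8.87 (~"9 TFLOPs")
print(round(fp32_tflops(3072, 1000), 2))  # TITAN X: 6.14 (~the 6.15 quoted)
```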
During his keynote in Austin earlier this month, NVIDIA’s CEO Jen-Hsun Huang said that the GTX 1080 is faster than a TITAN X for much less money than a TITAN X. That statement caused the crowd to get a little excited, but what pushed them over the top was when Huang followed up by saying that even the GTX 1070 would beat the outgoing champ overall.
Pascal isn’t just faster, though: it’s more power efficient. Both the GTX TITAN X and 980 Ti were spec’d with a 250W TDP, but the 1080 shaves 70W off of that. Some of you might not care about power savings, but it’s notable nonetheless. It ultimately means that the GTX 1080 is rated for just 72% of the TITAN X’s power draw while delivering a performance boost of at least 25% at the same time. Gains like these are not unusual for new launches, but the fact that we’re still seeing them is what makes this so impressive.
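That 72% figure is simply the ratio of the two cards’ rated TDPs; pairing it with the performance delta gives a back-of-the-envelope perf-per-watt picture (a sketch only – measured from-the-wall numbers come later in the power results):

```python
# Ratio of rated TDPs, and the perf-per-watt gain implied by pairing it
# with the "at least 25% faster" claim.
gtx1080_tdp, titanx_tdp = 180, 250
power_ratio = gtx1080_tdp / titanx_tdp   # 0.72 -> "72%"
perf_per_watt_gain = 1.25 / power_ratio  # ~1.74x
print(f"{power_ratio:.0%} of the power, {perf_per_watt_gain:.2f}x perf/watt")
```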
As covered in a post last week, those who opt for the Founders Edition of the GTX 1080 will receive the card in the packaging seen above. The reference versions of the TITAN X and 980 Ti included the same kind of packaging, but NVIDIA opted to use more color this time around.
In some of the shots above, comparisons to the 980 Ti can be seen. Overall, the ‘reference’ coolers are quite similar, but the GTX 1080 looks more bad ass with its angular frame. I used quotes around ‘reference’ because reference for NVIDIA going forward is ‘Founders Edition’ – that naming is not a one-off.
There are a couple of other features to note, which might not be so obvious at quick glance. For starters, NVIDIA has equipped the GTX 1080 with a backplate which can help keep the card cool when it’s under stress. For those going the SLI route, a large portion of this backplate can be removed on the lower card for the sake of not blocking too much airflow. This would only matter if you were forced to sandwich both cards together; if you have ample airflow between them, the backplate can remain intact.
Another thing worth pointing out is the fact that the GTX 1080 includes just one 8-pin power connector, much like the TITAN X-derived Quadro M6000 does (even though the TITAN X required two connectors). Given the card’s 180W TDP, there should still be a ton of headroom for overclocking (you can read our GTX 1080 Overclock guide here), but because pro overclockers always want as much power available as possible, I’d wager that a fair few third-party variants of the card will reintroduce a second connector, either a 6- or 8-pin.
I am going to be tackling some of Pascal’s biggest features on the next page, but for those who want to jump right into the performance results, I’ll take care of our testing methodology and system setup here.
When we need to build a test PC for performance testing, “no bottleneck” is the name of the game. While we admit that few of our readers are going to be equipped with an Intel 8-core processor clocked to 4GHz, we opt for it to make sure our GPU testing is as apples-to-apples as possible, with as little variation as possible. Ultimately, the only thing that matters here is the performance from the GPUs, so the more we can rule out a bottleneck, the better.
That all said, our test PC:
| Graphics Card Test System | |
|---|---|
| Processor | Intel Core i7-5960X (8-core) @ 4.0GHz |
| Motherboard | ASUS X99 DELUXE |
| Memory | Kingston HyperX Beast 32GB (4x8GB) – DDR4-2133 11-12-11 |
| Graphics | AMD Radeon R9 Nano 4GB – Catalyst 16.5.2 |
| | NVIDIA GeForce GTX 980 4GB x 2 – GeForce 365.10 |
| | NVIDIA GeForce GTX TITAN X 12GB – GeForce 365.10 |
| | NVIDIA GeForce GTX 1080 8GB – GeForce 368.14 |
| Storage | Kingston SSDNow V310 1TB SSD |
| Power Supply | Cooler Master Silent Pro Hybrid 1300W |
| Chassis | Cooler Master Storm Trooper Full-Tower |
| Cooling | Thermaltake WATER3.0 Extreme Liquid Cooler |
| Displays | Acer Predator X34 34″ Ultra-wide |
| | Acer XB280HK 28″ 4K G-SYNC |
| Et cetera | Windows 10 Pro (10586) 64-bit |
Framerate information for all tests – with the exception of certain time demos and DirectX 12 tests – is recorded with the help of Fraps. For tests where Fraps use is not ideal, I use the game’s built-in test (the only option for DX12 titles right now). In the past, I’ve tweaked the Windows OS as much as possible to rule out test variations, but over time, such optimizations have proven pointless. As a result, the Windows 10 installation I use is about as stock as possible, with minor modifications to suit personal preferences.
In all, I use 9 different games for regular game testing, and 3 for DirectX 12 testing, in addition to three synthetic benchmarks. Because some games are sponsored, the list below helps lay bare any potential bias in our testing.
(AMD) – Ashes of the Singularity (DirectX 12)
(AMD) – Battlefield 4
(AMD) – Crysis 3
(AMD) – Hitman (DirectX 12)
(NVIDIA) – Metro: Last Light Redux
(NVIDIA) – Rise Of The Tomb Raider (incl. DirectX 12)
(NVIDIA) – The Witcher 3: Wild Hunt
(NVIDIA) – Tom Clancy’s Rainbow Six Siege
(Neutral) – DOOM
(Neutral) – Grand Theft Auto V
(Neutral) – Total War: ATTILA
If you’re interested in benchmarking your own configuration to compare to our results, you can download this file (10MB) and make sure you’re using the exact same graphics settings. I’ll lightly explain how I benchmark each test before I get into each game’s performance results.
Because the GTX 1080 is so powerful, I’ve opted to ignore 1080p and 1440p testing for this article. Instead, given its target audience, I’ve stuck to two high-end resolutions: 4K and 3440×1440. The latter is definitely my favorite of the two, as it runs faster and exposes more of the game, but there’s no denying that 4K is a popular resolution (or is becoming one).
This review isn’t going to wrap-up Techgage‘s coverage of the GTX 1080; far from it. If you made a request for other resolutions to be tested, don’t fret, as more evaluations are en route. I might hold off until the GTX 1070 launches to work on certain articles, as its launch is not far off, but in any event, you can expect an in-depth look at GTX 1080 overclocking before then. Stay tuned. (Our GTX 1080 Overclock guide is now live)
From here, you can either learn more about Pascal by heading to the next page, or if you’ve not been living under a rock, you can jump to page three and fill up on some performance results.
When Jen-Hsun took the stage in Austin earlier this month, I am not sure anyone expected him to reveal so much about Pascal and its first GPU models. We were not just given performance ballparks, but were told about all of the new technologies that have come along for the ride.
Much of the information here was covered by Jamie at that time, so to avoid rewording all that he said, I am just going to tackle the basics here. Should a follow-up article complement one of these features, I’ll go more in-depth into them there.
Before going further, I want to tackle a question I’ve seen asked numerous times since the GTX 1080’s unveiling: “Does it support Async Compute?” The answer is simply, “Yes”. Pascal’s architecture was designed with Async Compute in mind, and can benefit applications that make good use of physics, post-processing effects, and of course, virtual reality.
Async Compute is just one of the many things that makes Pascal worth learning more about, though. There’s also the brand-new process that the chips are built on: 16nm FinFET.
NVIDIA has made great strides with its Pascal architecture. It’s fast, it’s efficient, and it’s feature-rich. But that being said, if not for the fact that NVIDIA was able to take advantage of a long-overdue die shrink, the GTX 1080 wouldn’t be as impressive as it is. While the smaller die can improve performance, it can also dramatically reduce power usage, and subsequently create less heat.
The GP104 die is built using a 16nm FinFET process which isn’t exclusive to NVIDIA. AMD announced months ago that its upcoming Polaris architecture would be using the same one, which gives us great hope that the red team might be able to surprise us with its own launch.
Improving efficiency even further, the first Pascal graphics cards come equipped with 8GB of GDDR5X memory, or G5X as NVIDIA likes to call it. This delivers a data rate of 10Gbps (10GHz), which in itself increases memory performance over the GTX 980 by 43%. If applications take advantage of the company’s ever-evolving memory compression technologies, that gain can escalate up to 70%.
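That 43% figure follows directly from the memory specs: per-pin data rate times bus width gives raw (pre-compression) bandwidth. A quick sketch of the math:

```python
# Raw memory bandwidth = per-pin data rate x bus width, converted to GB/s.
def bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

gtx1080 = bandwidth_gbs(10, 256)  # GDDR5X at 10Gbps: 320 GB/s
gtx980 = bandwidth_gbs(7, 256)    # GDDR5 at 7Gbps: 224 GB/s
print(f"{gtx1080 / gtx980 - 1:.0%} more raw bandwidth")  # 43%
```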
In all, the GTX 1080 GPU consists of 7.2 billion transistors, which is actually 800 million fewer than the TITAN X and 980 Ti. As you’ll see in the performance results, that “loss” of transistors sure doesn’t hurt the card’s performance versus the older generation.
During his keynote, Jen-Hsun noted “The Marvels of Pascal”, which consists of five separate things that the company believes makes it top-notch. For starters, there’s the architecture – a no-brainer. Beyond that, there’s the 16nm FinFET process, the use of G5X, and then craftsmanship and Simultaneous Multi-Projection. I’ll tackle the latter in a couple of minutes, but to start, let’s dive into a new feature that came out of nowhere: Ansel.
As someone who takes a lot of screenshots in their games, NVIDIA’s Ansel spoke to me. It is, in effect, a robust screenshot tool that a game’s developer must implement and build around its rules (e.g. so it can’t be used for cheating). We were told that in most cases, a developer shouldn’t need to add more than 150 lines of code – which is nothing in the grand scheme.
It’s the game implementation that makes Ansel so useful. Imagine being at a part of your game where a screenshot simply can’t do the landscape justice. With Ansel, you can detach your camera from the game, adjust it to your liking, and then capture an image in a resolution that redefines “high res”, or create a 360° view of the environment that can be enjoyed with a VR headset.
Once I can spend time testing Ansel out, I’ll follow-up with a more detailed post. However, it doesn’t require much more information to appreciate what the feature can do. If you want to create a seriously large image, you could capture a given scene to create an image file that could weigh into the hundreds of megabytes – or perhaps even surpass 1GB if you happen to dial the settings up high enough.
After a shot is taken, you’ll be able to enjoy a massively detailed shot, or explore the environment in VR. You can even export the screenshots to EXR format, so that you can adjust its settings as if it were a photograph shot in RAW. There will be a ton of different filters to pick and choose from here, so if you somehow can’t get a “perfect” shot, you’re clearly doing something wrong.
In an example seen above, an image of The Witcher 3 was captured at a super-high resolution. When viewed to fit inside the screen, we can see Geralt standing on a castle balcony. When he is highlighted and zoomed into, though, the detail is so high, that the writing in a book on another level can be read.
There is one thing to be aware of, though: Ansel can truly drag down your GPU’s performance. When a capture is made, you may have to wait upwards of a few minutes for it to complete, as dozens of segments need to be rendered independently and then stitched together afterwards. If that’s a downside, it’s a small one for those who want this kind of functionality (*raises hand*).
Ansel is coming soon to a number of titles: Tom Clancy’s The Division, The Witness, Lawbreakers, The Witcher 3: Wild Hunt, Paragon, No Man’s Sky, and Unreal Tournament.
SMP doesn’t just mean “symmetric multi-processing” anymore; it now also means Simultaneous Multi-Projection. This is in reference to a new multi-monitor and VR feature that helps better align the game world to your displays.
It’s the Perspective Surround element of SMP that impresses me so much, perhaps because I’ve been wanting it for a while. While multi-monitor gaming offers a number of benefits, it also brings with it a number of caveats. When stretched across three monitors, games usually don’t take into account the angles that the screens are on, which is a downer given most people do in fact angle them. With Perspective Surround, the GeForce driver injects a bit of logic to scale the game to more realistic perspectives.
Unfortunately, we were not provided with an example outside of some diagrams to explain the technology, but believe me, if you’re a multi-monitor user, you will definitely want to be taking advantage of Perspective Surround.
Other SMP features include Lens Matched Shading for VR, which both improves pixel shading performance and renders the scene to be more accurate to the player; and Multi-Res Shading, which allows a game to render higher-resolution details in the center of the screen, where players look most often.
If you play any games that run at sky-high framerates simply because they’re not that graphically intensive (most MOBAs, for example), NVIDIA has a new technology for you called “Fast Sync”. Fast Sync’s main goal is to continue delivering smooth gameplay even when your FPS is through the roof.
At quick glance, it might seem like this kind of technology negates the need for G-SYNC, but that’s not the case at all. While G-SYNC is designed to benefit gameplay that dips below your desired refresh rate (sub-60 FPS, for example), Fast Sync tackles the opposite problem of games creating tearing simply because they are running too fast.
In this situation, most people would just enable Vsync to lock the framerate to 60, but that’s not ideal, either: it can cause the buffers to back up, spitting scenes out in a less-than-ideal order. Fast Sync works by delivering only the final completed frame render from each refresh interval. That means that of the 60 frames you see in a given second, each is the newest frame finished during its own slice of that second, with everything rendered in between quietly discarded.
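A toy model helps illustrate the idea (this is a simplification for intuition, not NVIDIA’s actual implementation): the game renders into spare buffers as fast as it likes, and at every display refresh only the newest completed frame is scanned out.

```python
# Toy Fast Sync model: render unthrottled; at each refresh, show only the
# most recently completed frame and silently drop everything in between.
def fast_sync(render_fps: int, refresh_hz: int) -> list:
    finish_times = [(i + 1) / render_fps for i in range(render_fps)]
    shown = []
    for r in range(refresh_hz):
        refresh_time = (r + 1) / refresh_hz
        # index of the latest frame completed before this refresh
        done = [i for i, t in enumerate(finish_times) if t <= refresh_time]
        shown.append(done[-1])
    return shown

frames = fast_sync(240, 60)  # a 240 FPS game on a 60Hz panel
print(len(frames), len(set(frames)))  # 60 refreshes, each a distinct frame
```

Three out of every four rendered frames never reach the screen, but every refresh gets a whole, current frame, which is why there’s no tearing and little added latency.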
NVIDIA admits that Fast Sync isn’t a perfect solution, but it’s preferable to running VSync since the ‘backpressure’ problem will no longer exist. The one thing that could be better? 200Hz+ monitors.
Don’t hold your breath.
To coincide with the launch of Pascal, NVIDIA is releasing a trio of new SLI bridges, called “SLI HB”. The HB stands for “high bandwidth”, and is required for Pascal GPUs to interact with each other in SLI more efficiently. Old bridges can be used for Pascal, but NVIDIA notes that the interconnection bandwidth will be capped at whatever the bridge is spec’d at.
Interestingly, while NVIDIA is offering the new bridge in 2-, 3-, and 4-way lengths (to suit different slot spacings), all of them support just 2 graphics cards. In case this is news to you, that means NVIDIA is only officially supporting 2-way SLI with Pascal. That doesn’t mean 3- and 4-way configurations won’t work; it’s just that they’re not recommended. At least for now, 3- and 4-way SLI is definitely a “your mileage may vary” technology.
Does this mean the end of officially supported 3- and 4-way configurations? Not likely. What I think NVIDIA’s move here signifies is that the world has way too many console ports, and as such, the need for more than 2 GPUs is minimal. Should game developers start pushing more of their games towards the PC’s much more advanced hardware, we could see NVIDIA promote 3- and 4-way configurations again.
For those who don’t care about NVIDIA’s recommendations, take note: to unlock 3- and 4-way support, you need to go to NVIDIA’s website to obtain an ‘Enthusiast Key’. Downloading and running an application will generate a unique signature for your GPU, at which point 3 and 4 GPU configurations will be enabled. Why NVIDIA has decided to make this so complicated, I’m not sure.
Since NVIDIA’s newest bridges support only two graphics cards, to use 3- or 4-way SLI you will need an older bridge for the extra cards and a new one for the main two. If you have one of the older, floppier ribbon-style bridges, you might be able to hide it underneath the new bridge. It’s an odd design, but at least the highest-end enthusiasts out there won’t be out of luck.
With that all taken care of, we can move onto the performance results. Please note that not everything notable about Pascal was talked about here, but the important things were. Some of the features will be talked about in some more depth in the near-future, as certain articles would better complement them.
Thanks to the fact that DICE cares more about PC gaming than most developers, the Battlefield series continues to give us titles that are well-worth benchmarking. While Battlefield 4 is growing a little long in the tooth, it’s still a great test at high resolutions. Once Battlefield 1 drops, we’re sure to replace BF4.
Testing: The game’s Singapore level is chosen for testing, as it provides a lot of action that can greatly affect the framerate. The saved game we use starts us off on an airboat that we must steer towards shore, at which point a huge firefight commences. After the accompanying tank gets past a hump in the middle of the beach, the test is stopped.
While it’s not a focus of this review, it’s worth pointing out something that might not have been too obvious: 3440×1440 is much easier on the GPU than 4K is. This is one of the reasons I prefer it; the other is that more of the game world is exposed, thanks to its 21:9 aspect ratio (4K is 16:9, just like 1080p). Want gaming examples of 16:9 versus 21:9? Look no further.
When the GTX 1080 was first unveiled, Jen-Hsun said that it was faster than the TITAN X and dual 980s in SLI. The first is true in every single case, but the second is only true sometimes. The GTX 1080 comes out ahead most often when a game takes little or no advantage of SLI. If that’s disappointing, consider the fact that a single GTX 1080 just about matches 980 x 2 but uses 185W less.
At 4K, the GTX 1080 is 32% faster than the TITAN X, while at 3440×1440, it’s 36% faster.
Like Battlefield 4, Crysis 3 is getting a little up there in years. Fortunately, though, that doesn’t matter, because the game is still more intensive than most current titles. Even though the game came out in 2013, if you’re able to equip Very High settings at your resolution of choice, you’re in a great spot.
Testing: The game’s Red Star Rising level is chosen for benchmarking here, with the lowest difficulty level chosen (dying during a benchmarking run is a little infuriating!). The level starts us out in a broken-down building and leads us down to a river, where we need to activate an alien device. Once this is done, we run back underneath a nearby roof, at which point the benchmark ends.
The fact that NVIDIA’s latest and greatest hits 60 FPS at 4K in Crysis 3 really proves how powerful the card is. It also proves just how hardcore the game can be, too, as all testing is done with High detail, not Very High. 4K Very High @ 60 FPS? You’ll want more than one GPU. Not bad for a three-year-old game, huh?
If you want to go the ultra-wide route, running the game at Very High is a no-brainer.
DOOM 3 was released a couple of months before Techgage launched (March 1, 2005, for the record), and it was a game featured in our GPU testing right from the get-go. For this reason, this latest DOOM feels a bit special, even though it follows DOOM 3 up eleven years later. As we hoped, the game proves to be more than suitable for GPU benchmarking.
Testing: Due to time constraints, an ideal level could not be chosen for benchmarking. Instead, our test location starts us off at the bottom of a short set of stairs early on in the game, where we must climb them, open up a door, and then go to a big room where demons are taken care of and the benchmark is stopped.
Yet again, the GTX 1080 struts its stuff at the ultra-wide resolution, well surpassing a 60 FPS target. At 4K, it’s not much of a slouch, either, hitting 52 FPS on average. Based on the performance seen, it’s clear that DOOM supports SLI just fine, but the dual 980s didn’t manage to surpass the performance of the GTX 1080.
It’s worth noting that AMD released a new driver a day before publication time that boosts DOOM‘s performance, so take the R9 Nano result with a grain of salt.
Does a game like this even need an introduction? Any Grand Theft Auto game on the PC is a ‘console port’, proven by the fact that it always comes to the PC long after the consoles. But Rockstar has at least done PC gamers a favor here by offering an almost overwhelming number of graphical options to fine-tune, helping make the game suitable for benchmarking, especially at high resolutions.
Testing: The mission Repossession is chosen for testing here, with the benchmark starting as soon as our character makes his way to an unsuspecting car. The benchmark ends after a not-so-leisurely drive to a parking garage, right before a cutscene kicks in.
When NVIDIA released its GeForce GTX TITAN X, it labeled it as a “4K” gaming card, despite the fact that at good settings, gamers would almost never see 60 FPS. At least with GTA V, this is a modern game where 60 FPS at 4K can be had by a single GTX 1080. The best part? The minimum FPS doesn’t even dip below 60 FPS. Want to go the ultra-wide route? The performance is even more impressive.
Like a couple of other games in our stable, Metro Last Light might seem like an odd choice given its age. After all, the original version of the game came out in 2013, and its Redux version came out in late 2014. None of that matters, though, as the game is about as hardcore as it gets when it comes to GPU punishment.
Testing: The game’s built-in timedemo is used for testing here, which lasts 2m 40s. While the game can spit out its own results file, it’s horribly inaccurate, so Fraps is still used here.
Last Light‘s timedemo is only good as a timedemo; it’s not representative of real gameplay whatsoever. Bearing that in mind, this is one game where SLI’d 980s come out ahead of the 1080, but not by too much. Conversely, the GTX 1080 sits comfortably ahead of the TITAN X.
Lara Croft has sure come a long way. The latest Tomb Raider iteration becomes one of the first titles on the market to support DirectX 12, but even without it, the game looks phenomenal at high detail settings (as the below screenshot can attest).
Testing: Geothermal Valley is the location chosen for testing with this title, as it features a lot of shadows and a ton of foliage. From the start of our saved game, we merely walk down a fixed path for just over a minute and stop the benchmark once we reach a broken-down bridge (the shot below is from the benchmarked area).
For some reason, the SLI’d 980s managed to overtake the GTX 1080 at 4K, but at the ultra-wide resolution of 3440×1440, the 1080 reigns supreme with an impressive 10 FPS lead over the SLI configuration. Note that this isn’t with the game at max detail, so this is one game where SLI’d 1080s would be needed to hope for 4K/60 at that detail.
Since the original The Witcher title came out in 2007, the series has become one of the best RPGs going. Each one of the titles in the series offers deep gameplay, amazing locales, and comprehensive lore. Wild Hunt, the series’ third game, also happens to be one of the best-looking games out there and requires a beefy PC to take great advantage of.
Testing: Our saved game starts us just outside Hierarch Square, where we begin a manual runthrough (literally – the run button is held down as much as possible) through and around the town, to wind up back at a bridge near a watermill (pictured below). The entire runthrough takes about 90 seconds. Please note that while ‘Ultra’ detail is used, NVIDIA’s HairWorks is not.
At 3440×1440, The Witcher 3 runs like a dream on the GTX 1080, and it offers ample performance at 4K (not dipping under 45 FPS). Even though the game takes advantage of SLI, the GTX 1080 proves too much for the 980 duo.
If you think it’s hard to keep track of Tom Clancy games, you sure are not alone. Siege came out just this past winter, and while it focuses heavily on co-op play, solo players are welcomed, too. The game puts a huge emphasis on destructible environments, which can either help or harm a given scenario.
Testing: This game has a suitable built-in benchmark, so I’ve opted to stick with that. After the test is run, the overall results are fetched.
By now, if you’re not a little bit tempted by 3440×1440, then I might have failed at my job. In Siege, the GTX 1080 delivers a staggering 113 FPS, versus 68 FPS at 4K, all while displaying more of the game world. It’s a definite win/win resolution for the high-end gamer.
That aside, this is another game that puts the GTX 1080 ahead of the SLI’d 980s. Even the TITAN X manages to outperform that configuration here.
For strategy fans, the Total War series needs no introduction. ATTILA is the latest in the series, which will remain true for only the next week, as Warhammer is due to launch. Thankfully, any recent Total War game is suitable for benchmarking, and our results are going to prove that.
Testing: ATTILA includes a built-in benchmark, so again, I’ve decided to use that. However, as I do with Metro, I stick to Fraps for framerate capturing as the game’s results page isn’t too convenient.
How does 33 FPS at 4K sound? Awful? That’s correct! It’s almost impressive to see a game this hardcore using a built-in preset (Extreme). Even with dual 1080s utilized to the max, the game would hit just 50 FPS minimum! Fortunately, it won’t take much to get far more playable framerates; the Extreme preset just makes for an easy, but brutal test.
I don’t like to overdo “time demos”, but I do love running some hands-off benchmarks that you at home can run as well (provided you have a license) so that you can accurately compare your performance to ours. It goes without saying that any synthetic testing would have to include Futuremark, and in particular for high-end cards, 3DMark’s Fire Strike test.
3DMark includes a number of different game tests, but today’s graphics cards are so powerful, the Fire Strike test is really the only one that makes sense. At 1080p, even modest GPUs can deliver decent performance. A great thing about Fire Strike is that the official tests encompass three different resolutions, including 4K, making it perfect for our testing.
According to 3DMark, the 980s in SLI are faster than the 1080 regardless of the resolution. Compared to the TITAN X, though, the 1080 cleans up: it’s 25% faster at 4K, and 27% faster at 1440p.
It’s hard to tell at this point if Heaven is ever going to see a new update, as it’s been quite a while since the last one, but what we have today is still a fantastic benchmark to run. That’s thanks to the fact that it’s free, and also because it can still prove so demanding on today’s highest-end GPUs. It’s also a great test of tessellation performance, as it lets you increase or decrease its intensity. For testing, I stick with ‘Normal’ tessellation.
Heaven agrees with 3DMark on the fact that the 980 x 2 is the ultimate configuration of these four. At 4K, the GTX 1080 is 23% faster than the TITAN X.
Meow hear this: there’s a new benchmark in town that promises to be purrfect for testing 4K resolutions. So, that’s just what I’ve used it for. The test consists of a cat innocently roaming a street until chaos ensues. Before long, this feline is mowing down buildings with its laser eyes, destroying GPU performance at the same time.
As it happens, Catzilla agrees with both 3DMark and Heaven: when properly utilized, SLI’d 980s reign supreme, but on the single GPU front, the GTX 1080 dominates the TITAN X, surpassing its score by 32% here.
Considering the fact that we’ve been hearing about DirectX 12 for what feels like forever, it’s a little surprising that the number of DX12 titles out there remains small. Heck, one such game was Fable Legends, and that was shut down last month. We’re definitely in the middle of a waiting game for more DX12 titles to get here, but thankfully, those that do exist now prove great for testing.
Of all the DirectX 12 games out there, Ashes of the Singularity takes the best advantage of its low-level API capabilities. As a strategy game, there could be an enormous number of AI bots on the screen at once, and in those cases, both the CPU and GPU can be used for computation.
I should be clear about one thing: low-level graphics APIs are designed primarily to benefit lower-end hardware, but when we’re dealing with GPUs that cost over $500, that renders such a test pointless. For that reason, I’ve chosen to benchmark these three games as normal; the results might not be specific to low-level DX12 enhancements, but they’re still fair for comparisons against other high-end graphics cards.
Ashes is a game that AMD hyped up quite a bit as its Radeon graphics cards delivered better DirectX 12 performance than NVIDIA’s GeForce cards could. Proof of that can be seen in the battle between the Nano and TITAN X, where AMD’s lower-spec’d card outperformed NVIDIA’s previous top-dog.
Well, while AMD likes to joke about NVIDIA’s DirectX 12 performance, it might have to stifle its laughter for a bit, as the GTX 1080 proves 25~33% faster than the R9 Nano in this test, and truly blows the TITAN X out of the water.
How about Rise Of The Tomb Raider?
As with Ashes, the GTX 1080’s results here dwarf those of the TITAN X. However, unlike with Ashes, where AMD’s Radeon R9 Nano performed quite well, RotTR has the opposite effect on that card. The GTX 1080, meanwhile, performs exceptionally well here.
Hitman isn’t the most graphically impressive game, but since it’s one of the few out there to utilize DirectX 12 and has a built-in benchmark, testing with it was an easy choice.
Once again, the GTX 1080 rules the roost here, and just in the nick of time, the R9 Nano manages to redeem itself, coming in well ahead of both the TITAN X and the SLI’d 980s.
To test graphics cards for both their power consumption and temperature at load, I utilize a couple of different tools. On the hardware side, I rely on a Kill-a-Watt power monitor, which the PC plugs into directly. For software, I use GPU-Z to monitor the core temperature, and 3DMark’s Fire Strike 4K test to push the GPU hard.
To test, the floor area behind the (shut down) PC is checked with a temperature gun, with the average temperature recorded as the room temperature. Once that’s established, the PC is turned on and left to sit idle for ten minutes. It’s at this point that the idle wattage is noted, and 3DMark is run. It’s during ‘Graphics Test 2’ that the max load wattage is recorded.
Throughout most of our testing, the GTX 1080 has been outperformed by the GTX 980 in SLI. Given that the 980 in itself is a high-end GPU, that’s not too surprising. However, what’s really impressive about the GTX 1080 can be seen in the power graph above. Even though the card can come really close to the performance of SLI’d 980s overall, it draws 185W less at full load.
As for temperatures, we were given a tease during the unveiling earlier this month when a demo showed 67°C in the corner of the screen. As it happens, that’s not going to be typical unless, maybe, you have an amazing cooling setup. Instead, the card is likely to peak at 80°C, just as last generation’s cards did.
While sub-70°C would have been really nice to see, 80°C is still well within safe limits, and ultimately, it runs hot because it’s delivering the best performance, so there’s nothing to stress over. Even so, I am anxious to see what third-party coolers could do for the GTX 1080’s temperatures.
Considering the fact that NVIDIA left little to the imagination when it unveiled the GeForce GTX 1080 in Austin earlier this month, and the fact that the performance seen in our testing largely matches up with what we expected, this is one card that’s seriously easy to write a conclusion for.
There are a couple of different ways to look at the GTX 1080. NVIDIA’s Jen-Hsun gave us a great example during his keynote a few weeks ago: It’s faster than a TITAN X, uses less power, and is cheaper. Through our testing, we’ve been able to establish that those promises were true. Is the card faster than 980s in SLI, though? The answer to that is “sometimes”. If a game takes proper, full advantage of SLI, it’ll likely be a bit faster than a GTX 1080, but not by much. In some cases, the GTX 1080 even wins if SLI is taken advantage of.
As much as NVIDIA likes to talk about the TITAN X in its Pascal comparisons, I think a fairer comparison is the 980 Ti, as that’s the card most people own (out of the two).
The GTX 1080 is at least 25% faster than the TITAN X, so that means it’d be at least 35% faster than a 980 Ti. That card cost $649 a few weeks ago, so with the 1080, NVIDIA delivers a card that’s much faster, still cheaper (SRP $599), and uses far less power. The same applies to the GTX 980; two of those right now would cost more than the GTX 1080, and while it might be faster (in some cases), it’s a much bulkier setup that delivers a much-decreased performance-per-watt rating versus the 1080.
Any way you look at it, the GTX 1080 delivers just what we hoped Maxwell’s successor would. It is unfortunate that we didn’t get all of the candy that comes with full-blown Pascal, like NVLink and HBM2 memory, but thanks to its transition to 16nm FinFET, NVIDIA has proven that the actual need for HBM2 right now is not that great. AMD could still punch a hole in that theory, though, if its first Polaris cards do in fact ship with HBM2, and with a framebuffer larger than 4GB.
Outside of boosted performance, NVIDIA introduced a number of cool technologies with Pascal, including Async Compute, multi-GPU support in Windows 10, a collection of comprehensive VR improvements, and multi-monitor gaming improvements, and it has updated existing features, like GPU Boost. It’s even improved the overclocking side of things, giving those who want to push their card more dials to play around with.
After taking a look at our performance results, and poring over all that Pascal brings to the table, the GTX 1080 earns, without hesitation, one of our Editor’s Choice awards.
There is one thing I will admit, though. Based on what’s been revealed, I feel like the GTX 1070 is going to be the even more exciting release of the two, as it’s pretty affordable (at $379 SRP), and looks to surpass the performance of the TITAN X (it’s spec’d at 400 GFLOPs higher). The 1070 still only makes use of GDDR5, rather than the new G5X, but since the TITAN X also used GDDR5, this isn’t too much of a concern.
Now consider this: A $600 GTX 1080 is going to deliver 9 TFLOPs of performance, whereas dual GTX 1070s would gain 44% theoretical performance for just $160 more. Of course, if money is of little concern, you’re also free to go the 1080 SLI route to gain 38% over SLI’d 1070s. It’s nice to have choices!
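If you want to sanity-check those percentages yourself, here’s a quick back-of-the-envelope sketch. The 9 TFLOPs figure for the GTX 1080 comes from NVIDIA; the ~6.5 TFLOPs figure for the GTX 1070 is an assumption inferred from the 44% claim above, so treat it as illustrative rather than an official spec:

```python
# Theoretical FP32 throughput comparison (back-of-the-envelope).
# GTX 1080 figure is NVIDIA's 9 TFLOPs claim; the GTX 1070 figure
# (~6.5 TFLOPs) is assumed, inferred from the article's 44% math.
GTX_1080_TFLOPS = 9.0
GTX_1070_TFLOPS = 6.5  # assumption

dual_1070 = 2 * GTX_1070_TFLOPS  # 13.0 TFLOPs, for 2 x $379 = $758
dual_1080 = 2 * GTX_1080_TFLOPS  # 18.0 TFLOPs

gain_1070_sli = dual_1070 / GTX_1080_TFLOPS - 1  # vs a single 1080
gain_1080_sli = dual_1080 / dual_1070 - 1        # vs SLI'd 1070s

print(f"Dual 1070s vs one 1080:   +{gain_1070_sli:.0%}")  # +44%
print(f"Dual 1080s vs dual 1070s: +{gain_1080_sli:.0%}")  # +38%
```

Bear in mind these are theoretical peaks; SLI never scales perfectly, so real-world gains will land below these numbers.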
As mentioned earlier, this launch article will not conclude our GTX 1080 coverage. We have more stuff planned, although some of that might wait until the GTX 1070 drops so as to kill two birds with one stone. For further reading, you can now check out our GTX 1080 overclocking guide and best playable settings.
You can (pre)order the new NVIDIA GeForce GTX 1080 from either Amazon or Newegg when available (check back often). We’ll be sure to check out custom cooler and overclocked models soon.
Copyright © 2005-2019 Techgage Networks Inc. - All Rights Reserved.