Date: June 26, 2016
Author(s): Rob Williams
We learned last month that NVIDIA’s latest top-end GeForce is a ‘force’ to be reckoned with, but what about its littler brother, the GTX 1070? It’s been no secret what this card essentially is: a TITAN X successor. Successors usually perform a bit better than the cards they replace, but they don’t usually cost less than half as much at the same time. Intrigued?
Summer might have only just begun, but the PC gaming world has needed no help from a season to heat things up. NVIDIA aided us last month with the unveiling of its first Pascal-based graphics cards, the GeForce GTX 1080 and GTX 1070.
As many anticipated, the GTX 1080 became the world’s fastest single-GPU solution, proving about 30% faster than the green team’s previous champ, TITAN X. Little did we realize at Pascal’s unveiling that TITAN X would be tied into so many comparisons – even those involving the 1080’s littler brother, the GTX 1070.
It’s not hard to get excited about a new top-end graphics card, but despite all of what the GTX 1080 offers, I still find myself more excited about the GTX 1070. The reason is simple: it delivers the performance of last year’s $999 TITAN X at an SRP of $379. In fact, it could even best it: the GTX 1070’s GDDR5 is clocked a bit higher than the TITAN X’s, and the GPU itself is specified roughly 350 GFLOPS higher.
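Those spec-sheet claims are easy to sanity-check. Below is a quick sketch of how theoretical FP32 throughput and memory bandwidth fall out of cores, clocks, and bus width; note that the boost-clock figure used for the GTX 1070 is my own assumption from public spec sheets, since the table above lists base clocks.

```python
# Rough sanity check of the spec-sheet compute and bandwidth claims.

def fp32_gflops(cuda_cores, clock_mhz):
    """Theoretical FP32 throughput: 2 ops (FMA) per core per clock."""
    return cuda_cores * clock_mhz * 2 / 1000.0

def mem_bandwidth_gbs(effective_mhz, bus_width_bits):
    """Peak memory bandwidth in GB/s."""
    return effective_mhz * 1e6 * bus_width_bits / 8 / 1e9

gtx_1070 = fp32_gflops(1920, 1683)   # boost clock (assumed)
titan_x  = fp32_gflops(3072, 1000)   # base clock from the table

print(round(gtx_1070))               # ~6463 GFLOPS
print(round(titan_x))                # ~6144 GFLOPS
print(round(gtx_1070 - titan_x))     # ~319 GFLOPS in the 1070's favor
print(mem_bandwidth_gbs(8000, 256))  # GTX 1070: 256.0 GB/s
print(mem_bandwidth_gbs(7000, 384))  # TITAN X: 336.0 GB/s
```

One wrinkle worth noting: while the 1070’s GDDR5 is clocked higher per chip, the TITAN X’s wider 384-bit bus still gives it more total bandwidth.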
How could that not be considered exciting?
What’s not exciting, though, are the reasons for this article taking so long to get published. I’m not going to get into it too deeply here, but will say that some of the problems I experienced during my work for the GTX 1080 Best Playable article decided to linger. I am hoping at this point that the universe will agree that I’ve endured too much benchmarking nonsense this year and will give me a break. That’s how things work, right?
This article being published right now is a bit odd for the simple fact that AMD will soon be launching its Radeon RX 480. But, this is still a good preparation for that card, since we’ll be able to soon see what NVIDIA’s $379 offering can do against AMD’s $200 one. It should be interesting.
|NVIDIA GeForce Series||Cores||Core MHz||Memory||Mem MHz||Mem Bus||TDP|
|GeForce GTX 1080||2560||1607||8192MB||10000||256-bit||180W|
|GeForce GTX 1070||1920||1506||8192MB||8000||256-bit||150W|
|GeForce GTX TITAN X||3072||1000||12288MB||7000||384-bit||250W|
|GeForce GTX 980 Ti||2816||1000||6144MB||7000||384-bit||250W|
|GeForce GTX 980||2048||1126||4096MB||7000||256-bit||165W|
|GeForce GTX 970||1664||1050||4096MB||7000||256-bit||145W|
|GeForce GTX 960||1024||1126||2048MB||7010||128-bit||120W|
|GeForce GTX 950||768||1024||2048MB||6600||128-bit||90W|
Since this article is overdue and work is piling up, I’ll go a bit lighter on text in this article than usual. If you’re not that familiar with NVIDIA’s latest Pascal GPUs, I’d recommend reading through the dedicated page for its features in the GTX 1080 review.
When we need to build a test PC for performance testing, “no bottleneck” is the name of the game. While we admit that few of our readers are going to be equipped with an Intel 8-core processor clocked to 4GHz, we opt for such a build to make sure our GPU testing is as apples-to-apples as possible, with as little variation as possible. Ultimately, the only thing that matters here is the performance of the GPUs, so the more we can rule out a bottleneck, the better.
That all said, our test PC:
|Graphics Card Test System|
|Processors||Intel Core i7-5960X (8-core) @ 4.0GHz|
|Motherboard||ASUS X99 DELUXE|
|Memory||Kingston HyperX Beast 32GB (4x8GB) – DDR4-2133 11-12-11|
|Graphics||AMD Radeon R9 Nano 4GB – Catalyst 16.5.3
NVIDIA GeForce GTX 980 4GB – GeForce 365.22
NVIDIA GeForce GTX TITAN X 12GB – GeForce 365.22
NVIDIA GeForce GTX 1070 8GB – GeForce 368.19 (Beta)
NVIDIA GeForce GTX 1080 8GB – GeForce 368.25|
|Storage||Kingston SSDNow V310 1TB SSD|
|Power Supply||Cooler Master Silent Pro Hybrid 1300W|
|Chassis||Cooler Master Storm Trooper Full-Tower|
|Cooling||Thermaltake WATER3.0 Extreme Liquid Cooler|
|Displays||Acer Predator X34 34″ Ultra-wide G-SYNC
Acer XB280HK 28″ 4K G-SYNC
ASUS 27″ 1440p FreeSync|
|Et cetera||Windows 10 Pro (10586) 64-bit|
Framerate information for all tests – with the exception of certain time demos and DirectX 12 tests – is recorded with the help of Fraps. For tests where Fraps use is not ideal, I use the game’s built-in test (the only option for DX12 titles right now). In the past, I’ve tweaked the Windows OS as much as possible to rule out test variations, but over time, such optimizations have proven fruitless. As a result, the Windows 10 installation I use is about as stock as possible, with minor modifications to suit personal preferences.
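For readers curious what happens to that Fraps data afterwards, here’s a hypothetical sketch of how average FPS (and a 1%-low figure) can be derived from per-frame render times like those a Fraps frametimes log provides. The sample data is made up for illustration.

```python
def fps_stats(frame_times_ms):
    """Return (average FPS, 1% low FPS) from a list of frame times in ms."""
    n = len(frame_times_ms)
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = n / total_s
    # 1% low: the FPS implied by the slowest 1% of frames.
    worst = sorted(frame_times_ms, reverse=True)[: max(1, n // 100)]
    low_fps = 1000.0 / (sum(worst) / len(worst))
    return avg_fps, low_fps

# Mostly ~60 FPS (16.7ms frames) with two hitches at the end.
sample = [16.7] * 98 + [33.3, 40.0]
avg, low = fps_stats(sample)
print(round(avg, 1))  # → 58.5
print(round(low, 1))  # → 25.0
```

The 1%-low number is exactly why a frame-time log beats a single average: two hitches barely dent the average but show up clearly in the lows.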
In all, I use 9 different games for regular game testing, and 3 for DirectX 12 testing. That’s in addition to three synthetic benchmarks. Because some games are sponsored, the list below lays any potential vendor bias in our testing out in the open.
(AMD) – Ashes of the Singularity (DirectX 12)
(AMD) – Battlefield 4
(AMD) – Crysis 3
(AMD) – Hitman (DirectX 12)
(NVIDIA) – Metro: Last Light Redux
(NVIDIA) – Rise Of The Tomb Raider (incl. DirectX 12)
(NVIDIA) – The Witcher 3: Wild Hunt
(NVIDIA) – Tom Clancy’s Rainbow Six Siege
(Neutral) – DOOM
(Neutral) – Grand Theft Auto V
(Neutral) – Total War: ATTILA
If you’re interested in benchmarking your own configuration to compare to our results, you can download this file (5MB) and make sure you’re using the exact same graphics settings. I’ll lightly explain how I benchmark each test before I get into each game’s performance results.
I should also note something that might seem obvious: there is no 1080p testing in this article. That’s because the GTX 1070 is such a powerful card, it’s overkill for that resolution in most cases. That being the case, I’d consider the GTX 1070 to be an ideal card for 1440p gamers, or those rocking ultrawide panels. Even the GTX 1080 doesn’t have quite enough muscle to become an “ultimate” 4K card, but it handles 4K a lot better than the GTX 1070 can.
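The raw pixel counts explain the resolution hierarchy seen throughout this article: the jump from 1440p to 4K more than doubles the work per frame, which is why framerates in GPU-bound titles tend to roughly halve.

```python
# Pixel counts for the three tested resolutions, relative to 1440p.
resolutions = {
    "1440p":     (2560, 1440),
    "ultrawide": (3440, 1440),
    "4K":        (3840, 2160),
}

base = 2560 * 1440
for name, (w, h) in resolutions.items():
    px = w * h
    print(f"{name}: {px:,} pixels ({px / base:.2f}x 1440p)")
# 1440p:     3,686,400 pixels (1.00x 1440p)
# ultrawide: 4,953,600 pixels (1.34x 1440p)
# 4K:        8,294,400 pixels (2.25x 1440p)
```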
There is one thing to consider, though: dual GTX 1070s would far surpass the performance of a single GTX 1080, and it’d technically (per SRP) cost just a bit more. I am never quick to recommend SLI or CrossFire nowadays, because it’s no secret that multi-GPU performance isn’t as ideal today as it used to be thanks in large part to the sheer number of “console ports” that hit the PC every week. But, it remains an option for those who don’t mind getting their hands dirty.
Nonetheless, let’s get on with the test results.
Thanks to the fact that DICE cares more about PC gaming than most developers, the Battlefield series continues to give us titles that are well worth benchmarking. While Battlefield 4 is growing a little long in the tooth, it’s still a great test at high resolutions. Once Battlefield 1 drops, we’re sure to replace BF4.
Testing: The game’s Singapore level is chosen for testing, as it provides a lot of action that can greatly affect the framerate. The saved game we use starts us off on an airboat that we must steer towards shore, at which point a huge firefight commences. After the accompanying tank gets past a hump in the middle of the beach, the test is stopped.
At 1440p, all five of the cards handle Battlefield 4 no problem. At the ultrawide resolution of 3440×1440, things get a bit hairier, but still, none of the cards perform that poorly. You could easily get by in BF4 with 50 FPS on average, although no one should want to turn down 10-30 FPS boosts provided by the top-tier models.
At 4K, only the GTX 1080 can deliver reasonable performance at max settings. With a lesser card, getting this game to run at 60 FPS at 4K won’t require many settings changes; the biggest offenders are AA and AO.
Like Battlefield 4, Crysis 3 is getting a little up there in years. Fortunately, though, that doesn’t matter, because the game is still more intensive than most current titles. Even though the game came out in 2013, if you’re able to equip Very High settings at your resolution of choice, you’re in a great spot.
Testing: The game’s Red Star Rising level is chosen for benchmarking here, with the lowest difficulty level chosen (dying during a benchmarking run is a little infuriating!). The level starts us out in a broken-down building and leads us down to a river, where we need to activate an alien device. Once that’s done, the player runs back under a nearby roof, at which point the benchmark ends.
A game like Crysis 3 really helps highlight just how powerful this $379 GTX 1070 is. Despite the game’s Very High detail levels, the GTX 1070 still managed to deliver 61 FPS on average. That changes dramatically at 4K, where the same card’s framerate is halved. Ultrawide performance isn’t ideal either, but at 47 FPS, it won’t take much tweaking to remedy that situation.
DOOM 3 was released some months before Techgage launched (March 1, 2005, for the record), and it was a game featured in our GPU testing right from the get-go. For this reason, this latest DOOM feels a bit special, even though it follows DOOM 3 up more than a decade later. As we hoped, the game proves to be more than suitable for GPU benchmarking.
Testing: Due to time constraints, an ideal level could not be chosen for benchmarking. Instead, our test location starts us off at the bottom of a short set of stairs early on in the game, where we must climb them, open up a door, and then go to a big room where demons are taken care of and the benchmark is stopped.
DOOM at High detail levels will prove no problem for any one of these five GPUs. Even the GTX 1070 handles it well at 3440×1440. Yet again, 4K proves crippling, with performance of the GTX 1070 dropping from 65 FPS to 42 FPS.
Does a game like this even need an introduction? Any Grand Theft Auto game on the PC is a ‘console port’, proven by the fact that it always comes to the PC long after the consoles. But Rockstar has at least done PC gamers a favor here by offering an almost overwhelming number of graphical options to fine-tune, helping make the game suitable for benchmarking, especially at high resolutions.
Testing: The mission Repossession is chosen for testing here, with the benchmark starting as soon as our character makes his way to an unsuspecting car. The benchmark ends after a not-so-leisurely drive to a parking garage, right before a cutscene kicks in.
If it’s not becoming clear by now why I decided against 1080p testing, I’m not sure what to tell you. At 1440p and full detail, GTA V gets 92 FPS on the GTX 1070. It even manages to breach 70 FPS at our ultrawide resolution of 3440×1440. It also delivers good performance at 4K. It’d only require a couple of settings be toned down to increase the performance at 4K to 60 FPS.
Like a couple of other games in our stable, Metro: Last Light might seem like an odd choice given its age. After all, the original version of the game came out in 2013, and its Redux version came out in late 2014. None of that matters, though, as the game is about as hardcore as it gets when it comes to GPU punishment.
Testing: The game’s built-in timedemo is used for testing here, which lasts 2m 40s. While the game can spit out its own results file, it’s horribly inaccurate, so Fraps is still used here.
I’m going to go out on a limb and say that if a card like the GTX 1080 can only achieve 51 FPS in a game at 1440p, the graphical settings might be dialed up a bit too much. Nonetheless, I am glad we use Metro as a timedemo, because sub-20 FPS at 4K would redefine benchmarking tedium.
What we can take away from this though is that the GTX 1070 and TITAN X continue to perform at the same level. In many cases, the 1070 wins easily.
Lara Croft has sure come a long way. The latest Tomb Raider iteration becomes one of the first titles on the market to support DirectX 12, but even without it, the game looks phenomenal at high detail settings (as the below screenshot can attest).
Testing: Geothermal Valley is the location chosen for testing with this title, as it features a lot shadows and a ton of foliage. From the start of our saved game, we merely walk down a fixed path for just over a minute and stop the benchmark once we reach a broken down bridge (the shot below is from the benchmarked area).
Rise of the Tomb Raider is one of the best-looking games out right now, and lo and behold: the GTX 1070 can power it at over 60 FPS at 1440p. At the top-dog ultrawide resolution, it tapers off at 50 FPS, but as mentioned with previous games, it wouldn’t take too much effort to achieve 60 FPS. 4K is much different, though: it’ll require some substantial tweaking to attain a much better framerate.
Since the original The Witcher title came out in 2007, the series has become one of the best RPGs going. Each one of the titles in the series offers deep gameplay, amazing locales, and comprehensive lore. Wild Hunt, the series’ third game, also happens to be one of the best-looking games out there and requires a beefy PC to take great advantage of.
Testing: Our saved game starts us just outside Hierarch Square, where we begin a manual runthrough (literally – the run button is held down as much as possible) through and around the town, to wind up back at a bridge near a watermill (pictured below). The entire runthrough takes about 90 seconds. Please note that while ‘Ultra’ detail is used, NVIDIA’s HairWorks is not.
At this point, we’ve begun to see some obvious trends. Overall, 1440p has proven to be a piece of cake for the GTX 1070, and nothing changes with Wild Hunt. Likewise, the card even manages to push 56 FPS at 3440×1440, requiring almost no effort to get that bumped to 60 FPS. At 4K, the “Ultra” detail profile should be dropped to High.
If you think it’s hard to keep track of Tom Clancy games, you sure are not alone. Siege came out just this past winter, and while it focuses heavily on co-op play, solo players are welcomed, too. The game puts a huge emphasis on destructible environments, which can either help or harm a given scenario.
Testing: This game has a suitable built-in benchmark, so I’ve opted to stick with that. After the test is run, the overall results are fetched.
In the intro, I mentioned that a number of issues plagued me in advance of this article, and this game is one of them. For whatever reason, I originally benchmarked the game with TXAA – an NVIDIA-specific feature. The game silently dropped the AA setting to its lowest level on the AMD card, and because I didn’t catch that, the AMD card was effectively tested with different graphics settings – obviously so. As a result, I had to remove it from these results.
That said, we continue to see trends similar to before. The GTX 1070 almost never falls behind the TITAN X. The graphics settings chosen here are rather hardcore, and it’s clear that changes will need to be made at any one of the three resolutions to achieve truly playable framerates – unless you’re rocking a GTX 1080 at 1440p, that is.
For strategy fans, the Total War series needs no introduction. At this point, ATTILA is the second-newest game in the series, as WARHAMMER released not too long ago. However, ATTILA is much rougher on the GPU, so there’s no reason to replace it quite yet.
Testing: ATTILA includes a built-in benchmark, so again, I’ve decided to use that. However, as I do with Metro, I stick to Fraps for framerate capturing as the game’s results page isn’t too convenient.
As we’ve seen up to this point, the GTX 1070 sits comfortably behind the GTX 1080. The TITAN X, meanwhile, sits behind the GTX 1070.
I don’t like to overdo “time demos”, but I do love running some hands-off benchmarks that you at home can run as well (provided you have a license) so that you can accurately compare your performance to ours. It goes without saying that any synthetic testing would have to include Futuremark, and in particular for high-end cards, 3DMark’s Fire Strike test.
3DMark includes a number of different game tests, but today’s graphics cards are so powerful, the Fire Strike test is really the only one that makes sense. At 1080p, even modest GPUs can deliver decent performance. A great thing about Fire Strike is that the official tests encompass three different resolutions, including 4K, making it perfect for our testing.
3DMark ranks the five GPUs in the same basic order we’ve seen across our regular testing. The GTX 1070 edges out the TITAN X while the GTX 1080 sits comfortably ahead of the 1070.
It’s hard to tell at this point if Heaven is ever going to see a new update, as it’s been quite a while since the last one, but what we have today is still a fantastic benchmark to run. That’s thanks to the fact that it’s free, and also because it can still prove so demanding on today’s highest-end GPUs. It’s also a great test for tessellation performance, as it lets you increase or decrease its intensity. For testing, I stick with ‘Normal’ tessellation.
What’s interesting about this Heaven result is that both the GTX 980 and R9 Nano hit the same general performance. The bigger cards, perhaps with some thanks owed to their increased VRAM, perform much better.
Meow hear this: there’s a new benchmark in town that promises to be purrfect for testing 4K resolutions. So, that’s just what I’ve used it for. The test consists of a cat innocently roaming a street until chaos ensues. Before long, this feline is mowing down buildings with its laser eyes, destroying GPU performance at the same time.
It’d be a CATastrophe at this point if Catzilla’s results decided to disagree with every other one we’ve thrown at you. The GTX 1070 remains ahead of TITAN X.
Considering the fact that we’ve been hearing about DirectX 12 for what feels like forever, it’s a little surprising that the number of DX12 titles out there remains small. Heck, one such game was Fable Legends, and that was shut down last month. We’re definitely in the middle of a waiting game for more DX12 titles to get here, but thankfully, those that do exist now prove great for testing.
Of all the DirectX 12 games out there, Ashes of the Singularity takes the best advantage of its low-level API capabilities. As a strategy game, there could be an enormous number of AI bots on the screen at once, and in those cases, both the CPU and GPU can be used for computation.
I should be clear about one thing: low-level graphics APIs primarily reduce CPU overhead, so they tend to benefit modest hardware the most – and when we’re dealing with GPUs that cost hundreds of dollars, that renders that kind of test less useful. For that reason, I’ve chosen to benchmark these three games as normal; the results might not be specific to low-level DX12 enhancements, but they’re still fair for comparisons against other high-end graphics cards.
After performance testing was originally completed for Ashes, its developer released a patch that overhauled the UI and added a new graphics setting. That caused me to have to retest everything, as it wouldn’t have been possible (that I know of) to choose settings in the new client that would perfectly match the old one. Nonetheless, these results are as fresh as they come.
Yet again, we’re seeing the trend that’s held up to this point: the GTX 1070 bests the TITAN X. Surprised?
How about Rise Of The Tomb Raider?
For some reason, the GTX 1080 managed to achieve a much greater lead in RotTR in the DirectX 12 test. It of course had a bit of a gain in the DX11 test, but the delta is even greater here – at all three resolutions. The rest of the GPU performance trickles down just as we’d expect.
Aha! We couldn’t wrap up testing without an oddball result, could we? Well, in this case, it’s not an “oddball” result per se, but it does stand out. In this particular test, AMD’s card slips right into the middle, rather than share one of the two bottom spots with the GTX 980. Could it be because the game is a Gaming Evolved title or AMD’s DX12 work just happens to shine through here? I’m not sure, but it’s nice to see a little shake-up once in a while!
To test graphics cards for both their power consumption and temperature at load, I utilize a couple of different tools. On the hardware side, I rely on a Kill-a-Watt power monitor, which the PC plugs into directly. For software, I use GPU-Z to monitor the core temperature, and 3DMark’s Fire Strike 4K test to push the GPU hard.
To test, the floor area behind the (shut down) PC is tested with a temperature gun, with the average temperature recorded as the room temperature. Once that’s established, the PC is turned on and left to sit idle for ten minutes. It’s at this point when the idle wattage is noted, and 3DMark is run. It’s during the ‘Graphics Test 2’ that the max load wattage is recorded.
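The bookkeeping in that procedure boils down to pulling the idle and peak figures out of a timestamped sensor log. Here’s a hypothetical sketch; the sample values and three-column layout are my own inventions for illustration, not GPU-Z’s actual log format.

```python
# Each sample: (seconds elapsed, core temp in °C, system wattage).
samples = [
    (0,   30, 92),    # cold idle
    (600, 31, 93),    # end of the 10-minute idle window
    (660, 68, 238),   # 3DMark Graphics Test 2 under way
    (720, 71, 245),   # peak load
]

idle_watts = min(w for _, _, w in samples)
peak_watts = max(w for _, _, w in samples)
peak_temp  = max(t for _, t, _ in samples)

print(idle_watts, peak_watts, peak_temp)  # → 92 245 71
```

Taking the minimum over the whole log for idle (rather than the first sample) guards against the PC still settling down when logging starts.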
I appreciate awesome performance, but I also appreciate awesome power efficiency – and the GTX 1070 certainly has that. Despite being essentially a successor to the TITAN X, outperforming it in almost all cases, it draws a staggering 121W less at load.
That efficiency even carries itself through to the temperature test, where it matches AMD’s modest-sized Nano almost to a T (no other card could touch Nano’s idle temperature).
Not only does the GTX 1070 shave 121W off of the full system load against the TITAN X, it likewise shaves 14°C off of the load temperature as well. This level of performance at 245W makes me really excited. With each new GPU generation, there’s a reduced chance that I’ll blow fuses while testing. Love it!
Considering the fact that the GTX 1070 performs better than last year’s $999 TITAN X, it probably won’t surprise you to learn that there’s no ideal competition for it right now. That’s not even going to change with AMD’s Polaris, at least not based on what we know about it. The RX 480 is due to be the highest-end Polaris released, and at $200, we can’t expect it to deliver performance on par with the GTX 1070, which costs twice as much.
We could be surprised, though. It could be that AMD has another Polaris card in store that’s higher-end than the RX 480, but at this point I am not too sure about it. AMD has rattled off the same message for a while: high-end will come in the form of Vega, due much later than Polaris.
This is actually a bit of an odd situation we’re in. It’s not often that two GPU vendors decide to launch new series within a month of each other and target entirely different price-points. Right now, NVIDIA handles $379 and $649, while AMD’s upcoming RX 480 settles in at $200 (for the 4GB version; 8GB will likely be $249). The other Polaris cards, RX 470 and RX 460, will be lower than $200 (as common sense would imply).
What this ultimately means is that to match a card like the GTX 1070 with AMD’s upcoming Polaris generation, dual GPUs would be needed. While I feel safe in these assumptions, they’re still assumptions until RX 480 reviews go live on Wednesday.
But enough about comparisons that don’t even match up to begin with. What about the GTX 1070? As has been reiterated time and time again, the GTX 1070 is a $379 card that performs like last year’s $999 TITAN X. With its 6.5 TFLOPs of compute power, it’s not even that far behind the GTX 1080, which costs about 70% more for 35% more oomph (and an upgrade to the faster GDDR5X).
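That “about 70% more for 35% more oomph” claim checks out with a bit of arithmetic. The TFLOPS figures below are the commonly quoted boost-clock specs, an assumption on my part rather than values measured here.

```python
# Price-vs-compute comparison at SRP, using rated FP32 throughput.
gtx_1080 = {"srp": 649, "tflops": 8.9}
gtx_1070 = {"srp": 379, "tflops": 6.5}

price_delta = gtx_1080["srp"] / gtx_1070["srp"] - 1
perf_delta  = gtx_1080["tflops"] / gtx_1070["tflops"] - 1

print(f"{price_delta:.0%} more expensive")  # → 71% more expensive
print(f"{perf_delta:.0%} more compute")     # → 37% more compute
```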
That leads me to another important matter: GTX 1070 pricing. At the moment, it’s not easy to find a GTX 1070 selling at SRP. Both the Founders Edition and third-party models are all selling for at least $449. It shouldn’t surprise anyone – we’ve all been through this before. There’s no telling when these inflated prices will drop, but the fact that the cards are selling like hotcakes doesn’t help. At the time of writing, only one of the sixteen GTX 1070s Newegg lists is in stock – and it’s $480. That’s unfortunate.
Based on the SRP, the GTX 1070 would be a no-brainer choice for someone planning to game at 1440p or 3440×1440. I’d even consider the slight premium to still make the card worth it (and I am sure etailers agree, which is why the gouging is happening). While the card can handle 4K gaming, it’s going to require a fair bit of tweaking in most titles (I’d imagine MOBA gamers would be safest).
I believe that the GTX 1070 is one of the best cards to have come out in recent years. For its price-point, its capabilities are enormous, and when compared to the previous generation cards, it offers a number of efficiency perks, to boot. As discovered above, the GTX 1070 beats out TITAN X while shaving over 120W off of the load wattage. That’s downright incredible for a single generational transition.
As with the GTX 1080 before it, the GTX 1070 has earned one of our Editor’s Choice awards – rather easily. Next? The RX 480, of course. It might not match the GTX 1070, but it’s going to be no less fun to test. Our report on that card will come soon!
Copyright © 2005-2021 Techgage Networks Inc. - All Rights Reserved.