by Rob Williams on June 26, 2016 in Graphics & Displays
We learned last month that NVIDIA’s latest top-end GeForce is a ‘force’ to be reckoned with, but what about its littler brother, the GTX 1070? It’s been no secret what this card essentially is: a TITAN X successor. A successor that performs 5% better is nothing unusual, though; one that also costs less than half as much certainly is. Intrigued?
Summer might have only just begun, but the PC gaming world has needed no help from a season to heat things up. NVIDIA aided us last month with the unveiling of its first Pascal-based graphics cards, the GeForce GTX 1080 and GTX 1070.
As many anticipated, the GTX 1080 became the world’s fastest single-GPU solution, proving about 30% faster than the green team’s previous champ, TITAN X. Little did we realize at the unveiling of Pascal that TITAN X would be drawn into a lot of comparisons – even those involving the 1080’s littler brother, the GTX 1070.
It’s not hard to get excited about a new top-end graphics card, but despite all that the GTX 1080 offers, I still find myself more excited about the GTX 1070. The reason is simple: it delivers the performance of last year’s $999 TITAN X at an SRP of $379. In fact, it could even best it: the GTX 1070’s GDDR5 is clocked a bit higher than the TITAN X’s, and the GPU itself is spec’d 350 GFLOPs better.
How could that not be considered exciting?
What’s not exciting, though, are the reasons for this article taking so long to get published. I’m not going to get into it too deeply here, but will say that some of the problems I experienced during my work for the GTX 1080 Best Playable article decided to linger. I am hoping at this point that the universe will agree that I’ve endured too much benchmarking nonsense this year and will give me a break. That’s how things work, right?
This article being published right now is a bit odd for the simple fact that AMD will soon be launching its Radeon RX 480. But this article still serves as good preparation for that launch, since we’ll soon be able to see what NVIDIA’s $379 offering can do against AMD’s $200 one. It should be interesting.
| NVIDIA GeForce Series | Cores | Core MHz | Memory | Mem MHz | Mem Bus | TDP |
|---|---|---|---|---|---|---|
| GeForce GTX 1080 | 2560 | 1607 | 8192MB | 10000 | 256-bit | 180W |
| GeForce GTX 1070 | 1920 | 1506 | 8192MB | 8000 | 256-bit | 150W |
| GeForce GTX TITAN X | 3072 | 1000 | 12288MB | 7000 | 384-bit | 250W |
| GeForce GTX 980 Ti | 2816 | 1000 | 6144MB | 7000 | 384-bit | 250W |
| GeForce GTX 980 | 2048 | 1126 | 4096MB | 7000 | 256-bit | 165W |
| GeForce GTX 970 | 1664 | 1050 | 4096MB | 7000 | 256-bit | 145W |
| GeForce GTX 960 | 1024 | 1126 | 2048MB | 7010 | 128-bit | 120W |
| GeForce GTX 950 | 768 | 1024 | 2048MB | 6600 | 128-bit | 90W |
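If you’re curious where GFLOPs figures like the one mentioned earlier come from, the standard back-of-envelope formula is cores × clock × 2, since a fused multiply-add counts as two floating-point operations per cycle. Here’s a quick sketch using the base clocks from the table above; keep in mind that published throughput numbers are usually quoted at boost clocks, so the exact figure depends on which clock you plug in:

```python
# Peak single-precision throughput = cores x clock (GHz) x 2.
# The factor of 2 is because one FMA instruction performs two
# floating-point operations (a multiply and an add) per cycle.
def peak_gflops(cores, core_mhz):
    return cores * (core_mhz / 1000.0) * 2

# Base clocks taken from the spec table above; boost clocks,
# which both GPUs run at under load, would raise each figure.
gtx_1070 = peak_gflops(1920, 1506)
titan_x  = peak_gflops(3072, 1000)

print(f"GTX 1070: {gtx_1070:.0f} GFLOPS")  # at base clock
print(f"TITAN X:  {titan_x:.0f} GFLOPS")   # at base clock
```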
Since this article is overdue and work is piling up, I’ll go a bit lighter on text than usual. If you’re not that familiar with NVIDIA’s latest Pascal GPUs, I’d recommend reading through the dedicated features page in the GTX 1080 review.
When we build a test PC for performance testing, “no bottleneck” is the name of the game. While we admit that few of our readers are going to be equipped with an Intel 8-core processor clocked to 4GHz, we opt for such a build to keep our GPU testing as apples-to-apples as possible, with as little variation as we can manage. Ultimately, the only thing that matters here is the performance of the GPUs, so the more we can rule out a bottleneck, the better.
That all said, our test PC:
Framerate information for all tests – with the exception of certain time demos and DirectX 12 tests – is recorded with the help of Fraps. For tests where Fraps use is not ideal, I use the game’s built-in benchmark (the only option for DX12 titles right now). In the past, I’ve tweaked the Windows OS as much as possible to rule out test variations, but over time, such optimizations have proven fruitless. As a result, the Windows 10 installation I use is about as stock as possible, with minor modifications to suit personal preferences.
In all, I use 9 different games for regular game testing, and 3 for DirectX 12 testing. That’s in addition to three synthetic benchmarks. Because some games are sponsored, the list below helps expose any potential bias in our testing.
(AMD) – Ashes of the Singularity (DirectX 12)
(AMD) – Battlefield 4
(AMD) – Crysis 3
(AMD) – Hitman (DirectX 12)
(NVIDIA) – Metro: Last Light Redux
(NVIDIA) – Rise Of The Tomb Raider (incl. DirectX 12)
(NVIDIA) – The Witcher 3: Wild Hunt
(NVIDIA) – Tom Clancy’s Rainbow Six Siege
(Neutral) – DOOM
(Neutral) – Grand Theft Auto V
(Neutral) – Total War: ATTILA
If you’re interested in benchmarking your own configuration to compare to our results, you can download this file (5MB) and make sure you’re using the exact same graphics settings. I’ll lightly explain how I benchmark each test before I get into each game’s performance results.
I should also note something that might seem obvious: there is no 1080p testing in this article. That’s because the GTX 1070 is such a powerful card, it’s overkill for that resolution in most cases. That being the case, I’d consider the GTX 1070 to be an ideal card for 1440p gamers, or those rocking ultrawide panels. Even the GTX 1080 doesn’t have quite enough grunt to be an “ultimate” 4K card, but it handles 4K a lot better than the GTX 1070 can.
There is one thing to consider, though: dual GTX 1070s would far surpass the performance of a single GTX 1080, and it’d technically (per SRP) cost just a bit more. I am never quick to recommend SLI or CrossFire nowadays, because it’s no secret that multi-GPU performance isn’t as ideal today as it used to be thanks in large part to the sheer number of “console ports” that hit the PC every week. But, it remains an option for those who don’t mind getting their hands dirty.
Nonetheless, let’s get on with the test results.