Date: May 17, 2017
Author(s): Rob Williams
The hype leading up to the launch of AMD’s Radeon RX Vega is hard to ignore. In fact, it’s the kind of hype that every company dreams of. Given that, a release of an RX 500 series that doesn’t contain Vega could come as a surprise, or even a rude awakening. But, if you’ve been in the market for a new GPU that won’t break the bank, both the RX 570 and RX 580 are well worth checking out.
It’s been an interesting 2017 for graphics cards so far, but not for the reasons most of us have been hoping. While we did see the release of NVIDIA’s biggest and baddest gamer GPU in the form of the GeForce GTX 1080 Ti, and also saw AMD update its Radeon Pro Duo, Vega has been MIA, with still no clear launch date in sight (though rumor says an announcement should happen soon).
At the start of the year, it was assumed that Vega would arrive before the end of Q1, but that’s come and gone – we’re almost halfway through Q2 at this point. It does seem extremely likely that we’ll see it launch before the end of this quarter, though, as it’s a bit of a do-or-die situation for AMD. Ryzen’s success is taking care of the CPU side; it’d be nice to see the same thing on the GPU side.
This all makes the launch of the Radeon RX 500 series a bit odd, but in reality, it’s not odd at all. RX 500 is to RX 400 what R9 300 was to R9 200 – a clock boost, and not a viable upgrade path for those who own the older versions. AMD is staying competitive the best way it can, and while it might seem strange to simply overclock a series and give it a new name, it has resulted in a slew of new GPU models to peruse and consider.
When the RX 400 series came out, it catered to mainstream audiences – the top-end RX 480 cost a mere $249 for a card bundling in 8GB of GDDR5. Because there haven’t been higher-end cards than that on the red side, a lot of NVIDIA’s lineup has remained uncontested. The hope is that Vega will change that situation, much like Ryzen did on the CPU side. RX 500, then, caters to the same audience that the RX 400 series did, leaving enthusiasts who want to go higher-end (and AMD itself) to hold out for Vega.
If you’re in the market for a new GPU, and don’t plan on going high-end, RX 500 was made for you.
| AMD Radeon Series | Cores | Core MHz | Memory | Mem MHz | Mem Bus | TDP |
|---|---|---|---|---|---|---|
| Radeon RX 580 | 2304 | 1340 | ≤8192MB | 8000 | 256-bit | 185W |
| Radeon RX 480 | 2304 | 1266 | ≤8192MB | 8000 | 256-bit | 150W |
| Radeon RX 570 | 2048 | 1244 | 4096MB | 7000 | 256-bit | 150W |
| Radeon RX 470 | 2048 | 1206 | 4096MB | 6600 | 256-bit | 120W |
| Radeon RX 560 | 1024 | 1275 | 4096MB | 7000 | 128-bit | 80W |
| Radeon RX 460 | 896 | 1200 | 4096MB | 7000 | 128-bit | 75W |
| Radeon RX 550 | 512 | 1183 | 4096MB | 7000 | 128-bit | 50W |
In addition to the RX 580 and 570, AMD sent along an RX 550, which we’re going to take a look at later (along with the RX 560). As you can see from the table above, this year’s models are not too different from last year’s – as mentioned before, this is a clock boost series, perfect for those needing a card now, but not for those who own the previous version of the respective card.
Because the RX 500 series is clocked higher than the RX 400 series, TDPs have experienced an unfortunate (but expected) boost, with a gain of 35W at the top end.
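Since the table above lists cores, clocks, and memory specs, the clock boost’s effect can be put in rough numbers. Below is a back-of-envelope sketch using the standard theoretical formulas – FP32 throughput as cores × 2 ops per clock × clock speed, and bandwidth as effective memory clock × bus width – not AMD-published figures.

```python
# Back-of-envelope GPU math from the spec table above (standard formulas,
# not AMD-published numbers).

def tflops(cores: int, core_mhz: int) -> float:
    """Theoretical single-precision throughput in TFLOPS
    (cores * 2 FP32 ops per clock * clock speed)."""
    return cores * 2 * core_mhz * 1e6 / 1e12

def bandwidth_gbs(mem_mhz_effective: int, bus_bits: int) -> float:
    """Theoretical memory bandwidth in GB/s
    (effective memory clock * bus width in bytes)."""
    return mem_mhz_effective * 1e6 * (bus_bits / 8) / 1e9

for name, cores, core_mhz, mem_mhz, bus in [
    ("RX 580", 2304, 1340, 8000, 256),
    ("RX 480", 2304, 1266, 8000, 256),
    ("RX 570", 2048, 1244, 7000, 256),
]:
    print(f"{name}: {tflops(cores, core_mhz):.2f} TFLOPS, "
          f"{bandwidth_gbs(mem_mhz, bus):.0f} GB/s")
```

By these numbers, the RX 580’s clock bump buys roughly 6% more theoretical compute over the RX 480 at identical memory bandwidth – which lines up with the modest gains seen in the benchmarks ahead.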
Both cards we received come from PowerColor, which makes some of the best-looking models around (it’s hard to go wrong with red + black color schemes). Interestingly, the RX 580 version has a simpler cooler (two fans) than the RX 570 (three fans), resulting in an odd situation where the lower-end card draws more power than the higher-end one, as we’ll see in the conclusion.
The RX 500 series isn’t the most revolutionary one we’ve seen released in a while (in case it hasn’t been obvious, it’s a clock-boosted RX 400 series), so by this point, you probably know exactly what to expect if you’ve been paying attention to the GPU market over the past year. So, we’ll jump right into testing, for the final time using this particular suite.
Here’s our test PC:
| Graphics Card Test System | |
|---|---|
| Processor | Intel Core i7-5960X (8-core) @ 4.0GHz |
| Motherboard | ASUS X99 DELUXE |
| Memory | Kingston HyperX Beast 32GB (4x8GB) – DDR4-2133 11-12-11 |
| Graphics | AMD Radeon R9 Nano 4GB – Catalyst 16.5.3<br>AMD Radeon RX 460 2GB – Catalyst 16.10.2 Hotfix<br>AMD Radeon RX 480 8GB – Catalyst 16.9.2<br>AMD Radeon RX 580 8GB – Catalyst 17.4.2<br>AMD Radeon RX 570 4GB – Catalyst 17.4.2<br>NVIDIA GeForce GTX 980 4GB – GeForce 365.22<br>NVIDIA GeForce GTX TITAN X 12GB – GeForce 365.22<br>NVIDIA GeForce GTX 1050 4GB – GeForce 375.57 (Beta)<br>NVIDIA GeForce GTX 1060 6GB – GeForce 368.64 (Beta)<br>NVIDIA GeForce GTX 1070 8GB – GeForce 368.19 (Beta)<br>NVIDIA GeForce GTX 1080 8GB – GeForce 368.25 |
| Storage | Kingston SSDNow V310 1TB SSD |
| Power Supply | Cooler Master Silent Pro Hybrid 1300W |
| Chassis | Cooler Master Storm Trooper Full-Tower |
| Cooling | Thermaltake WATER3.0 Extreme Liquid Cooler |
| Displays | Acer Predator X34 34″ Ultra-wide<br>Acer XB280HK 28″ 4K G-SYNC<br>ASUS MG279Q 27″ 1440p FreeSync |
| Et cetera | Windows 10 Pro (10586) 64-bit |
My intention was to test the RX 480 again using the same driver as the RX 580, but after testing, I realized that Windows must have stepped in and replaced that driver with an older one. Both the RX 580 and 570 were tested with the exact same driver – it didn’t need to be reinstalled. When I installed the RX 480, which came right after the 570, it booted up fine with the driver already installed, so I went on my merry testing way.
Unfortunately, despite how seamless Windows made it look, that launch driver apparently didn’t support the RX 480, so Windows stepped in and replaced the driver with the most recent one in Windows Update. Because the RX 480 was being retested just for interest’s sake (I wouldn’t recommend people buy it now that the RX 580 series is out, unless it’s a great deal), I decided to stick with those results.
For this reason, the results of the RX 480 might not be 100% identical to the performance of the current driver, but the driver used still encompasses over half a year’s worth of updates, which gave it an obvious boost in most tests. The current driver might add a touch of performance, but nothing grand.
That all said, framerate information for all tests – with the exception of certain time demos and DirectX 12 tests – is recorded with the help of Fraps. For tests where Fraps use is not ideal, I use the game’s built-in test (the only option for DX12 titles right now). In the past, I’ve tweaked the Windows OS as much as possible to rule out test variations, but over time, such optimizations have proven fruitless. As a result, the Windows 10 installation I use is about as stock as possible, with minor modifications to suit personal preferences.
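For the curious, the average and minimum FPS figures in the charts ahead reduce down to simple arithmetic over per-frame timestamps. The sketch below mirrors the cumulative-millisecond layout of a Fraps-style frametimes log, though treat the exact log format as an assumption rather than a documented spec.

```python
# A minimal sketch of reducing a Fraps-style frametime log (one cumulative
# millisecond timestamp per rendered frame) to average and minimum FPS.
# The log layout is an assumption, not Fraps' documented format.

def fps_stats(timestamps_ms: list[float]) -> tuple[float, float]:
    """Return (average FPS, minimum instantaneous FPS) from cumulative
    per-frame timestamps in milliseconds."""
    deltas = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    avg = len(deltas) / (sum(deltas) / 1000.0)  # frames / elapsed seconds
    minimum = 1000.0 / max(deltas)              # slowest single frame
    return avg, minimum

# Example: four frames over 50 ms, including one 20 ms hitch.
avg, low = fps_stats([0.0, 15.0, 30.0, 50.0])
print(f"avg {avg:.1f} FPS, min {low:.1f} FPS")  # avg 60.0 FPS, min 50.0 FPS
```

The “minimum” here is the single slowest frame, which is why one brief hitch (as seen later in Rise of the Tomb Raider) can tank a minimum without moving the average much.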
In all, I use 8 different games for regular game testing, and 3 for DirectX 12 testing. That’s in addition to the use of three synthetic benchmarks. Because some games are sponsored, the list below helps expose potential bias in our testing.
(AMD) – Ashes of the Singularity (DirectX 12)
(AMD) – Battlefield 4
(AMD) – Crysis 3
(NVIDIA) – Metro: Last Light Redux
(NVIDIA) – Rise Of The Tomb Raider (incl. DirectX 12)
(NVIDIA) – The Witcher 3: Wild Hunt
(Neutral) – DOOM
(Neutral) – Grand Theft Auto V
(Neutral) – Total War: ATTILA
If you’re interested in benchmarking your own configuration to compare to our results, you can download this file (5MB) and make sure you’re using the exact same graphics settings. I’ll lightly explain how I benchmark each test before I get into each game’s performance results.
Thanks to the fact that DICE cares more about PC gaming than most developers, the Battlefield series continues to give us titles that are well-worth benchmarking. While Battlefield 4 is growing a little long in the tooth, it’s still a great test at high resolutions.
Testing: The game’s Singapore level is chosen for testing, as it provides a lot of action that can greatly affect the framerate. The saved game we use starts us off on an airboat that we must steer towards shore, at which point a huge firefight commences. After the accompanying tank gets past a hump in the middle of the beach, the test is stopped.
Right from the get-go, we’re seeing some expected results. The RX 570 is a pinch faster than the 470, and likewise for the 580/480. The RX 470 fell quite short at 1080p compared to the RX 570, although I am not sure why (the results remained intact after retesting). At 1440p, the 480, 570, and 580 all perform very similarly.
Like Battlefield 4, Crysis 3 is getting a little up there in years. Fortunately, though, that doesn’t matter, because the game is still more intensive than most current titles. Even though the game came out in 2013, if you’re able to equip Very High settings at your resolution of choice, you’re in a great spot.
Testing: The game’s Red Star Rising level is chosen for benchmarking here, with the lowest difficulty level chosen (dying during a benchmarking run is a little infuriating!). The level starts us out in a broken-down building and leads us down to a river, where we need to activate an alien device. Once this is done, we run back underneath a nearby roof, at which point the benchmark ends.
Crysis 3 might be getting up there in years, but at high detail levels, the RX 580 can’t even muster 60 FPS at 1440p, falling 15 frames short. Both the current cards and last-gen’s predecessors handle the game no problem at 1080p.
DOOM 3 was released some months before Techgage launched (March 1, 2005, for the record), and it was a game featured in our GPU testing right from the get-go. For this reason, this latest DOOM feels a bit special, even though it follows DOOM 3 up twelve years later. As we hoped, the game proves to be more than suitable for GPU benchmarking.
Testing: Due to time constraints, an ideal level could not be chosen for benchmarking. Instead, our test location starts us off at the bottom of a short set of stairs early on in the game, where we must climb them, open up a door, and then go to a big room where demons are taken care of and the benchmark is stopped.
DOOM not only has great aesthetics, it can run very well on modest hardware. Even the ~$200 RX 570 far exceeds 60 FPS at 1080p. At 1440p, you can expect 50 FPS and higher on any one of the top RX cards.
Does a game like this even need an introduction? Any Grand Theft Auto game on the PC is a ‘console port’, proven by the fact that it always comes to the PC long after the consoles, but Rockstar has at least done PC gamers a favor here by offering them an almost overwhelming number of graphical options to fine-tune, helping to make it suitable for benchmarking, especially at high resolutions.
Testing: The mission Repossession is chosen for testing here, with the benchmark starting as soon as our character makes his way to an unsuspecting car. The benchmark ends after a not-so-leisurely drive to a parking garage, right before a cutscene kicks in.
GTA V might be a pretty enough game, but it doesn’t require a high-end rig to run well. Even at 1440p, the RX 570 delivers nearly 60 FPS and performs close to the RX 580, though it falls a bit further behind at 1080p.
Like a couple of other games in our stable, Metro Last Light might seem like an odd choice given its age. After all, the original version of the game came out in 2013, and its Redux version came out in late 2014. None of that matters, though, as the game is about as hardcore as it can get when it comes to GPU punishment.
Testing: The game’s built-in timedemo is used for testing here, which lasts 2m 40s. While the game can spit out its own results file, it’s horribly inaccurate, so Fraps is still used here.
The entire Metro series has become infamous for its demanding graphics, and while much of the world still jokes “Will it run Crysis?”, it’s honestly Metro people should be talking about. Even at 1080p, “High Detail” proves a bit too much for the RX 500 series, although either the RX 480 or RX 580 will offer a decent enough experience (~10 FPS short of 60). Of course, minor adjustments could bring the game to 60 FPS on either one of the new cards.
Lara Croft has sure come a long way. The latest Tomb Raider iteration was one of the first titles on the market to support DirectX 12, but even without it, the game looks phenomenal at high detail settings (as the below screenshot can attest).
Testing: Geothermal Valley is the location chosen for testing with this title, as it features a lot of shadows and a ton of foliage. From the start of our saved game, we merely walk down a fixed path for just over a minute and stop the benchmark once we reach a broken down bridge (the shot below is from the benchmarked area).
A couple of times so far, we’ve seen the RX 570 fall further behind the RX 580, but the results in Rise of the Tomb Raider take the cake – not so much for the average, but the minimum. The reason for this severe minimum degradation is not clear, but it was very noticeable in the game while testing. However, it only seemed to happen within the first 10 seconds of the run, at which point it’d smooth out. It’s a baffling result that will be retested with the new test suite due out later this month (or at least in advance of Vega).
Since the original The Witcher title came out in 2007, the series has become one of the best RPGs going. Each one of the titles in the series offers deep gameplay, amazing locales, and comprehensive lore. Wild Hunt, the series’ third game, also happens to be one of the best-looking games out there and requires a beefy PC to take full advantage of.
Testing: Our saved game starts us just outside Hierarch Square, where we begin a manual runthrough (literally – the run button is held down as much as possible) through and around the town, to wind up back at a bridge near a watermill (pictured below). The entire runthrough takes about 90 seconds. Please note that while ‘Ultra’ detail is used, NVIDIA’s HairWorks is not.
Like a handful of other titles in this article, Witcher 3 can look beautiful on fairly modest hardware. No RX series card outside of the low-end RX 460 considers 1080p a chore. At 1440p, both the RX 570 and RX 580 deliver suitable enough framerates – anything close to 50 is a good alternative to dropping detail to reach 60 FPS.
For strategy fans, the Total War series needs no introduction. ATTILA is the latest in the series, which will remain true for only the next week, as Warhammer is due to launch. Thankfully, any recent Total War game is suitable for benchmarking, and our results are going to prove that.
Testing: ATTILA includes a built-in benchmark, so again, I’ve decided to use that. However, as I do with Metro, I stick to Fraps for framerate capturing as the game’s results page isn’t too convenient.
ATTILA might not look as attractive as some other titles in this article, but it sure is a hardcore test on the GPU and CPU (as our Ryzen testing proves). I should note that “Max” detail is overkill in some ways, so you don’t actually need a GTX 1080 to enjoy playable framerates at 1440p. With this game, little graphics tweaks can have a huge impact.
I don’t like to overdo “time demos”, but I do love running some hands-off benchmarks that you at home can run as well (provided you have a license) so that you can accurately compare your performance to ours. It goes without saying that any synthetic testing would have to include Futuremark, and in particular for high-end cards, 3DMark’s Fire Strike test.
3DMark includes a number of different game tests, but today’s graphics cards are so powerful, the Fire Strike test is really the only one that makes sense. At 1080p, even modest GPUs can deliver decent performance. A great thing about Fire Strike is that the official tests encompass three different resolutions, including 4K, making it perfect for our testing.
Throughout this article, we’ve seen the RX 580 and NVIDIA’s GTX 1060 flip-flop their strengths. In 3DMark, though, the RX 580 edges out the 1060 ever-so-slightly. Now, to be fair, the RX 580 was benchmarked with a brand-new driver while NVIDIA’s was benchmarked last summer, but ultimately, both cards are similar in performance, without one having a real advantage over the other (at least where pure performance is concerned).
It’s hard to tell at this point if Heaven is ever going to see a new update, as it’s been quite a while since the last one, but what we have today is still a fantastic benchmark to run. That’s thanks to the fact that it’s free, and also because it can still prove so demanding on today’s highest-end GPUs. It’s also a great test for tessellation performance, as it lets you increase or decrease its intensity. For testing, I stick with ‘Normal’ tessellation.
Because I hadn’t tested our entire suite of GPUs at 1080p and 1440p in Unigine, I opted to test just the top three RX series cards. Interestingly, the RX 570 performed exactly the same as the RX 480, but that’s not a result we’ve seen anywhere else. The RX 580 gains a notable 10% lead, but whether that gain will be worth the extra money is up to you (~$30 premium at minimum).
Meow hear this: there’s a new benchmark in town that promises to be purrfect for testing 4K resolutions. So, that’s just what I’ve used it for. The test consists of a cat innocently roaming a street until chaos ensues. Before long, this feline is mowing down buildings with its laser eyes, destroying GPU performance at the same time.
Finally, Catzilla backs up most of the results we’ve seen up to this point. The RX 470 – RX 580 cards scale exactly as we’d expect them to, although unlike with 3DMark, the GTX 1060 managed to best the RX 580 here. Cat fight!
Considering the fact that we’ve been hearing about DirectX 12 for what feels like forever, it’s a little surprising that the number of DX12 titles out there remains small. Heck, one such game was Fable Legends, and that was shut down a while ago. We’re definitely in the middle of a waiting game for more DX12 titles to get here, but thankfully, those that do exist now prove great for testing.
Of all the DirectX 12 games out there, Ashes of the Singularity takes the best advantage of its low-level API capabilities. As a strategy game, there could be an enormous number of AI bots on the screen at once, and in those cases, both the CPU and GPU can be used for computation.
I should be clear about one thing: low-level graphics APIs are designed to benefit lower-end hardware the most, but when we’re dealing with GPUs that cost hundreds of dollars, that renders that kind of test useless. For that reason, I’ve chosen to benchmark these three games as normal; the results might not be specific to low-level DX12 enhancements, but they’re still fair for comparisons against other high-end graphics cards.
As with some earlier test results, I hadn’t tested DX12 games on all cards at the same resolution, so for this look, I decided to just test 1080p and 1440p with these three cards. As seen in Unigine, the RX 570 performs on par with the RX 480, while the RX 580 edges ahead a wee bit.
What about Rise Of The Tomb Raider?
In this test, the RX 580 separates itself more from the RX 570, and yet again, the RX 570 performs about the same as the RX 480. What’s interesting about this test result is that unlike the DX11 test, which saw the RX 570 fall seriously short with its minimum framerate, no such issue occurred with the synthetic DX12 test.
To test graphics cards for their power consumption, I utilize a couple of different tools. On the hardware side, I rely on a Kill-a-Watt power monitor, which the PC plugs into directly. For software, I use GPU-Z to monitor the core temperature, and 3DMark’s Fire Strike 4K test to push the GPU hard.
Once the PC is turned on and left to sit idle for ten minutes, I monitor the idle wattage (if it’s stable – it shouldn’t vary by more than 1W), and then I open up 3DMark to run its grueling test. It’s during the ‘Graphics Test 2’ that the max load wattage is recorded, by which point the GPU is nicely warmed-up.
Please note, though, that before I got to testing the RX 580 and RX 570, the motherboard in the test PC died, and had to be replaced with a different one on hand. As such, the wattages are not going to be perfectly scalable between these cards and the older ones.
Both the RX 580 and RX 570 idle at the same 77W, which is lower than most (and could be attributed to a more efficient motherboard). At load, though, the RX 570 managed to draw more power than its bigger brother, although “bigger” needs to be explained. As seen on the first page of this review, PowerColor chose a more robust cooler for the RX 570, sporting three fans instead of two. I’d imagine that’s the biggest reason for the increased power draw. Compared to the GTX 1060 that the RX 580 competes best against, AMD’s latest cards draw a lot more power, which is a bit unfortunate.
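To put the higher power draw in perspective, here’s a rough running-cost sketch. The 35W figure is the TDP delta between the RX 580 and RX 480 from the spec table (not measured wall draw), and the gaming hours and electricity rate are assumptions you’d swap for your own.

```python
# Rough yearly cost of the RX 580's 35W TDP bump over the RX 480.
# All inputs besides the 35W delta are assumptions, not measurements.
extra_watts = 35       # TDP delta from the spec table
hours_per_day = 3      # assumed daily gaming time
rate_per_kwh = 0.12    # assumed electricity rate, USD

yearly_kwh = extra_watts * hours_per_day * 365 / 1000
yearly_cost = yearly_kwh * rate_per_kwh
print(f"{yearly_kwh:.1f} kWh/year, about ${yearly_cost:.2f}/year")
```

In other words, a few dollars a year at typical rates – the extra heat and fan noise matter more than the electricity bill.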
This is one of the easiest reviews to wrap up, with huge thanks owed to the fact that I feel like I just reviewed the same cards last summer. Clock-boosted products are not uninteresting by default; it’s ultimately the pricing that helps make sense of things. So what are we dealing with there?
The Radeon RX 580 carries an SRP of $249, although at the moment, most cards sit above that, at around ~$270 for the 8GB version, and ~$229 for the 4GB (one in particular is $219). The RX 570 can be found for ~$200 or less.
That brings us to these cards’ biggest competitor: NVIDIA’s GeForce GTX 1060. This card tends to hover around $250, but some models can be found for a bit less, as of the time of writing.
That leads me to issue a reminder: shop around. Sometimes, you can get a better GPU for cheaper, while other times, you could end up paying more than what a card is worth. So, shop around, and use the savings on MORE GAMES!
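One way to frame the shop-around advice above is cost per average frame. The street prices below come from this article; the FPS figures are placeholders (marked as such) that you’d fill in from benchmarks at your resolution of choice.

```python
# Cost-per-frame sketch for comparing deals. Prices are the street prices
# quoted in this article; the FPS values are placeholders, not results.

def dollars_per_fps(price: float, avg_fps: float) -> float:
    """Dollars paid per average frame per second at a given resolution."""
    return price / avg_fps

cards = {
    "RX 570 4GB": (200, 55.0),  # placeholder FPS
    "RX 580 8GB": (270, 60.0),  # placeholder FPS
    "GTX 1060":   (250, 60.0),  # placeholder FPS
}
for name, (price, fps) in cards.items():
    print(f"{name}: ${dollars_per_fps(price, fps):.2f} per FPS")
```

When two cards trade blows the way the RX 580 and GTX 1060 do, a $20-30 price gap swings this metric more than any driver update will.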
At SRP, both the GTX 1060 and RX 580 are equals in performance, so whichever you go with would ultimately be up to your preferences with the featureset.
Speaking of featureset, AMD promoted heavily its “Chill” feature with the RX 500 series, which allows the cards to run with minimal noise and power when there’s no reason for it to ramp up. Take for example a MOBA that uses just a fraction of the entire GPU to hit 60 FPS. If conditions are right, your card will effectively become one of the quietest components in your machine.
It’s also worth highlighting AMD’s ReLive (as in, “I am going to relive this moment.”), which lets you record your gameplay using the power of the GPU. This type of recording isn’t ideal for archival footage, or footage that you consider to be very important, but given the almost nonexistent load it puts on your machine, it’s a great solution. For CPU-based encoding, OBS is the de facto solution (and for good reason).
Ultimately, your purchasing decision should come down to the featureset you want, and the price you want. The RX 570/580 and GTX 1060 perform pretty close to one another overall, to the point where $30 savings could be worth losing a frame or two. If you need specific help in picking out the right GPU, don’t hesitate to leave a comment!
Copyright © 2005-2019 Techgage Networks Inc. - All Rights Reserved.