
AMD’s HD 5550 & 5570 – Now Equipped with GDDR5 Goodness

Date: June 28, 2010
Author(s): Rob Williams

To retain modest pricing, it’s common to see lower-end graphics cards equipped with either DDR2 or DDR3. That design choice, though, can have a major effect on performance, something that’s proven twice over with AMD’s Radeon HD 5550 and HD 5570, both of which have just been upgraded with a move to GDDR5 memory.



Introduction

In the early months of 2010, it became clear that AMD had the intention of not just painting the town red, but keeping it that way for a while. It accomplished this with not just one GPU launch, and not even two… but five, spanning a month and a half. Of these, most were budget models, from the Radeon HD 5450 up to the HD 5670; the exception being the mainstream HD 5830.

On February 9th in particular, AMD launched the HD 5550 and HD 5570, also known as Redwood LE and PRO, respectively. At the time, the HD 5570 seemed to be the main focus; for months after the launch, I had all but forgotten the HD 5550 existed. In a rarity, AMD didn’t send out samples for that model, so it seems even AMD didn’t value the card that highly.

Last month, I took a look at Sapphire’s version of the card, dubbed “Ultimate”, and I began to understand why AMD kept so quiet about it. For the most part, the card wasn’t bad from a performance standpoint, but when a card that costs about $20 more is about twice as fast, it makes you scratch your head about the HD 5550’s reason for existing.

Closer Look

Things changed a little bit a couple of weeks ago, though, when AMD came to us and said that it would be upgrading both the HD 5550 and HD 5570 to use GDDR5 memory in lieu of the DDR2 and DDR3 the cards were equipped with before. It’s clear to me from past tests that faster and more efficient memory on a graphics card can make a sizable difference, and it’s more of an issue on budget cards, as mainstream cards generally already use the best memory available.

The big question, of course, is whether or not upgrading a sub-$100 GPU to GDDR5 is worth the added expense. After all, I didn’t recommend the original HD 5550, given that it was priced just $10 – $15 lower than the HD 5570 but was much slower. And since these are “budget” cards, it’s hard to pack in top-end technology and keep them that way.

According to AMD, the pricing for the updated cards isn’t going to change from current market values, so that’s reassuring. This could have been made possible by GDDR5 price drops, or by the fact that AMD is only packing in 512MB of the stuff, compared to the usually-included 1GB. On budget cards, 1GB is very rarely ever touched, so the move to less but much faster memory is a no-brainer.

The quoted pricing puts the HD 5550 at about $75 – $80 and the HD 5570 at $80 – $90. As far as I’m concerned, both of these are still a hard sell with the much-faster HD 5670 at $100, but that’s something we’ll tackle in the conclusion.

For those looking to pick up either of these two models, the introduction of GDDR5 is a great thing, even if the performance increase is minimal (hint: it isn’t), thanks to the fact that pricing isn’t supposed to budge. With GDDR5, the memory speeds for both cards increase as well, which I’ve reflected in our model round-up table below:

Model | Core MHz | Mem MHz | Memory | Bus Width | Processors
Radeon HD 5970 | 725 | 1000 | 2048MB | 256-bit | 1600 x 2
Radeon HD 5870 Eyefinity 6 | 850 | 1200 | 1024MB | 256-bit | 1600
Radeon HD 5870 | 850 | 1200 | 1024MB | 256-bit | 1600
Radeon HD 5850 | 725 | 1000 | 1024MB | 256-bit | 1440
Radeon HD 5830 | 800 | 1000 | 1024MB | 256-bit | 1120
Radeon HD 5770 | 850 | 1200 | 1024MB | 128-bit | 800
Radeon HD 5750 | 700 | 1150 | 512MB – 1GB | 128-bit | 720
Radeon HD 5670 | 775 | 1000 | 512MB – 1GB | 128-bit | 400
Radeon HD 5570 (GDDR5) | 650 | 1000 | 512MB – 1GB | 128-bit | 400
Radeon HD 5570 | 650 | 900 | 512MB – 1GB | 128-bit | 400
Radeon HD 5550 (GDDR5) | 550 | 1000 | 512MB – 1GB | 128-bit | 320
Radeon HD 5550 | 550 | 800 | 512MB – 1GB | 128-bit | 320
Radeon HD 5450 | 650 | 800 | 512MB – 1GB | 64-bit | 80
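
To put those memory clocks in perspective, below is a rough, back-of-the-envelope sketch of peak memory bandwidth for the four budget cards. It assumes the Mem MHz figures above are base clocks, that DDR2 and DDR3 transfer two bits per pin per clock while GDDR5 transfers four, and that all four cards use the 128-bit bus listed in the table; these are my own estimates, not AMD’s published specifications.

```python
# Rough peak-bandwidth estimates for the budget cards in the table above.
# Assumptions (mine, not AMD's specs): the "Mem MHz" column lists base clocks,
# DDR2/DDR3 move 2 bits per pin per clock, GDDR5 moves 4, and the bus is 128-bit.
def peak_bandwidth_gbs(mem_mhz, transfers_per_clock, bus_width_bits=128):
    """Peak bandwidth in GB/s = clock (Hz) x transfers per clock x bus width (bytes)."""
    return mem_mhz * 1e6 * transfers_per_clock * (bus_width_bits / 8) / 1e9

cards = [
    ("Radeon HD 5550 (DDR2, 800 MHz)",   800, 2),
    ("Radeon HD 5550 (GDDR5, 1000 MHz)", 1000, 4),
    ("Radeon HD 5570 (DDR3, 900 MHz)",   900, 2),
    ("Radeon HD 5570 (GDDR5, 1000 MHz)", 1000, 4),
]

for name, mhz, rate in cards:
    print(f"{name}: ~{peak_bandwidth_gbs(mhz, rate):.1f} GB/s")
```

Under those assumptions, the GDDR5 models work out to roughly 64 GB/s, versus roughly 26 – 29 GB/s for the DDR2 and DDR3 originals, a gulf that goes a long way toward explaining the results on the following pages.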

As the table shows, there are no differences in core clock speeds between the old models and the new, but the memory has of course seen a healthy bump, especially on the HD 5550. Both cards use the exact same GDDR5 at the exact same speeds, and while I’m on this “exact same” kick, let’s take a look at the cards we were given, straight from AMD:

AMD Radeon HD 5550 & HD 5570 - GDDR5

The fact that the HD 5550 and HD 5570 were close in design was never a question, but to receive both models looking 99.99% the same was kind of humorous. The only difference between the two cards was the model code on the back, and it’s not even a human-readable one, so these were like receiving a CPU that had its IHS printing rubbed off. Fun stuff!

A shot of both the GPU itself and the GDDR5 ICs can be seen in the photo below, and it’s pretty self-explanatory:

AMD Radeon HD 5550 & HD 5570 - GDDR5

The samples we were sent are of the reference design; vendors that sell AMD’s cards don’t have to stick to it, but can use it as a guideline. Regardless, since I had the cooler off anyway, I took a picture to help show it off. As you’d expect of a budget card, the cooler is rather uneventful, but to be fair, that’s fine. These cards don’t get that hot to begin with, so anything bigger or better would just tack needless dollars onto the price.

AMD Radeon HD 5550 & HD 5570 - GDDR5

Along with this launch, AMD is again stressing the importance of its STREAM technology, especially with commercial applications such as CyberLink MediaShow and ArcSoft MediaConverter, but we’re not going to touch on any of that here. Testing with those applications is something I’m long overdue for, so I plan to test them out, along with others, and dedicate a special article to them. If you have any suggestions of other things to include in such an article, please feel free to let me know.

With our look at the updated cards out of the way, we can move right into testing. Well, after a look at our testing methodologies, of course.

Test System & Methodology

At Techgage, we strive to make sure our results are as accurate as possible. Our testing is rigorous and time-consuming, but we feel the effort is worth it. In an attempt to leave no question unanswered, this page contains not only our testbed specifications, but also a fully-detailed look at how we conduct our testing. For an exhaustive look at our methodologies, even down to the Windows Vista installation, please refer to this article.

Test Machine

The below table lists our testing machine’s hardware, which remains unchanged throughout all GPU testing, minus the graphics card. Each card used for comparison is also listed here, along with the driver version used. Each one of the URLs in this table can be clicked to view the respective review of that product, or if a review doesn’t exist, it will bring you to the product on the manufacturer’s website.

Component | Model
Processor | Intel Core i7-975 Extreme Edition – Quad-Core, 3.33GHz, 1.33v
Motherboard | Gigabyte GA-EX58-EXTREME – X58-based, F7 BIOS (05/11/09)
Memory | Corsair DOMINATOR – DDR3-1333 7-7-7-24-1T, 1.60
ATI Graphics | Radeon HD 5870 1GB (Reference) – Catalyst 10.3
 | Radeon HD 5850 1GB (Sapphire Toxic) – Catalyst 10.2
 | Radeon HD 5850 1GB (ASUS) – Catalyst 9.10
 | Radeon HD 5830 1GB (Reference) – Beta Catalyst (02/10/10)
 | Radeon HD 5770 1GB (Reference) – Beta Catalyst (10/06/09)
 | Radeon HD 5750 1GB (Sapphire) – Catalyst 9.11
 | Radeon HD 5670 512MB (Reference) – Beta Catalyst (12/16/09)
 | Radeon HD 5570 512MB GDDR5 (Reference) – Catalyst 10.6
 | Radeon HD 5570 1GB (Sapphire) – Catalyst 10.6
 | Radeon HD 5550 512MB GDDR5 (Reference) – Catalyst 10.6
 | Radeon HD 5550 1GB (Sapphire) – Catalyst 10.6
NVIDIA Graphics | GeForce GTX 480 1536MB (Reference) – GeForce 197.17
 | GeForce GTX 295 1792MB (Reference) – GeForce 186.18
 | GeForce GTX 285 1GB (EVGA) – GeForce 186.18
 | GeForce GTX 275 896MB (Reference) – GeForce 186.18
 | GeForce GTX 260 896MB (XFX) – GeForce 186.18
 | GeForce GTS 250 1GB (EVGA) – GeForce 186.18
 | GeForce GT 240 512MB (ASUS) – GeForce 196.21
Audio | On-Board Audio
Storage | Seagate Barracuda 500GB 7200.11
Power Supply | Corsair HX1000W
Chassis | SilverStone TJ10 Full-Tower
Display | Gateway XHD3000 30″
Cooling | Thermalright TRUE Black 120
Et cetera | Windows Vista Ultimate 64-bit

When preparing our testbeds for any type of performance testing, we follow these guidelines:

To aid with the goal of keeping accurate and repeatable results, we alter certain services in Windows Vista from starting up at boot. These services have a tendency to start up in the background without notice, potentially causing slightly inaccurate results. Disabling “Windows Search”, for example, turns off the OS’s indexing, which can at times utilize the hard drive and memory more than we’d like.

For more robust information on how we tweak Windows, please refer once again to this article.

Game Titles

We currently benchmark all of our games using three popular resolutions: 1680×1050, 1920×1080 and 2560×1600. 1680×1050 was chosen as it’s one of the most popular resolutions for gamers sporting ~20″ displays. 1920×1080 might stand out, since we’ve always used 1920×1200 in the past, but we didn’t make this change without some serious thought. After taking a look at the current landscape for desktop monitors around ~24″, we noticed that 1920×1200 is definitely on the way out, as more and more models are coming out as native 1080p. It’s for that reason we made the switch. Finally, for high-end gamers, we also benchmark using 2560×1600, a resolution that packs just about 2x the pixels of 1080p.
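
That “2x” is simple pixel arithmetic, which the quick sketch below verifies; bear in mind it’s only a rough guide, since actual GPU load also depends on detail settings and anti-aliasing.

```python
# Pixel counts for our three test resolutions, relative to 1920x1080.
resolutions = {
    "1680x1050": 1680 * 1050,
    "1920x1080": 1920 * 1080,
    "2560x1600": 2560 * 1600,
}

base = resolutions["1920x1080"]
for name, pixels in resolutions.items():
    print(f"{name}: {pixels:,} pixels ({pixels / base:.2f}x 1080p)")
# 2560x1600 comes out to roughly 1.98x the pixels of 1920x1080.
```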

For graphics cards that include less than 1GB of memory, we omit Grand Theft Auto IV from our testing, as our chosen detail settings require at least 800MB of available graphics memory. Also, if the card we’re benchmarking doesn’t offer the performance to reliably handle 2560×1600 across most of our titles, only 1680×1050 and 1920×1080 will be used.

Because we value results generated by real-world testing, we don’t utilize timedemos whatsoever. The one possible exception is Futuremark’s 3DMark Vantage. Though it’s not a game, it essentially acts as a robust timedemo. We choose to use it as it’s a standard where GPU reviews are concerned, and we don’t want to deprive our readers of results they expect to see.

All of our results are captured with the help of Beepa’s FRAPS 2.98, while stress-testing and temperature monitoring are handled by OCCT 3.1.0 and GPU-Z, respectively.

Call of Duty: Modern Warfare 2

Call of Juarez: Bound in Blood

Race Driver: GRID

World in Conflict: Soviet Assault

Call of Duty: Modern Warfare 2

When the original Call of Duty game launched in 2003, Infinity Ward was an unknown. Naturally… it was the company’s first title. But since then, the series and company alike have become household names. Not only has the series delivered consistently incredible gameplay, it’s pushed the graphics envelope with each successive release, and where Modern Warfare is concerned, it’s also had a rich storyline.

The first two titles might have been built on the already-outdated Quake III engine, but since then, the games have been built with improved graphical features, capable of pushing the highest-end PCs out there. Modern Warfare 2 is the first exception to that trend, as it’s more of a console port than a true PC title. Therefore, the game doesn’t push PC hardware as much as we’d like to see, but despite that, it still looks great, and lacks little in the graphics department. You can read our review of the game here.

Manual Run-through: The level chosen is the 10th mission in the game, “The Gulag”. Our teams fly in helicopters up to an old prison with the intention of getting closer to finding the game’s villain, Vladimir Makarov. Our saved game file begins us at the point when the level name comes on the screen, right before we reach the prison, and it ends one minute after landing, following the normal progression of the level. The entire run takes around two-and-a-half minutes.

It’s no secret that the main bottleneck on lower-end cards is the memory, but our GDDR5 results prove just how much of a bottleneck it truly is. The HD 5550 in particular saw a 75% performance boost just because of its move to faster memory. The difference between the HD 5570 cards wasn’t quite as stark, but it was still a nice gain, at around 20%.

Graphics Card | Best Playable | Min FPS | Avg. FPS
NVIDIA GTX 480 1.5GB (Reference) | 2560×1600 – Max Detail, 4xAA | 50 | 81.669
ATI HD 5870 1GB (Reference) | 2560×1600 – Max Detail, 4xAA | 44 | 81.351
ATI HD 5770 1GB CrossFireX | 2560×1600 – Max Detail, 4xAA | 40 | 81.311
ATI HD 5850 1GB (ASUS) | 2560×1600 – Max Detail, 4xAA | 37 | 68.563
NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA | 41 | 66.527
NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 4xAA | 37 | 61.937
ATI HD 5830 1GB (Reference) | 2560×1600 – Max Detail, 4xAA | 30 | 53.569
NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail, 4xAA | 33 | 53.314
ATI HD 5770 1GB (Reference) | 2560×1600 – Max Detail, 0xAA | 36 | 60.337
NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail, 0xAA | 30 | 53.253
ATI HD 5750 1GB (Sapphire) | 2560×1600 – Max Detail, 0xAA | 28 | 50.727
ATI HD 5670 512MB (Reference) | 1920×1080 – Max Detail, 4xAA | 24 | 43.96
NVIDIA GT 240 512MB (ASUS) | 1920×1080 – Max Detail, 0xAA | 30 | 53.139
ATI HD 5570 512MB (Ref. GDDR5) | 1920×1080 – Max Detail, 4xAA | 24 | 41.323
ATI HD 5570 1GB (Sapphire) | 1920×1080 – Max Detail, 4xAA | 19 | 34.392
ATI HD 5550 512MB (Ref. GDDR5) | 1920×1080 – Medium Detail, 4xAA | 24 | 40.846
ATI HD 5550 1GB (Sapphire) | 1920×1080 – Low Detail, 0xAA | 25 | 44.661

The benefit of GDDR5 on these cards is huge, and it’s reiterated again in our best playable results. Again singling out the HD 5550, we were able to not only increase the detail level, but also enable anti-aliasing. In the end, the performance is almost the same, but the detail level is much improved on the GDDR5 card.

Because the graphics settings were already maxed out on the non-GDDR5 HD 5570, we couldn’t increase them further, but we still saw a nice performance boost.

Call of Juarez: Bound in Blood

When the original Call of Juarez was released, it brought forth something unique… a western-styled first-person shooter. That’s simply not something we see too often, so for fans of the genre, its release was a real treat. Although it didn’t really offer the best gameplay we’ve seen from a recent FPS title, its storyline and unique style made it well-worth testing.

After we retired the original title from our suite, we anxiously awaited the sequel, Bound in Blood, in hopes that the series could be re-introduced into our testing once again. Thankfully, it could, thanks in part to its fantastic graphics, built around the Chrome Engine 4, and gameplay that improves on the original. It was also well-received by game reviewers, which is always a good sign.

Manual Run-through: The level chosen here is Chapter I, and our starting point is about 15 minutes into the mission, where we stand atop a hill that overlooks a large river. We make our way across the hill and ultimately through a large trench, and we stop our benchmarking run shortly after we blow up a gas-filled barrel.

The performance increases continue here, with almost identical gains to what we saw with Modern Warfare 2. Although it’s not that surprising, it’s still rather incredible to me that with a simple memory change, we boosted the HD 5550 from 20 FPS to 36 FPS at 1680×1050. Given that these cards are set to cost the same as the older models, that’s quite a nice free gain. Even overclocking is rarely that effective!

Graphics Card | Best Playable | Min FPS | Avg. FPS
ATI HD 5870 1GB (Reference) | 2560×1600 – Max Detail | 58 | 82.863
NVIDIA GTX 480 1.5GB (Reference) | 2560×1600 – Max Detail | 58 | 82.711
ATI HD 5770 1GB CrossFireX | 2560×1600 – Max Detail | 59 | 87.583
NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail | 37 | 80.339
ATI HD 5850 1GB (ASUS) | 2560×1600 – Max Detail | 51 | 69.165
ATI HD 5830 1GB (Reference) | 2560×1600 – Max Detail | 35 | 54.675
NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail | 45 | 54.428
NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail | 41 | 51.393
ATI HD 5770 1GB (Reference) | 2560×1600 – Max Detail | 28 | 45.028
NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail | 35 | 44.023
ATI HD 5750 1GB (Sapphire) | 2560×1600 – Max Detail | 27 | 38.686
NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail | 25 | 33.751
ATI HD 5670 512MB (Reference) | 1920×1080 – Max Detail | 38 | 47.23
NVIDIA GT 240 512MB (ASUS) | 1920×1080 – Max Detail | 29 | 39.446
ATI HD 5570 512MB (Ref. GDDR5) | 1920×1080 – Max Detail | 33 | 41.434
ATI HD 5570 1GB (Sapphire) | 1920×1080 – Max Detail | 25 | 32.696
ATI HD 5550 512MB (Ref. GDDR5) | 1920×1080 – Medium Detail | 29 | 35.391
ATI HD 5550 1GB (Sapphire) | 1680×1050 – Medium Textures / Materials, Low Detail | 20 | 33.362

In another repeat, both HD 5570 cards shared identical settings, but the GDDR5 version delivered a sweet 9 FPS boost. The GDDR5 HD 5550 again gave us the ability to increase the graphical detail while retaining near-identical framerates.

Race Driver: GRID

If you primarily play games on a console, your choices for quality racing games are plentiful. On the PC, that’s not so much the case. While there are a good number, there aren’t many of any given type of racing game, from sim to arcade. So when Race Driver: GRID first saw its release, many gamers were excited, and for good reason. It’s not a sim in the truest sense of the word, but it’s certainly not arcade, either. It’s somewhere in between.

The game happens to be great fun, though, and similar to console games like Project Gotham Racing, you need a lot of skill to succeed at the game’s default difficulty level. And like most great racing games, GRID happens to look absolutely stellar, and each of the game’s locations looks very similar to its real-world counterpart. All in all, no racing fan should ignore this one.

Manual Run-through: For our testing here, we choose the city where both Snoop Dogg and Sublime hit their fame, the LBC, also known as Long Beach City. We choose this level because it’s not overly difficult, and also because it’s simply nice to look at. Our run consists of an entire 2-lap race, with the cars behind us for almost the entire race.

The performance boosts with GRID are not quite as spectacular as what we saw with Modern Warfare 2 and Bound in Blood, but the “free” gains we do see are nonetheless impressive. I can’t see too many people complaining about a 50% performance boost on the HD 5550, if pricing does indeed remain identical as we’re being promised.

Graphics Card | Best Playable | Min FPS | Avg. FPS
ATI HD 5870 1GB (Reference) | 2560×1600 – Max Detail, 4xAA | 83 | 103.622
ATI HD 5770 1GB CrossFireX | 2560×1600 – Max Detail, 4xAA | 81 | 104.32
NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail, 4xAA | 84 | 103.958
NVIDIA GTX 480 1.5GB (Reference) | 2560×1600 – Max Detail, 4xAA | 81 | 98.578
ATI HD 5850 1GB (ASUS) | 2560×1600 – Max Detail, 4xAA | 68 | 84.732
NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA | 54 | 66.042
ATI HD 5830 1GB (Reference) | 2560×1600 – Max Detail, 4xAA | 53 | 65.584
NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 4xAA | 52 | 63.617
ATI HD 5770 1GB (Reference) | 2560×1600 – Max Detail, 4xAA | 45 | 56.980
NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail, 4xAA | 45 | 54.809
ATI HD 5750 1GB (Sapphire) | 2560×1600 – Max Detail, 4xAA | 39 | 47.05
NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA | 35 | 43.663
ATI HD 5670 512MB (Reference) | 1920×1080 – Max Detail, 4xAA | 36 | 47.36
ATI HD 5570 512MB (Ref. GDDR5) | 1920×1080 – Max Detail, 4xAA | 35 | 40.495
ATI HD 5570 1GB (Sapphire) | 1920×1080 – Max Detail, 4xAA | 28 | 35.689
NVIDIA GT 240 512MB (ASUS) | 1920×1080 – Max Detail, 0xAA | 33 | 51.071
ATI HD 5550 512MB (Ref. GDDR5) | 1920×1080 – Medium Detail, 0xAA | 37 | 43.209
ATI HD 5550 1GB (Sapphire) | 1920×1080 – Low Detail, 0xAA | 34 | 54.569

Both of our HD 5570s shared the same top-end settings, as the game couldn’t be pushed further, so we again see a straight performance boost. The HD 5550 equipped with GDDR5 allowed us to increase the detail level from low to medium and still retain great framerates.

World in Conflict: Soviet Assault

I admit that I’m not a huge fan of RTS titles, but World in Conflict intrigued me from the get-go. After all, so many war-based games continue to follow the same storylines we already know, and WiC was different. It sets aside the actual political and economic collapse of the Soviet Union in the late ’80s, and instead provides a storyline that plays out as if the USSR had chosen war in order to remain in power.

Many RTS games, with their advanced AI, tend to favor the CPU in order to deliver smooth gameplay, but WiC leans on both the CPU and GPU, and the graphics prove it. Throughout the game’s missions, you’ll see gorgeous vistas and explore areas from deserts and snow-packed lands to fields and cities. Overall, it’s a real treat for the eyes – especially since you’re able to zoom down to the ground and see the action up-close.

Manual Run-through: The level we use for testing is the 7th campaign of the game, called Insurgents. Our saved game plants us towards the beginning of the mission with two squads of five, and two snipers. The run consists of bringing our men to action, and hovering the camera around throughout the duration. The entire run lasts between three and four minutes.

Of all the games we’ve tested here, World in Conflict stood the greatest chance of seeing gains, simply because it’s so demanding on the system. Thankfully, we did see nice increases with our GDDR5 cards, but because the game is so harsh at our chosen settings, even GDDR5 isn’t enough to deliver good framerates. Unless you enjoy sub-20 FPS, that is.

Graphics Card | Best Playable | Min FPS | Avg. FPS
NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail, 8xAA, 16xAF | 40 | 55.819
NVIDIA GTX 480 1.5GB (Reference) | 2560×1600 – Max Detail, 8xAA, 16xAF | 39 | 53.714
ATI HD 5870 1GB (Reference) | 2560×1600 – Max Detail, 8xAA, 16xAF | 38 | 45.200
ATI HD 5770 1GB CrossFireX | 2560×1600 – Max Detail, 4xAA, 16xAF | 38 | 49.335
ATI HD 5850 1GB (ASUS) | 2560×1600 – Max Detail, 4xAA, 16xAF | 29 | 40.581
NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 0xAA, 16xAF | 34 | 49.514
NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 0xAA, 16xAF | 36 | 46.186
ATI HD 5830 1GB (Reference) | 2560×1600 – Max Detail, 0xAA, 16xAF | 31 | 42.543
NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail, 0xAA, 16xAF | 23 | 39.365
ATI HD 5770 1GB (Reference) | 2560×1600 – Max Detail, 0xAA, 16xAF | 28 | 37.389
NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail, 0xAA, 4xAF | 24 | 32.453
ATI HD 5750 1GB (Sapphire) | 2560×1600 – Max Detail, 0xAA, 4xAF | 23 | 31.769
NVIDIA GT 240 512MB (ASUS) | 1920×1080 – Max Detail, 0xAA, 4xAF | 22 | 33.788
ATI HD 5670 512MB (Reference) | 1920×1080 – Max Detail, 0xAA, 16xAF | 21 | 31.872
ATI HD 5570 512MB (Ref. GDDR5) | 1920×1080 – High Detail, 0xAA, 4xAF | 27 | 42.655
ATI HD 5570 1GB (Sapphire) | 1920×1080 – High Detail, 0xAA, 4xAF | 24 | 37.162
ATI HD 5550 512MB (Ref. GDDR5) | 1920×1080 – Medium Detail, 0xAA, 4xAF | 42 | 60.7
ATI HD 5550 1GB (Sapphire) | 1920×1080 – Medium Detail, 0xAA, 4xAF | 31 | 44.527

This game is the only one tested where GDDR5 didn’t allow us to increase the settings, and it’s mostly because the game just requires high framerates to be enjoyable. I found that I preferred the extra boost in performance over increasing the detail level, but I admit it’s largely a matter of opinion. Some might prefer using a higher detail level and dropping down to ~35 FPS, which is still rather playable.

Futuremark 3DMark Vantage

Although we generally shun automated gaming benchmarks, we do like to run at least one to see how our GPUs scale when used in a ‘timedemo’-type scenario. Futuremark’s 3DMark Vantage is without question the best such test on the market, and it’s a joy to use, and watch. The folks at Futuremark are experts in what they do, and they really know how to push that hardware of yours to its limit.

The company first started out as MadOnion and released a GPU-benchmarking tool called XLR8R, which was soon replaced with 3DMark 99. Since that time, we’ve seen seven different versions of the software, including two major updates (3DMark 99 Max, 3DMark 2001 SE). With each new release, the graphics get better, the capabilities get better and the sudden hit of ambition to get down and dirty with overclocking comes at you fast.

Similar to a real game, 3DMark Vantage offers many configuration options, although many (including us) prefer to stick to the profiles which include Performance, High and Extreme. Depending on which one you choose, the graphic options are tweaked accordingly, as well as the resolution. As you’d expect, the better the profile, the more intensive the test.

Performance is the stock mode that most use when benchmarking, but it only uses a resolution of 1280×1024, which isn’t representative of today’s gamers. Extreme is more appropriate, as it runs at 1920×1200 and does well to push any single or multi-GPU configuration currently on the market – and will do so for some time to come.

There’s nothing too surprising here, as 3DMark pretty much reiterates the same gains we saw with our real games.

Power & Temperatures

To test our graphics cards for both temperatures and power consumption, we utilize OCCT for stress-testing, GPU-Z for temperature monitoring, and a Kill-a-Watt for power monitoring. The Kill-a-Watt is plugged into its own socket, with only the PC connected to it.

As per our guidelines when benchmarking with Windows, once the room temperature is stable (and reasonable), the test machine is booted up and left to sit at the Windows desktop until things are completely idle. Once things are good to go, the idle wattage is noted, GPU-Z is started up to begin monitoring card temperatures, and OCCT is set up to begin stress-testing.

To push the cards we test to their absolute limit, we use OCCT in full-screen 2560×1600 mode and allow it to run for 30 minutes, which includes a one-minute lull at the start and a three-minute lull at the end. After about 10 minutes, we begin to monitor our Kill-a-Watt to record the maximum wattage.

Given that we’re using reference cards, the temperature results here are not all that important, since the card you purchase will likely feature a different cooler; either way, the performance of AMD’s home-brewed cooler is good. Interestingly, Sapphire’s passive cooler on its own version of the HD 5550 performs much better than AMD’s fan-equipped model.

Performance-wise, GDDR5 made some healthy gains on both of our samples, but as we can see above, it also brought a rather interesting increase in power consumption. The difference isn’t huge, but it’s clear that the GDDR5 ICs draw more power than the DDR2 and DDR3 the original card models used.

Final Thoughts

When AMD first contacted us regarding these GDDR5-upgraded Radeons, I can’t say that I got too excited, especially after almost being put to sleep when taking a look at the HD 5550 last month. But I figured that if AMD was pumped up about the change, there might be some good reason for it. After taking a look for myself, I can understand, and agree with, the company’s enthusiasm.

As I mentioned in the introduction, it’s not uncommon to see lower-end graphics cards throttled by the on-board memory, but the results we’ve seen here really emphasize just how much of a “problem” it is. Imagine purchasing an entry-level car and having the engine throttled, resulting in only half of the available horsepower reaching the wheels. The situation with these cards is not much different.

The GDDR5 benefits were impressive from both cards, but even more so with the HD 5550. In some cases, the performance bonus neared 80%, and no matter how you’re looking at things, that’s incredibly impressive. In the overclocking game, you’d have to really push hardware extremely hard to come close to such a gain. Yet, here we have that gain simply thanks to a memory change.

It can be assumed that the HD 5570 didn’t see quite as much of a gain because we hit a point where the memory wasn’t as severe a bottleneck; our original HD 5550 sample used DDR2, while the HD 5570 used the faster DDR3. Either way, the change in general reaped nice rewards.

AMD Radeon HD 5550 & HD 5570 - GDDR5

What I find most interesting about the GDDR5 upgrade is that it resulted in each model nearing the next one up on the chain. For example, equipped with GDDR5 the HD 5550 came very, very close to the performance of the DDR3 HD 5570, and likewise, the updated HD 5570 fell just behind the HD 5670, a card that has always used GDDR5.

The biggest complaint I had with the HD 5550 I took a look at last month was its pricing. Given the performance, the pricing just didn’t scale at all. With the help of GDDR5, though, pricing and performance seem to scale a lot better. The HD 5550 acts as a baseline at around $75, then there’s the bit-faster HD 5570 at $85, and then the HD 5670 at $100.

At this pricing, I can’t really argue with the performance scaling. But as I’m sure most of you are aware, it’s a rare day when quoted pricing is actually what we see on store shelves. For graphics cards, though, the pricing is often even better than what’s quoted, and it takes only a couple of searches to find that out. So as always, don’t buy the first card you see, but look around and make sure you’re getting the best deal. In a quick search, I see many HD 5670s selling for the quoted price of the HD 5570, and likewise, there are likely to be many HD 5570s priced at HD 5550 levels.

As it is, AMD’s GDDR5-upgraded cards are rather amazing from more than one standpoint. The most important is the fact that these cards are set to cost exactly the same as the older models, but offer fantastic performance gains. These cards are almost like a new generation unto themselves, just from this “simple” change. If only our PCs would see such massive gains from simply upgrading our memory!

Discuss this article in our forums!

Have a comment you wish to make on this article? Recommendations? Criticism? Feel free to head over to our related thread and put your words to our virtual paper! There is no requirement to register in order to respond to these threads, but it sure doesn’t hurt!
