
ASUS Radeon EAH5850

Date: November 16, 2009
Author(s): Rob Williams

If we had an award for the “best bang for the buck”, it would require little thinking to give it to ATI’s Radeon HD 5850. For the price, it offers incredible power, superb power consumption, and of course, DirectX 11 support. We’re taking a look at ASUS’ version here, which along with Dirt 2, includes a surprisingly useful overclocking tool.



Introduction

I hate to admit it, but despite the fact that AMD launched its ATI Radeon HD 5000 series in late September, which included the HD 5870 and HD 5850, we haven’t had a review of the latter card up to this point. Thanks to our friends at ASUS, we received their version of the reference card a couple of weeks ago, and had a blast putting it through its paces. It never ceases to impress me just how much value can be found in GPUs today, and the HD 5850 is no exception.

Take for example the HD 4870, or even the HD 4850. Released last June, these cards were priced at $299 and $199, respectively, and they were fast. So fast, in fact, that the HD 4870 in particular put NVIDIA’s GTX 260, at $100 more, to shame. It was a very interesting time in the graphics card world, to say the least. Today, we have the HD 5870, which costs 33% more than the launch HD 4870, but is twice as powerful.

The benefits of course don’t end there, though. Launch HD 4000 cards had a major issue… heat. Even at idle, the GPU cores could sit at 80°C, and any way you look at it, that’s insane. Those problems are no more when looking at the HD 5000 series, because not only have the temperatures dropped significantly, but the power consumption as well (we’re talking 80W idle to 17W idle… that’s beyond impressive).

If the theme isn’t obvious enough, ATI’s HD 5000 series cards are impressive in almost all regards. They’re so good that it’s becoming a little difficult to hold out for NVIDIA’s Fermi to see what it’s striking back with. If you’ll recall, AMD trampled on NVIDIA’s parade last summer, and if the green team is to be believed, we might see the tables turned when Fermi gets here – and I can’t wait.

Closer Look

But for now, we have AMD’s line-up, and there’s little reason to complain. At the high-end, as mentioned, is the HD 5870, which carries an SRP of $379. Sadly, most e-tailers are pricing them closer to $400, but even at that price-point, the cards can be considered a relative bargain. The next step down is the HD 5850, which has slower clock speeds across the board and 10% fewer stream processors (1,440 vs. 1,600). The differences aren’t stark, but the price is much lower, at a $259 SRP.

Both of these cards might be the best out there right now, but both are plagued with availability issues. There seemed to be ample supply at first, but that went downhill fast. Today, you’re lucky to find either in stock, and the only way to actually get one is to keep checking back to various e-tailers every day. I’m hoping the situation changes soon, but it’s unfortunate that AMD didn’t better plan this out. The company wanted to beat NVIDIA, but if gamers can’t purchase the cards, it’s a problem.

As always, below is AMD’s current GPU line-up. Some cards aren’t so relevant today thanks to newer releases, such as some select cards from the 4000 series, but all are still available for purchase. AMD’s next addition will be its “HD 5970”, the X2 Hemlock card.

| Model | Core MHz | Mem MHz | Memory | Bus Width | Processors |
|----------------|-----------|-------------|---------------|---------|------|
| Radeon HD 5870 | 850 | 1200 | 1024MB | 256-bit | 1600 |
| Radeon HD 5850 | 725 | 1000 | 1024MB | 256-bit | 1440 |
| Radeon HD 5770 | 850 | 1200 | 1024MB | 128-bit | 800 |
| Radeon HD 5750 | 700 | 1150 | 512 – 1024MB | 128-bit | 720 |
| Radeon HD 4890 | 850 – 900 | 975 | 1024MB | 256-bit | 800 |
| Radeon HD 4870 | 750 | 900 | 512 – 2048MB | 256-bit | 800 |
| Radeon HD 4850 | 625 | 993 | 512 – 1024MB | 256-bit | 800 |
| Radeon HD 4770 | 750 | 800 | 512MB | 128-bit | 640 |
| Radeon HD 4670 | 750 | 900 – 1100 | 512 – 1024MB | 128-bit | 320 |
| Radeon HD 4650 | 600 | 400 – 500 | 512 – 1024MB | 128-bit | 320 |

The EAH5850 ASUS sent us is based on ATI’s reference model, so there’s no fancy custom cooler, nor much of anything else to set it apart. As I touched on in our review of Sapphire’s Vapor-X HD 5870, not many companies have opted to dedicate much time or money to ATI’s latest cards, simply because of their availability. It makes no sense to do so when no one can buy them, and it may very well be another month or two before that situation changes.

Like most other launch HD 5000 cards, ASUS’ includes a voucher to download Dirt 2 (via Steam) when it comes out. If that’s a game you planned to buy anyway, and you factor its price into what the GPU itself costs, the card ends up being an even greater value at the end of the day, despite the price-gouging by various e-tailers. Another sweet addition to ASUS’ cards is “Smart Doctor”, an overclocking tool that offers control over the voltage, allowing for even greater overclocks. I’ll touch on that a bit in our overclocking section.

As seen below, ASUS’ card here sticks to the reference design (see? I wasn’t lying), and there’s not too much wrong with that. It might appeal to some and not to others, but what’s important is that the design works. This is the first generation of graphics cards where I don’t feel I have to put down the reference design, because these cards don’t overheat like the older ones did, so even a reference card allows for stable operation and some overclocking headroom.

If you’ve seen any HD 5000 series cards before, the back plate of this one won’t be too surprising. There are two DVI-D ports, along with an HDMI port and a DisplayPort. If you’re still using a VGA monitor (I hope not by now), an adapter is included in the box. And like the rest of AMD’s current line-up, this card supports CrossFireX with another HD 5000 series card.

As the card is based on a reference design, there’s not much else to say. Of all the launch cards, your choice should boil down to whether or not a game is included, the company’s warranty policy, and whether or not you’d use the bundled apps. ASUS’ tools and utilities usually look horrible where UI is concerned, but they’re useful. With that said, let’s move on to a look at our test rig and then the performance results.

Test System & Methodology

At Techgage, we strive to make sure our results are as accurate as possible. Our testing is rigorous and time-consuming, but we feel the effort is worth it. In an attempt to leave no question unanswered, this page contains not only our testbed specifications, but also a fully-detailed look at how we conduct our testing. For an exhaustive look at our methodologies, even down to the Windows Vista installation, please refer to this article.

Test Machine

The below table lists our testing machine’s hardware, which remains unchanged throughout all GPU testing, minus the graphics card. Each card used for comparison is also listed here, along with the driver version used. Each one of the URLs in this table can be clicked to view the respective review of that product, or if a review doesn’t exist, it will bring you to the product on the manufacturer’s website.

| Component | Model |
|---|---|
| Processor | Intel Core i7-975 Extreme Edition – Quad-Core, 3.33GHz, 1.33v |
| Motherboard | Gigabyte GA-EX58-EXTREME – X58-based, F7 BIOS (05/11/09) |
| Memory | Corsair DOMINATOR – DDR3-1333 7-7-7-24-1T, 1.60v |
| ATI Graphics | Radeon HD 5870 1GB (Sapphire) – Catalyst 9.10 |
| | Radeon HD 5850 1GB (ASUS) – Catalyst 9.10 |
| | Radeon HD 5770 1GB (Reference) – Beta Catalyst (10/06/09) |
| | Radeon HD 4890 1GB (Sapphire) – Catalyst 9.8 |
| | Radeon HD 4870 1GB (Reference) – Catalyst 9.8 |
| | Radeon HD 4770 512MB (Gigabyte) – Catalyst 9.8 |
| NVIDIA Graphics | GeForce GTX 295 1792MB (Reference) – GeForce 186.18 |
| | GeForce GTX 285 1GB (EVGA) – GeForce 186.18 |
| | GeForce GTX 275 896MB (Reference) – GeForce 186.18 |
| | GeForce GTX 260 896MB (XFX) – GeForce 186.18 |
| | GeForce GTS 250 1GB (EVGA) – GeForce 186.18 |
| Audio | On-Board Audio |
| Storage | Seagate Barracuda 500GB 7200.11 |
| Power Supply | Corsair HX1000W |
| Chassis | SilverStone TJ10 Full-Tower |
| Display | Gateway XHD3000 30″ |
| Cooling | Thermalright TRUE Black 120 |
| Et cetera | Windows Vista Ultimate 64-bit |

When preparing our testbeds for any type of performance testing, we follow these guidelines:

To aid in keeping our results accurate and repeatable, we prevent certain services in Windows Vista from starting up at boot, because they have a tendency to start up in the background without notice, potentially skewing results. Disabling “Windows Search”, for example, turns off the OS’ indexing, which can at times utilize the hard drive and memory more than we’d like.

For more robust information on how we tweak Windows, please refer once again to this article.

Game Titles

We currently benchmark all of our games at three popular resolutions: 1680×1050, 1920×1080 and 2560×1600. 1680×1050 was chosen as it’s one of the most common resolutions for gamers sporting ~20″ displays. 1920×1080 might stand out, since we’ve always used 1920×1200 in the past, but we didn’t make this change without some serious thought. After taking a look at the current landscape for desktop monitors around ~24″, we noticed that 1920×1200 is definitely on the way out, as more and more models are shipping as native 1080p, so we made the switch. Finally, for high-end gamers, we also benchmark at 2560×1600, a resolution with just about 2x the pixels of 1080p.
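That “2x” figure is easy to sanity-check with a little pixel arithmetic. Here’s a quick Python sketch (plain math, nothing tied to our benchmarking tools) comparing each of our test resolutions against 1080p:

```python
# Compare the pixel counts of our three test resolutions against 1920x1080.
resolutions = {
    "1680x1050": (1680, 1050),
    "1920x1080": (1920, 1080),
    "2560x1600": (2560, 1600),
}

base = 1920 * 1080  # 1080p is the reference point
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / base:.2f}x 1080p)")
```

Run it, and 2560×1600 works out to 4,096,000 pixels, or 1.98x 1080p – close enough to call it double.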

For graphics cards that include less than 1GB of GDDR, we omit Grand Theft Auto IV from our testing, as our chosen detail settings require at least 800MB of available graphics memory. Also, if the card we’re benchmarking doesn’t offer the performance to handle 2560×1600 across most of our titles reliably, only 1680×1050 and 1920×1080 will be utilized.
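Those two selection rules can be expressed in a few lines of code. This is only an illustrative sketch: the card data and the “handles_2560” flag are hypothetical stand-ins, since in practice the 2560×1600 call is a judgment made from prior testing.

```python
# Illustrative sketch of our two test-selection rules. The card dictionary
# fields are hypothetical, not part of any real tool.
GAMES = ["Call of Duty: World at War", "Crysis Warhead", "Grand Theft Auto IV"]
GTA4_VRAM_NEEDED_MB = 800  # our GTA IV settings need ~800MB of graphics memory

def plan_tests(card):
    resolutions = ["1680x1050", "1920x1080"]
    if card["handles_2560"]:  # judged from how the card fares across our titles
        resolutions.append("2560x1600")
    games = [g for g in GAMES
             if g != "Grand Theft Auto IV" or card["vram_mb"] >= GTA4_VRAM_NEEDED_MB]
    return resolutions, games

print(plan_tests({"name": "HD 4770", "vram_mb": 512, "handles_2560": False}))
```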

Because we value results generated by real-world testing, we don’t utilize timedemos whatsoever. The one exception is Futuremark’s 3DMark Vantage. Though it’s not a game, it essentially acts as a robust timedemo. We use it because it’s a standard where GPU reviews are concerned, and we don’t want to deprive our readers of results they expect to see.

All of our results are captured with the help of Beepa’s FRAPS 2.98, while stress-testing and temperature-monitoring are handled by OCCT 3.1.0 and GPU-Z, respectively.
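The two figures we report for each game, minimum and average FPS, fall straight out of a frame-rate log. The sketch below assumes a simple one-reading-per-line FPS file rather than FRAPS’ exact output format, so treat it as illustrative:

```python
# Minimal sketch: reduce a per-second FPS log to the min and average figures
# reported in our tables. The one-number-per-line format is an assumption,
# not FRAPS' actual log layout.
def summarize_fps(path):
    with open(path) as f:
        samples = [float(line) for line in f if line.strip()]
    return min(samples), sum(samples) / len(samples)

minimum, average = summarize_fps("fps_log.txt")  # hypothetical log file
print(f"Min FPS: {minimum:.0f}, Avg FPS: {average:.3f}")
```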

Call of Duty: World at War

Call of Juarez: Bound in Blood

Crysis Warhead

F.E.A.R. 2: Project Origin

Grand Theft Auto IV

Race Driver: GRID

World in Conflict: Soviet Assault

Call of Duty: World at War

The Call of Duty series is one that needs no introduction. Although only six years old, CoD has already attained legendary status where both single-player and multi-player first-person shooters are concerned. From the series’ inception, each game has delivered stellar gameplay that totally engrosses you, thanks in part to creative levels, smart AI and realistic graphics.

World at War is officially the 5th game in the series, and while some hardcore fans claim that Treyarch is simply unable to deliver as high-caliber a game as Infinity Ward, the title did well to hold everyone over until Modern Warfare 2 was released. We’ll soon be replacing our World at War benchmark results with Modern Warfare 2, so feel free to get your goodbyes over with!

Manual Run-through: The level chosen for our testing is “Relentless”, one that depicts the Battle of Peleliu, which has American soldiers advance to capture an airstrip from the Japanese. The level is both exciting to play and incredibly hard on your graphics hardware, making it a perfect choice for our testing.

We’re off to a great start so far, and depending on how you look at things, the results are either fairly interesting, or a bit lacking. Compared to its bigger brother, the HD 5870, the HD 5850 fell a fair bit short, but that’s to be expected. What’s impressive is just how in-line the HD 5850 came with NVIDIA’s GTX 285. The latter came out on top, but by an insignificant margin, and the GTX 285 is at minimum $70 more expensive.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---|---|---|---|
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail, 4xAA | 22 | 61.988 |
| ATI HD 5870 1GB (Reference) | 2560×1600 – Max Detail, 4xAA | 29 | 49.698 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA | 24 | 41.563 |
| ATI HD 5850 1GB (ASUS) | 2560×1600 – Max Detail, 4xAA | 20 | 41.354 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 4xAA | 22 | 39.187 |
| ATI HD 4890 1GB (Sapphire) | 2560×1600 – Max Detail, 0xAA | 21 | 42.778 |
| ATI HD 4870 1GB (Reference) | 2560×1600 – Normal Detail, 0xAA | 23 | 42.097 |
| ATI HD 5770 1GB (Reference) | 2560×1600 – Normal Detail, 0xAA | 19 | 40.066 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Normal Detail, 0xAA | 20 | 38.685 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Normal Detail, 0xAA | 19 | 37.054 |
| ATI HD 4770 512MB (Gigabyte) | 1920×1080 – Max Detail, 4xAA | 19 | 36.639 |

While I’m not entirely gung-ho over the minimum FPS of 20, that’s a minor nick when the average FPS is 40+. During my playtime, I didn’t notice any slowdown, and in this game, virtually every card dips far below its average FPS, and part of that could simply be due to how the game or level is designed. In the end, the performance with the HD 5850 was great, so totally maxed out settings it is.

Call of Juarez: Bound in Blood

When the original Call of Juarez was released, it brought forth something unique… a western-styled first-person shooter. That’s simply not something we see too often, so for fans of the genre, its release was a real treat. Although it didn’t really offer the best gameplay we’ve seen from a recent FPS title, its storyline and unique style made it well-worth testing.

After we retired the original title from our suite, we anxiously awaited the sequel, Bound in Blood, in hopes that the series could be re-introduced into our testing. Thankfully, it could, thanks in part to its fantastic graphics, built around the Chrome Engine 4, and gameplay that improves on the original’s. It was also well-received by game reviewers, which is always a good sign.

Manual Run-through: The level chosen here is Chapter I, and our starting point is about 15 minutes into the mission, where we stand atop a hill that overlooks a large river. We make our way across the hill and ultimately through a large trench, and we stop our benchmarking run shortly after we blow up a gas-filled barrel.

The HD 5850 might be the little sibling of the HD 5870, but results like these prove that it’s not a problem. At the game’s absolute max settings, the card managed to handle an average 69 FPS, and never dipped below 50 FPS. I’d be hard-pressed to meet someone not impressed by that kind of performance, especially in a great-looking title such as Bound in Blood.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---|---|---|---|
| ATI HD 5870 1GB (Reference) | 2560×1600 – Max Detail | 58 | 81.945 |
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail | 37 | 80.339 |
| ATI HD 5850 1GB (ASUS) | 2560×1600 – Max Detail | 51 | 69.165 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail | 45 | 54.428 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail | 41 | 51.393 |
| ATI HD 4890 1GB (Sapphire) | 2560×1600 – Max Detail | 36 | 51.334 |
| ATI HD 4870 1GB (Reference) | 2560×1600 – Max Detail | 31 | 46.259 |
| ATI HD 5770 1GB (Reference) | 2560×1600 – Max Detail | 28 | 45.028 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail | 35 | 44.023 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail | 25 | 33.751 |
| ATI HD 4770 512MB (Gigabyte) | 2560×1600 – Normal Detail | 24 | 35.434 |

Is this much of a surprise? The game ran like a dream at the game’s top settings, so that becomes our “best playable” without a second thought.

Crysis Warhead

Like Call of Duty, Crysis is another series that doesn’t need much of an introduction. Thanks to the fact that almost any comments section for a PC performance-related article asks, “Can it run Crysis?”, even those who don’t play computer games no doubt know what Crysis is. When Crytek first released Far Cry, it delivered an incredible game engine with huge capabilities, and Crysis simply took things to the next level.

Although the sequel, Warhead, has been available for just about a year, it still manages to push the highest-end systems to their breaking point. It wasn’t until this past January that we finally found a graphics solution to handle the game at 2560×1600 at its Enthusiast level, but even that was without AA! Something tells me Crysis will remain the de facto standard for GPU benchmarking for a while yet.

Manual Run-through: Whenever we have a new game in-hand for benchmarking, we make every attempt to explore each level of the game to find out which is the most brutal towards our hardware. Ironically, after spending hours exploring this game’s levels, we found the first level in the game, “Ambush”, to be the hardest on the GPU, so we stuck with it for our testing. Our run starts from the beginning of the level and stops shortly after we reach the first bridge.

If I were a bit more ambitious, I think I’d move Crysis results to the end of the review, because they’re so depressing. While CoD and CoJ delivered impressive FPS reports, Crysis never fails to dampen the mood by showing off sub-30 FPS results at the high-end resolution of 2560×1600. But, while not entirely desirable at the high-end, the game ran great at 1080p. The “Enthusiast” mode is still an elusive choice, however.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---|---|---|---|
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Gamer, 0xAA | 19 | 40.381 |
| ATI HD 5870 1GB (Reference) | 2560×1600 – Gamer, 0xAA | 20 | 32.955 |
| ATI HD 5850 1GB (ASUS) | 2560×1600 – Mainstream, 0xAA | 28 | 52.105 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Mainstream, 0xAA | 27 | 50.073 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Mainstream, 0xAA | 24 | 47.758 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Mainstream, 0xAA | 21 | 40.501 |
| ATI HD 4890 1GB (Sapphire) | 2560×1600 – Mainstream, 0xAA | 19 | 39.096 |
| ATI HD 4870 1GB (Reference) | 2560×1600 – Mainstream, 0xAA | 20 | 35.257 |
| ATI HD 5770 1GB (Reference) | 2560×1600 – Mainstream, 0xAA | 20 | 35.256 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Mainstream, 0xAA | 18 | 34.475 |
| ATI HD 4770 512MB (Gigabyte) | 1920×1080 – Mainstream, 0xAA | 19 | 46.856 |

Make no mistake… the HD 5850 is one powerful card, but it’s met its match with Crysis Warhead. In order to achieve playable settings at 2560×1600, the profile had to be dropped to the still good-looking Mainstream. With that done, the game was smooth as butter – totally enjoyable.

F.E.A.R. 2: Project Origin

Five out of the seven current games we use for testing are either sequels, or titles in an established series. F.E.A.R. 2 is one of the former, following up on the very popular First Encounter Assault Recon, released in fall of 2005. This horror-based first-person shooter brought to the table fantastic graphics, ultra-smooth gameplay, the ability to blow massive chunks out of anything, and also a very fun multi-player mode.

Three-and-a-half years later, we saw the introduction of the game’s sequel, Project Origin. As we had hoped, this title improved on the original where gameplay and graphics were concerned, and it was a no-brainer to include it in our testing. The game is gorgeous, and there’s much destruction to be had (who doesn’t love blowing expensive vases to pieces?). The game is also rather heavily scripted, which aids in producing repeatable results in our benchmarking.

Manual Run-through: The level used for our testing here is the first in the game, about ten minutes in. The scene begins with a travel up an elevator, with a robust city landscape behind us. Our run-through begins with a quick look at this cityscape, and then we proceed through the level until the point when we reach the far door as seen in the above screenshot.

F.E.A.R. 2 is another one of those games that looks and plays great at the same time, proven by the screenshot above and also the incredible FPS reports. NVIDIA’s dual-GPU GTX 295 was the only card aside from the HD 5870 to beat out the HD 5850, but neither is much of a surprise. The performance in the end is still fantastic, with yet another minimum FPS of 51 at 2560×1600. That result would make a great average FPS, let alone minimum FPS!

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---|---|---|---|
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 45 | 95.767 |
| ATI HD 5870 1GB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 65 | 91.34 |
| ATI HD 5850 1GB (ASUS) | 2560×1600 – Max Detail, 4xAA, 16xAF | 51 | 73.647 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA, 16xAF | 39 | 62.014 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 37 | 57.266 |
| ATI HD 4890 1GB (Sapphire) | 2560×1600 – Max Detail, 4xAA, 16xAF | 38 | 56.726 |
| ATI HD 4870 1GB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 34 | 50.555 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail, 4xAA, 16xAF | 29 | 48.110 |
| ATI HD 5770 1GB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 31 | 47.411 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA, 16xAF | 24 | 36.331 |
| ATI HD 4770 512MB (Gigabyte) | 2560×1600 – Normal Detail, 0xAA, 4xAF | 30 | 43.215 |

Because the game runs without a hitch at its maxed-out settings, that’s the best playable here.

Grand Theft Auto IV

If you look up the definition of “controversy”, Grand Theft Auto should be listed. If it’s not, then that should be a crime, because throughout GTA’s many titles, there’s been more of it than you can shake your fist at. At the series’ beginning, the games were rather simple, and didn’t stir up too much outrage among detractors. But once GTA III and its successors came along, the developers embraced all the controversy that came their way, and why not? It helped spur incredible sales numbers.

Grand Theft Auto IV is yet another continuation in the series, though it follows no storyline from the previous titles. Liberty City, loosely based on New York City, is absolutely huge, with much to explore. So much so, in fact, that you could literally spend hours just wandering around, ignoring the game’s missions, if you wanted to. It also happens to be incredibly stressful on today’s computer hardware, similar to Crysis.

Manual Run-through: After the first minor mission in the game, you reach an apartment. Our benchmarking run starts from within this room. From here, we run out the door, down the stairs and into an awaiting car. We then follow a specific path through the city, driving for about three minutes total.

Crysis is one of the most gluttonous games on the market today, and GTA IV doesn’t follow too far behind. The game as a whole requires a beefy system to run at all, and if your system only barely meets its requirements, the gains from faster graphics hardware shrink the higher you go. Memory is king in this game, and it’s a prime example of the benefits that 2GB cards can offer.

The results here are interesting, though. At our lower resolutions, the HD 5850 placed right below the HD 5870, but at 2560×1600, it was knocked down a few notches. I’m uncertain why this is the case, but the result was very repeatable. To add to it, although the FPS looks decent in our graph, the game had noticeable skips during gameplay. I find this odd, as the HD 5870 had no such issue.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---|---|---|---|
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – H/H/VH/H/VH Detail | 27 | 52.590 |
| ATI HD 5870 1GB (Reference) | 2560×1600 – H/H/VH/H/VH Detail | 29 | 45.767 |
| NVIDIA GTX 260 896MB (GBT SOC) | 2560×1600 – High Detail | 30 | 46.122 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – High Detail | 32 | 45.573 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – High Detail | 30 | 44.703 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – High Detail | 24 | 38.492 |
| ATI HD 5850 1GB (ASUS) | 1920×1080 – High Detail | 27 | 42.102 |
| ATI HD 4890 1GB (Sapphire) | 1920×1080 – High Detail | 32 | 50.300 |
| ATI HD 4870 1GB (Reference) | 1920×1080 – High Detail | 33 | 48.738 |
| ATI HD 5770 1GB (Reference) | 1920×1080 – High Detail | 33 | 47.719 |
| NVIDIA GTS 250 1GB (EVGA) | 1920×1080 – High Detail | 21 | 34.257 |

As much as I hated to do it, I set the best playable at 1920×1080. If the game had no visible skips at 2560×1600, the slightly lower performance would have been acceptable, but no one, and I mean no one, would want to put up with such skipping during prolonged play.

Race Driver: GRID

If you primarily play games on a console, your choices for quality racing games are plentiful. On the PC, that’s not so much the case. While there are a good number, there aren’t many of any given type, from sim to arcade. So when Race Driver: GRID first saw its release, many gamers were excited, and for good reason. It’s not a sim in the truest sense of the word, but it’s certainly not arcade, either. It’s somewhere in between.

The game happens to be great fun, though, and similar to console titles like Project Gotham Racing, you need a lot of skill to succeed at the game’s default difficulty level. And like most great racing games, GRID happens to look absolutely stellar, and each of the game’s locations looks very similar to its real-world counterpart. All in all, no racing fan should ignore this one.

Manual Run-through: For our testing here, we choose the city where both Snoop Dogg and Sublime hit their fame, the LBC, also known as Long Beach City. We choose this level because it’s not overly difficult, and also because it’s simply nice to look at. Our run consists of an entire 2-lap race, with the cars behind us for almost the entire race.

Despite being quite the powerful card, the GTX 295 doesn’t perform too well in games that don’t take advantage of multi-GPU setups. GRID is the obvious exception, and as a result, the HD 5850 falls in right behind it. Still, it offers extremely good performance, and trust me, in a racing title, 80+ FPS is appreciated.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---|---|---|---|
| ATI HD 5870 1GB (Reference) | 2560×1600 – Max Detail, 4xAA | 87 | 106.43 |
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail, 4xAA | 84 | 103.958 |
| ATI HD 5850 1GB (ASUS) | 2560×1600 – Max Detail, 4xAA | 68 | 84.732 |
| ATI HD 4890 1GB (Sapphire) | 2560×1600 – Max Detail, 4xAA | 57 | 70.797 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA | 54 | 66.042 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 4xAA | 52 | 63.617 |
| ATI HD 4870 1GB (Reference) | 2560×1600 – Max Detail, 4xAA | 51 | 63.412 |
| ATI HD 5770 1GB (Reference) | 2560×1600 – Max Detail, 4xAA | 45 | 56.980 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail, 4xAA | 45 | 54.809 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA | 35 | 43.663 |
| ATI HD 4770 512MB (Gigabyte) | 1920×1080 – Max Detail, 4xAA | 55 | 69.403 |

GRID offers anti-aliasing options above 4xAA, but in all my tests in the past, they’re extremely buggy, so they’re a non-option. Given that, and the performance of our 2560×1600 setting, that remains our best playable.

World in Conflict: Soviet Assault

I admit that I’m not a huge fan of RTS titles, but World in Conflict intrigued me from the get-go. After all, so many war-based games continue to follow the same story-lines we already know, and WiC is different. It sidesteps the actual political and economic collapse of the Soviet Union in the late 80’s, and instead offers a storyline in which the USSR, in order to remain in power, goes to war.

Many RTS games, with their advanced AI, tend to lean on the CPU in order to deliver smooth gameplay, but WiC taxes both the CPU and GPU, and the graphics prove it. Throughout the game’s missions, you’ll see gorgeous vistas and explore areas from deserts and snow-packed lands, to fields and cities. Overall, it’s a real treat for the eyes – especially since you’re able to zoom right down to the ground and see the action up-close.

Manual Run-through: The level we use for testing is the 7th campaign of the game, called Insurgents. Our saved game plants us towards the beginning of the mission with two squads of five, and two snipers. The run consists of bringing our men to action, and hovering the camera around throughout the duration. The entire run lasts between three and four minutes.

The performance here shows that WiC scales well with multi-GPU cards, but it again shows the HD 5850 beating out the much more expensive GTX 285.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---|---|---|---|
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail, 8xAA, 16xAF | 40 | 55.819 |
| ATI HD 5870 1GB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 35 | 47.195 |
| ATI HD 5850 1GB (ASUS) | 2560×1600 – Max Detail, 4xAA, 16xAF | 29 | 40.581 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 0xAA, 16xAF | 34 | 49.514 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 0xAA, 16xAF | 36 | 46.186 |
| ATI HD 4890 1GB (Sapphire) | 2560×1600 – Max Detail, 0xAA, 16xAF | 31 | 46.175 |
| ATI HD 4870 1GB (Reference) | 2560×1600 – Max Detail, 0xAA, 16xAF | 28 | 40.660 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail, 0xAA, 16xAF | 23 | 39.365 |
| ATI HD 5770 1GB (Reference) | 2560×1600 – Max Detail, 0xAA, 16xAF | 28 | 37.389 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail, 0xAA, 4xAF | 24 | 32.453 |
| ATI HD 4770 512MB (Gigabyte) | 1920×1080 – Max Detail, 4xAA, 16xAF | 22 | 31.561 |

40 FPS on average might seem like ideal performance for an RTS, but this game thrives on higher FPS. Without it, there are occasional tears or skips, and while they’re not all too noticeable, or game-breaking, some might prefer lower graphics settings, or the removal of anti-aliasing. In my case, I find 40 FPS to be a general minimum, and the HD 5850 hits that mark almost spot-on. Your personal preference might differ from mine, and if that’s the case, a simple drop to 2xAA or even 0xAA will speed things back up, with very little noticeable change to the graphics.

Futuremark 3DMark Vantage

Although we generally shun automated gaming benchmarks, we do like to run at least one to see how our GPUs scale when used in a ‘timedemo’-type scenario. Futuremark’s 3DMark Vantage is without question the best such test on the market, and it’s a joy to use, and watch. The folks at Futuremark are experts in what they do, and they really know how to push that hardware of yours to its limit.

The company first started out as MadOnion and released a GPU-benchmarking tool called XLR8R, which was soon replaced with 3DMark 99. Since that time, we’ve seen seven different versions of the software, including two major updates (3DMark 99 Max, 3DMark 2001 SE). With each new release, the graphics get better, the capabilities get better and the sudden hit of ambition to get down and dirty with overclocking comes at you fast.

Similar to a real game, 3DMark Vantage offers many configuration options, although many (including us) prefer to stick to the profiles which include Performance, High and Extreme. Depending on which one you choose, the graphic options are tweaked accordingly, as well as the resolution. As you’d expect, the better the profile, the more intensive the test.

Performance is the stock mode that most use when benchmarking, but it only uses a resolution of 1280×1024, which isn’t representative of today’s gamers. Extreme is more appropriate, as it runs at 1920×1200 and does well to push any single or multi-GPU configuration currently on the market – and will do so for some time to come.

3DMark Vantage is a highly scalable benchmark, taking full advantage of all available shader processors and GPU cores, along with copious amounts of memory. Given that, our results above scale each card fairly accurately with its real-world performance.

Overclocking ASUS’ Radeon HD 5850

Before tackling our overclocking results, let’s first clear up what we consider to be a real overclock and how we go about achieving it. If you read our processor reviews, you might already be aware that we don’t care too much for an unstable overclock. It might look good on paper, but if it’s not stable, then it won’t be used. Very few people purchase a new GPU for the sole purpose of finding the maximum overclock, which is why we focus on finding what’s stable and usable.

To find the max stable overclock on an ATI card, we stick to using ATI’s Catalyst Overdrive tool. Compared to what’s available on the NVIDIA side, it’s quite limited in the top-end, but it’s the most robust and simplest solution to use. For NVIDIA, we use EVGA’s Precision, which allows us to reach heights that are in no way sane – a good thing.

Once we find what we believe might be a stable overclock, the card is put through 30 minutes of torture with the help of OCCT 3.0’s GPU stress-test, which we find pushes any graphics card harder than any other stress-tester we’ve used. If the card passes there, we verify further by running it through two consecutive runs of 3DMark Vantage’s Extreme profile. Finally, games are quickly loaded and tested to assure we haven’t introduced any side-effects.

If all these tests pass without issue, we consider the overclock to be stable.
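Put another way, the search itself is just a step-and-verify loop. The sketch below is a simplified model of that process, not a working tool: set_core_clock() and passes_stability_suite() are hypothetical stand-ins for the manual steps, since neither Catalyst Overdrive nor OCCT exposes a scriptable API like this.

```python
import random

def set_core_clock(mhz):
    # Hypothetical stand-in for dialing a clock into Catalyst Overdrive.
    print(f"Core clock set to {mhz}MHz")

def passes_stability_suite():
    # Hypothetical stand-in for the full verification described above: a
    # 30-minute OCCT torture run, two 3DMark Vantage Extreme passes, and
    # quick game tests. The random outcome is purely for illustration.
    return random.random() > 0.2

def find_max_stable_clock(stock_mhz, ceiling_mhz, step_mhz=5):
    best = stock_mhz
    candidate = stock_mhz + step_mhz
    while candidate <= ceiling_mhz:
        set_core_clock(candidate)
        if not passes_stability_suite():
            break  # first failure ends the search
        best = candidate
        candidate += step_mhz
    set_core_clock(best)  # fall back to the highest clock that passed
    return best

# The HD 5850's stock core is 725MHz; the ceiling here is an arbitrary example.
print(find_max_stable_clock(stock_mhz=725, ceiling_mhz=850))
```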


In my review of Sapphire’s Radeon HD 5870 last week, I complained that ATI’s Catalyst Control Center deprives users of real overclocking ability. The limits are low, and in turn, people can’t push their cards very far. So color me surprised when I saw that the overclocking panel for the HD 5850 was wide open, with limits so high they’d likely be unreachable even under exotic cooling. This of course is a good thing, since it actually allows you some flexibility.

So unlike with the HD 5870, I was excited to benchmark the card, since I knew I’d be able to (or would be likely to) hit an impressive overclock. Better yet, because ASUS bundles in an application called “Smart Doctor”, even the voltage can be increased, making the possibility of achieving an incredible overclock that much better.

Because not everyone is thrilled by the idea of cranking up their GPU’s voltage levels, I found the maximum stable overclock both with and without a voltage increase. I didn’t gather performance data for the voltage-increased overclock, but I did verify that it was stable. The stock clocks for the HD 5850 are 725MHz Core and 1000MHz Memory, and the max stable overclock without the voltage increase was 785MHz Core and 1000MHz Memory.

That’s right… there was no memory overclock whatsoever. Raising it even just 5MHz would cause a crash after a couple of minutes in our OCCT stress test, so I had to focus on the Core alone. A 60MHz boost might not be the most impressive overclock in the world, but it’s still significant enough to benchmark with:

The real-world gains seen when overclocking a GPU tend to be rather slim, and this one is no different. There are gains, and the boost required minimal effort, so depending on how badly you want to wring every last bit of performance out of the card, the overclock may or may not be worth it. What might be worth it, though, for those willing to play around with voltages, is ASUS’ Smart Doctor tool, which allows you to push the settings far beyond what would otherwise be considered stable.

I didn’t quite expect it, but increasing the voltage even a wee bit on this card made a huge difference in overclocking potential. I hate to admit it, but that was so much the case that I ended up having to stop overclocking because I was spending too much time on it. So what I hit, I don’t believe is even the “max”, because after spending about eight hours on overclocking (while doing other things at the same time, of course), I just had to stop.

In the end, what I found to be stable is what’s seen in the screenshot above, 803MHz Core and 1200MHz Memory. Without a voltage boost, I was unable to go 5MHz above 1000MHz on the memory, but with a minor bump from 1.087v to 1.12v, we gained at least 200MHz. For those interested in a 3DMark Score at that overclock, it achieved 7332 3DMarks on the Extreme profile, with a GPU Score of 7084.

Power & Temperatures

To test our graphics cards for both temperatures and power consumption, we utilize OCCT for the stress-testing, GPU-Z for the temperature monitoring, and a Kill-a-Watt for power monitoring. The Kill-a-Watt is plugged into its own socket, with only the PC connected to it.

As per our guidelines when benchmarking with Windows, when the room temperature is stable (and reasonable), the test machine is booted up and left to sit at the Windows desktop until things are completely idle. Once things are good to go, the idle wattage is noted, GPU-Z is started up to begin monitoring card temperatures, and OCCT is set up to begin stress-testing.

To push the cards we test to their absolute limit, we use OCCT in full-screen 2560×1600 mode and allow it to run for 30 minutes, which includes a one-minute lull at the start and a three-minute lull at the end. After about 10 minutes, we begin to monitor our Kill-a-Watt to record the maximum wattage.
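Reducing the readings to the figures we report is then trivial; here’s a sketch with placeholder numbers (not measured data):

```python
# Reduce a series of manually-noted Kill-a-Watt readings to the idle and
# max-load wattages we report. The sample values are placeholders only.
idle_readings = [158, 157, 158, 159]        # watts, sitting at the desktop
load_readings = [328, 334, 331, 336, 333]   # watts, noted during the OCCT run

idle_w = min(idle_readings)
max_load_w = max(load_readings)
print(f"Idle: {idle_w}W | Max load: {max_load_w}W | Delta: {max_load_w - idle_w}W")
```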

Impressive results here, as expected. The room temperature was a bit lower when we tested the HD 5850, but even so, it’s interesting to see just how effective Sapphire’s Vapor-X cooler is: the HD 5870 sat in a warmer room and should be the hotter card, yet proved 1°C cooler overall. But that in itself doesn’t matter much, because the other comparisons are more important, such as the one pitting the HD 5850 against the lowly GTS 250. The former is much, much more powerful, yet retains modest temperatures. The same goes for power consumption, with the HD 5850 drawing 33W less at max load than the GTS 250.

Final Thoughts

It may have been a few months since AMD released its latest HD 5000 series, but I still haven’t gotten over the initial “Damn, what a great card” feeling. There’s no real competition at the high-end right now, and in some ways, that’s unfortunate, mainly because of none other than availability. All four of the released HD 5000 cards have some serious muscle, but when the top two are impossible to find in stores, it’s not just a problem… it’s a major problem.

I hope to see both the HD 5850 and HD 5870 reach greater availability soon, and I’m sure AMD itself is thinking of nothing else. After all, NVIDIA won’t have a real competitor to Cypress until early next year, so between now and then, the lack of availability is hurting AMD’s bottom-line, and badly. AMD isn’t exactly cleaning house with its CPUs (though the situation is improving), so its graphics cards are the perfect opportunity to increase revenue. It’s an unfortunate situation all-around.

Despite the lack of available cards, the HD 5850 is an incredible offering. Even at the gouged price of $300 (the SRP is $259), it offers great bang for the buck. The closest competitor is NVIDIA’s GTX 285, which costs at least $70 more but rarely surpasses the performance of the HD 5850, so a purchase of ATI’s card at this point in time is a no-brainer. We’re not just talking better performance here, but also improved power consumption, lower temperatures, and of course, technologies such as DirectX 11, Eyefinity and so on.

I’m sure I don’t have to repeat myself, but I will. The HD 5850 is a great card, and while ASUS hasn’t done much special to this one in particular, I’m sure models with after-market coolers will begin to trickle out as soon as availability improves. At that point, overclocking might become even more impressive, although it’s hard to say for certain, since heat never proved to be a limiting factor with ours (that I know of).

At $300, this card offers tons of performance and cool extra features including DirectX 11 support (which will become more important next year); it runs cool, has superb power consumption, and comes with a free $40 DirectX 11 game. There’s nothing to dislike here. Well, except for the availability.


ASUS Radeon HD 5850 (EAH5850)

Discuss this article in our forums!

Have a comment you wish to make on this article? Recommendations? Criticism? Feel free to head over to our related thread and put your words to our virtual paper! There is no requirement to register in order to respond to these threads, but it sure doesn’t hurt!
