
ATI Radeon HD 5770 – DirectX 11 for the Masses

Date: October 13, 2009
Author(s): Rob Williams

AMD may have released its first Evergreen GPUs mere weeks ago, but don’t think it’s slowing down for anybody. The company has followed up with its first mid-range parts, belonging to the HD 5700 series. Performance is much more modest on these new cards, but no features have been scrapped. It’s all here… DirectX 11, Eyefinity and more.



Introduction

When AMD released its R700 series of graphics cards last summer, the world had no choice but to be wowed. The company had seen a relative drought for the few years prior to that launch, and some began to wonder if the acquisition by AMD ended up doing more harm than good for ATI. Well, that launch proved the opposite, because you have to imagine that even NVIDIA was taken by surprise. Most of us were proven wrong, and it couldn’t have been sweeter.

Since both AMD’s and NVIDIA’s GPU launches last summer, though, the graphics card scene hasn’t been too interesting. We’ve seen many launches, but for the most part, it’s been rehash after rehash… downclocked or overclocked parts. But last month, AMD shook things up once again with the unveiling of its Evergreen architecture, and the first cards to use it, codenamed Cypress.

I won’t rehash what many of the launch articles said, but both the HD 5870 and HD 5850 could be summed up into a single word: “Whoa”.

It’s true. As enthusiasts, we dealt with a lack of exciting GPU launches for almost a year-and-a-half, and then AMD delivered a feature-rich architecture and cards about twice as fast as the previous generation. Thanks to the Terascale 2 engine, both launch cards are capable of huge compute performance… up to 2.72 TFLOPS. To put that into perspective, when the HD 4000-series launched last summer, we finally saw the 1.0 TFLOP barrier broken on GPUs… and here, we’re almost triple that.

Compared directly to ATI’s last-generation cards, Cypress offers twice the number of stream processors, ROPs and texture units, which in layman’s terms essentially means twice the compute power. In addition, the design was restructured to maximize efficiency between all of the components, from how the texture units fetch data from the SIMD engines, to the addition of multiple memory controllers to reduce bottlenecks as much as possible.

Of course, what ATI boasts most about on its Evergreen cards is the addition of DirectX 11 support, which the company is highly optimistic about. Although I feel like it took forever for DX10 to gain any traction at all in the marketplace, I have high hopes for DX11 based on the demos I’ve seen in recent weeks. Tessellation in particular is one addition to DirectX that’s rather impressive, as it allows for more realistic models without the expected performance hit.

Other key DX11 features include DirectCompute 11 (alongside OpenCL support), improved shadowing techniques, HDR texture compression, some physics acceleration, multi-threading, and one of my favorites, order-independent transparency. OIT, simply put, allows semi-transparent objects to overlap without the engine having to sort them first, so performance won’t take the usual hit. Scenes will also appear more realistic, with the best example I can give being a scenario where you’re looking through a glass window at a growing smoke cloud outside it. Typically, game engines need to sort these layers to figure out the best end result, but with OIT, no manual sorting is required… it’s handled automatically, and the result you see is hopefully ideal.
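
To illustrate why draw order matters in the first place, here’s a tiny Python sketch of the standard “over” compositing operator (my own illustration of the general problem, not ATI’s implementation): blending the same two semi-transparent layers in opposite orders produces visibly different colors, which is exactly the sorting burden OIT lifts from the engine.

```python
# Standard "over" blending: src is the new semi-transparent layer,
# dst is the color already in the framebuffer.
def over(src_rgb, src_alpha, dst_rgb):
    return tuple(s * src_alpha + d * (1.0 - src_alpha)
                 for s, d in zip(src_rgb, dst_rgb))

background = (0.0, 0.0, 0.0)        # dark scene behind everything
glass = ((0.2, 0.4, 1.0), 0.5)      # bluish window pane, 50% opaque
smoke = ((0.8, 0.8, 0.8), 0.6)      # grey smoke cloud, 60% opaque

# Correct order: far layer (smoke) first, near layer (glass) last.
correct = over(glass[0], glass[1], over(smoke[0], smoke[1], background))

# Wrong order: near layer drawn first, far layer drawn last.
wrong = over(smoke[0], smoke[1], over(glass[0], glass[1], background))

print(correct)  # ~(0.34, 0.44, 0.74)
print(wrong)    # ~(0.52, 0.56, 0.68) -- a noticeably different result
```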

Why am I bothering to rehash all of this? It’s because both the HD 5770 and HD 5750 support every single feature that the high-end launch cards do, so aside from performance, you lose nothing. It’s also interesting to note just how fast AMD is releasing these cards. It’s rare to see any company release a product, then release a scaled-down version mere weeks later, but that’s just what happened here. It’s not as simple as tweaking something and releasing it as a new product… these are new chips derived from the Cypress design, so the turn-around time is incredible.

This is one thing that AMD enjoys boasting about, and arguably, it has the right to. The company’s Evergreen-based launches aren’t quite over, as Hemlock, the “X2” card, is due within the next few weeks as well, and the lower-end Redwood and Cedar (fairly difficult to figure out this codename theme, huh?) are due in Q1 2010. AMD has been busy, it’s as simple as that. So, with that said, let’s take a closer look at what the HD 5770 and HD 5750 bring to the table.

Closer Look at ATI’s Radeon HD 5700 Series

It’s time to get into what this article’s all about… our look at ATI’s Radeon HD 5770. This is of course the company’s brand-new mid-range graphics card, and it will carry an SRP of $159, putting it right below NVIDIA’s GTX 260 in terms of retail price. Over the past few months, there have been many rumors that the HD 5770 would be comparable in performance to the HD 4870, and for the most part, that’s true. Both cards share a similar number of stream processors, but they also trade various other pros and cons, with the overall nod going towards the HD 4870.

The easiest way to look at the HD 5770 is to consider it as being half of an HD 5870, because in most regards, that’s exactly the case. The block diagram for the HD 5770 is seen above, and as you can see, it is indeed almost like the HD 5870’s, but cut in half. There are some minor detail changes, and the end result is that the HD 5870 has slightly more than double the number of transistors, but nothing is lacking from the HD 5770 from a features standpoint… only performance.

To help prove this “half” metric, take a look at the table below. Rather than comparing all four of the current HD 5000-series cards, I opted to single out the HD 5870 and HD 5770, as they’re both the “top card” in their respective category (mid-range vs. high-end). As you can see, the HD 5870 pretty much doubles everything, from the number of stream processors to the number of texture units and ROPs. Of course, the various performance measurements are doubled as well. The exception is the memory configuration: both cards feature 1GB of GDDR5 at the same 1200MHz memory clock, so the data rate is identical. The GPU core clock is also the same on both cards, at 850MHz.

Specification | Radeon HD 5770 | Radeon HD 5870
Process | 40nm | 40nm
Transistors | 1.04B | 2.15B
Engine Clock | 850 MHz | 850 MHz
Stream Processors | 800 | 1600
Compute Performance | 1.36 TFLOPS | 2.72 TFLOPS
Texture Units | 40 | 80
Texture Fillrate | 34.0 GTexel/s | 68.0 GTexel/s
ROPs | 16 | 32
Pixel Fillrate | 13.6 GPixel/s | 27.2 GPixel/s
Z/Stencil | 54.4 GSamples/s | 108.8 GSamples/s
Memory Type | GDDR5 | GDDR5
Memory Clock | 1200 MHz | 1200 MHz
Memory Data Rate | 4.8 Gbps | 4.8 Gbps
Memory Bus | 128-bit | 256-bit
Memory Bandwidth | 76.8 GB/s | 153.6 GB/s
Maximum Board Power | 108W | 188W
Idle Board Power | 18W | 27W

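As a quick sanity check of the table above, the headline numbers all fall out of a few standard formulas: peak compute = stream processors × 2 operations (multiply-add) × clock, fillrates = units × clock, and memory bandwidth = (bus width ÷ 8) × data rate. The short snippet below is simply my own arithmetic reproducing AMD’s figures for both cards.

```python
def derive_specs(shaders, tmus, rops, core_mhz, bus_bits, data_rate_gbps):
    core_ghz = core_mhz / 1000.0
    return {
        "TFLOPS": shaders * 2 * core_ghz / 1000.0,  # 2 FLOPs per shader per clock (MAD)
        "GTexel/s": tmus * core_ghz,
        "GPixel/s": rops * core_ghz,
        "GB/s": (bus_bits / 8) * data_rate_gbps,
    }

print(derive_specs(800, 40, 16, 850, 128, 4.8))
# HD 5770: 1.36 TFLOPS, 34.0 GTexel/s, 13.6 GPixel/s, 76.8 GB/s
print(derive_specs(1600, 80, 32, 850, 256, 4.8))
# HD 5870: 2.72 TFLOPS, 68.0 GTexel/s, 27.2 GPixel/s, 153.6 GB/s
```
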
There’s one “loss” suffered by the HD 5770 that’s worth noting, though… the power consumption. The given TDP of the HD 5770 isn’t exactly half, nor would we expect it to be, but it’s impressive nonetheless… just 18W idle. Finally, we’re seeing GPU companies taking power consumption seriously. Even if you’re the furthest thing from an enviro-nut, I’m confident most would agree that a GPU idling at 18W is far better than one that idles at 80W.

I can’t stress enough how important these gains (or rather, losses) are, because just imagine how much power is consumed by idle GPUs the world over, compared to what it could be like with GPUs such as these. The differences are huge. To put it in perspective, the HD 4870, which offers performance similar to the HD 5770’s, idles at 80W. ATI has effectively decreased idle power consumption by 77.5% in just one generation.
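
To put some rough numbers to that, here’s a quick back-of-the-envelope comparison of my own, assuming a machine left on around the clock and sitting idle (a simplification, of course, but it shows the scale of the difference).

```python
idle_old, idle_new = 80.0, 18.0          # watts at idle: HD 4870 vs HD 5770
hours_per_year = 24 * 365

for label, watts in [("HD 4870", idle_old), ("HD 5770", idle_new)]:
    kwh = watts / 1000.0 * hours_per_year
    print(f"{label}: {kwh:.0f} kWh/year at idle")

print(f"Reduction: {(1 - idle_new / idle_old) * 100:.1f}%")
# HD 4870: 701 kWh/year, HD 5770: 158 kWh/year, a 77.5% reduction
```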

Wow.

Below is AMD’s current GPU line-up, including the HD 5750, also released today. You’ll notice an absolute lack of X2 cards, and that’s not an accident. In looking around the web, it’s become clear that those have been phased out, and it’s no surprise, as the HD 5870 X2 (Hemlock) is due very, very soon. Earlier, I equated the HD 5770 to being one-half of an HD 5870, and the same could almost be said when comparing the HD 5750 to the HD 5850. The primary difference is that the HD 5750 has a slower GPU clock (-25MHz), but a faster memory clock (+150MHz).

AMD claims immediate availability of both of today’s launches, but I’d personally expect the HD 5770 to be much easier to acquire over the course of the next few weeks based on what I’m hearing from some vendors. We didn’t receive an HD 5750 in time for this article, but you can expect our look at it in the weeks to come.

Model | Core MHz | Mem MHz | Memory | Bus Width | Processors
Radeon HD 5870 | 850 | 1200 | 1024MB | 256-bit | 1600
Radeon HD 5850 | 725 | 1000 | 1024MB | 256-bit | 1440
Radeon HD 5770 | 850 | 1200 | 1024MB | 128-bit | 800
Radeon HD 5750 | 700 | 1150 | 512 – 1024MB | 128-bit | 720
Radeon HD 4890 | 850 – 900 | 975 | 1024MB | 256-bit | 800
Radeon HD 4870 | 750 | 900 | 512 – 2048MB | 256-bit | 800
Radeon HD 4850 | 625 | 993 | 512 – 1024MB | 256-bit | 800
Radeon HD 4770 | 750 | 800 | 512MB | 128-bit | 640
Radeon HD 4670 | 750 | 900 – 1100 | 512 – 1024MB | 128-bit | 320
Radeon HD 4650 | 600 | 400 – 500 | 512 – 1024MB | 128-bit | 320

The HD 5770 features a near-identical shroud to the already-released HD 5870 and HD 5850, and while it does a sufficient job of keeping the GPU cool, I can’t help but think of it as looking like a toy. Who can blame me? It’s a good thing the performance proves it’s the furthest thing from one.

While the HD 5870 and HD 5850 cards have two PCI-E power connectors, both the HD 5770 and HD 5750 have just one, situated at the top of the card’s chassis. Note that the similar-performing HD 4870 of last summer required two power connectors, so the power consumption differences are noticeable from a physical standpoint as well.

Just because the HD 5700 series are AMD’s mid-range cards, it doesn’t mean that the company cheaped out and removed Eyefinity support. It’s actually the opposite, with two DVI ports ready to go, along with both HDMI and DisplayPort connections as well. To me, the most exciting part of this inclusion is the HD 5750. That card is to sell for $110, and gaming performance aside, that’s one heck of a deal for a card that’s able to power 3x 2560×1600 displays. For all we know, it might be financial gurus that are picking these cards up in droves, as they typically have robust multi-monitor setups.

Before we dive into our testing results, I’ll reiterate that given the specs, we can expect the HD 5770 to fare quite nicely when compared to the HD 4870 (which retailed for $400 at its launch last summer). It’s hard to say that the HD 5770 is only a less-expensive HD 4870, though, because the older card lacks DirectX 11 and Eyefinity support, consumes far more power, runs hotter, and doesn’t offer Dolby TrueHD over HDMI. Overall, the HD 5770 is a win/win any way you look at it.

On the following page, we’ll tackle our system specifications and testing methods, and then we’ll kick off our results with Call of Duty: World at War.

Test System & Methodology

At Techgage, we strive to make sure our results are as accurate as possible. Our testing is rigorous and time-consuming, but we feel the effort is worth it. In an attempt to leave no question unanswered, this page contains not only our testbed specifications, but also a fully-detailed look at how we conduct our testing. For an exhaustive look at our methodologies, even down to the Windows Vista installation, please refer to this article.

Test Machine

The below table lists our testing machine’s hardware, which remains unchanged throughout all GPU testing, minus the graphics card. Each card used for comparison is also listed here, along with the driver version used. Each one of the URLs in this table can be clicked to view the respective review of that product, or if a review doesn’t exist, it will bring you to the product on the manufacturer’s website.

Component | Model
Processor | Intel Core i7-975 Extreme Edition – Quad-Core, 3.33GHz, 1.33v
Motherboard | Gigabyte GA-EX58-EXTREME – X58-based, F7 BIOS (05/11/09)
Memory | Corsair DOMINATOR – DDR3-1333 7-7-7-24-1T, 1.60v
ATI Graphics | Radeon HD 5770 1GB (Reference) – Beta Catalyst (10/06/09)
 | Radeon HD 4890 1GB (Sapphire) – Catalyst 9.8
 | Radeon HD 4870 1GB (Reference) – Catalyst 9.8
 | Radeon HD 4770 512MB (Gigabyte) – Catalyst 9.8
NVIDIA Graphics | GeForce GTX 295 1792MB (Reference) – GeForce 186.18
 | GeForce GTX 285 1GB (EVGA) – GeForce 186.18
 | GeForce GTX 275 896MB (Reference) – GeForce 186.18
 | GeForce GTX 260 896MB (Gigabyte Super OC) – GeForce 190.62
 | GeForce GTX 260 896MB (XFX) – GeForce 186.18
 | GeForce GTS 250 1GB (EVGA) – GeForce 186.18
Audio | On-Board Audio
Storage | Seagate Barracuda 500GB 7200.11
Power Supply | Corsair HX1000W
Chassis | SilverStone TJ10 Full-Tower
Display | Gateway XHD3000 30″
Cooling | Thermalright TRUE Black 120
Et cetera | Windows Vista Ultimate 64-bit

When preparing our testbeds for any type of performance testing, we follow these guidelines:

To aid with the goal of keeping accurate and repeatable results, we prevent certain services in Windows Vista from starting up at boot. This is due to the fact that these services have the tendency to start up in the background without notice, potentially causing slightly inaccurate results. Disabling “Windows Search”, for example, turns off the OS’s indexing, which can at times utilize the hard drive and memory more than we’d like.

For more robust information on how we tweak Windows, please refer once again to this article.

Game Titles

We currently benchmark all of our games using three popular resolutions: 1680×1050, 1920×1080 and 2560×1600. 1680×1050 was chosen as it’s one of the most popular resolutions for gamers sporting ~20″ displays. 1920×1080 might stand out, since we’ve always used 1920×1200 in the past, but we didn’t make this change without some serious thought. After taking a look at the current landscape for desktop monitors around ~24″, we noticed that 1920×1200 is definitely on the way out, as more and more models are coming out as native 1080p. It’s for this reason that we made the switch. Finally, for high-end gamers, we also benchmark using 2560×1600, a resolution with just about twice the pixels of 1080p.
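
For reference, here’s the pixel-count arithmetic behind those choices (my own quick calculation):

```python
resolutions = [(1680, 1050), (1920, 1080), (2560, 1600)]

for width, height in resolutions:
    print(f"{width}x{height}: {width * height / 1e6:.2f} MPixels")

# 1680x1050: 1.76 MPixels
# 1920x1080: 2.07 MPixels
# 2560x1600: 4.10 MPixels (~1.98x the pixel count of 1080p)
```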

For graphics cards that include less than 1GB of GDDR, we omit Grand Theft Auto IV from our testing, as our chosen detail settings require at least 800MB of available graphics memory. Also, if the card we’re benchmarking doesn’t offer the performance to handle 2560×1600 across most of our titles reliably, only 1680×1050 and 1920×1080 will be utilized.

Because we value results generated by real-world testing, we don’t utilize timedemos whatsoever. The possible exception is Futuremark’s 3DMark Vantage. Though it’s not a game, it essentially acts as a robust timedemo. We choose to use it as it’s a standard where GPU reviews are concerned, and we don’t want to deprive our readers of results they expect to see.

All of our results are captured with the help of Beepa’s FRAPS 2.98, while stress-testing and temperature-monitoring are handled by OCCT 3.1.0 and GPU-Z, respectively.
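
For the curious, FRAPS can log per-frame timestamps alongside its FPS summaries, and turning those logs into minimum and average figures is straightforward. The sketch below is a rough illustration of that kind of post-processing; the file name and column layout are illustrative assumptions, not our exact logs.

```python
import csv

def fps_stats(path):
    """Average and per-second minimum FPS from a FRAPS-style frametimes log
    (two columns: frame number, cumulative time in milliseconds)."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)                               # skip the header row
        times_ms = [float(row[1]) for row in reader]

    duration_s = (times_ms[-1] - times_ms[0]) / 1000.0
    avg_fps = (len(times_ms) - 1) / duration_s

    # Count frames landing in each one-second bucket; the smallest bucket is a
    # rough minimum FPS (partial first/last seconds make it conservative).
    buckets = {}
    for t in times_ms:
        second = int(t // 1000)
        buckets[second] = buckets.get(second, 0) + 1
    min_fps = min(buckets.values())

    return min_fps, avg_fps

# Hypothetical log name, purely for illustration:
# print(fps_stats("hd5770_cod_waw_2560x1600 frametimes.csv"))
```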

Call of Duty: World at War

Call of Juarez: Bound in Blood

Crysis Warhead

F.E.A.R. 2: Project Origin

Grand Theft Auto IV

Race Driver: GRID

World in Conflict: Soviet Assault

Call of Duty: World at War

The Call of Duty series is one that needs no introduction. Although only six years old, CoD has already become an institution where both single-player and multi-player first-person shooters are concerned. From the series’ inception, each game has delivered stellar gameplay that totally engrosses you, thanks in part to creative levels, smart AI and realistic graphics.

World at War is officially the 5th game in the series, and while some hardcore fans claim that Treyarch is simply unable to deliver as high caliber a game as Infinity Ward, the title does do well to hold everyone over until Modern Warfare 2 hits (November 10, 2009). One perk is that World at War focuses on battles not exhausted in other war games, which helps to keep things fresh.

Manual Run-through: The level chosen for our testing is “Relentless”, one that depicts the Battle of Peleliu, which has American soldiers advance to capture an airstrip from the Japanese. The level is both exciting to play and incredibly hard on your graphics hardware, making it a perfect choice for our testing.

From a price/FPS perspective, the HD 5770 hasn’t blown our socks off, but the performance seen so far isn’t too bad. It’s much slower than the GTX 260, which retails for $10 – $20 more, and also slower than the HD 4870, which retails for about the same, if not $5 – $10 less. We’ll see if this continues throughout the rest of our performance testing, but for now, how’s our best playable setting?

Graphics Card | Best Playable | Min FPS | Avg. FPS
NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail, 4xAA | 22 | 61.988
NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA | 24 | 41.563
NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 4xAA | 22 | 39.187
ATI HD 4890 1GB (Sapphire) | 2560×1600 – Max Detail, 0xAA | 21 | 42.778
ATI HD 4870 1GB (Reference) | 2560×1600 – Normal Detail, 0xAA | 23 | 42.097
ATI HD 5770 1GB (Reference) | 2560×1600 – Normal Detail, 0xAA | 19 | 40.066
NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Normal Detail, 0xAA | 20 | 38.685
NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Normal Detail, 0xAA | 19 | 37.054
ATI HD 4770 512MB (Gigabyte) | 1920×1080 – Max Detail, 4xAA | 19 | 36.639

With anti-aliasing disabled and the detail level dropped to normal, the HD 5770 begins to catch up to the HD 4870, falling short by only ~2 FPS, compared to ~5 FPS in our three main test settings above.

Call of Juarez: Bound in Blood

When the original Call of Juarez was released, it brought forth something unique… a western-styled first-person shooter. That’s simply not something we see too often, so for fans of the genre, its release was a real treat. Although it didn’t really offer the best gameplay we’ve seen from a recent FPS title, its storyline and unique style made it well worth testing.

After we retired the original title from our suite, we anxiously awaited the sequel, Bound in Blood, in hopes that the series could be re-introduced into our testing once again. Thankfully, it could, thanks in part to its fantastic graphics, which are based around the Chrome Engine 4, and gameplay improved over the original. It was also well-received by game reviewers, which is always a good sign.

Manual Run-through: The level chosen here is Chapter I, and our starting point is about 15 minutes into the mission, where we stand atop a hill that overlooks a large river. We make our way across the hill and ultimately through a large trench, and we stop our benchmarking run shortly after we blow up a gas-filled barrel.

ATI cards tend to excel in this title, and as a result, the HD 5770 managed to overtake the GTX 260 by a rather fair margin – although the results were quite close at 2560×1600. Interestingly enough, while the HD 4870 kept its distance from the HD 5770 in Call of Duty, the two cards were very close in performance here at each resolution.

Graphics Card | Best Playable | Min FPS | Avg. FPS
NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail | 37 | 80.339
NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail | 45 | 54.428
NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail | 41 | 51.393
ATI HD 4890 1GB (Sapphire) | 2560×1600 – Max Detail | 36 | 51.334
ATI HD 4870 1GB (Reference) | 2560×1600 – Max Detail | 31 | 46.259
ATI HD 5770 1GB (Reference) | 2560×1600 – Max Detail | 28 | 45.028
NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail | 35 | 44.023
NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail | 25 | 33.751
ATI HD 4770 512MB (Gigabyte) | 2560×1600 – Normal Detail | 24 | 35.434

Despite the fact that CoJ: Bound in Blood has fantastic graphics, it doesn’t require a powerhouse machine like certain other titles do (Crysis) to look good, and as a result, running the game with 4xAA at our top-end resolution of 2560×1600 turned out to be just fine, delivering a respectable 45 FPS on average.

Crysis Warhead

Like Call of Duty, Crysis is another series that doesn’t need much of an introduction. Thanks to the fact that almost any comments section for a PC performance-related article asks, “Can it run Crysis?”, even those who don’t play computer games no doubt know what Crysis is. When Crytek first released Far Cry, it delivered an incredible game engine with huge capabilities, and Crysis simply took things to the next level.

Although the sequel, Warhead, has been available for just about a year, it still manages to push the highest-end systems to their breaking point. It wasn’t until this past January that we finally found a graphics solution to handle the game at 2560×1600 at its Enthusiast level, but even that was without AA! Something tells me Crysis will remain the de facto standard for GPU benchmarking for a while to come.

Manual Run-through: Whenever we have a new game in-hand for benchmarking, we make every attempt to explore each level of the game to find out which is the most brutal towards our hardware. Ironically, after spending hours exploring this game’s levels, we found the first level in the game, “Ambush”, to be the hardest on the GPU, so we stuck with it for our testing. Our run starts from the beginning of the level and stops shortly after we reach the first bridge.

ATI might excel in Call of Juarez, but NVIDIA holds onto its crown in titles such as Crysis. Here, our HD 5770 came close once again to the HD 4870, but the GTX 260 kept a 5 – 10% performance lead.

Graphics Card | Best Playable | Min FPS | Avg. FPS
NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Gamer, 0xAA | 19 | 40.381
NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Mainstream, 0xAA | 27 | 50.073
NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Mainstream, 0xAA | 24 | 47.758
NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Mainstream, 0xAA | 21 | 40.501
ATI HD 4890 1GB (Sapphire) | 2560×1600 – Mainstream, 0xAA | 19 | 39.096
ATI HD 4870 1GB (Reference) | 2560×1600 – Mainstream, 0xAA | 20 | 35.257
ATI HD 5770 1GB (Reference) | 2560×1600 – Mainstream, 0xAA | 20 | 35.256
NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Mainstream, 0xAA | 18 | 34.475
ATI HD 4770 512MB (Gigabyte) | 1920×1080 – Mainstream, 0xAA | 19 | 46.856

Crysis is one of those unique gems that, even when run on a super high-end machine, still invokes the “Why won’t it run faster?!” spirit. So, for all configurations except the dual-GPU GTX 295, we’re forced to drop the detail settings to “Mainstream”, which still happens to look quite good. And no, that’s not a typo. The HD 5770 really did average out to be 0.001 FPS slower than the HD 4870!

F.E.A.R. 2: Project Origin

Five out of the seven current games we use for testing are either sequels, or titles in an established series. F.E.A.R. 2 is one of the former, following up on the very popular First Encounter Assault Recon, released in fall of 2005. This horror-based first-person shooter brought to the table fantastic graphics, ultra-smooth gameplay, the ability to blow massive chunks out of anything, and also a very fun multi-player mode.

Three-and-a-half years later, we saw the introduction of the game’s sequel, Project Origin. As we had hoped, this title improved on the original where gameplay and graphics were concerned, and it was a no-brainer to want to begin including it in our testing. The game is gorgeous, and there’s much destruction to be had (who doesn’t love blowing expensive vases to pieces?). The game is also rather heavily scripted, which aids in producing repeatable results in our benchmarking.

Manual Run-through: The level used for our testing here is the first in the game, about ten minutes in. The scene begins with a travel up an elevator, with a robust city landscape behind us. Our run-through begins with a quick look at this cityscape, and then we proceed through the level until the point when we reach the far door as seen in the above screenshot.

The results seen here are quite similar to what we saw with Call of Duty. While the HD 5770 once again fails to keep up with the GTX 260 and HD 4870, it’s the latter card that comes out on top overall, although that lead shrinks at higher resolutions.

Graphics Card | Best Playable | Min FPS | Avg. FPS
NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 45 | 95.767
NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA, 16xAF | 39 | 62.014
NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 37 | 57.266
ATI HD 4890 1GB (Sapphire) | 2560×1600 – Max Detail, 4xAA, 16xAF | 38 | 56.726
ATI HD 4870 1GB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 34 | 50.555
NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail, 4xAA, 16xAF | 29 | 48.110
ATI HD 5770 1GB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 31 | 47.411
NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA, 16xAF | 24 | 36.331
ATI HD 4770 512MB (Gigabyte) | 2560×1600 – Normal Detail, 0xAA, 4xAF | 30 | 43.215

Like Call of Juarez, F.E.A.R. 2 runs well on a variety of hardware, and any current mid-range card will handle the game fine at its absolute top graphics settings and resolution.

Grand Theft Auto IV

If you look up the definition for “controversy”, Grand Theft Auto should be listed. If it’s not, then that should be a crime, because throughout GTA’s many titles, there’s been more of that than you can shake your fist at. At the series’ beginning, the games were rather simple, and didn’t stir up too much passion in their detractors. But once GTA III and its successors came along, the developers enjoyed all the controversy that came their way, and why not? It helped spur incredible sales numbers.

Grand Theft Auto IV is yet another continuation in the series, though it follows no storyline from the previous titles. Liberty City, loosely based on New York City, is absolutely huge, with much to explore. So much so, in fact, that you could easily spend hours just wandering around, ignoring the game’s missions, if you wanted to. It also happens to be incredibly stressful on today’s computer hardware, similar to Crysis.

Manual Run-through: After the first minor mission in the game, you reach an apartment. Our benchmarking run starts from within this room. From here, we run out the door, down the stairs and into an awaiting car. We then follow a specific path through the city, driving for about three minutes total.

Crysis is one of the most gluttonous games on the market today, and GTA IV doesn’t fall too far behind. The game as a whole requires a beefy system to run at all, and if you only have the bare minimum of what it requires, the gains seen with faster graphics hardware shrink the higher you push the settings. Memory is king in this game, and it’s a prime example of the benefits that 2GB cards can offer.

Graphics Card | Best Playable | Min FPS | Avg. FPS
NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – H/H/VH/H/VH Detail | 27 | 52.590
NVIDIA GTX 260 896MB (GBT SOC) | 2560×1600 – High Detail | 30 | 46.122
NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – High Detail | 32 | 45.573
NVIDIA GTX 275 896MB (Reference) | 2560×1600 – High Detail | 30 | 44.703
NVIDIA GTX 260 896MB (XFX) | 2560×1600 – High Detail | 24 | 38.492
ATI HD 4890 1GB (Sapphire) | 1920×1080 – High Detail | 32 | 50.300
ATI HD 4870 1GB (Reference) | 1920×1080 – High Detail | 33 | 48.738
ATI HD 5770 1GB (Reference) | 1920×1080 – High Detail | 33 | 47.719
NVIDIA GTS 250 1GB (EVGA) | 1920×1080 – High Detail | 21 | 34.257

Like the HD 4890 and HD 4870, this game just doesn’t seem to run too well on the HD 5770 at 2560×1600. This is overall a strange game, because even if you decrease the detail settings drastically at 2560×1600, it still may not run smoothly. So in the case of this particular title, it’s best to stick with 1080p.

Race Driver: GRID

If you primarily play games on a console, your choices for quality racing games are plentiful. On the PC, that’s not so much the case. While there are a good number, there aren’t many of any given type of racing game, from sim to arcade. So when Race Driver: GRID first saw its release, many gamers were excited, and for good reason. It’s not a sim in the truest sense of the word, but it’s certainly not arcade, either. It’s somewhere in between.

The game happens to be great fun, though, and similar to console games like Project Gotham Racing, you need a lot of skill to succeed at the game’s default difficulty level. And like most great racing games, GRID happens to look absolutely stellar, and each of the game’s locations looks very similar to its real-world counterpart. All in all, no racing fan should ignore this one.

Manual Run-through: For our testing here, we choose the city where both Snoop Dogg and Sublime hit their fame, the LBC, also known as Long Beach City. We choose this level because it’s not overly difficult, and also because it’s simply nice to look at. Our run consists of an entire 2-lap race, with the cars behind us for almost the entire race.

GRID is one game that favors ATI cards, but that didn’t matter too much here, as the GTX 260 still managed to beat it out, except at the top-end. Despite that, the results from all of our tested cards are good, and with the HD 5770 achieving almost 60 FPS at 2560×1600 with 4xAA, we’re doubtful too many people will be complaining.

Graphics Card | Best Playable | Min FPS | Avg. FPS
NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail, 4xAA | 82 | 101.690
ATI HD 4890 1GB (Sapphire) | 2560×1600 – Max Detail, 4xAA | 57 | 70.797
NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA | 54 | 66.042
NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 4xAA | 52 | 63.617
ATI HD 4870 1GB (Reference) | 2560×1600 – Max Detail, 4xAA | 51 | 63.412
ATI HD 5770 1GB (Reference) | 2560×1600 – Max Detail, 4xAA | 45 | 56.980
NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail, 4xAA | 45 | 54.809
NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA | 35 | 43.663
ATI HD 4770 512MB (Gigabyte) | 1920×1080 – Max Detail, 4xAA | 55 | 69.403

If it works, don’t mess with it, and that’s a good mantra to live by with GRID. The game runs well regardless of the settings you use (assuming you don’t push anti-aliasing higher than 4x), so 2560×1600 with maxed settings is our best playable.

World in Conflict: Soviet Assault

I admit that I’m not a huge fan of RTS titles, but World in Conflict intrigued me from the get-go. After all, so many war-based games continue to follow the same story-lines we already know, and WiC was different. It imagines an alternate history in which the political and economic collapse of the Soviet Union in the late ’80s never happened, and instead follows a storyline in which the USSR goes to war in order to remain in power.

Many RTS games, with their advanced AI, tend to favor the CPU in order to deliver smooth gameplay, but WiC leans on both the CPU and GPU, and the graphics prove it. Throughout the game’s missions, you’ll see gorgeous vistas and explore areas from deserts and snow-packed lands, to fields and cities. Overall, it’s a real treat for the eyes – especially since you’re able to zoom to the ground and see the action up-close.

Manual Run-through: The level we use for testing is the 7th campaign of the game, called Insurgents. Our saved game plants us towards the beginning of the mission with two squads of five, and two snipers. The run consists of bringing our men to action, and hovering the camera around throughout the duration. The entire run lasts between three and four minutes.

Similar to most of our other results, the HD 5770 fell just behind the HD 4870 here, but even further behind the GTX 260.

Graphics Card | Best Playable | Min FPS | Avg. FPS
NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail, 8xAA, 16xAF | 40 | 55.819
NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 0xAA, 16xAF | 34 | 49.514
NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 0xAA, 16xAF | 36 | 46.186
ATI HD 4890 1GB (Sapphire) | 2560×1600 – Max Detail, 0xAA, 16xAF | 31 | 46.175
ATI HD 4870 1GB (Reference) | 2560×1600 – Max Detail, 0xAA, 16xAF | 28 | 40.660
NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail, 0xAA, 16xAF | 23 | 39.365
ATI HD 5770 1GB (Reference) | 2560×1600 – Max Detail, 0xAA, 16xAF | 28 | 37.389
NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail, 0xAA, 4xAF | 24 | 32.453
ATI HD 4770 512MB (Gigabyte) | 1920×1080 – Max Detail, 4xAA, 16xAF | 22 | 31.561

Although the game is playable with 4xAA at 2560×1600, the gameplay is made much smoother, and more enjoyable, with anti-aliasing removed entirely. This is also one game where AA isn’t too noticeable in the heat of battle, so it’s hardly even missed.

Futuremark 3DMark Vantage

Although we generally shun automated gaming benchmarks, we do like to run at least one to see how our GPUs scale when used in a ‘timedemo’-type scenario. Futuremark’s 3DMark Vantage is without question the best such test on the market, and it’s a joy to use, and watch. The folks at Futuremark are experts in what they do, and they really know how to push that hardware of yours to its limit.

The company first started out as MadOnion and released a GPU-benchmarking tool called XLR8R, which was soon replaced with 3DMark 99. Since that time, we’ve seen seven different versions of the software, including two major updates (3DMark 99 Max, 3DMark 2001 SE). With each new release, the graphics get better, the capabilities get better and the sudden hit of ambition to get down and dirty with overclocking comes at you fast.

Similar to a real game, 3DMark Vantage offers many configuration options, although many (including us) prefer to stick to the profiles which include Performance, High and Extreme. Depending on which one you choose, the graphic options are tweaked accordingly, as well as the resolution. As you’d expect, the better the profile, the more intensive the test.

Performance is the stock mode that most use when benchmarking, but it only uses a resolution of 1280×1024, which isn’t representative of today’s gamers. Extreme is more appropriate, as it runs at 1920×1200 and does well to push any single or multi-GPU configuration currently on the market – and will do so for some time to come.

3DMark Vantage is a highly scalable benchmark, taking full advantage of all available shader processors and GPU cores, along with copious amounts of memory. Given that, our results above fairly accurately scale each card with its real-world performance.

Overclocking ATI’s Radeon HD 5770

Before tackling our overclocking results, let’s first clear up what we consider to be a real overclock and how we go about achieving it. If you read our processor reviews, you might already be aware that we don’t care too much for an unstable overclock. It might look good on paper, but if it’s not stable, then it won’t be used. Very few people purchase a new GPU for the sole purpose of finding the maximum overclock, which is why we focus on finding what’s stable and usable.

To find the max stable overclock on an ATI card, we stick to using ATI’s Catalyst Overdrive tool. Compared to what’s available on the NVIDIA side, it’s quite limited in the top-end, but it’s the most robust and simplest solution to use. For NVIDIA, we use EVGA’s Precision, which allows us to reach heights that are in no way sane – a good thing.

Once we find what we believe might be a stable overclock, the card is put through 30 minutes of torture with the help of OCCT 3.0’s GPU stress-test, which we find pushes any graphics card harder than any other stress-tester we’ve ever used. If the card passes there, we then further verify by running it through two runs of 3DMark Vantage’s Extreme setting. Finally, games are quickly loaded and tested to ensure we haven’t introduced any side-effects.

If all these tests pass without issue, we consider the overclock to be stable.


Most ATI cards I’ve dealt with over the past year have overclocked quite well, but I didn’t have too much luck with the HD 5770. Although I reached some amazing clocks that I thought were stable (1+ hour OCCT stable), actually playing a game would show me otherwise. It’s rather frustrating to see an awesome overclock fly through OCCT’s brutal GPU test, only to have a game lock up within 5 – 10 minutes of gameplay. On the other hand, it really does prove that you can’t trust just one source for stability.

After much tinkering, the max stable clocks I hit were 910MHz on the core (+60MHz) and 1225MHz on the memory (+25MHz). Again, not too impressive of an overclock, but how does the extra oomph translate into real performance?

The answer is… not too well. The gains in most of the games are so minor that unless you manage to reach a higher stable clock than what I hit, it can be argued that it’s not worth overclocking at all. What’s the point of putting additional stress on the card, and increasing the power consumption, for a mere 1 – 3 FPS?
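
The math backs that up. Here’s a quick back-of-the-envelope estimate of my own (assuming a perfectly GPU-bound game that scales linearly with the core clock, which is the best case): the core overclock is only about 7%, so a 40 FPS average can only climb by roughly 3 FPS, no matter how stable the clocks are.

```python
core_stock, core_oc = 850, 910      # MHz
mem_stock, mem_oc = 1200, 1225      # MHz

core_gain = (core_oc / core_stock - 1) * 100    # ~7.1%
mem_gain = (mem_oc / mem_stock - 1) * 100       # ~2.1%

# Best case: a fully GPU-bound game scaling linearly with the core clock.
baseline_fps = 40.0
ceiling_fps = baseline_fps * (core_oc / core_stock)

print(f"Core +{core_gain:.1f}%, memory +{mem_gain:.1f}%")
print(f"Best-case scaling: {baseline_fps:.0f} -> {ceiling_fps:.1f} FPS "
      f"(+{ceiling_fps - baseline_fps:.1f} FPS)")
```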

Power & Temperatures

To test our graphics cards for both temperatures and power consumption, we utilize OCCT for the stress-testing, GPU-Z for the temperature monitoring, and a Kill-a-Watt for power monitoring. The Kill-a-Watt is plugged into its own socket, with only the PC connected to it.

As per our guidelines when benchmarking with Windows, when the room temperature is stable (and reasonable), the test machine is booted up and left to sit at the Windows desktop until things are completely idle. Once things are good to go, the idle wattage is noted, GPU-Z is started up to begin monitoring card temperatures, and OCCT is set up to begin stress-testing.

To push the cards we test to their absolute limit, we use OCCT in full-screen 2560×1600 mode, and allow it to run for 30 minutes, which includes a one minute lull at the start, and a three minute lull at the end. After about 10 minutes, we begin to monitor our Kill-a-Watt to record the max wattage.

Can I hear a “Whoa.” just one more time? The HD 5770 deserves it. Compared to the HD 4870, the HD 5770 runs a staggering 22°C cooler at idle, and 9°C cooler at max load. Likewise for power consumption: in the same comparison, the HD 5770 shaves the idle draw down by 41W and the load draw by 93W. These are hardly small differences.

Final Thoughts

Given the fact that AMD released its first Evergreen cards just a few weeks ago, I still find it a little hard to believe that now, I’m writing a final thoughts section for the company’s mid-range line-up. The turn-around was ultra-fast, and in the end, it works out best for both AMD itself, and also end-consumers. The holiday season is here, and there’s going to be rather large demand for these, and also the already-released cards, so AMD was smart to allow as much time as possible for gamers to pick up the latest offerings.

Although AMD’s turn-around time is fast, I can’t help but wonder if the company didn’t just release these cards now because it was possible, but because the first DirectX 11 titles are right around the corner. After all, even though NVIDIA will support DX11 on its upcoming Fermi architecture, no cards will be available for purchase prior to 2010, with a slim chance of that changing. So is AMD hoping for mad success with Dirt 2 (the first title to support DX11), to sell even more cards as a result?

It doesn’t really matter, because in the end, a release now is possible, and it works in the consumer’s favor. It’s not a bad thing. All we can hope for now is that Dirt 2 doesn’t take too long to launch, and that the titles following it aren’t far behind. S.T.A.L.K.E.R.: Call of Pripyat is the second title to boast great DX11 support, and it’s supposed to be released in Q4, but the details are still scarce. All I hope is that it supports DX11 better than Clear Sky supported DX10. If you ran that game in DX10 mode, then you know what I’m talking about.

DirectX 11 is a rather large part of what makes the HD 5000-series interesting, but there’s also Eyefinity support, which I expect to grow in popularity as time goes on. I have limited experience with it personally, but from what I’ve seen, I can’t help but wish I had room for a triple-monitor setup, because the result is drool-worthy. The best part is that it’s not only for gaming, but it’s beneficial for simply using your PC. The multi-tasking capabilities are amazing, and with the capability to power three 30″ displays off of one card (even the lowly HD 5750)… it’s nothing but impressive.

There’s also ATI Stream support, which AMD talks about at every opportunity. Although NVIDIA’s CUDA technology hasn’t exactly taken off to the extent that the company had hoped for (we’re still in the infancy of GPGPU), ATI’s Stream has had even less real-world success. Aside from Adobe Photoshop CS4, I can’t think of any other applications that make use of ATI Stream, except for SANDRA, but that’s only a benchmarking tool. Both AMD and NVIDIA are very confident about the success of GPGPU in the near-future, though, so we’ll just have to sit back and wait for it to happen.

From a gaming performance perspective, I admit that I expected a little bit more from the HD 5770. The performance is far from bad, so I feel foolish for even saying it, given that the HD 4870 has very similar performance, yet cost 2.5x more just last summer. But, technology has improved, and now more than ever, we’re really getting our money’s worth when purchasing any current GPU, or CPU for that matter. All things considered, the HD 5770 is a fantastic card. It might be 5 – 10% slower than the HD 4870, but it piles on other features, such as more compute power, far lower power consumption, DirectX 11 support and more.

Upgrading to this card is a tough proposition, unless you own a model that’s much slower. There’s little purpose in upgrading from an HD 4850 or HD 4870, for example, because the difference is in the features alone. If you were to upgrade, you’d see more for your money if you picked up at least an HD 5850. At least then, you’d not only get the features, but a sweet performance boost as well.

For a new PC, the HD 5770 is an absolute winner. It’s perfectly suited for both mid-range and high-end gamers, depending on titles, as it supports most games at 2560×1600 just fine, with the leading exception being Grand Theft Auto. The card also happens to make a good HTPC solution, thanks in part to its lower overall temperatures and fantastic power consumption. If your HTPC is on 24/7, you’re the perfect market for such a card, as the idle power draw is about 22.5% of the previous generation’s.

If I had to choose between the HD 5770 and the HD 4870 or even the HD 4890, I’d choose the newer card without hesitation. The performance is a bit lower, but everything else that’s packed in more than makes up for it.

Discuss this article in our forums!

Have a comment you wish to make on this article? Recommendations? Criticism? Feel free to head over to our related thread and put your words to our virtual paper! There is no requirement to register in order to respond to these threads, but it sure doesn’t hurt!
