Date: July 11, 2011
Author(s): Rob Williams
It’s not often that we see a new graphics card released that’s no different from its predecessor, but with the Radeon HD 6770, that’s exactly the case. We’re already familiar with HD 5770 performance, but let’s take a look at how this $140 card compares to its similarly-priced competition, including the recently-launched HD 6790.
This past April, AMD helped fill the Radeon HD 6000 series mainstream void with the HD 6790. Given that it essentially succeeded the HD 5770, I felt the card should have been called HD 6830, as its architectural benefits clearly separated it from the HD 5700/6700 series. Nonetheless, an HD 6700 series did exist at that time, though it was OEM-only.
As the title of this article suggests, that’s no longer the case. At some point over the past couple of months, AMD decided to allow vendors to release both the HD 6750 and HD 6770 cards to the public, which now gives us a total of seven mainstream and higher HD 6000 GPUs to choose from, ranging from $100 to $700.
The Radeon HD 6770 we’re taking a look at here today, courtesy of Sapphire, retails for about $140 USD, whereas ordinary models retail closer to $120. That $20 premium buys a custom cooler and the ability to run Eyefinity (3 monitors) right out of the box, thanks to the inclusion of an active DVI-to-HDMI cable and a built-in DisplayPort chipset.
But let’s not get too far ahead of ourselves… what about the reference HD 6770? What does it offer in relation to the HD 5770? Ah, it’s times like these when I just love writing about something, because I can feel lazy yet efficient. The answer: nothing. Well, unless you want to count the name change, which I guess is something.
If HD 6770 == HD 5770, then what’s the point? Also easy to answer: marketing. Because it’s easier to sell a “current” model than last year’s, the simple name change had to be made for the sake of OEMs. And it seems the same mentality carried over to the consumer market as well. Currently, Newegg lists very few HD 5770s, and all share the same pricing as the HD 6770.
|Radeon HD 6990|
|Radeon HD 6970|
|Radeon HD 6950|
|Radeon HD 6870|
|Radeon HD 6850|
|Radeon HD 6790|
|Radeon HD 6770|
|Radeon HD 6750|
Like the HD 5770, the HD 6770 is a Juniper XT, built upon a 40nm process and consisting of 1.04 billion transistors. It features the same number of cores as the HD 6790, though because that card has a wider memory bus, performance still lies in its favor.
As mentioned before, Sapphire’s “FleX” edition HD 6770 features a non-reference cooler and includes an on-board DisplayPort processor that negates the requirement for a DP monitor should you want to go with a 3×1 configuration. For those with older or budget displays that lack the connection, this is a hugely welcome feature.
At the back of the card are dual DVI ports, an HDMI port and a DisplayPort. If you’re planning an Eyefinity setup, you can use both DVI ports as normal, and plug the third DVI monitor into the active cable Sapphire provides, which then plugs into the HDMI port.
Do I see an ATI logo on this card? Tsk tsk, Sapphire. Well, let’s be fair… AMD still hasn’t totally removed ATI branding from its own GPU driver, so I guess we can’t pick on Sapphire too much.
As the HD 6770 is no different from an HD 5770, and this card isn’t overclocked, we can already guess how it will fare in our results ahead. We recently removed the HD 5770 from our charts because they were getting too large, but we can still compare the card to the slightly more expensive HD 6790.
At Techgage, we strive to make sure our results are as accurate as possible. Our testing is rigorous and time-consuming, but we feel the effort is worth it. In an attempt to leave no question unanswered, this page contains not only our testbed specifications, but also a detailed look at how we conduct our testing.
The below table lists our testing machine’s hardware, which remains unchanged throughout all GPU testing, minus the graphics card. Each card used for comparison is also listed here, along with the driver version used. Each one of the URLs in this table can be clicked to view the respective category on our site for that product.
Intel Core i7-975 Extreme Edition – Quad-Core @ 4.05GHz – 1.40v
Gigabyte GA-EX58-EXTREME – F13j BIOS (08/02/2010)
Corsair DOMINATOR – 12GB DDR3-1333 7-7-7-24-1T, 1.60v
|AMD Graphics|
Radeon HD 6990 4GB (Reference) – Catalyst 11.4 Beta
Radeon HD 6970 2GB (Reference) – Catalyst 10.12 Beta
Radeon HD 6950 2GB (Reference) – Catalyst 11.1
Radeon HD 6950 1GB (Reference) – Catalyst 11.1
Radeon HD 6870 1GB (Reference) – Catalyst October 5, 2010 Beta
Radeon HD 6850 1GB (Reference) – Catalyst October 5, 2010 Beta
Radeon HD 6790 1GB (Reference) – Catalyst March 23, 2011 Beta
Radeon HD 6770 1GB (Reference) – Catalyst 11.6
Radeon HD 6670 1GB (Sapphire) – Catalyst March 23, 2011 Beta
Radeon HD 6570 1GB (Sapphire) – Catalyst March 23, 2011 Beta
Radeon HD 6450 1GB (Reference) – Catalyst March 23, 2011 Beta
|NVIDIA Graphics|
GeForce GTX 580 1536MB (Reference) – GeForce 262.99
GeForce GTX 570 1280MB (Reference) – GeForce 263.09
GeForce GTX 560 Ti 1024MB (Reference) – GeForce 266.56
GeForce GTX 560 1024MB (MSI) – GeForce 275.20
GeForce GTX 550 Ti 1024MB (MSI) – GeForce 267.59
GeForce GTX 460 1GB (EVGA) – GeForce 260.63
Gateway XHD3000 30″
When preparing our testbeds for any type of performance testing, we follow these guidelines:
To aid with the goal of keeping results accurate and repeatable, we prevent certain services in Windows 7 from starting up at boot. These services have a tendency to start up in the background without notice, potentially causing inaccurate test results. For example, disabling “Windows Search” turns off the OS’ indexing, which can at times utilize the hard drive and memory more than we’d like.
The most important services we disable are:
The full list of Windows services we ensure are disabled is large, but for those interested in perusing it, please look here. Most of the services we disable are minor, but we go to such lengths to keep the PC as highly optimized as possible.
At this time, we benchmark at three resolutions that represent three popular monitor sizes available today: 20″ (1680×1050), 24″ (1920×1080) and 30″ (2560×1600). Each of these resolutions offers enough variance in raw pixel output to warrant testing, and each properly represents a different market segment: mainstream, mid-range and high-end.
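To put those three resolutions in perspective, a quick back-of-the-envelope calculation shows how much extra work each one asks of the GPU. This is a simple sketch (the MP figures and ratios follow directly from the resolutions above):

```python
# Relative pixel load of the three test resolutions used in this review.
resolutions = {
    "20-inch": (1680, 1050),
    "24-inch": (1920, 1080),
    "30-inch": (2560, 1600),
}

base = 1680 * 1050  # mainstream resolution as the baseline

for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {w}x{h} = {pixels / 1e6:.2f} MP "
          f"({pixels / base:.2f}x the 1680x1050 load)")
```

Run it and you’ll see that 1920×1080 pushes only about 18% more pixels than 1680×1050, while 2560×1600 pushes roughly 2.3 times as many, which is why the 30″ resolution separates cards so decisively.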
Because we value results generated by real-world testing, we don’t utilize timedemos. The possible exceptions might be Futuremark’s 3DMark Vantage and Unigine’s Heaven 2.1. Though neither of these are games, both act as robust timedemos. We choose to use them as they’re a standard where GPU reviews are concerned.
All of our results are captured with the help of Beepa’s FRAPS 3.2.3, while stress-testing and temperature-monitoring are handled by OCCT 3.1.0 and GPU-Z, respectively.
For those interested in the exact settings we use for each game, direct screenshots can be seen below:
It’s not that often that faithful PC gamers get a proper racing game for their platform of choice, but Dirt 2 is one of those. While it is a “console port”, there’s virtually nothing in the game that will make that point stand out. The game as a whole takes good advantage of our PC’s hardware, and it’s as challenging as it is good-looking.
Manual Run-through: The race we chose to use in Dirt 2 is the first one available in the game, as it’s easily accessible and features a lot of GPU-pounding effects that the game has become known for, such as realistic dust and water effects, a large on-looking crowd of people and fine details on and off the track. Each run-through lasts the entire two laps, which comes out to about 2.5 minutes.
Given the beefier memory bus of the HD 6790, the performance of the HD 6770 is quite impressive. I honestly expected to see a higher delta here, and this is a game that tends to scale exceptionally well with Radeon cards. Let’s see if the rest of our line-up shows similar results.
Just Cause 2 might not belong to a well-established series of games, but with its launch, it looks like that might not be the case for long. The game offers not only superb graphics, but an enormous world to explore, and for people like me, a countless number of hidden items to find around it. During the game, you’ll be scaling skyscrapers, racing through jungles and fighting atop snow-drenched mountains. What’s not to like?
Manual Run-through: The level chosen here is part of the second mission in the game, “Casino Bust”. Our runthrough begins at the second-half of the level, which requires us to situate ourselves on top of a car and have our driver, Karl Blaine, speed us through part of the island to safety. This is a great mission for benchmarking as we get to see a lot of the landmass, even if some of it is at a distance.
The HD 6770 continues to perform well, and has in fact surpassed the technologically more impressive HD 6790. There must have been some wicked tweaks made in very recent Radeon drivers for this to happen, given the results from the HD 6790 were acquired just three months ago.
For fans of the original Mafia game, having to wait an incredible eight years for a sequel must’ve been tough. But as we found out in our review, the wait might be forgotten, as the game is quite good. It doesn’t feature nearly as much depth as, say, Grand Theft Auto IV, but it does a masterful job of bringing you back to the 1940s and letting you experience the Mafia lifestyle.
Manual Run-through: Because this game doesn’t allow us to save a game in the middle of a level, we chose to use chapter 7, “In Loving Memory…”, to do our runthrough. That chapter begins us on a street corner with many people around, and from there, we run to our garage, get in our car, and speed out to the street. Our path ultimately leads us to the park, and takes close to two minutes to accomplish.
Taking back the crown, the HD 6790 gives us results comparable to what we saw with Dirt 2. The HD 6770 does manage to keep right up to it, however, falling a mere 4 FPS behind at 1080p.
One of the more popular Internet memes for the past couple of years has been, “Can it run Crysis?”, but as soon as Metro 2033 launched, that’s a meme that should have died. Metro 2033 is without question one of the beefiest games on the market, and though it supports DirectX 11, it’s almost a feature worth ignoring, because the extent you’ll need to go to in order to see playable framerates isn’t likely going to be worth it.
Manual Run-through: The level we use for testing is part of chapter 4, called “Child”, where we must follow a linear path through multiple corridors until we reach our end point, which takes a total of about 90 seconds. Please note that due to the reason mentioned above, we test this game in DX10 mode, as DX11 simply isn’t that realistic from a performance standpoint.
Leave it to a game like Metro 2033 to really show us where a GPU bottleneck can lie. Because the HD 6770 includes only a 128-bit memory bus, the HD 6790’s 256-bit variant gives a hardcore game like Metro 2033 more breathing room, and it shows.
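The bus-width gap translates directly into peak memory bandwidth, which is what Metro 2033 is leaning on here. A rough sketch of the math, assuming the commonly cited reference memory clocks for each card (1200MHz GDDR5, or 4.8Gbps effective, for the HD 6770; 1050MHz, or 4.2Gbps effective, for the HD 6790 — treat those figures as assumptions, not measurements from this review):

```python
# Peak memory bandwidth = (bus width in bytes) x (effective data rate).
# Clock figures are assumed reference specs, not values measured here.

def peak_bandwidth_gbs(bus_width_bits, effective_rate_gbps):
    """Peak memory bandwidth in GB/s for a given bus width and data rate."""
    return (bus_width_bits / 8) * effective_rate_gbps

hd6770 = peak_bandwidth_gbs(128, 4.8)   # 128-bit bus
hd6790 = peak_bandwidth_gbs(256, 4.2)   # 256-bit bus

print(f"HD 6770: {hd6770:.1f} GB/s")
print(f"HD 6790: {hd6790:.1f} GB/s ({hd6790 / hd6770:.2f}x the HD 6770)")
```

Even though the HD 6790’s memory is clocked lower, the doubled bus width leaves it with roughly 75% more raw bandwidth, so bandwidth-hungry games like this one are exactly where the gap should show.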
Of all the games we test, it might be this one that needs no introduction. Back in 1998, Blizzard unleashed what was soon to be one of the most successful RTS titles on the planet, and even as of today, the original is still heavily played all around the world – even in actual competitions. StarCraft II of course had a lot of hype to live up to, and it did, thanks to its intense gameplay and superb graphics.
Manual Run-through: The portion of the game we use for testing is part of the Zero Hour mission, which has us holding fort until we’re able to evacuate. Our saved game starts us in the middle of the mission, and from the get-go, we build a couple of buildings and concurrently move our main units up and around the map. Total playtime lasts about two minutes.
It appears that StarCraft II, like Metro 2033, is also memory bottlenecked, as the HD 6790 delivers at least 10 FPS more at both of these resolutions.
Although we generally shun automated gaming benchmarks, we do like to run at least one to see how our GPUs scale when used in a ‘timedemo’-type scenario. Futuremark’s 3DMark 11 is without question the best such test on the market, and it’s a joy to use, and watch. The folks at Futuremark are experts in what they do, and they really know how to push that hardware of yours to its limit.
Similar to a real game, 3DMark 11 offers many configuration options, though many (including us) prefer to stick to the built-in profiles, Performance and Extreme. Depending on which one you choose, the graphics options are tweaked accordingly, as is the resolution. As you’d expect, the higher the profile, the more intensive the test. The benchmark doesn’t natively support 2560×1600, so to benchmark at that resolution, we choose the Extreme profile and simply change the resolution.
According to 3DMark 11, the HD 6770 performs best at modest resolutions, with 1080p proving to be a bit much (even with the HD 6790 that’s the case).
While Futuremark is a well-established name where PC benchmarking is concerned, Unigine is just beginning to gain exposure. The company’s main focus isn’t benchmarks, but rather its cross-platform game engine, which it licenses out to other developers, as well as its own games, such as a gorgeous post-apocalyptic oil strategy game. The company’s benchmarks are simply a by-product of its game engine.
The biggest reason that the company’s “Heaven” benchmark grew in popularity rather quickly is that both AMD and NVIDIA promoted it for its heavy use of tessellation, a key DirectX 11 feature. Like 3DMark Vantage, the benchmark here is overkill by design, so results here aren’t going to directly correlate with real gameplay. Rather, they showcase which card models can better handle both DX11 and its GPU-bogging features.
The HD 6790 once again takes a rather commanding lead here, and again, at 1080p or higher, the HD 6770 begins to struggle.
To test our graphics cards for both temperatures and power consumption, we utilize OCCT for the stress-testing, GPU-Z for the temperature monitoring, and a Kill-a-Watt for power monitoring. The Kill-a-Watt is plugged into its own socket, with only the PC connected to it.
As per our guidelines when benchmarking with Windows, when the room temperature is stable (and reasonable), the test machine is booted up and left to sit at the desktop until things are completely idle. Because we run such a highly optimized PC, this normally takes one or two minutes. Once things are good to go, the idle wattage is noted, GPU-Z is started up to begin monitoring card temperatures, and OCCT is set up to begin stress-testing.
To push the cards we test to their absolute limit, we use OCCT in full-screen 2560×1600 mode, and allow it to run for 15 minutes, which includes a one minute lull at the start, and a four minute lull at the end. After about 5 minutes, we begin to monitor our Kill-a-Watt to record the max wattage.
In the case of dual-GPU configurations, we measure the temperature of the top graphics card, as in our tests, it’s usually the one to get the hottest. This could depend on GPU cooler design, however.
Note: Due to changes AMD and NVIDIA made to the power schemes of their respective current-gen cards, we were unable to run OCCT on them. Rather, we had to use a less-strenuous run of 3DMark Vantage. We will be retesting all of our cards using this updated method the next time we overhaul our suite.
Unfortunately, at the time this graphics card was tested, GPU-Z did not support it, and thus we have no temperature information to share. Given the cooler and our knowledge of the HD 5770, however, I’d surmise that it’d top out at about 70°C and idle at ~35°C. This is simply based off of our last FleX product review and also the fact that the HD 5770 with a reference cooler idled at 41°C and loaded at 78°C.
Power-wise, the HD 6770 surprisingly surpassed the HD 6790 at both idle and load. We’re not totally sure of the reasons behind this, seeing as the HD 5770 had far better ratings, but it could be that the DisplayPort built-in functionality does play a role. I had wanted to haul out the HD 5770 again and do a head-to-head, but I’m suffering a brain freeze and can’t recall where I stored it (the life of a tech reviewer).
The Radeon HD 6770 is quite an interesting little card. Alright, it’s not interesting at all – it’s exactly the same as the HD 5770, after all. We pretty well knew what we were in for, but with recent driver improvements, it was still interesting to see how the card fared against the HD 6790 – a card that’s technologically much more capable.
At Newegg, the HD 6770 can be had for about $120, with Sapphire’s model in particular selling for $140. That premium gets you the ability to run an Eyefinity configuration without a DisplayPort monitor, as well as a custom cooler that’s sure to be more effective than the reference design. Is that premium worth it? Only you can really decide that one.
By comparison, the HD 6790 retails for $140, and that $20 premium does actually get you a noticeable performance gain. While in some games, both of these cards will perform similarly, in others, the HD 6790 has a clear advantage. If I were in the market for either of these two cards, I’d easily pay that premium. If I didn’t have a DisplayPort monitor, however, and wanted Eyefinity, the only real choice is Sapphire’s FleX.
Another card worth some consideration is NVIDIA’s GeForce GTX 460, as it performs a bit better than the HD 6790 on average. However, that card is a lot more difficult to find at or even below the $140 price-point. If you don’t mind taking mail-in rebates into consideration, you could easily spend some time finding the absolute best deal.
While I’d recommend Sapphire’s FleX for those who want Eyefinity, I couldn’t recommend it if you just wanted an HD 6770. Better suited for that is Sapphire’s own reference design, which currently costs $118 at Newegg and can be brought down to $98 after a mail-in rebate. While it doesn’t feature a special cooler, this is a GPU that won’t get too hot anyway, so my vote would be to save some money and get the same performance.
Have a comment you wish to make on this article? Recommendations? Criticism? Feel free to head over to our related thread and put your words to our virtual paper! There is no requirement to register in order to respond to these threads, but it sure doesn’t hurt!
Copyright © 2005-2020 Techgage Networks Inc. - All Rights Reserved.