If it feels like we took a look at a brand-new ATI graphics card mere weeks ago, don’t worry… we did. From the middle of last month to the middle of this month, AMD has been on a schedule to round out its entire budget line-up, and while the Radeon HD 5670 sits comfortably at $100, both the card we’re looking at today and the HD 5570, due out next week, will retail for well under that price point.
The HD 5670 was a bit of a strange beast, because while it’s not what anyone would consider a mainstream gaming card, it delivers more than enough performance to handle today’s games at up to 1080p. For a $100 offering, the card has a great performance-per-watt ratio, and with after-market coolers, as we saw with Sapphire’s version, temperatures are ideal for any sort of use, including HTPC.
We’ll be tackling the HD 5570 later, but both it and the HD 5450 have a different focus than the HD 5670, as both cost well under $100, with the HD 5450 priced at around $50 – $60. I suspect that the higher end of that range is for 1GB cards, while the 512MB versions will stick close to an even $50. The HD 5450’s main competition is NVIDIA’s GeForce 210, which currently retails for around $45 (mail-in rebates bring that price down considerably).
With its modest specs, the HD 5450 isn’t going to “wow” people with its gaming performance, but at its given price, it should appease all those who are looking for a quality HTPC card that will allow them to do a bit of light gaming, or take advantage of all of the popular casual games on the market, such as Spore, Sims 3, World of Warcraft and Solitaire. The best part of the card might be its low-profile nature, and its completely passive heatsink design.
As you can see in the photograph above, we received three different versions of the Radeon HD 5450. The one in the middle, with the unique red heatsink, is AMD’s reference. The one on the left is Sapphire’s version, which is even thinner than AMD’s, and on the right is Gigabyte’s fanned model, which is also much thinner than the reference design.
To get this explanation out of the way: I won’t be benchmarking all three of these cards today, but will instead focus on just the reference model. The reason is simply that I find little value in benchmarking a handful of cards for such a low-end model, where performance isn’t going to vary much between them. This is where things become a little confusing, and I’m forced to explain the oddities.
For some reason, the reference card we received from AMD wasn’t running fully reference clocks. According to official press documents, the core clock is 650MHz and the memory is 800MHz. Our sample had a 900MHz memory clock, a 12.5% boost. I contacted AMD about the fact that what we received was essentially overclocked, and the person I talked to was surprised and couldn’t explain it. Somehow, all of the press received these slightly overclocked cards.
To add to the confusion just a bit more, our performance didn’t exactly match what the press deck we were given showed, either. Our card performed a fair bit better. AMD stated that the 100MHz boost to the memory equates to about a 5.5% performance gain on average, so nothing major. It also mentioned that it suspects most launch cards from other companies will not run at 650/800, but will typically be pre-overclocked, with up to a 900MHz memory clock.
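For those who like to check the math, the clock discrepancy works out like this (a quick sketch; the ~5.5% average performance figure is AMD’s own estimate, not something derivable from the clocks alone):

```python
# Sanity-check the memory overclock on the press sample.
reference_mem_mhz = 800  # memory clock per AMD's official press documents
sample_mem_mhz = 900     # memory clock our review sample shipped with

# Relative boost: (900 - 800) / 800 = 0.125
boost = (sample_mem_mhz - reference_mem_mhz) / reference_mem_mhz
print(f"Memory clock boost: {boost:.1%}")  # prints "Memory clock boost: 12.5%"
```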
Of course, I noticed the discrepancy only after all of our benchmarking was completed. Like the reference card, Gigabyte’s model was also overclocked (+50MHz on the core), and I thought Sapphire’s was as well, but it turned out later that I was wrong. GPU-Z listed that card’s default clocks as 650/900, and I mis-read those as the clocks it was actually running at.
Rather, Sapphire’s card was actually running the reference clocks. I’m uncertain why a 900MHz memory clock was ever referenced at all, but if I had to guess, the HD 5450 was originally meant to ship with 900MHz memory, and AMD backed it down to 800MHz at some point. Why, I’m unsure, as the card was absolutely rock-stable at 900MHz throughout all of our tests.
Whew, well there’s five paragraphs I wish I didn’t have to add to the front page of this review. Once again, we are benchmarking with the reference card, which has a 100MHz boost to its memory clock. For the sake of ease, we’ll benchmark the other two cards, including Sapphire’s reference-clocked model, with 3DMark Vantage only.
| Model | Stream Processors | Memory |
|-------|-------------------|--------|
| Radeon HD 5970 | 1600 x 2 | – |
| Radeon HD 5870 | 1600 | – |
| Radeon HD 5850 | 1440 | – |
| Radeon HD 5770 | 800 | – |
| Radeon HD 5750 | 720 | 512MB – 1GB |
| Radeon HD 5670 | 400 | 512MB – 1GB |
| Radeon HD 5570 | 400 | – |
| Radeon HD 5450 | 80 | 512MB – 1GB |
The HD 5450 is in all regards a very low-end card. It has just 5% of the cores of the HD 5870 (80 versus 1,600), a 64-bit memory bus, and the slowest clocks of the bunch. Unfortunately, I can’t reveal the HD 5570’s clocks at this time, but I can assure you that the card will be faster than the HD 5450 and slower than the HD 5670 (it’s reasons like these that I lack friends).
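To put that gap in perspective, here’s the core-count ratio spelled out (stream-processor counts are from AMD’s public specifications):

```python
# Compare stream-processor counts at the two ends of the HD 5000 line-up.
hd5870_cores = 1600  # Radeon HD 5870 ("Cypress")
hd5450_cores = 80    # Radeon HD 5450 ("Cedar")

ratio = hd5450_cores / hd5870_cores
print(f"The HD 5450 has {ratio:.0%} of the HD 5870's cores")  # prints "... 5% ..."
```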
I admit that for a card set to sell for around $50, AMD’s reference version has one of the coolest (no pun intended) passive coolers I’ve ever seen, if not the coolest. It’s unfortunate, though, because I’m not sure how widely adopted it will be. All of the HD 5450s I’ve seen from other vendors so far use their own cooler designs. Whether those are cool, or do a good job of cooling, we’ll soon see.
The reference card includes HDMI, DVI and VGA ports, and Gigabyte’s card below follows suit. Sapphire’s card sways from this just a wee bit by offering a DisplayPort connector in lieu of HDMI.
Like AMD itself, Sapphire has also opted to go with a completely passive design, except its card is much, much thinner, and doesn’t at all block other slots. The cooler seen below extends just a wee bit onto the back, about an inch in, but is still thin enough that it won’t bother a PCI card behind it, if there is one there.
Gigabyte follows similar goals as Sapphire by offering the thinnest card of the three. Here, the cooler doesn’t extend onto the back at all, but the trade-off is that a minuscule fan is included. The downside to this implementation is obvious… noise. But during our tests, even when stressing the GPU with OCCT, we heard barely enough noise to know the fan was working at all… it was almost silent, with no whine whatsoever.
Let’s move right into a look at our test methodology, and then get right to the results.