Old 02-03-2011, 02:11 AM   #1
Rob Williams
Editor-in-Chief
 
 
Join Date: Jan 2005
Location: Atlantic Canada
Posts: 13,347
Default AMD HD 6950 1GB vs. NVIDIA GTX 560 Ti Overclocking

AMD and NVIDIA released $250 GPUs last week, and both proved to deliver a major punch for modest cash. After testing, we found AMD to have a slight edge in overall performance, so to see if things change when overclocking is brought into the picture, we pushed both cards hard, and then pitted the results against our usual suite.

You can take a look at our full OCing results and then discuss the article here!
__________________
Intel Core i7-3960X, GIGABYTE G1.Assassin 2, Kingston 16GB DDR3-2133, NVIDIA GeForce GTX 770 2GB
Kingston HyperX 3K 240GB SSD (OS, Apps), WD VR 1TB (Games), Corsair 1000HX, Corsair H70 Cooler
Corsair 800D, Dell 2408WFP 24", ASUS Xonar Essence STX, Gentoo (KDE 4.11. 3.12 Kernel)

"Take care to get what you like, or you will be forced to like what you get!" - H.P. Baxxter
<Toad772> I don't always drink alcohol, but when I do, I take it too far.


Old 02-03-2011, 04:18 AM   #2
Relayer
E.M.I.
 
Join Date: Mar 2008
Location: New Zealand
Posts: 67
Default

Nice job, Rob. Interesting results as well, especially the power consumption. I was fully expecting the 6950, once overclocked, to draw more power than the 560. I also thought there would be a bigger difference in temps, given the current trend of NVIDIA's improved cooling.

Overall performance changes from one reviewer to the next depending on preferences and setup tendencies. Like you said, the features between the two are really the biggest differences.

Maybe in the future you could cover those differences in a bit more depth? Everyone mentions PhysX, but nobody actually tests with it on. Same with CUDA: nice feature, if you have software that uses it. Eyefinity? What kind of frame rates, and what level of eye candy, can you expect to be able to use effectively? Video encoding, etc...

P.S. Did you try to unlock the 6950? Not to use the figures in the review; just curious whether current models of the cards are still coming through with virtually all of them unlockable, or if they are starting to fuse them off physically, as some have rumored they will. Strike that. Forgot that the 1GB card doesn't have the dual-BIOS setup. My bad.

Last edited by Relayer; 02-03-2011 at 04:23 AM.
Old 02-06-2011, 05:17 PM   #3
Kougar
Techgage Staff
 
 
Join Date: Mar 2008
Location: Texas
Posts: 2,653
Default

The physical hardware behind PhysX/CUDA isn't a feature that gets "turned on" and causes the card to draw additional current. It's built into the core's processing units, so those parts of the core are always active even when no software is utilizing them. Only a small part of each shader group is devoted to processing these tasks, so PhysX/CUDA-related work won't amount to anything more than what the card draws for full 3D gaming.
__________________
Core i7 4770k 4.2Ghz
Gigabyte Z87X-UD5H
Crucial Ballistix Sport LP 1600MHz 32GB
EVGA GTX 480 HydroCopper FTW
ASUS Xonar DX
Corsair Neutron GTX 240GB | Windows 7 64-bit
Apogee XT + MCP655 & Thermochill Triple 140mm Radiator
Corsair AX1200 PSU | Cooler Master HAF-X

Old 02-07-2011, 12:38 AM   #4
Relayer
E.M.I.
 
Join Date: Mar 2008
Location: New Zealand
Posts: 67
Default

Quote:
Originally Posted by Kougar View Post
The physical hardware behind PhysX/CUDA isn't a feature that gets "turned on" and causes the card to draw additional current. It's built into the core's processing units, so those parts of the core are always active even when no software is utilizing them. Only a small part of each shader group is devoted to processing these tasks, so PhysX/CUDA-related work won't amount to anything more than what the card draws for full 3D gaming.
I mean using apps that use CUDA and showing the improved productivity possibilities. Game benchmarks with PhysX on to see frame rates. Two cards (when available, of course) in 3D Surround with PhysX. Is there enough GPU power to actually use those features together (3D, multi-monitor, and PhysX)? They are touted as reasons to buy NVIDIA cards, but rarely, if ever, are they tested (I've never seen them tested together). For AMD: Eyefinity benchmarks, and GPGPU tasks their cards can run.

I'm not expecting the cards to use more power when using these features.
Old 02-16-2011, 11:38 AM   #5
Kougar
Techgage Staff
 
 
Join Date: Mar 2008
Location: Texas
Posts: 2,653
Default

My apologies then as I misunderstood your question!

Probably the biggest reason you don't see 3D, multi-display, and PhysX combined is that 3D-capable monitors are uncommon and expensive... and I've yet to hear of any games that actually do 3D well. Although that may just be me, as I wasn't that impressed with most 3D TV and movies either. In any case, rare as they are, multiple displays in 3D would require more than one 3D-capable monitor. I may be mistaken, but 3D displays don't require additional GPU processing overhead either; they simply change how the final picture is displayed. At most this might result in a slight tweak in CPU driver overhead, but I don't believe so.

As for PhysX, it originally required a separate GPU dedicated to PhysX computing. As NVIDIA's drivers matured this generally stopped being a requirement, but it still requires 512MB of video RAM, with 1GB recommended, when a single GPU is used concurrently for both purposes. But the real reason it isn't tested is simply that the gains from overclocking wouldn't be noticed. The PhysX games I am aware of put a fairly low upper limit on how many physics objects get created; with a high-end GPU there are still only going to be so many boxes or explosion effects, regardless of what the card is capable of. I am not aware of any games where PhysX is a bottleneck unless it's an old or very cut-down GPU in question...?

In GPU overclocking tests I've conducted with a CUDA-based folding program, the increase in raw computation performance scaled about the same as FPS would. The actual GPU core architecture (what is enabled/disabled, number of shaders, shader core design, etc.) per model will still have by far the largest impact on performance regardless; any GPU overclocking will just be a little extra headroom in the end.
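A first-order sketch of that scaling claim (the clock values here are hypothetical examples, and it assumes the workload is purely compute-bound, so throughput tracks core clock linearly):

```python
def oc_speedup(base_clock_mhz, oc_clock_mhz):
    """Estimate the speedup of a compute-bound CUDA workload
    (e.g. folding) from a core overclock.

    Assumes throughput scales linearly with core clock, the same
    way fill-rate-bound FPS roughly does; the architecture itself
    (shader count, core design) still sets the baseline.
    """
    return oc_clock_mhz / base_clock_mhz

# Hypothetical example: an 800 MHz core pushed to 900 MHz gives
# about 12.5% more throughput -- headroom, not a new tier of card.
print(f"{(oc_speedup(800, 900) - 1) * 100:.1f}%")
```

The point of the estimate matches the post above: overclocking buys a modest percentage, while the core configuration dominates.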

Old 02-19-2011, 05:23 PM   #6
Relayer
E.M.I.
 
Join Date: Mar 2008
Location: New Zealand
Posts: 67
Default

I wasn't aware that there was no performance hit for 3D. I thought that each frame had to be rendered twice and would therefore incur a performance hit. Thanks for clearing that up for me. PhysX does incur a performance hit, and I think it would be useful to see what kind of frame rates you should expect with it on, running a single card, seeing as that's how ~90% of people would use it. I don't understand the reference to overclocking as a reason not to test certain features? Typically, except for one page, most reviews concentrate on stock-clock tests.

I guess I feel that if these features aren't worth testing, then we shouldn't tell people to base their buying decisions on whether a card has them or not. Don't tout CUDA as a benefit and then not show a single app where you can expect a performance increase from it.
Old 02-20-2011, 02:44 AM   #7
Tharic-Nar
Techgage Staff
 
 
Join Date: Nov 2009
Location: UK
Posts: 1,166
Default

Actually, Kougar is incorrect this time (sorry). 3D does incur a performance loss of ~50%, sometimes a little more, because the scene has to be rendered twice from two different angles, plus overhead.
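As a back-of-the-envelope sketch of that cost (a hypothetical helper; the 5% overhead term is an assumed figure for the "sometimes a little more" part, not a measured one):

```python
def stereo_fps(mono_fps, overhead_frac=0.05):
    """Estimate stereo 3D frame rate from a mono baseline.

    Stereo rendering draws the scene twice, once per eye, so the
    cost roughly doubles; overhead_frac models the extra driver
    and compositing work on top of the two render passes.
    """
    return mono_fps / (2.0 * (1.0 + overhead_frac))

# A game running at 60 FPS in mono drops to a bit under half
# of that once both eyes are rendered each frame.
print(round(stereo_fps(60.0), 1))
```

Under these assumptions a 60 FPS title lands around 28-29 FPS in stereo, which is why the hit reads as "~50%, sometimes a little more."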

Rob can't see in 3D, so he can't test it as part of the suite, since he wouldn't know if it was working properly. Additionally, there are only 15 officially supported 3D Vision games; the rest are tricked into working via the NVIDIA drivers, so results will vary. Full list available here.

As for CUDA processing, this is a very complicated problem. We've constantly been going back and forth over including tests, but there are a number of roadblocks, as it were. First of all, the technology is still immature. A number of apps can make use of GPGPU processing, but they're often hard to benchmark.

With something like video encoding, you are restricted by the codecs that can be used, and the quality results are terrible in comparison to x86 (as Anand showed with their Quick Sync test on Sandy Bridge). Ray-tracing render engines using CUDA have very limited feature support; they lack various processing methods and are completely useless for final renders. Photoshop doesn't use CUDA, but OpenGL, for its processing. There's folding, but that again is rather limited and constantly changing, which is the second major problem.

For reliable benchmarks, we need fixed metrics and tests. Since CUDA is in a constant state of flux, it's very hard to get accurate and predictable results; software patches and efficiency improvements would constantly render our numbers null. What accounted for an increase in performance: the card, the drivers, or the software? We would need to retest all cards with new drivers on the latest software, largely for the benefit of a very small niche.

Tests could be done, but they would be very intermittent and probably not part of the regular test suite, which is how we normally handle such features. We have done PhysX tests, but again as separate articles, such as with Mafia II. Sure, we could do more tests in the future, but it's a matter of finding a game that makes real use of PhysX instead of just the odd bit of extra flying gravel... (Crysis 2, perhaps?).
Old 03-26-2011, 11:28 PM   #8
Unregistered
Guest Poster
 
Posts: n/a
Default brands used

I'm interested in knowing what brand and version of HD 6950 1GB was used in that review? After overclocking, it seems it was able to keep pace with the GTX 580, which is very impressive.
Old 03-27-2011, 01:41 AM   #9
Rob Williams
Editor-in-Chief
 
 
Join Date: Jan 2005
Location: Atlantic Canada
Posts: 13,347
Default

Quote:
Originally Posted by Unregistered View Post
I'm interested in knowing what brand and version of HD 6950 1GB was used in that review? After overclocking, it seems it was able to keep pace with the GTX 580, which is very impressive.
Unless I mention a brand directly, you can assume that the board being tested is reference :-) I'll make sure this is obvious in the future.

What's nice is that the cooler on the reference HD 6950 was rather standard, so any third-party cooler on the market should be able to deliver much improved cooling and overclocking headroom. That's the reason I didn't even include a photo of the card... the cooler used was generic and not too attractive ;-)