
Gigabyte GeForce GTX 260 Super Overclock

Date: October 6, 2009
Author(s): Rob Williams

NVIDIA’s GeForce GTX 260 is not a new card. In fact, it’s been available for over a year, and in its 216-core form for almost that long. So is it even worth a look today? Where Gigabyte’s “Super Overclock” version is concerned, yes. Although it costs less than a stock GTX 275, this new card beat it out in almost every single game and setting we put it through.



Introduction

Earlier this year, I posted a review of Sapphire’s Radeon HD 4870 Vapor-X graphics card, and at the time, I felt weird for doing it. The reason was simple. At that point in time, the HD 4870 wasn’t a new card. In fact, the HD 4890 was released just a month prior. So what was the point? At the time, ATI’s pricing structure was varied and Sapphire’s card featured 2GB of GDDR5, so, despite being based on a ten-month-old design, it was still relevant.

I had a bit of deja vu when Gigabyte contacted me a few weeks ago to take their latest GeForce GTX 260 for a spin. I immediately thought, “What the…”, because let’s face it, the GTX 260 is not a new card. The first iteration of the card came to market last June, while the revision, featuring 216 Shader cores, came in early October. So as it stands, the card we’re looking at today is essentially based on a model which was launched almost exactly a year ago.

So how is a review of this year-old card justified today? Like the aforementioned Sapphire review, the biggest part of the card’s relevancy again comes down to pricing. This particular model sells for $199, which puts it between all other GTX 260 cards and the GTX 275. Gigabyte justifies the price hike by offering the highest-clocked GTX 260 on the market, with 680MHz Core, 1500MHz Shader and 2500MHz memory clock speeds. With clocks such as these, the card looks to overtake, or at least match, the performance of the more-expensive GTX 275.

Closer Look at Gigabyte’s GeForce GTX 260 Super Overclock

Welcome to Gigabyte’s “Super Overclock” series. As the name suggests, cards belonging to this series ship with higher clock speeds than NVIDIA’s reference. In the case of the reference GTX 260, the clocks are 576MHz Core, 1242MHz Shader and 1998MHz Memory. As the specs mentioned above show, Gigabyte has increased the clocks enough to make the card real competition for the next model up in NVIDIA’s line-up, the GTX 275, despite that card featuring more Shader cores overall (240 vs. 216).
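To put the factory overclock in perspective, here’s a quick back-of-the-envelope calculation of how far each clock sits above reference – a minimal sketch, with the figures taken straight from the specs above:

```python
# Gigabyte GTX 260 Super Overclock vs. NVIDIA reference clocks (MHz),
# using the figures quoted above.
reference = {"core": 576, "shader": 1242, "memory": 1998}
super_oc = {"core": 680, "shader": 1500, "memory": 2500}

for domain in reference:
    gain = (super_oc[domain] / reference[domain] - 1) * 100
    print(f"{domain:>6}: {reference[domain]} -> {super_oc[domain]} MHz (+{gain:.1f}%)")

# core:   576 -> 680 MHz (+18.1%)
# shader: 1242 -> 1500 MHz (+20.8%)
# memory: 1998 -> 2500 MHz (+25.1%)
```

In other words, the memory receives the largest bump, while even the smallest increase (the core) is an 18% jump – enormous for a factory overclock.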

In addition to featuring higher-than-reference clock speeds, Super Overclock cards go through extra care before earning their name. In what the company calls the “GPU Gauntlet” – essentially a cherry-picking process – Super Overclock cards are put through stringent testing to assure that they are the best available, not only from a performance perspective, but also to verify top-rate thermal efficiency and power switching capabilities.

As Gigabyte’s highest-end graphics series, these cards in turn include the company’s “Ultra Durable VGA” set of high-grade components: carefully screened Samsung or Hynix memory chips, Japanese solid capacitors, ferrite-core metal chokes, low RDS(on) MOSFETs and, not surprisingly, a PCB built with a 2oz copper inner layer. All of this might not mean much to most people, but it adds up to a high-quality product – one that’s stable, runs cool and is very, very overclockable.

Currently, NVIDIA’s graphics card line-up is rather slim, with six real choices overall. The last-generation 9800 GT sits at or under $100, putting it in a different category from the others. The GTS 250 starts at around $125, and the single-GPU range tops out with the GTX 285, which sells for just over $300. At this point in time, we wouldn’t recommend the GTX 295 for purchase, as AMD’s Radeon HD 5870 X2 is right around the corner and will likely carry a similar cost. Even if that’s not the case, given the sheer improvement current HD 5000 cards show over previous generations, it should theoretically be twice as fast as the GTX 295.

| Model | Core MHz | Shader MHz | Mem MHz | Memory | Bus Width | Processors |
|-------|----------|------------|---------|--------|-----------|------------|
| GeForce GTX 295 | 576 | 1242 | 1000 | 1792MB | 448-bit | 480 |
| GeForce GTX 285 | 648 | 1476 | 1242 | 1GB | 512-bit | 240 |
| GeForce GTX 275 | 633 | 1404 | 1134 | 896MB | 448-bit | 240 |
| GeForce GTX 260 | 576 | 1242 | 999 | 896MB | 448-bit | 216 |
| GeForce GTS 250 | 738 | 1836 | 1100 | 1GB | 256-bit | 128 |
| GeForce 9800 GT | 600 | 1500 | 900 | 512MB | 256-bit | 112 |

Enough about the other cards, though; on to the one on our test bench today. As you can see below, although Gigabyte has pushed each of the card’s clocks far beyond reference speeds, it’s opted to stick with the reference cooler. This design is more than capable of keeping the card cool and allowing good overclocking headroom, but it would have been nice to see a custom cooler, given that this is a special series.

At the back, we have the option of DVI, HDMI or VGA. If you want to use more than one monitor and neither uses HDMI, Gigabyte includes an HDMI-to-DVI adapter in the box that should remedy the problem.

Just like the rest of the current mid-range cards from both ATI and NVIDIA, the GTX 260 sticks to a 2x PCI-E 6-pin power setup. As the card carries a TDP of 182W, NVIDIA recommends a power supply of at least 500W, although for a pre-overclocked card such as this, that recommendation would likely rise to 550W.

Throughout our testing results, one thing we want to pay attention to is how this card compares to the slightly more expensive (~$25) GTX 275. Our GTX 260 has 24 fewer Shader cores, but its ultra-high clocks may well negate that loss. On the following page, we’ll go over our testing methodology and machine specs, and then we’ll get right into our results, kicking things off with Call of Duty: World at War.

Test System & Methodology

At Techgage, we strive to make sure our results are as accurate as possible. Our testing is rigorous and time-consuming, but we feel the effort is worth it. In an attempt to leave no question unanswered, this page contains not only our testbed specifications, but also a fully-detailed look at how we conduct our testing. For an exhaustive look at our methodologies, even down to the Windows Vista installation, please refer to this article.

Test Machine

The below table lists our testing machine’s hardware, which remains unchanged throughout all GPU testing, minus the graphics card. Each card used for comparison is also listed here, along with the driver version used. Each one of the URLs in this table can be clicked to view the respective review of that product, or if a review doesn’t exist, it will bring you to the product on the manufacturer’s website.

| Component | Model |
|-----------|-------|
| Processor | Intel Core i7-975 Extreme Edition – Quad-Core, 3.33GHz, 1.33v |
| Motherboard | Gigabyte GA-EX58-EXTREME – X58-based, F7 BIOS (05/11/09) |
| Memory | Corsair DOMINATOR – DDR3-1333 7-7-7-24-1T, 1.60v |
| ATI Graphics | Radeon HD 4890 1GB (Sapphire) – Catalyst 9.8 |
| | Radeon HD 4870 1GB (Reference) – Catalyst 9.8 |
| | Radeon HD 4770 512MB (Gigabyte) – Catalyst 9.8 |
| NVIDIA Graphics | GeForce GTX 295 1792MB (Reference) – GeForce 186.18 |
| | GeForce GTX 285 1GB (EVGA) – GeForce 186.18 |
| | GeForce GTX 275 896MB (Reference) – GeForce 186.18 |
| | GeForce GTX 260 896MB (Gigabyte Super OC) – GeForce 190.62 |
| | GeForce GTX 260 896MB (XFX) – GeForce 186.18 |
| | GeForce GTS 250 1GB (EVGA) – GeForce 186.18 |
| Audio | On-Board Audio |
| Storage | Seagate Barracuda 500GB 7200.11 |
| Power Supply | Corsair HX1000W |
| Chassis | SilverStone TJ10 Full-Tower |
| Display | Gateway XHD3000 30″ |
| Cooling | Thermalright TRUE Black 120 |
| Et cetera | Windows Vista Ultimate 64-bit |

When preparing our testbeds for any type of performance testing, we follow these guidelines:

To aid our goal of accurate and repeatable results, we prevent certain services in Windows Vista from starting up at boot. These services have a tendency to start up in the background without notice, potentially skewing results. Disabling “Windows Search”, for example, turns off the OS’ indexing, which can at times utilize the hard drive and memory more than we’d like.
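As an illustration (and not our exact procedure), this sort of tweak can be scripted. The sketch below assumes Vista’s stock service name for Windows Search, WSearch, and an elevated command prompt:

```python
# Illustrative sketch: disable the Windows Search service so background
# indexing can't skew benchmark runs. Assumes the stock Vista service name
# "WSearch" and an elevated (administrator) prompt.
import subprocess

# Set the service's startup type to disabled so it won't return at boot...
subprocess.run(["sc", "config", "WSearch", "start=", "disabled"], check=True)

# ...and stop it immediately for the current session.
subprocess.run(["sc", "stop", "WSearch"], check=True)
```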

For more robust information on how we tweak Windows, please refer once again to this article.

Game Titles

We benchmark all of our games at three popular resolutions: 1680×1050, 1920×1080 and 2560×1600. 1680×1050 was chosen as it’s one of the most popular resolutions among gamers with ~20″ displays. 1920×1080 might stand out, since we’ve always used 1920×1200 in the past, but we didn’t make this change without some serious thought. After taking a look at the current landscape for ~24″ desktop monitors, we noticed that 1920×1200 is definitely on the way out, with more and more models arriving as native 1080p, so we made the switch. Finally, for high-end gamers, we also benchmark at 2560×1600, a resolution with just about 2x the pixels of 1080p.
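That “2x” figure is simple pixel arithmetic, for anyone curious – a quick sketch:

```python
# Pixel counts for our three test resolutions.
for w, h in [(1680, 1050), (1920, 1080), (2560, 1600)]:
    print(f"{w}x{h}: {w * h:,} pixels")

# 1680x1050: 1,764,000 pixels
# 1920x1080: 2,073,600 pixels
# 2560x1600: 4,096,000 pixels -> ~1.98x the pixels of 1080p
```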

For graphics cards that include less than 1GB of GDDR, we omit Grand Theft Auto IV from our testing, as our chosen detail settings require at least 800MB of available graphics memory. Also, if the card we’re benchmarking doesn’t offer the performance to handle 2560×1600 across most of our titles reliably, only 1680×1050 and 1920×1080 will be utilized.

Because we value results generated by real-world testing, we don’t utilize timedemos whatsoever. The possible exception is Futuremark’s 3DMark Vantage. Though it’s not a game, it essentially acts as a robust timedemo. We use it because it’s a standard where GPU reviews are concerned, and we don’t want to deprive our readers of results they expect to see.

All of our results are captured with the help of Beepa’s FRAPS 2.98, while stress-testing and temperature-monitoring are handled by OCCT 3.1.0 and GPU-Z, respectively.
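For anyone wondering how the minimum and average FPS figures in our tables fall out of a capture, the sketch below shows one way to derive them from a per-frame timestamp log (the CSV layout here is an assumption for illustration; FRAPS also reports these figures directly):

```python
# Derive avg/min FPS from a log of per-frame timestamps in milliseconds.
# Assumed layout: a header row, then one row per frame as "frame, time_ms".
import csv

def fps_stats(log_path):
    with open(log_path) as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        times_ms = [float(row[1]) for row in reader]

    # Average FPS: frame intervals divided into the total run duration.
    duration_s = (times_ms[-1] - times_ms[0]) / 1000.0
    avg_fps = (len(times_ms) - 1) / duration_s

    # Minimum FPS: the one-second window containing the fewest frames.
    frames_per_second = {}
    for t in times_ms:
        second = int(t // 1000)
        frames_per_second[second] = frames_per_second.get(second, 0) + 1
    return min(frames_per_second.values()), avg_fps

min_fps, avg_fps = fps_stats("frametimes.csv")
print(f"Min: {min_fps} FPS, Avg: {avg_fps:.3f} FPS")
```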

Call of Duty: World at War

Call of Juarez: Bound in Blood

Crysis Warhead

F.E.A.R. 2: Project Origin

Grand Theft Auto IV

Race Driver: GRID

World in Conflict: Soviet Assault

Call of Duty: World at War

The Call of Duty series is one that needs no introduction. Although only six years old, CoD has already attained real stature where both single-player and multi-player first-person shooters are concerned. From the series’ inception, each game has delivered stellar gameplay that totally engrosses you, thanks in part to creative levels, smart AI and realistic graphics.

World at War is officially the fifth game in the series, and while some hardcore fans claim that Treyarch is simply unable to deliver as high-caliber a game as Infinity Ward, the title does well to hold everyone over until Modern Warfare 2 hits (November 10, 2009). One perk is that World at War focuses on battles not exhausted in other war games, which helps keep things fresh.

Manual Run-through: The level chosen for our testing is “Relentless”, which depicts the Battle of Peleliu, in which American soldiers advance to capture an airstrip from the Japanese. The level is both exciting to play and incredibly hard on your graphics hardware, making it a perfect choice for our testing.

Given the pre-overclocked specs of Gigabyte’s card, it seemed obvious that it could keep up with the GTX 275, but so far, it’s actually managed to surpass its performance as well. Compared to a stock-clocked GTX 260, the Super Overclock proved 20% faster at 2560×1600. So what about its best playable settings?

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---------------|---------------|---------|----------|
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail, 4xAA | 22 | 61.988 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA | 24 | 41.563 |
| NVIDIA GTX 260 896MB (GBT SOC) | 2560×1600 – Max Detail, 4xAA | 22 | 39.849 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 4xAA | 22 | 39.187 |
| ATI HD 4890 1GB (Sapphire) | 2560×1600 – Max Detail, 0xAA | 21 | 42.778 |
| ATI HD 4870 1GB (Reference) | 2560×1600 – Normal Detail, 0xAA | 23 | 42.097 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Normal Detail, 0xAA | 20 | 38.685 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Normal Detail, 0xAA | 19 | 37.054 |
| ATI HD 4770 512MB (Gigabyte) | 1920×1080 – Max Detail, 4xAA | 19 | 36.639 |

Well, not surprisingly, 40 FPS is completely playable, so 4xAA at 2560×1600 is our choice. On a stock-clocked card, the highest detail settings are a bit too much, so Gigabyte has managed to take us up to a whole new tier… something rarely seen from pre-overclocked cards.

Call of Juarez: Bound in Blood

When the original Call of Juarez was released, it brought forth something unique… a western-styled first-person shooter. That’s simply not something we see too often, so for fans of the genre, its release was a real treat. Although it didn’t offer the best gameplay we’ve seen from a recent FPS title, its storyline and unique style made it well worth testing.

After we retired the original title from our suite, we anxiously awaited the sequel, Bound in Blood, in hopes that the series could be re-introduced into our testing. Thankfully, it could, thanks in part to its fantastic graphics, built on Chrome Engine 4, and gameplay improved over the original’s. It was also well-received by game reviewers, which is always a good sign.

Manual Run-through: The level chosen here is Chapter I, and our starting point is about 15 minutes into the mission, where we stand atop a hill that overlooks a large river. We make our way across the hill and ultimately through a large trench, and we stop our benchmarking run shortly after we blow up a gas-filled barrel.

Similar to our Call of Duty: World at War results on the previous page, Gigabyte’s Super Overclock card kept in front of the GTX 275. Just barely, mind you, but successfully.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---------------|---------------|---------|----------|
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail | 37 | 80.339 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail | 45 | 54.428 |
| NVIDIA GTX 260 896MB (GBT SOC) | 2560×1600 – Max Detail | 42 | 51.982 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail | 41 | 51.393 |
| ATI HD 4890 1GB (Sapphire) | 2560×1600 – Max Detail | 36 | 51.334 |
| ATI HD 4870 1GB (Reference) | 2560×1600 – Max Detail | 31 | 46.259 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail | 35 | 44.023 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail | 25 | 33.751 |
| ATI HD 4770 512MB (Gigabyte) | 2560×1600 – Normal Detail | 24 | 35.434 |

CoJ: Bound in Blood runs well on pretty much all hardware, so you’ll have no problem cranking up the detail settings on this card, or for that matter, even the GTS 250, which is impressive in itself. Generally speaking, the better the card, the smoother the gameplay.

Crysis Warhead

Like Call of Duty, Crysis is another series that doesn’t need much of an introduction. Thanks to the fact that almost any comments section for a PC performance-related article asks, “Can it run Crysis?”, even those who don’t play computer games no doubt know what Crysis is. When Crytek first released Far Cry, it delivered an incredible game engine with huge capabilities, and Crysis simply took things to the next level.

Although the sequel, Warhead, has been available for just about a year, it still manages to push the highest-end systems to their breaking point. It wasn’t until this past January that we finally found a graphics solution able to handle the game at 2560×1600 at its Enthusiast level, and even that was without AA! Something tells me Crysis will remain the de facto GPU benchmark for a while yet.

Manual Run-through: Whenever we have a new game in hand for benchmarking, we make every attempt to explore each of its levels to find out which is the most brutal on our hardware. Funnily enough, after spending hours exploring this game’s levels, we found the first level, “Ambush”, to be the hardest on the GPU, so we stuck with it for our testing. Our run starts at the beginning of the level and stops shortly after we reach the first bridge.

I sense a theme brewing here. Like clockwork, the Super Overclock card pulled just ahead of the GTX 275. The differences are unbelievably minor, but they’re very consistent thus far.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---------------|---------------|---------|----------|
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Gamer, 0xAA | 19 | 40.381 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Mainstream, 0xAA | 27 | 50.073 |
| NVIDIA GTX 260 896MB (GBT SOC) | 2560×1600 – Mainstream, 0xAA | 26 | 48.789 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Mainstream, 0xAA | 24 | 47.758 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Mainstream, 0xAA | 21 | 40.501 |
| ATI HD 4890 1GB (Sapphire) | 2560×1600 – Mainstream, 0xAA | 19 | 39.096 |
| ATI HD 4870 1GB (Reference) | 2560×1600 – Mainstream, 0xAA | 20 | 35.257 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Mainstream, 0xAA | 18 | 34.475 |
| ATI HD 4770 512MB (Gigabyte) | 1920×1080 – Mainstream, 0xAA | 19 | 46.856 |

Crysis is one of those few games that struggle on just about any machine, and until ATI adds CrossFireX support for its HD 5870s, running the game really well at 2560×1600 on the Enthusiast profile is tough to pull off. So, to retain gameplay that’s actually enjoyable, we bump the profile down to Mainstream – and the card still manages to pull ahead of the GTX 275.

F.E.A.R. 2: Project Origin

Five out of the seven current games we use for testing are either sequels, or titles in an established series. F.E.A.R. 2 is one of the former, following up on the very popular First Encounter Assault Recon, released in fall of 2005. This horror-based first-person shooter brought to the table fantastic graphics, ultra-smooth gameplay, the ability to blow massive chunks out of anything, and also a very fun multi-player mode.

Three-and-a-half years later, we saw the introduction of the game’s sequel, Project Origin. As we had hoped, this title improved on the original where gameplay and graphics were concerned, and it was a no-brainer to include it in our testing. The game is gorgeous, and there’s much destruction to be had (who doesn’t love blowing expensive vases to pieces?). It’s also rather heavily scripted, which aids in producing repeatable benchmark results.

Manual Run-through: The level used for our testing here is the first in the game, about ten minutes in. The scene begins with a ride up an elevator, a robust city landscape behind us. Our run-through begins with a quick look at this cityscape, and then we proceed through the level until we reach the far door.

Gigabyte’s Super Overclock has consistently remained ahead of the GTX 275 in every game and resolution so far, but a noticeable trend is that the overall gain shrinks as the resolution increases. Still, Gigabyte’s card costs less than a stock-clocked GTX 275. Faster performance for less money? No reasonable person would complain about that.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---------------|---------------|---------|----------|
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 45 | 95.767 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA, 16xAF | 39 | 62.014 |
| NVIDIA GTX 260 896MB (GBT SOC) | 2560×1600 – Max Detail, 4xAA, 16xAF | 18 | 57.418 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 37 | 57.266 |
| ATI HD 4890 1GB (Sapphire) | 2560×1600 – Max Detail, 4xAA, 16xAF | 38 | 56.726 |
| ATI HD 4870 1GB (Reference) | 2560×1600 – Max Detail, 4xAA, 16xAF | 34 | 50.555 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail, 4xAA, 16xAF | 29 | 48.110 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA, 16xAF | 24 | 36.331 |
| ATI HD 4770 512MB (Gigabyte) | 2560×1600 – Normal Detail, 0xAA, 4xAF | 30 | 43.215 |

Once again, the best playable setting is the same one we used for our 2560×1600 testing above, with 4xAA and 16xAF. One oddity is that Gigabyte’s card lagged a bit in minimum FPS, caused by a longer pause during an automatic save. There was no such issue in our previous GPU runs, so I’m unsure whether it was tied to the card or the game. Either way, it only occurred at this save point, and it’s not really noticeable, since a lag spike is expected during an auto-save anyway.

Grand Theft Auto IV

If you look up the definition of “controversy”, Grand Theft Auto should be listed. If it’s not, that should be a crime, because throughout GTA’s many titles, there’s been more of it than you can shake your fist at. At the series’ beginning, the games were rather simple, and didn’t stir up too much passion in their detractors. But once GTA III and its successors came along, the developers embraced all the controversy that came their way, and why not? It helped spur incredible sales numbers.

Grand Theft Auto IV is yet another continuation of the series, though it follows no storyline from the previous titles. Liberty City, loosely based on New York City, is absolutely huge, with much to explore – so much so that you could literally spend hours just wandering around, ignoring the game’s missions, if you wanted to. It also happens to be incredibly stressful on today’s computer hardware, similar to Crysis.

Manual Run-through: After the first minor mission in the game, you reach an apartment. Our benchmarking run starts from within this room. From here, we run out the door, down the stairs and into an awaiting car. We then follow a specific path through the city, driving for about three minutes total.

Despite the fact that GTA IV is an absolute glutton when it comes to system resources, increasing the resolution makes little difference to overall performance, except on cards with lower amounts of memory. This game in particular could actually see an improvement with 2GB of GDDR, as the High and Very High texture settings really demand lots of memory.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---------------|---------------|---------|----------|
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – H/H/VH/H/VH Detail | 27 | 52.590 |
| NVIDIA GTX 260 896MB (GBT SOC) | 2560×1600 – High Detail | 30 | 46.122 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – High Detail | 32 | 45.573 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – High Detail | 30 | 44.703 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – High Detail | 24 | 38.492 |
| ATI HD 4890 1GB (Sapphire) | 1920×1080 – High Detail | 32 | 50.300 |
| ATI HD 4870 1GB (Reference) | 1920×1080 – High Detail | 33 | 48.738 |
| NVIDIA GTS 250 1GB (EVGA) | 1920×1080 – High Detail | 21 | 34.257 |

Although this game feels somewhat sluggish even when it’s running well, I found the gameplay to be more than playable with our 2560×1600 settings. The only card so far able to go even higher has been the dual-GPU GTX 295.

Race Driver: GRID

If you primarily play games on a console, your choices for quality racing games are plentiful. On the PC, that’s not so much the case. While there are a good number, there aren’t many of any given type, from sim to arcade. So when Race Driver: GRID first saw its release, many gamers were excited, and for good reason. It’s not a sim in the truest sense of the word, but it’s certainly not arcade, either. It’s somewhere in between.

The game happens to be great fun, though, and similar to console titles like Project Gotham Racing, you need a lot of skill to succeed at the default difficulty level. And like most great racing games, GRID looks absolutely stellar, with each of its locations closely resembling its real-world counterpart. All in all, no racing fan should ignore this one.

Manual Run-through: For our testing here, we choose the city where both Snoop Dogg and Sublime found fame: the LBC, also known as Long Beach City. We choose this level because it’s not overly difficult, and also because it’s simply nice to look at. Our run consists of an entire two-lap race, with the cars behind us for almost the entire race.

GRID is the first game in our line-up where the GTX 275 managed to pull ahead in performance – except at 2560×1600, where the Super Overclock regained its crown. ATI’s HD 4890, costing roughly the same as the Super Overclock, put both cards to shame, however.

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---------------|---------------|---------|----------|
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail, 4xAA | 82 | 101.690 |
| ATI HD 4890 1GB (Sapphire) | 2560×1600 – Max Detail, 4xAA | 57 | 70.797 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA | 54 | 66.042 |
| NVIDIA GTX 260 896MB (GBT SOC) | 2560×1600 – Max Detail, 4xAA | 47 | 63.897 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 4xAA | 52 | 63.617 |
| ATI HD 4870 1GB (Reference) | 2560×1600 – Max Detail, 4xAA | 51 | 63.412 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail, 4xAA | 45 | 54.809 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail, 4xAA | 35 | 43.663 |
| ATI HD 4770 512MB (Gigabyte) | 1920×1080 – Max Detail, 4xAA | 55 | 69.403 |

If the GTX 275 could talk, it would no doubt whine that the GTX 260 Super Overclock managed to beat it out by a meager 0.28 FPS at 2560×1600, but it is what it is. As long as you have a decent GPU, max detail settings are fine in this game, and the Super Overclock is no exception… it runs flawlessly at 2560×1600.

World in Conflict: Soviet Assault

I admit that I’m not a huge fan of RTS titles, but World in Conflict intrigued me from the get-go. After all, so many war-based games continue to follow the same storylines we already know, and WiC was different. It sidesteps the real-world political and economic collapse of the Soviet Union in the late ’80s, and instead offers a storyline in which the USSR avoids that fate by going to war to remain in power.

Many RTS games, with their advanced AI, tend to lean on the CPU to deliver smooth gameplay, but WiC leans on both the CPU and GPU, and the graphics prove it. Throughout the game’s missions, you’ll see gorgeous vistas and explore areas ranging from deserts and snow-packed lands to fields and cities. Overall, it’s a real treat for the eyes – especially since you’re able to zoom down to the ground and see the action up close.

Manual Run-through: The level we use for testing is the 7th campaign of the game, called Insurgents. Our saved game plants us towards the beginning of the mission with two squads of five, and two snipers. The run consists of bringing our men to action, and hovering the camera around throughout the duration. The entire run lasts between three and four minutes.

What a way to wrap up our real-world game results… the exact same way most of our results have been up to this point!

| Graphics Card | Best Playable | Min FPS | Avg. FPS |
|---------------|---------------|---------|----------|
| NVIDIA GTX 295 1792MB (Reference) | 2560×1600 – Max Detail, 8xAA, 16xAF | 40 | 55.819 |
| NVIDIA GTX 285 1GB (EVGA) | 2560×1600 – Max Detail, 0xAA, 16xAF | 34 | 49.514 |
| NVIDIA GTX 260 896MB (GBT SOC) | 2560×1600 – Max Detail, 0xAA, 16xAF | 37 | 48.101 |
| NVIDIA GTX 275 896MB (Reference) | 2560×1600 – Max Detail, 0xAA, 16xAF | 36 | 46.186 |
| ATI HD 4890 1GB (Sapphire) | 2560×1600 – Max Detail, 0xAA, 16xAF | 31 | 46.175 |
| NVIDIA GTX 260 896MB (XFX) | 2560×1600 – Max Detail, 0xAA, 16xAF | 23 | 39.365 |
| ATI HD 4870 1GB (Reference) | 2560×1600 – Max Detail, 0xAA, 16xAF | 22 | 30.027 |
| NVIDIA GTS 250 1GB (EVGA) | 2560×1600 – Max Detail, 0xAA, 4xAF | 24 | 32.453 |
| ATI HD 4770 512MB (Gigabyte) | 1920×1080 – Max Detail, 4xAA, 16xAF | 22 | 31.561 |

Although the game is playable with 4xAA at 2560×1600, the gameplay is much smoother, and more enjoyable, with anti-aliasing removed entirely. This is also one game where AA isn’t too noticeable in the heat of battle, so it’s hardly missed.

Futuremark 3DMark Vantage

Although we generally shun automated gaming benchmarks, we do like to run at least one to see how our GPUs scale when used in a ‘timedemo’-type scenario. Futuremark’s 3DMark Vantage is without question the best such test on the market, and it’s a joy to use, and watch. The folks at Futuremark are experts in what they do, and they really know how to push that hardware of yours to its limit.

The company first started out as MadOnion and released a GPU-benchmarking tool called XLR8R, which was soon replaced with 3DMark 99. Since that time, we’ve seen seven different versions of the software, including two major updates (3DMark 99 Max, 3DMark 2001 SE). With each new release, the graphics get better, the capabilities get better and the sudden hit of ambition to get down and dirty with overclocking comes at you fast.

Similar to a real game, 3DMark Vantage offers many configuration options, although many (including us) prefer to stick to the profiles which include Performance, High and Extreme. Depending on which one you choose, the graphic options are tweaked accordingly, as well as the resolution. As you’d expect, the better the profile, the more intensive the test.

Performance is the stock mode that most use when benchmarking, but it only uses a resolution of 1280×1024, which isn’t representative of today’s gamers. Extreme is more appropriate, as it runs at 1920×1200 and does well to push any single or multi-GPU configuration currently on the market – and will do so for some time to come.

3DMark Vantage is a highly scalable benchmark, taking full advantage of all available shader cores along with copious amounts of memory. Given that, our results scale each card fairly accurately against its real-world performance.

Overclocking the GTX 260 Super Overclock

Before tackling our overclocking results, let’s first clear up what we consider to be a real overclock and how we go about achieving it. If you read our processor reviews, you might already be aware that we don’t care too much for an unstable overclock. It might look good on paper, but if it’s not stable, then it won’t be used. Very few people purchase a new GPU for the sole purpose of finding the maximum overclock, which is why we focus on finding what’s stable and usable.

To find the max stable overclock on an ATI card, we stick to ATI’s Catalyst Overdrive tool. Compared to what’s available on the NVIDIA side, it’s quite limited at the top end, but it’s the simplest and most robust solution available. For NVIDIA, we use EVGA’s Precision, which allows us to reach heights that are in no way sane – a good thing.

Once we find what we believe to be a stable overclock, the card is put through 30 minutes of torture with OCCT 3.0’s GPU stress-test, which we find pushes graphics cards harder than any other stress-tester we’ve used. If the card passes there, we verify further with two runs of 3DMark Vantage at its Extreme setting. Finally, games are quickly loaded and tested to assure we haven’t introduced any side-effects.

If all these tests pass without issue, we consider the overclock to be stable.
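Summed up as pseudocode, the whole gauntlet looks something like this. The helpers are hypothetical stand-ins for the manual steps described above, not real tools or APIs:

```python
# Hypothetical sketch of our stability gauntlet. Each helper stands in for a
# manual step and would return True only if its test finished without
# crashes, errors or visual artifacts.

def run_occt_gpu_test(minutes):      # placeholder: OCCT's GPU stress-test
    raise NotImplementedError("manual step")

def run_3dmark_vantage(profile):     # placeholder: one full benchmark run
    raise NotImplementedError("manual step")

def spot_check_games(titles):        # placeholder: hands-on game checks
    raise NotImplementedError("manual step")

def is_stable():
    """True only if the currently applied clocks survive the full gauntlet.
    (Clocks are set beforehand, by hand, in EVGA Precision.)"""
    # Step 1: 30 minutes of OCCT's GPU stress-test.
    if not run_occt_gpu_test(minutes=30):
        return False
    # Step 2: two consecutive 3DMark Vantage runs at the Extreme setting.
    if not all(run_3dmark_vantage(profile="Extreme") for _ in range(2)):
        return False
    # Step 3: quick spot-checks in real games for crashes or artifacts.
    return spot_check_games(["Crysis Warhead", "Race Driver: GRID"])
```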

Overclocking Gigabyte’s GeForce GTX 260 Super Overclock

The Super Overclock isn’t only Gigabyte’s highest-clocked GTX 260; it’s the highest-clocked GTX 260 I’ve found for sale anywhere on the web. A few others come close, but Gigabyte has really pushed the bar high. With such high clocks to begin with, I wasn’t expecting to squeeze much more juice out of it, but I was a little surprised, as 720MHz Core, 1550MHz Shader and 2600MHz Memory proved to be completely stable.

Oddly enough, the card seemed completely stable at 730/1600/2700, and it even managed to pass a full hour of OCCT 3.0 without an issue. But that larger overclock was no match for real gameplay, as games such as Crysis would crash within five minutes of playing. So, with your own card, you may very well be able to push the clocks higher, but for me, what’s listed above is all I could manage while retaining full stability.

As you can see, the gains from the overclocked settings aren’t too staggering. Rather, they’re about what you’d expect, and you have to ask yourself whether the extra time, heat and stress on the card are worth so little gain. Personally, I’d say no. This card is super-fast as is.

Power & Temperatures

To test our graphics cards for both temperatures and power consumption, we utilize OCCT for stress-testing, GPU-Z for temperature monitoring, and a Kill-a-Watt for power monitoring. The Kill-a-Watt is plugged into its own socket, with only the PC connected to it.

As per our Windows benchmarking guidelines, once the room temperature is stable (and reasonable), the test machine is booted up and left to sit at the Windows desktop until things are completely idle. Once things are good to go, the idle wattage is noted, GPU-Z is started up to begin monitoring card temperatures, and OCCT is set up to begin stress-testing.

To push the cards we test to their absolute limit, we run OCCT in full-screen 2560×1600 mode for 30 minutes, which includes a one-minute lull at the start and a three-minute lull at the end. After about 10 minutes, we begin watching our Kill-a-Watt to record the maximum wattage.
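The monitoring half can be pictured as a simple sampling loop – a minimal sketch, where read_gpu_temp() is a hypothetical stand-in for GPU-Z’s sensor logging (the wall wattage is read manually off the Kill-a-Watt’s display):

```python
# Sample the GPU temperature once a second across the 30-minute OCCT run
# and keep the peak. read_gpu_temp() is a hypothetical stand-in for GPU-Z.
import time

def read_gpu_temp():  # placeholder for GPU-Z's sensor readout, in Celsius
    raise NotImplementedError("read from GPU-Z's log")

def peak_temperature(duration_s=30 * 60, interval_s=1.0):
    peak = float("-inf")
    end = time.time() + duration_s
    while time.time() < end:
        peak = max(peak, read_gpu_temp())
        time.sleep(interval_s)
    return peak
```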

The GTX 260 Super Overclock shared a top temperature of 92°C with the GTX 275, although the former ran slightly warmer at idle. The room temperature was admittedly 0.8°C higher, but I’m not entirely sure that would account for a 4°C difference at the GPU. As far as power consumption goes, the Super Overclock, not surprisingly, draws much more power at full load than the stock card, but surprisingly, it still comes in below the GTX 275, despite offering better performance.

Final Thoughts

When I first received this card, I wasn’t that impressed… without even touching it. After all, this is a year-old card that has simply been overclocked. What’s there to get excited about? In this case, excited isn’t the proper word to use, that’s for sure, but Gigabyte has really done well with their Super Overclock series, and this card is proof of that. It might be a year-old design, but given today’s current pricing from NVIDIA, it proves to be a deal, even at $199.

At that price, it costs about $20 more than a stock GTX 260, but on the other side of the coin, it costs at least $20 less than a GTX 275. The key is that despite falling in the middle price-wise, it beat out the GTX 275 in over 90% of our game tests. Given that, there looks to be little reason to bat an eye at the GTX 275 at all. This card is less expensive, and faster. It’s not too difficult to figure out what makes sense here.

The exception might be for overclockers, since a GTX 275, with its higher clock bins and greater Shader core count, may well beat this card once both are overclocked to their limits. That wouldn’t be surprising, but it again comes down to price and what you want to spend. And if you push the Super Overclock even further, you may just hit GTX 285 performance – which we did in a couple of the games.

Is Gigabyte’s card a hands-down winner? At this point in time, it is, without question. That simply can’t be argued. The card is faster than the more expensive GTX 275, and anything else at $200 for that matter. The problem lies in the fact that ATI’s HD 5700 series cards are right around the corner, and they may very well shake up this card’s value proposition quite a bit. It’s hard to say right now, not having tested those cards (nor being able to disclose information due to NDA), but it’s something you’ll want to keep in mind.

To be safe, I’d recommend waiting another week or two to see where things stand. Hopefully it won’t take long after the HD 5700 series launch for Gigabyte to release Super Overclock versions of those cards. Speaking of “Super Overclock”, though, Gigabyte really does have these cards live up to the name. So far, its GTX 260 and GTX 275 are the highest-clocked versions currently being sold (and actually available). Hopefully we’ll see this series continue.


