Old 09-26-2012, 08:56 AM   #1
Jakal
Tech Monkey
 
Jakal's Avatar
 
Join Date: Nov 2005
Location: Missiskippy
Posts: 634
Default Integrated Graphics a Thing of the... Future..?

As someone still using a Core 2 processor and ready to upgrade, I want to learn as much as I can about the upcoming Haswell chips. My search began with expected release dates and moved on to the chip's updated architecture and improvements. I stopped to post this once I learned about the up-and-coming chipset, Z87.

Ivy Bridge's home is the X79 chipset, and for good reason: quad-channel memory support for up to 128GB of DDR3 RAM, native PCIe 3.0 (2x x16), and a plethora of data pipelines to go with it. Along with the IB architecture comes better integrated graphics. There's even a video showing off the HD 4000 with an i7-3770; it maintains 20+ fps into High settings on BF3. No, those numbers aren't great, but imagine doubling or tripling the on-die graphics abilities. That's what Haswell and Z87 bring to the table: integrated graphics that may not be great, but at least sufficient for most games on the market today.

The problem with this 'table' is that PCIe support drops from two x16 slots for SLI/CF to x16 and x8. The current X79 chipset gives enthusiasts better performance when running two cards. Before there's any flaming, I only spoke of two PCIe slots because that's the most common SLI/CF configuration. This raises a serious question in my mind: are we consumers missing out because Intel is pushing its integrated graphics solution? Why wouldn't a newer architecture support current high-end graphics options? Will consumers opt for an add-on card?

It's very possible, even likely, that things will change between now and June '13, but a lateral graphics move seems like an odd thing to do. It also raises the question of the effect on graphics card prices, especially if an integrated GPU can keep up with current-gen options.

Thoughts?
__________________
Intel C2Quad Q9400 @3.6Ghz | Asus PM5Q Deluxe | OCZ Reaper HPC PC2-8500 8GB | XFX Black Edition 260/216| knobs are great for twisting, turning, squeezing and pulling... especially your own..... that's how doors open | Chaos Havok: grrrr im lagging me | <@Deathspawner> I wish I was in Windows :-/
Jakal is offline   Reply With Quote
Old 09-26-2012, 10:27 AM   #2
Optix
Basket Chassis
 
Optix's Avatar
 
Join Date: Dec 2009
Location: New Brunswick, Canada
Posts: 1,679
Default

To be honest, I haven't heard of the Z78 chipset so I'll need to do some reading, but think of it this way.

Entry level (H-series boards) is usually a single slot at x16 and maybe a second running at x4 from the chipset, not the CPU.

Mainstream (Z-series boards) usually offer a single slot at x16 or knock it down to x8x8 if two cards are being used. Again, there could be another slot running at x4 or even x1 from the chipset.

Enthusiast (X-series boards) are the creme de la creme and offer x16x16 (and sometimes another x16 with a bridge chip).

If the Z-series boards offered everything that the X-series boards did, there'd be no reason to even put out the X-series. Either that or the Z-series boards would take a sharp jump in price because they would now be the enthusiast offering.
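The tier breakdown above can be sketched as a quick Python snippet. This is illustrative only (typical slot arrangements as described in the post, not an official Intel spec), but it shows why a Z-series board splits its x16 into x8/x8 when a second card goes in:

```python
# Illustrative sketch (not an official Intel spec) of how the 16
# CPU-attached PCIe lanes are typically split per chipset tier.
CPU_LANES = 16

lane_configs = {
    "H-series (entry)":      [16],       # one slot gets all the CPU lanes
    "Z-series (mainstream)": [8, 8],     # x16 drops to x8/x8 with two cards
    "X-series (enthusiast)": [16, 16],   # the CPU itself supplies extra lanes
}

def describe(tier):
    """Render a tier's slot arrangement, e.g. 'x8 + x8'."""
    return " + ".join(f"x{n}" for n in lane_configs[tier])

for tier in lane_configs:
    print(f"{tier}: {describe(tier)}")
```

Any extra x4/x1 slots hanging off the chipset (rather than the CPU) are left out of the sketch.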

Integrated graphics will never be able to touch high-resolution gaming with a standalone GPU. It just can't happen. With that said, IGPUs have made HUGE gains in terms of performance. Capable of light gaming and 1080p resolution, there's just no reason to pick up a discrete GPU for a home theater PC, or if you want a daily driver to check your email with an occasional WoW session thrown in.

You need to be really careful when interpreting PCIe specs because some manufacturers twist things in such a way that you think you're getting x16/x16 when you're really only getting x16/x8.

***EDIT: After several searches I can't find squat on the Z78 chipset. Hey Rob, help a brotha out!
__________________

Intel i5 3570K, MSI Z77Z-GD55, 4x2GB Kingston Genesis 2133mhz, 120GB Mushkin Chronos SSD, 1TB Western Digital Caviar Blue, Intel 210 cache drive, MSI 7850 Power Edition OC, Corsair H100, Silverstone Strider 750w Gold, Killer2100 NIC, Corsair 600T SE White, LG W2242, ROCCAT Kone+, Isku & Kave, 3TB Seagate Backup+, 200GB Western Digital Scorpio Black external drive



Last edited by Optix; 09-26-2012 at 10:39 AM.
Optix is offline   Reply With Quote
Old 09-26-2012, 11:33 AM   #3
madmat
Soup Nazi
 
madmat's Avatar
 
Join Date: Jun 2005
Location: No soup for you!
Posts: 1,654
Default

Quote:
Originally Posted by Optix View Post
To be honest, I haven't heard of the Z78 chipset so I'll need to do some reading, but think of it this way.

Entry level (H-series boards) is usually a single slot at x16 and maybe a second running at x4 from the chipset, not the CPU.

Mainstream (Z-series boards) usually offer a single slot at x16 or knock it down to x8x8 if two cards are being used. Again, there could be another slot running at x4 or even x1 from the chipset.

Enthusiast (X-series boards) are the creme de la creme and offer x16x16 (and sometimes another x16 with a bridge chip).

If the Z-series boards offered everything that the X-series boards did, there'd be no reason to even put out the X-series. Either that or the Z-series boards would take a sharp jump in price because they would now be the enthusiast offering.

Integrated graphics will never be able to touch high-resolution gaming with a standalone GPU. It just can't happen. With that said, IGPUs have made HUGE gains in terms of performance. Capable of light gaming and 1080p resolution, there's just no reason to pick up a discrete GPU for a home theater PC, or if you want a daily driver to check your email with an occasional WoW session thrown in.

You need to be really careful when interpreting PCIe specs because some manufacturers twist things in such a way that you think you're getting x16/x16 when you're really only getting x16/x8.

***EDIT: After several searches I can't find squat on the Z78 chipset. Hey Rob, help a brotha out!
That's because it's Z87, not Z78.

http://lensfire.blogspot.com/2012/07...r-release.html
__________________

M4N82 Deluxe
Phenom II 940 Black Edition quad core @ 3.5Ghz
2x1 gig OCZ PC26400 Platinum, 2x1gig GSkill PC26400
EVGA GTX260
Buncha drives,
Some other stuff,
Even more stuff,
If the automobile had followed the same development cycle as the computer, a Rolls-Royce would today cost $100, get a million miles per gallon, and explode once a year, killing everyone inside. --Robert X. Cringely, InfoWorld magazine
madmat is offline   Reply With Quote
Old 09-26-2012, 12:10 PM   #4
Rob Williams
Editor-in-Chief
 
Rob Williams's Avatar
 
Join Date: Jan 2005
Location: Atlantic Canada
Posts: 13,347
Default

Just to be clear, X79 is for Sandy Bridge-E, not Ivy Bridge. IB is technically superior architecturally to SB-E, but it lacks the quad-channel memory controller and six-core options. At the moment, I am not even sure when the next six-cores are speculated for release... all Haswell talk specifically tackles quad-cores and under.

Quote:
Originally Posted by Jakal
No, these numbers aren't great, but imagine if you double or triple the on-die graphic abilities.
We've been on a path for a while where IGPs might negate the need for sub-$100 graphics cards, and Haswell might seal the deal. But for me, support also matters. I haven't tested an Intel IGP for a while, but the last time I did (with Clarkdale, I believe), I had issues with games crashing on the IGP that didn't occur with even the smallest GPU I could find. Even now, IGPs are suitable for the masses, but your gaming requirements still have to be minimal.

IGPs are never going to be better than a dedicated card, if that's what you're asking. Even with smaller die shrinks, there's just no room to fit in all of what a dedicated GPU can with its large PCB and allocated die area. I'm sure these CPU sockets have a power limit as well, ruling out the possibility of a truly competitive GPU being placed there.

Quote:
Originally Posted by Jakal
The problem with this 'table' is PCIe support drops from 2 x16 slots for SLi/CF to x16 and x8.
I've never seen evidence of this mattering, to be honest; the only time I'd be concerned is if you were to go the quad-GPU (on two cards) route, and maybe not even then. It'd be a different story if we were still restricted to PCIe 1.0, but current motherboards sport 3.0, so we have more than enough bandwidth for even the beefiest of cards.

In the overall scheme of things, what matters a lot more is CPU performance.
__________________
Intel Core i7-3960X, GIGABYTE G1.Assassin 2, Kingston 16GB DDR3-2133, NVIDIA GeForce GTX 770 2GB
Kingston HyperX 3K 240GB SSD (OS, Apps), WD VR 1TB (Games), Corsair 1000HX, Corsair H70 Cooler
Corsair 800D, Dell 2408WFP 24", ASUS Xonar Essence STX, Gentoo (KDE 4.11. 3.12 Kernel)

"Take care to get what you like, or you will be forced to like what you get!" - H.P. Baxxter
<Toad772> I don't always drink alcohol, but when I do, I take it too far.


Rob Williams is online now   Reply With Quote
Old 09-26-2012, 01:18 PM   #5
Tharic-Nar
Techgage Staff
 
Tharic-Nar's Avatar
 
Join Date: Nov 2009
Location: UK
Posts: 1,166
Default

I'd have to emphasise Rob's last point there. x16/x8 really doesn't matter when you are dealing with PCIe 3.0, as far as GPUs go. PCIe 2.0 x8 has more than enough bandwidth for a top-end dual-GPU card, and PCIe 3.0 doubles that bandwidth. Technically, you could run a top-end dual GPU in an x4 slot with PCIe 3.0 - the reason you can't is power. So no, enthusiasts are not being given the shaft as far as limited PCIe lanes go; there's plenty of bandwidth to go around.
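The arithmetic behind that point is easy to check. The per-lane figures below are the standard approximate one-direction rates (PCIe 1.0 250MB/s, 2.0 500MB/s, 3.0 ~985MB/s per lane):

```python
# Approximate usable per-lane bandwidth in GB/s, one direction.
# PCIe 1.x/2.x use 8b/10b encoding; 3.0 moved to 128b/130b, which is why
# 3.0 roughly doubles 2.0 even though the raw signalling rate didn't double.
PER_LANE_GBS = {"1.0": 0.25, "2.0": 0.5, "3.0": 0.985}

def link_bandwidth(gen, lanes):
    """Total one-direction link bandwidth in GB/s."""
    return PER_LANE_GBS[gen] * lanes

# A PCIe 3.0 x8 link lands in the same ballpark as 2.0 x16:
print(link_bandwidth("3.0", 8))    # ~7.9 GB/s
print(link_bandwidth("2.0", 16))   # 8.0 GB/s
```

So a 3.0 x8 slot gives a card roughly what a full 2.0 x16 slot did, which is why x16/x8 on a PCIe 3.0 board isn't the handicap it sounds like.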

PCIe SSDs, on the other hand... while they won't be able to max out that bandwidth at the moment, give them a couple of years and I'm sure they could - not that a typical consumer, or indeed enthusiast, would be able to take advantage of that kind of storage bandwidth (you're talking queue depths of 16+ to actually start making real use of it, and a typical, highly stressed home PC barely scrapes a queue depth of 2).
__________________
PSU: Corsair 1000HX - Case: Thermaltake Xaser VI Full Tower - CPU: Intel Core i7 2600 @3.8GHz - Cooler: Thermaltake FRIO - Motherboard: ASUS P67 Sabertooth - Memory: 16GB Corsair Vengeance LP Arctic 1600 - GPU: 2x AMD Radeon HD 5870 - HDD: WD Caviar Black 1TB 6Gb/s +1TB +2TB storage - Audio: ODAC + O2 Amp, ASUS Xonar DX
K/B: Corsair K90 - Mouse: ROCCAT KONE XTD - Monitor: DELL U2410 - Speakers: Corsair SP2500 - Headphones: Beyerdynamic DT990 - Mic: Blue Yeti USB Microphone

- I need Fiber to fix my Irregular Bandwidth Syndrome -
Tharic-Nar is offline   Reply With Quote
Old 09-26-2012, 10:18 PM   #6
Jakal
Tech Monkey
 
Jakal's Avatar
 
Join Date: Nov 2005
Location: Missiskippy
Posts: 634
Default

Z77 vs X79

Z77 supports 3rd-gen Intel Core processors, native USB 3.0, and better storage support, while X79 supports the 'Enthusiast' LGA2011 CPUs, a quad-channel DDR3 memory controller, and native 2x x16 PCIe bandwidth.

Let's be honest here. We're at a point in CPU progression where overclocking is basically unnecessary and the graphics option is the real bottleneck. That's not to say scooping up a cheap entry-level processor and putting the fire to it doesn't save you a bunch of money.

Just thought it was an interesting comparison to what the Z87 offers. I know we'll probably see an X87 release, but as for now this is all we have to go on.

Good stuff guys.
__________________
Intel C2Quad Q9400 @3.6Ghz | Asus PM5Q Deluxe | OCZ Reaper HPC PC2-8500 8GB | XFX Black Edition 260/216| knobs are great for twisting, turning, squeezing and pulling... especially your own..... that's how doors open | Chaos Havok: grrrr im lagging me | <@Deathspawner> I wish I was in Windows :-/
Jakal is offline   Reply With Quote
Old 09-27-2012, 01:38 AM   #7
Rob Williams
Editor-in-Chief
 
Rob Williams's Avatar
 
Join Date: Jan 2005
Location: Atlantic Canada
Posts: 13,347
Default

If there existed a Z77 six-core CPU, X79 and SB-E would have no reason to exist - outside of the extreme example where someone needs the massive memory bandwidth a quad-channel controller can offer (not a regular enthusiast).

I'm still at a loss as to when the next -new- six-core is due. It can't possibly be with the architecture after Haswell, because that'd be way, way too long of a wait. There's another six-core due over the next month or so, but it's still X79.
__________________
Intel Core i7-3960X, GIGABYTE G1.Assassin 2, Kingston 16GB DDR3-2133, NVIDIA GeForce GTX 770 2GB
Kingston HyperX 3K 240GB SSD (OS, Apps), WD VR 1TB (Games), Corsair 1000HX, Corsair H70 Cooler
Corsair 800D, Dell 2408WFP 24", ASUS Xonar Essence STX, Gentoo (KDE 4.11. 3.12 Kernel)

"Take care to get what you like, or you will be forced to like what you get!" - H.P. Baxxter
<Toad772> I don't always drink alcohol, but when I do, I take it too far.


Rob Williams is online now   Reply With Quote
Old 09-27-2012, 02:31 AM   #8
RainMotorsports
Partition Master
 
RainMotorsports's Avatar
 
Join Date: Jul 2011
Posts: 359
Default

Quote:
Originally Posted by Jakal View Post
Let's be honest here. We're at a point in CPU progression where overclocking is basically unnecessary and the graphics option is the real bottleneck. That's not to say scooping up a cheap entry-level processor and putting the fire to it doesn't save you a bunch of money.
Overclocking unnecessary? Man, I'd have to kill myself over the time I'd lose rendering models and encoding video by dropping my 1.2GHz overclock! Look, only a portion of these chips go into gaming machines! GPU acceleration is the way to go, of course, but it's not always available or even reliable (see Vegas Pro 11... lol).

There isn't an integrated solution even in the pipeline, from either company, that will actually do something for a serious gamer. Source engine games and the Call of Duty series, sure, and that's actually a great thing that has worked its way down. I mean, I recently tested that 6670 with BF3, which clobbers the i5-2500K's graphics solution (which blows, even overclocked), and that's a better card than most integrated solutions offer up to this very point in time. Ask AMD if they see a mid-range next-gen chip making it into CPUs next year; the answer is no. I wouldn't bother asking Intel for the truth about anything - we still need to give them crap about that F1 demo.

It may be the future, maybe even the near future, but you won't see any serious gamer playing on one as long as the budget for what they want is available. And when you do, the trend that has always existed for graphics performance versus game releases suggests you would end up replacing the CPU more often than ever before.

Intel purposely limits the available PCIe lanes. They were forced by legal action to keep PCIe in their product line. Current-generation GPUs will find themselves plenty happy on x8 3.0. Intel is afraid of GPU compute power. If it weren't valuable for everything from servers to supercomputing, you would actually have a shot at getting 32 lanes from a low-end consumer chip and be entirely unworried about PCIe 3.0 bandwidth for quite some time. But that won't happen; it's profitable in many arenas, and they will take your cash as well.
__________________
Desktop i5 2500K @ 4.5Ghz | ASUS P8Z68-V PRO | 16GB Corsair Vengeance 1600 | Seasonic X750
MSI GTX570 TF III @ 950/1900/2300 | 2TB 5900 + 2X320GB 7200 RAID0 |
Zalman CNPS9900 MAX
Laptop ASUS G50VT | Core2Duo T9600 @ 3.3Ghz | 9800M GS/GTS @ 650/1625/900
Phone Galaxy SII Epic 4G Touch | CyanogenMod 9 (Nightly 6/20) | FF11 Kernel | FF18 Modem

Last edited by RainMotorsports; 09-27-2012 at 02:36 AM.
RainMotorsports is offline   Reply With Quote
Old 09-27-2012, 12:53 PM   #9
Rob Williams
Editor-in-Chief
 
Rob Williams's Avatar
 
Join Date: Jan 2005
Location: Atlantic Canada
Posts: 13,347
Default

For a regular consumer, overclocking might not be that necessary. I don't even personally overclock my own rig, but that's mainly because I have six cores, which naturally boost encode speeds already. I'd maybe OC if I had a beefier cooler, but it's not a major concern.

And yes, Intel does limit the PCIe lanes, but I think it's mostly done to fool those who don't know better into buying the biggest and best chipset out there.
__________________
Intel Core i7-3960X, GIGABYTE G1.Assassin 2, Kingston 16GB DDR3-2133, NVIDIA GeForce GTX 770 2GB
Kingston HyperX 3K 240GB SSD (OS, Apps), WD VR 1TB (Games), Corsair 1000HX, Corsair H70 Cooler
Corsair 800D, Dell 2408WFP 24", ASUS Xonar Essence STX, Gentoo (KDE 4.11. 3.12 Kernel)

"Take care to get what you like, or you will be forced to like what you get!" - H.P. Baxxter
<Toad772> I don't always drink alcohol, but when I do, I take it too far.


Rob Williams is online now   Reply With Quote
Old 09-27-2012, 05:55 PM   #10
Psi*
Tech Monkey
 
Psi*'s Avatar
 
Join Date: Jun 2009
Location: Westport, CT
Posts: 785
Default

Rain & I are mostly in the same camp.

I use GPGPUs extensively, but even then the host CPU has to be top-notch. Not all time-consuming processes are worth pushing through a massive array of processors, even with NVIDIA giving the CUDA library away for free. Another way of saying this is that not everything can be parallelized, so the fastest CPUs (cores) for all of those single-threaded processes will always be important.

For those processes that do benefit from massively parallel processors, there just are not enough PCIe lanes to maintain communication with the host. (The amount is a wild card, dependent on the algorithms and the problem.)

What is it that NVIDIA is promising? Kepler is triple Fermi, and next is Maxwell, promised to be not quite triple Kepler (2013). (I am getting excited!!)

In other words, I really like overclocking and many PCIe lanes.

For the average consumer, a category I also fall into, I see an incredible capability for 3D interactive ... experience. Real-world simulation is coming soon.

Last, I am not sure that current systems are intentionally dumbed down PCIe-wise. Current computer motherboards are inexpensive - I am thinking of the HP and Dell workstations. Expensive are the server and router systems that Brocade and Cisco crank out... try 20+ layers. For the current NVIDIA Fermi GPUs, one CPU socket per GPU is needed to keep the necessary PCIe lanes. Additionally, the host system requires 4x the RAM of the GPGPU card(s) - at least for my software.
Psi* is offline   Reply With Quote
Old 09-27-2012, 11:50 PM   #11
Jakal
Tech Monkey
 
Jakal's Avatar
 
Join Date: Nov 2005
Location: Missiskippy
Posts: 634
Default

Quote:
Originally Posted by RainMotorsports View Post
Man, I'd have to kill myself over the time I'd lose rendering models and encoding video by dropping my 1.2GHz overclock
I was speaking more from an overall perspective with current-gen options. You are right on the money when it comes to encoding and rendering. The higher the better.

It's one reason I said:
Quote:
Originally Posted by Jakal
That's not to say scooping up a cheap entry-level processor and putting the fire to it doesn't save you a bunch of money.
I have as much fun as the next guy pushing my components to reach a good overclock. It's like buying a car and seeing how fast it'll go. The thing about it is that most stock options perform well enough that pushing the limits, and possibly causing damage, is unnecessary. It's still fun, though.
__________________
Intel C2Quad Q9400 @3.6Ghz | Asus PM5Q Deluxe | OCZ Reaper HPC PC2-8500 8GB | XFX Black Edition 260/216| knobs are great for twisting, turning, squeezing and pulling... especially your own..... that's how doors open | Chaos Havok: grrrr im lagging me | <@Deathspawner> I wish I was in Windows :-/
Jakal is offline   Reply With Quote
Old 09-28-2012, 07:27 AM   #12
Kougar
Techgage Staff
 
Kougar's Avatar
 
Join Date: Mar 2008
Location: Texas
Posts: 2,653
Default

Rob... IB-E is due next year. I don't have an exact date, though... the last roadmap I saw says Haswell in Q2 and IB-E in Q3. All I know is that Intel will not want to launch IB-E alongside Haswell, so those launches will be spaced apart.

The problem with IB-E is that it's only a tiny performance boost over SB-E. The successor Haswell-E (or whatever they plan to call it) will most likely not be compatible with X79, just as Haswell will require a new mainstream socket instead of Z77.

The issue with integrated graphics... you can double the amount of IGP hardware three, four, or even five times and you will still not come close to the performance of flagship GPUs. The discrepancy in available hardware is just too huge. Even if you look at AMD's Fusion design, they used a very trimmed-down GPU core design for molding into their APUs. IGPs will certainly begin eating into the low-end GPU market, but it will be quite some time before they reach the upper-midrange class of performance.
__________________
Core i7 4770k 4.2Ghz
Gigabyte Z87X-UD5H
Crucial Ballistix Sport LP 1600MHz 32GB
EVGA GTX 480 HydroCopper FTW
ASUS Xonar DX
Corsair Neutron GTX 240GB | Windows 7 64-bit
Apogee XT + MCP655 & Thermochill Triple 140mm Radiator
Corsair AX1200 PSU | Cooler Master HAF-X

Kougar is offline   Reply With Quote
Old 09-28-2012, 09:07 AM   #13
Tharic-Nar
Techgage Staff
 
Tharic-Nar's Avatar
 
Join Date: Nov 2009
Location: UK
Posts: 1,166
Default

I wouldn't be so hasty in dismissing IGPs like we used to. Remember, they sat on the northbridge for quite some time, and a lot of their reputation was built on that very inefficient foundation. Now that the GPU is on-die - not a shared die, but actually part of the CPU - a lot of the bandwidth bottlenecks are removed. The final obstacle is memory bandwidth, which is where things get interesting. Since the GPU shares space with the CPU, it has access to its memory registers and pre-fetch technology. RAM quantities are through the roof, with 12GB and 16GB becoming more common, even in laptops. DDR4 is coming out soon with a minimum frequency of 2133MHz - and RAM frequency has been shown to have a major impact on on-die GPUs.

Now, don't get me wrong, these IGPs will not compete with the monolithic beasts we have as discrete cards at the top end, but I do see them easily absorbing the low end, and even the mid-range in the medium term. Intel specifically has another major trick up its sleeve too, something no other company can compete with: manufacturing process. Intel can make chips much more powerful in the same footprint as other chip designs. AMD still has an advantage when it comes to drivers, architecture and experience - an advantage that is patent-locked. NVIDIA is finding its niche in mobility and scientific computing, so I'm not too worried about it lacking a CPU integration profile, since it's buddying up with ARM-based CPUs.

Look at it this way... the IGPs we have now are much more powerful than the discrete solutions in consoles, are significantly more energy efficient, and take up a much smaller footprint. That may not be saying much considering the 5-7 year gap, but from a consumer perspective, IGPs are better than a console, with the added benefit of a full computer on top; they just need a glossy, comfortable wrapper to make the system feel like a console. Enter Valve's hardware project and Linux support.

Start pulling the threads together and you'll begin to see where it's heading.
__________________
PSU: Corsair 1000HX - Case: Thermaltake Xaser VI Full Tower - CPU: Intel Core i7 2600 @3.8GHz - Cooler: Thermaltake FRIO - Motherboard: ASUS P67 Sabertooth - Memory: 16GB Corsair Vengeance LP Arctic 1600 - GPU: 2x AMD Radeon HD 5870 - HDD: WD Caviar Black 1TB 6Gb/s +1TB +2TB storage - Audio: ODAC + O2 Amp, ASUS Xonar DX
K/B: Corsair K90 - Mouse: ROCCAT KONE XTD - Monitor: DELL U2410 - Speakers: Corsair SP2500 - Headphones: Beyerdynamic DT990 - Mic: Blue Yeti USB Microphone

- I need Fiber to fix my Irregular Bandwidth Syndrome -
Tharic-Nar is offline   Reply With Quote
Old 09-28-2012, 03:08 PM   #14
Kougar
Techgage Staff
 
Kougar's Avatar
 
Join Date: Mar 2008
Location: Texas
Posts: 2,653
Default

I'm not dismissing them outright. I merely don't see IGPs taking over the gaming market anytime within the next few years, to put it mildly.

The point about AMD's APUs showing performance increases with faster RAM also supports my point. Changing from DDR3-1333 to DDR3-1866 RAM gave Llano a 20% boost in GPU framerates. This underscores that the IGP is still limited by its footprint and by having to wait on the CPU's memory controllers, with the added latency and any queuing involved when the CPU is using them.

As fast as CPU memory controllers and RAM become, they will never make up for having GDDR5 RAM directly next to the core with dedicated memory access. The latest quad-channel Core i7-3960X gets about 40GB/s of bandwidth (20GB/s for the 8-core FX-8150, or 25GB/s for the Core i7-2600K). Compare this to the 192GB/s of the GTX 680 or the 264GB/s of the AMD 7970. It will take more than DDR4 and quad-channel memory to even begin to make up the discrepancy... That said, Haswell and Trinity operate with just dual-channel memory controllers, not triple or quad. Four CPU cores (especially with HT) can easily keep just two memory controllers busy.
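Those bandwidth figures follow from the standard peak-bandwidth formula (bus width in bytes times effective transfer rate), so they can be sanity-checked with a few lines of Python:

```python
# Peak theoretical memory bandwidth = bus width (bytes) x effective rate.
def bandwidth_gbs(bus_bits, mt_per_s):
    """GB/s (decimal) for a memory interface."""
    return bus_bits / 8 * mt_per_s / 1000

# CPU side: quad-channel DDR3-1333 on a Core i7-3960X (4 x 64-bit channels)
print(round(bandwidth_gbs(4 * 64, 1333), 1))   # 42.7 GB/s
# Dual-channel DDR3-1600, Core i7-2600K class
print(round(bandwidth_gbs(2 * 64, 1600), 1))   # 25.6 GB/s
# GPU side: GTX 680, 256-bit GDDR5 at 6008 MT/s effective
print(round(bandwidth_gbs(256, 6008)))         # 192 GB/s
# AMD 7970, 384-bit GDDR5 at 5500 MT/s effective
print(round(bandwidth_gbs(384, 5500)))         # 264 GB/s
```

Even the quad-channel DDR3 setup delivers less than a quarter of what the GTX 680's GDDR5 bus does, which is the gap being described.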

Certainly, midrange parts don't need nearly as much bandwidth, but the bandwidth requirement for games scales nearly exponentially as resolution and detail settings are raised. APUs will not be able to remove this memory bottleneck any time in the near future... because the more powerful they get, the higher the detail settings placed on them, and the more bandwidth they will need to sustain it. That's why modern GPUs are still so freaking big.

The second problem is more basic. The silicon real estate required to make the IGP in an APU as powerful as a midrange GPU would be significant. The only reason APUs have managed to shrink what they have is partly due to how stripped down they are, along with the change away from bulk fab production and the logic-circuit design tricks CPUs employ. But none of that will magically shrink a midrange GPU core down to the size of a quad-core CPU. They already removed compute hardware, threw out most of the shaders, and who knows what else. The real estate is simply not there to put more than half a midrange GPU underneath the CPU heatsink.

The AMD 7770 is very much midrange. It costs a mere $100 with a rebate today. It has 72GB/s of memory bandwidth thanks to GDDR5, and a die area of 123mm^2. By comparison, a quad-core Ivy Bridge using a GT1 GPU core measures 132.8mm^2; make it a GT2 core and it jumps to 159.8mm^2. And that's with Intel's fabrication advantage already in play. There is no way even Intel can shrink things down far enough to squeeze in enough horsepower to challenge the midrange graphics market anytime soon.

So for both of those reasons, I don't think midrange GPUs have much to fear for the next few years, if not the next five. The best possible case for APUs is when they become a true CPU/GPU hybrid: duplicate sections of the CPU are removed (floating-point engines, etc.) and vice-versa with the GPU. Hopefully enough silicon would be freed up to fit enough integer cores on the CPU side and shaders on the GPU side to allow a sufficiently "beefy" GPU with a quad-core CPU to be built into a reasonable footprint. Even then, I think it will take one to two more years before they iron out the hybrid design and get it performing to its potential... current roadmaps suggest it will be 2015 or later before we see the first genuine hybrids.

The point about modern IGPs trouncing consoles is fine, but six years ago chips were designed and built on the 90nm fab process. Take a 90nm Pentium 4 and build it on the 22nm process and it would be about 9% of the size, assuming the transistors scaled equally. Prescott was 122mm^2 with 125 million transistors; a 160mm^2 Ivy Bridge chip is 1.4 billion transistors. So yeah, a modern IGP would trounce something from six years ago with that kind of scaling involved. At the end of the day, any modern APU is still not useful for more than gaming at lower resolutions or lower detail settings.
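That shrink estimate can be sketched two ways using only the figures quoted in the post; the ~9% figure lands between the ideal geometric shrink and the ratio of the two chips' actual transistor densities:

```python
# Two back-of-the-envelope shrink estimates, using only the post's numbers:
# Prescott = 122 mm^2 / 125M transistors at 90 nm;
# quad-core Ivy Bridge = ~160 mm^2 / 1.4B transistors at 22 nm.

# 1) Ideal geometric scaling: area shrinks with the square of feature size.
ideal_fraction = (22 / 90) ** 2              # ~6% of the original area

# 2) Density-based scaling: ratio of actual transistors-per-mm^2.
prescott_density = 125e6 / 122               # transistors per mm^2 at 90 nm
ivb_density = 1.4e9 / 160                    # transistors per mm^2 at 22 nm
real_fraction = prescott_density / ivb_density   # ~12% of the original area

print(f"ideal: {ideal_fraction:.0%}, density-based: {real_fraction:.0%}")
```

Transistors never scale perfectly (SRAM, I/O and analog blocks shrink less than logic), which is why the real density ratio is worse than the ideal square-law estimate.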
__________________
Core i7 4770k 4.2Ghz
Gigabyte Z87X-UD5H
Crucial Ballistix Sport LP 1600MHz 32GB
EVGA GTX 480 HydroCopper FTW
ASUS Xonar DX
Corsair Neutron GTX 240GB | Windows 7 64-bit
Apogee XT + MCP655 & Thermochill Triple 140mm Radiator
Corsair AX1200 PSU | Cooler Master HAF-X

Kougar is offline   Reply With Quote
Old 09-28-2012, 04:34 PM   #15
Tharic-Nar
Techgage Staff
 
Tharic-Nar's Avatar
 
Join Date: Nov 2009
Location: UK
Posts: 1,166
Default

The other thing that needs to be considered here is that people have been happy with console games. Tablets and phones are now catching up to consoles too. Yes, the resolutions are lower, but this is the critical point: it's good enough. With the gaming industry becoming platform-agnostic, the common denominator is console-grade graphics. It's slightly lower with tablets and phones, and the sky's the limit with PC, but more often than not, PC releases are more of a sympathy production compared to the real money makers. Why should developers spend so many resources on a PC release with all the bells and whistles when they can't be rendered on the vast majority of systems? They get brownie points, sure, but that's not really making a living. I'm not saying all games are like this - just look at MMOs - but there are not many exclusive titles these days.

With this console-centric nature accounted for, IGPs will come into their own, simply because they are good enough to run games at 720p on an HD TV - but with a few extra effects thrown in, like AA. Why did I say 720p? Because that is the resolution 95%+ of console games run at - 1080p on a console is limited to 2D and movies.
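As a quick sanity check on why the 720p target matters: 1080p pushes 2.25x the pixels of 720p, and fill-rate and bandwidth costs rise with the pixels pushed per frame, so aiming at 720p leaves an IGP real headroom for effects like AA:

```python
# Pixel counts for common render targets; fill-rate and bandwidth cost
# scale with the number of pixels pushed per frame.
def pixels(width, height):
    return width * height

ratio = pixels(1920, 1080) / pixels(1280, 720)
print(ratio)   # 2.25 - 1080p pushes 2.25x the pixels of 720p
```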

Next-gen consoles are still at least a year or two off (not counting the Wii U). Even when they are released, they will not be staggeringly better than what's available now, since it'll still take time for the devs to get used to a new way of working. Sure, the new Unreal engine demo looked impressive, but how long did it take to create, for something so short, and does it work on the new consoles?

What we are stuck in is not really a hardware problem - we are stuck in a software loop. IGPs are great for now, even for the next couple of years. The only strain a PC gamer can add would be from multi-monitor displays, stereoscopic 3D, and HD texture packs. That's basically all we've got going for us in the high-end discrete GPU market.
__________________
PSU: Corsair 1000HX - Case: Thermaltake Xaser VI Full Tower - CPU: Intel Core i7 2600 @3.8GHz - Cooler: Thermaltake FRIO - Motherboard: ASUS P67 Sabertooth - Memory: 16GB Corsair Vengeance LP Arctic 1600 - GPU: 2x AMD Radeon HD 5870 - HDD: WD Caviar Black 1TB 6Gb/s +1TB +2TB storage - Audio: ODAC + O2 Amp, ASUS Xonar DX
K/B: Corsair K90 - Mouse: ROCCAT KONE XTD - Monitor: DELL U2410 - Speakers: Corsair SP2500 - Headphones: Beyerdynamic DT990 - Mic: Blue Yeti USB Microphone

- I need Fiber to fix my Irregular Bandwidth Syndrome -
Tharic-Nar is offline   Reply With Quote
Tags
crossfire , haswell , intel , sli , z87
