For most people, there is no such thing as a CPU vs. GPU battle, but for companies like Intel and NVIDIA, there is, and it’s heated. Naturally, gaming wouldn’t be stellar if the only acceleration came from the CPU, and likewise, we wouldn’t get too far with our OS running off of a GPU, but in between there exists a battleground where neither side is prepped to come out victorious.
At an event held in France called the International Symposium on Computer Architecture (don’t worry, this is the first I’ve heard of it as well), Intel made a rather interesting statement that could be taken one of a few ways. In a nutshell, the company debunked claims of GPUs being 100x faster than CPUs by stating that they’re only 14x faster.
With a statement as interesting as this, not to mention one that seems to give a hearty nod to GPGPU, it’s little wonder that NVIDIA wasted no time in broadcasting Intel’s statement online. The result was a blog post from the company’s General Manager of GPU Computing, Andy Keane. In it, there is much poking at Intel, but aside from that, the company also backs up the “100x” claim with the help of statements from various organizations.
Based on these, 100x is a modest figure, as the Massachusetts General Hospital found a 300x increase in performance when running a Monte Carlo simulation for photon migration on a GPU vs. CPU. The exact product models used to come up with this conclusion are not listed, however.
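The hospital’s photon-migration code isn’t detailed here, but the reason Monte Carlo simulations see such dramatic GPU speedups is structural: every sample is computed independently, so the work maps naturally onto thousands of parallel threads. A toy sketch (estimating pi rather than photon paths, purely for illustration) shows that independence:

```python
import random

def estimate_pi(samples, seed=0):
    """Monte Carlo estimate of pi. Each sample is fully independent
    of every other sample -- the property that lets this class of
    simulation scale across thousands of GPU threads."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        # Draw a random point in the unit square; count hits
        # inside the quarter circle of radius 1.
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi(1_000_000))
```

Because no iteration depends on another, the loop can be split across any number of workers with no coordination beyond summing the hit counts at the end.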
The highlight of the story, though, isn’t the fact that Intel downgraded NVIDIA’s claims, but rather that it admitted that GPUs have the capability of delivering 14x the performance of a CPU. How often does something like that happen?
Long-time readers of our site of course know that this has long been the case, and nothing has changed in recent memory in the CPU vs. GPU battle. For most tasks, the CPU excels, and even NVIDIA would have a difficult time discrediting that. In other cases, though, such as applications and scenarios that can make good use of a heavily parallel processor and can get by without leaning on the CPU, performance gains, sometimes huge ones, are there to be had.
I’m personally still waiting for more proof on the GPGPU front, because up to now, we’ve seen multimedia encoding and not much else. There has been the odd password cracker and medical solution, but for most end-users, the reason to move over to a GPGPU mindset has yet to arrive. I can’t wait until we reach the day when we can run something as robust as games on our GPUs! Oh wait…
The real myth here is that multi-core CPUs are easy for any developer to use and see performance improvements from. Undergraduate students learning parallel programming at M.I.T. disputed this when they measured the performance increase they could get from different processor types and compared it with the amount of time they needed to spend re-writing their code.
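The students’ code isn’t shown here, but even a minimal sketch hints at the rewrite cost they were measuring: going from a serial loop to a multi-core version means restructuring the work into an independent, picklable function and managing a worker pool, and that overhead only grows for real workloads with shared state. (The `slow_square` workload below is a hypothetical stand-in, not anything from the MIT study.)

```python
from multiprocessing import Pool

def slow_square(n):
    # Hypothetical stand-in for a real per-item workload.
    return n * n

def serial(data):
    # The straightforward version any developer writes first.
    return [slow_square(n) for n in data]

def parallel(data, workers=4):
    # Even this trivial case requires restructuring: the work
    # function must be top-level and picklable, and the pool's
    # lifecycle has to be managed explicitly.
    with Pool(workers) as pool:
        return pool.map(slow_square, data)

if __name__ == "__main__":
    data = list(range(10))
    assert serial(data) == parallel(data)
```

For a workload this small, the parallel version is actually slower, since process start-up and inter-process communication dwarf the computation, which is exactly the kind of trade-off that makes “just use more cores” a harder sell than it sounds.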