Maths: easy for some, hard for others. For CPUs it's their bread and butter, what they were built to do and how they earn their living… if they were alive, of course. But sometimes you don't need to calculate the square root of 2,376,147 to within 3 million decimal places. Sometimes the number doesn't even need to be accurate; good enough will do. We generally round numbers off and even guesstimate (some of us are better at it than others). The problem is that CPUs are not very good at inaccurate or abstract calculations; they can't even generate a random number without some kind of logic, which rather defeats the point of random.
There are certain scenarios where a high level of accuracy is not required; that's why 'lossy' compression formats exist for video and music, situations where 'good enough' will do. Joseph Bates and a small group at MIT were working on a way to scan through hundreds of thousands of hours of video footage for specific things, as part of a research project into human speech acquisition. Existing algorithms for recognising objects within static images are already prone to error, so what's a few more percent if it lets you scan through images and video thousands of times faster? With some funding from the US Office of Naval Research, they simulated a 'sloppy' maths chip by taking an existing algorithm that separates foreground from background within images and modifying it so that all numerical results were offset by 0-1%. The result? A 'trivial' difference. "It was about 14 pixels out of a million, averaged over many, many frames of video," said George Shaw, a graduate student at MIT working on the project.
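To make that idea concrete, here's a minimal sketch in Python of the kind of experiment described above. It is emphatically not the MIT team's code: the background-subtraction step, threshold, frame size and noise model are all made up for illustration. The only thing it borrows from the article is the trick of nudging every numerical result by a random 0-1% and then counting how many pixels change classification.

```python
# Illustrative only: run a naive foreground/background split twice, once with
# exact arithmetic and once with every result offset by a random 0-1% (the
# 'sloppy' arithmetic), then count how many pixels are classified differently.
import numpy as np

rng = np.random.default_rng(0)

def sloppy(values):
    """Offset each numerical result by a random 0-1%."""
    return values * (1.0 + rng.uniform(0.0, 0.01, size=values.shape))

def foreground_mask(frame, background, threshold=30.0, noisy=False):
    """Very naive background subtraction: a pixel counts as 'foreground'
    if it differs enough from the background estimate."""
    diff = np.abs(frame - background)
    if noisy:
        diff = sloppy(diff)  # pretend this subtraction ran on the sloppy chip
    return diff > threshold

# Fake 1000x1000 greyscale frames standing in for real video.
background = rng.uniform(0, 255, size=(1000, 1000))
frame = background + rng.normal(0, 20, size=background.shape)

exact = foreground_mask(frame, background, noisy=False)
noisy = foreground_mask(frame, background, noisy=True)

# Out of a million pixels, how many changed classification?
print("pixels that differ per million:", int(np.count_nonzero(exact != noisy)))
```

Run it and the answer is a handful of pixels out of a million, which is the point: only the borderline cases, the ones already teetering on the threshold, are affected by the sloppy arithmetic.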
While no actual chip has been manufactured, the design is in place for a 1,000-core processor. It differs from a standard multi-core design in several subtle ways. For one, each core can only communicate with its adjacent neighbours. While that limits communication across the chip, it significantly simplifies the design (making it faster and more efficient), and with pixel-based calculations the top-left pixel doesn't need to know what the bottom-right is doing anyway; see the sketch below. Video encoding doesn't need to be pixel perfect either: the human eye is hard-pressed to tell the difference between two pixels whose values are off by one. Another avenue for the chip, and probably one you've thought of by now, is GPUs.
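The 'neighbours only' restriction happens to match the communication pattern most per-pixel work already has. Another hedged sketch, again purely illustrative Python with no connection to the real chip design: a stencil operation in which each cell is updated using only its four adjacent neighbours, so no data ever needs to cross the grid.

```python
# Purely illustrative: a stencil where every 'core' (one per pixel) only ever
# reads its four adjacent neighbours, the local-only communication pattern
# the 1,000-core design relies on.
import numpy as np

def neighbour_average(grid):
    """Average each cell with its up/down/left/right neighbours only."""
    padded = np.pad(grid, 1, mode="edge")  # replicate edges so borders have neighbours
    return (padded[1:-1, 1:-1] +                       # the cell itself
            padded[:-2, 1:-1] + padded[2:, 1:-1] +     # the cells above and below
            padded[1:-1, :-2] + padded[1:-1, 2:]) / 5.0  # the cells left and right

frame = np.random.default_rng(1).uniform(0, 255, size=(8, 8))
print(neighbour_average(frame).round(1))
```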
There are all kinds of problems this chip could be thrown at that don't require super-accurate results, or that even benefit from a certain amount of inaccuracy (artificial intelligence, for instance). The chip would still be part of an overall system; it would need a CPU to tell it what to do and to make sense of the results. Perhaps it's another type of co-processor to be integrated into the CPU at a later date.
For gaming though? The world isn't perfect, so why should the calculations be? The movie industry regularly adds film grain to dirty up the picture. A little digital dirt never hurt anyone… right?
Arithmetic circuits that returned such imprecise answers would be much smaller than those in today’s computers. They would consume less power, and many more of them could fit on a single chip, greatly increasing the number of calculations it could perform at once. The question is how useful those imprecise calculations would be.