Oh, Oak Ridge. You’ve gone and done it again. Remember the supercomputer “TITAN” that debuted five years ago? Even by supercomputer standards, that thing was super, but the way things go in technology, no piece of performance gear can hold the top spot for long. In recent years, TITAN has naturally slipped down the list of the world’s fastest supercomputers a bit, but we knew Oak Ridge National Laboratory wouldn’t let that last too long.
On Friday, Summit was unveiled, and what a beautiful beast it is. Aesthetically, this has got to be one of the sharpest-looking supercomputers ever crafted – a leap beyond TITAN, if I’m honest. But looks are not what’s important here; computation is. And Summit has enough of it to claim the title of world’s fastest supercomputer.
Inside Summit are a staggering 27,648 NVIDIA Volta-based GPUs, each of which comes equipped with Tensor Cores to dramatically improve deep-learning / artificial intelligence performance. Not impressed? Please sit down, and prop your jaw up for its own safety:
Summit is 100x faster than TITAN at deep learning. The performance number that makes that a reality is 3 billion billion (3 quintillion) calculations per second, or in simpler terms, 3 exaops. For more mind-boggling perspective, the universe we live in is roughly half an exa-second old.
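That exa-second claim checks out with some quick arithmetic. A minimal sketch (my own back-of-the-envelope math, using the commonly cited ~13.8-billion-year age of the universe, which isn’t stated in the article):

```python
# Back-of-the-envelope check of the "half an exa-second" claim.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.156e7 seconds
UNIVERSE_AGE_YEARS = 13.8e9             # commonly cited estimate (assumption)

age_seconds = UNIVERSE_AGE_YEARS * SECONDS_PER_YEAR
age_exaseconds = age_seconds / 1e18     # 1 exa-second = 1e18 seconds

print(f"{age_exaseconds:.2f} exa-seconds")  # ~0.44, i.e. roughly half
```

So at 3 exaops, Summit performs more calculations in one second than there have been seconds (times six, give or take) since the Big Bang.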
NVIDIA’s Jensen Huang giving a speech at the Summit unveil
Of Summit’s total performance output for the work at hand, 95% comes from the GPUs. Aside from the deep-learning performance, the GPUs collectively deliver 200 petaflops of double-precision performance, and while it’s not going to matter much for this supercomputer’s workloads, that’d be 400 petaflops of single-precision (versus 27 PFLOPS with TITAN, and 1.75 PFLOPS with Jaguar before it). Other fun stats include the fact that it takes up 5,600 square feet, and weighs as much as a commercial jet.
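To put those generational jumps in perspective, here’s a quick sketch of the speedup ratios implied by the peak figures quoted above (the ratios are my own arithmetic, not from the article):

```python
# Peak double-precision figures quoted in the article, in PFLOPS.
summit_pf = 200.0   # Summit
titan_pf = 27.0     # TITAN
jaguar_pf = 1.75    # Jaguar

print(f"Summit vs TITAN:  {summit_pf / titan_pf:.1f}x")   # ~7.4x
print(f"Summit vs Jaguar: {summit_pf / jaguar_pf:.0f}x")  # ~114x
```

Roughly a 7x jump over TITAN in six years, and over 100x what Jaguar managed a generation earlier.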
I’d be remiss if I ignored the CPU horsepower in this beast. This project is actually a joint venture between ORNL, the DoE, NVIDIA, and IBM. Each of Summit’s servers holds dual 22-core IBM Power9 chips, alongside six of the Volta GPUs. In addition to this core performance hardware, there’s also a healthy 10 petabytes of DRAM.
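The per-node figures above imply some system-wide totals worth spelling out. A quick sketch (the node count and core total are inferred from the numbers quoted, not stated in the article):

```python
# Inferring system-wide totals from the per-server figures quoted above.
total_gpus = 27_648      # total Volta GPUs in Summit
gpus_per_node = 6        # six Volta GPUs per server
cpus_per_node = 2        # dual Power9 chips per server
cores_per_cpu = 22       # 22 cores per Power9

nodes = total_gpus // gpus_per_node        # inferred node count
cpu_cores = nodes * cpus_per_node * cores_per_cpu

print(nodes, cpu_cores)  # 4608 nodes, 202752 CPU cores
```

That’s thousands of servers acting as one machine, with nearly a quarter-million CPU cores just feeding the GPUs.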
Summit will be used by the United States Department of Energy, which has big plans for the supercomputer’s more than $100M worth (at retail pricing) of GPUs. That includes high-energy physics, materials discovery, healthcare, and more. Summit’s already scheduled to help with cancer research, fusion energy, and research into disease and addiction.
When ORNL built TITAN in 2012, it seemed to have foresight most others did not. It understood the importance of GPUs in the future of computing, and since TITAN’s debut, over 550 GPU-accelerated HPC applications have been built – 15 of which rank among the most widely used in the field.
Does anyone even want to predict where we will be in five years’ time?