Exascale computers hold the promise to unravel a new world of possibilities
By Jaya Jagadish
In recent years, we have witnessed a rapid increase in the adoption of technologies such as AI, ML, cloud computing and data science. Several factors during the Covid-19 pandemic have aided this: the widescale adoption of virtual business models by enterprises; the expansion, maturing and adoption of cloud ecosystems; growth in broadband, mobile and PC penetration; the shift to digital transactions by a growing tech-savvy consumer base; and the larger ecosystem's efforts to gear up for 5G networks. The resulting explosion of data is so huge that we can no longer rely on traditional supercomputers to process the incoming information.
We have entered the high-performance computing (HPC) megacycle. An IDC study estimates that over 59 zettabytes of data were generated in 2020; one zettabyte equals a trillion gigabytes. The volume of data created in the next three years will be much higher than the volume created in the past thirty years. It is also estimated that the volume of data generated by embedded devices, along with increasing metadata, will soon surpass all other types of data.
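The unit conversion quoted above can be sanity-checked with a few lines of arithmetic; a minimal sketch, assuming the standard SI definitions of these prefixes:

```python
# Standard SI byte prefixes (powers of ten, not the binary 2**30-style units).
GIGABYTE = 10**9    # bytes
TERABYTE = 10**12   # bytes; "a trillion" in the short-scale sense used here
ZETTABYTE = 10**21  # bytes

# One zettabyte is a trillion gigabytes, as the article states:
gigabytes_per_zettabyte = ZETTABYTE // GIGABYTE
print(gigabytes_per_zettabyte)  # 1000000000000, i.e. one trillion
```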
A new computing paradigm
While supercomputer performance doubled nearly every year from 2002 to 2009, this rate dropped to every 2.3 years from 2009 to 2019. This was due to several factors, including the slowdown of Moore's Law and technical constraints such as the end of Dennard scaling. While these appeared to be significant obstacles, technologists have now found innovative ways to overcome them and usher in what is being called the Exascale era of computing. Exascale systems are computing systems powerful enough to perform a billion billion, that is, a quintillion (10^18), operations per second.
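To make the quintillion figure concrete, here is a small illustrative calculation. It is only a back-of-the-envelope sketch: the 1.5-exaflop figure is the peak quoted later in this article, and the world-population figure of eight billion is an assumption for illustration, not a number from the source.

```python
# One exaflop: a quintillion (10**18) floating-point operations per second.
EXAFLOP = 10**18

# Frontier's quoted peak processing power (from the article).
frontier_peak = 1.5 * EXAFLOP

# Assumed world population of ~8 billion people (illustrative assumption).
WORLD_POPULATION = 8 * 10**9

# Operations every person on Earth would need to perform each second
# just to match a single exaflop:
per_person = EXAFLOP / WORLD_POPULATION
print(f"{per_person:,.0f} operations per person per second")  # 125,000,000
```

In other words, even if every person on the planet performed well over a hundred million calculations per second, they would collectively match only two-thirds of Frontier's quoted peak.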
To push the bleeding edge of performance and efficiency, engineers are building heterogeneous systems that use CPUs and GPUs, along with co-design, that is, the iterative optimisation of both hardware and software in pursuit of greater performance and efficiency at lower cost. This will come to life when the Frontier supercomputer, powered by 3rd Gen EPYC CPUs and Radeon Instinct GPUs, is launched. Frontier is expected to be the world's fastest and the world's first Exascale supercomputer. It is being installed at Oak Ridge National Laboratory (ORNL) in the United States, and will deliver more than 1.5 exaflops of peak processing power.
Expanding the frontiers of science
Researchers are looking at using this immense computational power coupled with the fusion of HPC and AI to tackle grand challenges once thought beyond reach. We saw a preview of this with the demand for high computational power during the global rush to develop the vaccine for Covid-19.
When it comes to Frontier, ORNL is preparing eight key scientific applications, including one that will investigate astrophysics and galaxy formation. Another interesting application is a plasma physics simulation system called PIConGPU. It is mainly relevant to the radiotherapy of cancer, and to probing the structure of matter via X-rays in the material and life sciences.
Another use case for Exascale computers is in climate science. Exascale computers will allow climate scientists to simulate the behaviour of the world’s oceans and atmosphere, and to model abrupt climate change, helping us understand what we need to do to keep the earth a hospitable place.
In conclusion, at over a quintillion operations per second, Exascale computers hold the promise to unravel a new world of possibilities including the fundamental forces of the universe, and who knows, maybe even the origin of the universe!
The writer is country head, AMD India