Supercomputing: A new era of computing

To the point

Supercomputers are high-performance computers (HPCs) that combine a variety of components, including processors, memory, storage, I/O (input/output), and networks, to perform computationally intensive tasks. Scientists started building supercomputers in the 1960s to solve problems that would otherwise have taken conventional computers decades to complete.

Over the past 60 years, developments in supercomputing have enabled significant advances in a variety of fields. Notable applications include forecasting the weather, helping governments develop safety plans for nuclear events, and simulating the human brain.

This article will highlight how supercomputers work, demystify the difference between supercomputing and high-performance computing (HPC), and detail how these systems might benefit society in a variety of circumstances.

What you need to know about supercomputing

From the data center to the software layer, supercomputing covers every area involved in handling complex problems that require substantial computing power. Such computing capacity is made possible only by parallelizing many optimized CPU cores across multiple machines, as illustrated in Figure 1.
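
To make this idea concrete, here is a minimal Python sketch (illustrative only; the workload, chunking, and pool size are arbitrary assumptions) that splits one large computation across several worker processes, much as a supercomputer distributes work across thousands of cores:

    import multiprocessing as mp

    def partial_sum(bounds):
        # Each worker independently computes its slice of the problem.
        start, end = bounds
        return sum(i * i for i in range(start, end))

    if __name__ == "__main__":
        n = 10_000_000
        workers = mp.cpu_count()              # one worker per available core
        step = n // workers
        chunks = [(i * step, (i + 1) * step) for i in range(workers)]
        chunks[-1] = (chunks[-1][0], n)       # last chunk absorbs any remainder
        with mp.Pool(workers) as pool:
            total = sum(pool.map(partial_sum, chunks))
        print(f"Sum of squares below {n}: {total}")

Real supercomputers use message-passing frameworks such as MPI across physically separate nodes, but the principle is the same: divide the problem, compute the parts in parallel, and combine the results.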

Figure 1: Schematic of a supercomputer

Supercomputers are not the same as cloud computing, though the two can take a similar structure. The key difference is the interconnect: cloud instances communicate over the internet rather than over a dedicated physical link, which makes cloud computing slower than most modern supercomputers.

Floating-point computations are frequently used in scientific domains. Thus, a key indicator for supercomputers is the number of floating-point operations per second (FLOPS) they can perform. For reference, the most powerful supercomputer can achieve 1 ExaFLOPS (10^18 FLOPS) at its peak, whereas a traditional laptop produces only around 30 GigaFLOPS (30 × 10^9). Put in perspective, Indiana University determined that a human carrying out one floating-point calculation per second would need more than 31 billion years to match what an exa-scale supercomputer does in a single second.
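
That arithmetic is easy to verify. The short Python sketch below (no external dependencies; the 30-GigaFLOPS laptop figure is taken from the paragraph above) computes how long a human at one calculation per second, or a laptop, would need to match one second of exascale work:

    EXA = 10**18                     # peak FLOPS of an exascale supercomputer
    LAPTOP = 30 * 10**9              # rough FLOPS of a traditional laptop
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    # One second of exascale output, performed at 1 calculation per second:
    human_years = EXA / SECONDS_PER_YEAR
    print(f"Human: {human_years:.3e} years")    # ~3.169e+10, about 31.7 billion

    # The same one-second workload on the laptop:
    laptop_years = EXA / LAPTOP / SECONDS_PER_YEAR
    print(f"Laptop: {laptop_years:.2f} years")  # ~1.06 years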

Table 1: Computer performance in FLOPS with scale 

A word on high-performance computing (HPC)

Supercomputing is the use of a computer to solve problems that require substantial computational power, large amounts of data, or both. High-performance computing, by comparison, is the integration of all the knowledge and tools necessary to construct parallel supercomputers.

To remain competitive, supercomputers must be parallelized, and virtually all new supercomputers are built on parallel architectures. All supercomputers are now high-performance computers operating in clusters, so the terms “supercomputer” and “high-performance computer” can be used interchangeably.

How does supercomputing performance rate against efficiency?  

There are thousands of supercomputers in use today, virtually all of which were created by the same five manufacturers. Various ranked lists organize them according to different benchmarks; the best known is TOP500, which publishes a ranking of the 500 most powerful supercomputers twice a year. There are also rankings based on environmental criteria, such as the Green500, which weighs energy efficiency in the context of supercomputing.

The MeluXina supercomputer, constructed in Luxembourg as part of EuroHPC, has been available for research and private use since the beginning of 2022. It is recognized as the 16th greenest supercomputer in the world and the 48th most powerful overall. One might expect an inverse relationship between energy efficiency and overall performance, or, conversely, a direct relationship between the number of cores and performance. But a comparison of the top supercomputers in the TOP500 and Green500 rankings shows no clear evidence of either.
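
The efficiency metric behind such comparisons is simply sustained performance divided by power draw. The figures in this Python sketch are invented for illustration; only the formula reflects how a Green500-style ranking is computed:

    def gflops_per_watt(rmax_tflops: float, power_kw: float) -> float:
        # Efficiency: sustained performance (converted from TFLOPS to
        # GFLOPS) divided by power consumption (converted from kW to watts).
        return (rmax_tflops * 1_000) / (power_kw * 1_000)

    # Two hypothetical machines: raw speed and efficiency need not align.
    print(gflops_per_watt(rmax_tflops=150_000, power_kw=21_000))  # ~7.14 GFLOPS/W
    print(gflops_per_watt(rmax_tflops=30_000, power_kw=2_000))    # 15.0 GFLOPS/W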

Consider the exceptional case of LUMI. Both the 3rd most efficient and the 3rd best performing, it requires considerably fewer cores than the other supercomputers in the top three. This anomaly is promising and suggests great potential for “green” supercomputers.  

Table 2: The top 5 TOP500 supercomputers and their Green500 rankings

Table 3: The top 5 European supercomputers in the TOP500

How is supercomputing disrupting the market?

The ease with which companies can leverage supercomputers is increasing. Consider again the case of MeluXina: more than 200 businesses across 15 distinct industries have expressed interest in adopting it for their initiatives. As of June 2022, 60 of the resulting projects had been completed.

At a broader level, the supercomputer market is expected to register a compound annual growth rate (CAGR) of almost 10% between 2022 and 2027. Two trends contribute to this growth:

Parallel industry growth

Three key sectors, all of which rely on the power of supercomputers, are also fueling the market's growth:

  • Artificial intelligence and machine learning to tackle data-intensive problems
  • Cloud computing to reduce capital expenditure and easily scale infrastructure using parallel computing
  • Cybersecurity to address the drastically increasing number of cyberattacks; infrastructures can be protected with sophisticated cryptographic algorithms so computationally demanding that they are practical only on supercomputers.

Stability and accumulated domain expertise

The creation of supercomputers requires years of study. For that reason, the field's five major vendors are expected to remain the same for the foreseeable future. Their decades of research and industry expertise form a solid foundation for continued evolution, helping businesses address changing needs in ways that are both meaningful and profitable.


How is supercomputing being used in the real world? 

Like all tools, supercomputers won't do much entirely on their own; skilled practitioners are crucial. From creating tailored products to helping staff upskill on parallel computing, consulting firms can help companies use supercomputers to transform their daily operations and, thus, their overall potential. For instance, a consultancy might not only design a unique infrastructure (made of third-party apps and bespoke solutions) but also provide the support needed to implement and integrate it smoothly into day-to-day business.

Here are some examples of cross-sector initiatives that use supercomputers to achieve their goals:

  • Construction: The process of developing and testing various building materials for a whole construction site could take years. Tianhe-1A, one of the most powerful supercomputers in the world, has been used to automatically simulate and optimize the many construction materials and complete logistics of a project, accelerating its completion. Ultimately, using an HPC reduced the cost and environmental impact of construction projects.
  • Aviation: The aviation industry is highly dependent on data. With airplane emissions being a significant contributor to climate change, data can be used to measure fuel efficiency more accurately, among other target metrics, to help reduce pollution. Using HPCs, a large airplane manufacturer was able to run many simulations to optimize its planes' fuel efficiency. Thanks to an estimated 200-pound weight reduction in one of its aircraft, more than $200 million was saved.
  • Finance: To calculate the risks associated with fixed-income operations, tens of thousands of different market scenarios must be examined, and the related risks constantly reviewed. This implies the extended execution of complex algorithms. To speed up this process, a large investment bank used a supercomputer to execute these algorithms; instead of hours, the run took minutes (a toy sketch of this kind of scenario simulation follows this list).
  • Healthcare: It takes several months to test new drugs and techniques for cardiovascular operations. The Living Heart Project, a cardiovascular research initiative, created a virtual human heart with over 25 million parameters using HPCs. This condensed what would have been months of trials into roughly 40 hours.
  • Space: Global Positioning System (GPS) infrastructures on Earth are being compromised by solar storms, which occur every few years. To avoid serious disruption, NASA uses supercomputers and deep learning algorithms (designed to run on HPCs) to forecast these storms.
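
To give a flavor of the embarrassingly parallel workload in the finance example above, here is a toy Monte Carlo sketch of a value-at-risk estimate in Python (the portfolio value, return distribution, and scenario count are all invented for illustration). On a supercomputer, each batch of scenarios would run on a separate node:

    import random

    def simulate_pnl(n_scenarios: int, portfolio_value: float,
                     mu: float, sigma: float) -> list[float]:
        # Each scenario draws a hypothetical one-day return and reprices the
        # portfolio; a production risk engine would instead evaluate full
        # market-factor models, spread across many compute nodes.
        return [portfolio_value * random.gauss(mu, sigma)
                for _ in range(n_scenarios)]

    pnl = simulate_pnl(n_scenarios=50_000, portfolio_value=1e9,
                       mu=0.0, sigma=0.01)
    pnl.sort()
    var_99 = -pnl[int(0.01 * len(pnl))]  # loss at the 1st percentile of P&L
    print(f"99% one-day VaR: ${var_99:,.0f}")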

Most supercomputer use cases are data-intensive simulations that support long-term decision-making. Real-time inference remains a challenge for supercomputing; for such purposes, other technologies, such as EdgeAI, are more appropriate.

Nevertheless, through the increasingly creative uses of supercomputers across sectors, there is a huge opportunity for society to benefit. Supercomputers are not solely calculators; they have the potential for vast, diverse impact and the possibilities are only increasing.  

Conclusion

Modern supercomputers offer a solution for data-intensive processes while being more effective than they once were. For this reason, many businesses, regardless of sector, can profit from using them in their daily workloads. At a broader level, and perhaps most importantly, leveraging the power of supercomputers has the potential to stimulate massive change within both industry and society. As supercomputing becomes increasingly accessible to companies, executives could be considered negligent if they fail to weigh it in their technology transformations.

Visit our Deloitte AI Institute