The Future
Super computers
Processing large amounts of data with machine learning algorithms has led to an increasing demand for computing power. High Performance Computing is a strategic resource with applications in countless sectors.
Humanity generates tens of zettabytes (a zettabyte is a trillion gigabytes) of data annually. Within a few years, when some 150 billion new "smart" devices are connected to the Internet, this figure is expected to rise to several yottabytes, a byte count that exceeds even the Avogadro constant (the number of particles, whether atoms, molecules, or ions, in a mole of substance: roughly 6 x 10^23). In this context, artificial intelligence, and in particular machine learning with its remarkable advances in recent decades, appears to be the only viable tool for analyzing and extracting value from such vast volumes of data. Machine learning technologies are now crucial at every stage of big data analysis: identifying patterns in the data (information), detecting correlations between those patterns (knowledge), building mathematical models that represent the data, and finally developing predictive algorithms consistent with those models.
The processing of large amounts of data with machine learning algorithms has driven an increasing demand for computing power. Since 2012, the computing power used during the training phases of machine learning algorithms has grown exponentially, with a doubling time of just 3.4 months; for comparison, Moore's Law predicts a doubling time of two years. To put this in perspective, computational usage has increased by more than 300,000 times since 2012, whereas Moore's Law would have yielded only about a sevenfold increase. These advances in processing power have been a critical factor in the progress of machine learning algorithms. As the trend continues, it is crucial to prepare for computing systems that far exceed current capabilities.
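To see how quickly those two regimes diverge, here is a back-of-the-envelope sketch in Python; the roughly five-year window is an assumption chosen to approximately reproduce the figures quoted above.

```python
# Back-of-the-envelope comparison of the two growth regimes: machine
# learning training compute doubling every 3.4 months versus Moore's Law
# doubling every 24 months.

def growth(months: float, doubling_time_months: float) -> float:
    """Total multiplicative growth after `months`, doubling every `doubling_time_months`."""
    return 2 ** (months / doubling_time_months)

window = 62  # months: an assumed ~5-year span (2012 to late 2017)

print(f"ML training compute: ~{growth(window, 3.4):,.0f}x")  # ~300,000x
print(f"Moore's Law:         ~{growth(window, 24):.0f}x")    # ~6x
```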
High Performance Computing (HPC) today
High Performance Computing (HPC) technologies, currently capable of operating at petaFLOPS scale, typically rely on parallel computing. Interest in HPC began in the 1960s, with early applications—particularly simulations—at U.S. universities and government agencies. Since then, advancements in hardware architectures and software tools have unlocked the potential of HPC systems for a wide range of applications. In recent years, new opportunities have emerged for applying HPC in business sectors far removed from its original contexts. Today, HPC is widely used in fields such as cybersecurity, biometrics, optimization, risk management, process improvement, behavioral prediction, and business model enhancement. Partly due to its broad applications in business, the global HPC market is experiencing rapid growth, now exceeding USD 40 billion, with further expansion anticipated.
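As a toy illustration of the data-parallel pattern underlying most HPC workloads (real systems distribute work across thousands of nodes, typically with MPI; Python's multiprocessing merely stands in for the idea), a large problem can be split into independent chunks, processed concurrently, and recombined:

```python
# Toy illustration of the data-parallel pattern behind most HPC workloads:
# split a large problem into independent chunks, process them concurrently,
# then combine the partial results.
from multiprocessing import Pool

def partial_sum(bounds: tuple[int, int]) -> int:
    """Stand-in for real per-chunk work (here: a partial sum of squares)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 8
    step = n // workers
    # Divide [0, n) into `workers` contiguous chunks; the last absorbs any remainder.
    chunks = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))  # scatter work, gather results
    print(total)  # same answer as the serial sum, computed in parallel
```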
One example of HPC applications in business is PayPal, which uses an HPC cluster to identify fraud and suspicious behavior patterns in real time, quickly develop new anti-fraud models, and extract insights from the continuous flow of data recorded on its systems. PayPal achieves this by correlating data from many disparate sources, allowing the company to make timely business interventions. Specifically, the cluster's computing speed enables PayPal to process and correlate data from thousands of sources, handling approximately 3 million events per second, to generate real-time insights. It analyzes application logs, operational data, environmental data, social media trends, and customer interactions, processing data flows of 25 terabytes per hour from thousands of servers. HPC enables PayPal to detect patterns and anomalies and to take immediate action before users are negatively affected.
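PayPal's actual pipeline is proprietary, but the kind of streaming anomaly detection described above can be sketched in a few lines: flag a metric, such as the event rate from one source, when it strays far from a rolling baseline. The window size, threshold, and synthetic data below are purely illustrative assumptions.

```python
# Minimal streaming anomaly detector: flag a metric (e.g., events per second
# from one data source) when it deviates sharply from its rolling baseline.
# Window size, threshold, and the synthetic stream are illustrative only.
import random
from collections import deque
from statistics import mean, stdev

def detect_anomalies(stream, window=60, threshold=3.0):
    """Yield (index, value) when a value is > `threshold` standard deviations from the rolling mean."""
    history = deque(maxlen=window)
    for i, value in enumerate(stream):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        history.append(value)

random.seed(42)
rates = [100 + random.gauss(0, 5) for _ in range(120)]  # steady baseline with noise
rates[90] = 500.0                                       # sudden burst of activity

for idx, val in detect_anomalies(rates):
    print(f"anomaly at t={idx}: rate={val:.0f}")        # flags the spike at t=90
```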
In a different business segment, Eni utilizes HPC in its upstream activities to enhance hydrocarbon exploration, improve reservoir modeling during production, and strengthen prediction and simulation capabilities. To achieve these goals, Eni operates two HPC systems with a combined peak computing capacity of 52 petaFLOPS, integrating numerical simulation with machine learning techniques, particularly deep learning. The power of these systems has allowed Eni to cut the time needed to reconstruct subsurface models from several months to just days or even hours. The introduction of HPC systems has also halved the time required to begin production, from 9 years to 4.5 years.
As of June 2024, the United States and China lead the world in HPC installations, with 171 and 80 systems, respectively. Italy ranks eighth globally, with 11 installations. The majority of the world’s most powerful HPC systems are located in the U.S., including the top three: Frontier at Oak Ridge National Laboratory, Aurora at Argonne National Laboratory, and Eagle by Microsoft Azure. In the European Union, Italy ranks third in computing power, behind Germany and France, and has three systems in the global top 100: Cineca’s Leonardo, in seventh place, and Eni's HPC5 and HPC4, in twenty-fourth and eighty-first places, respectively. Compared to other regions, Italy stands out for its extensive use of HPC in industry.
Looking to the future
In the near future, new HPC systems are expected to come online, such as HPC6, recently announced by Eni, which will have a peak power of over 600 petaFLOPS. Additionally, two emerging technology trends are likely to have a significant impact on HPC: the interaction between HPC and quantum computing, and between HPC and cloud computing.
Quantum computing is based on quantum theory; one of its most intriguing computational aspects is the quantum bit (qubit), which can exist in two states simultaneously thanks to the superposition principle of quantum physics. Unlike traditional computing, quantum computing's power does not rely on higher clock speeds but on its ability to handle exponentially larger datasets. Given the current limitations of quantum computing and the ongoing evolution of HPC, hybrid systems are likely in the near term: systems that combine traditional high-speed general-purpose processing (via HPC) with ultra-high-performance, use-case-specific processing (via quantum computing). Notably, the EuroHPC JU (European High Performance Computing Joint Undertaking) has funded the HPCQS (High-Performance Computer and Quantum Simulator hybrid) project, which aims to integrate quantum and traditional HPC technologies by incorporating quantum simulators into existing European supercomputers. This will be a globally unique incubator for quantum-HPC hybrid computing, unlocking new innovation potential and preparing Europe for the post-Exascale era.
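The superposition principle mentioned above is easy to illustrate numerically. The following minimal state-vector sketch (plain NumPy, not a real quantum SDK) puts one qubit into an equal superposition with a Hadamard gate and shows why a measurement yields 0 or 1 with equal probability.

```python
# State-vector illustration of a single qubit in superposition.
# A qubit is a unit vector in C^2; the Hadamard gate maps |0> to
# (|0> + |1>)/sqrt(2), so measuring gives 0 or 1 with probability 1/2 each.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)        # basis state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = H @ ket0                              # equal superposition of |0> and |1>
probs = np.abs(state) ** 2                    # Born rule: |amplitude|^2

print(f"amplitudes: {state}")                       # [0.707+0j, 0.707+0j]
print(f"P(0)={probs[0]:.2f}, P(1)={probs[1]:.2f}")  # 0.50 each

# Simulate repeated measurements: each shot collapses to a definite 0 or 1.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)
print(f"measured 0 in {np.mean(samples == 0):.1%} of 1000 shots")
```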
In the near future, it will become important to consider the interactions between HPC and cloud computing. In recent years, major cloud providers have invested heavily in global networks of massive-scale systems that could become highly competitive with current HPC systems. As the computational demands of AI algorithms grow, today’s cloud systems are increasingly built using custom chips and semiconductors. This trend could negatively impact HPC by reducing the financial leverage of traditional hardware and CPU vendors, which have historically played a key role in HPC development.
Will cloud providers become the dominant players in offering HPC-like services and applications? It’s difficult to predict, but cloud architectures are already taking the lead in many fields, such as gaming and computer vision, and are reshaping how we think about high-performance computing. Building the next generation of HPC systems will likely require rethinking traditional approaches and focusing more on the success factors of cloud architectures, such as custom hardware configurations and large-scale prototyping.
One crucial point to consider: while HPC systems have traditionally relied on large financial investments from governments and research institutions, cloud systems generate substantial revenues. Is this model sustainable for HPC? This dynamic suggests that stronger collaborations between HPC and dominant players in the computing ecosystem, including cloud providers, may be necessary. It’s no coincidence that the third most powerful HPC system in the world today is Microsoft Azure’s Eagle supercomputer.