The Future
Driver for innovation
Supercomputing is no longer just a nice-to-have. The ability to process data at massive scale is essential for companies operating in an increasingly complex and interconnected world.
In 1954, Enrico Fermi wrote a letter to the rector of the University of Pisa with a request that seemed, at the time, futuristic: a new computing machine to support studies in the field of physics.
The legendary intuition of this Italian scientist marked a dividing line between the world that had been and the world that would be. Seventy years on, we are immersed in a technological revolution that is pushing science into territories that only the power of supercomputing is able to explore. In every sector—from physics to materials science, from energy to health—supercomputing has become an essential enabler of innovation.
Research complexity in the age of supercomputing
In today’s world, the complexity of the challenges we face and the urgency to bring innovative products and processes to market can no longer be tackled with traditional data processing systems. Whether it’s developing new materials, researching future energy solutions, or advancing health technologies, we need the capability to simulate billions of parameters and variables, integrate vast datasets, and process information globally in mere fractions of a second.
The energy transition, in particular, demands vast computational resources. For example, improving the efficiency of solar cells, developing higher-performance batteries, or simulating plasma behavior in future magnetic confinement fusion plants all require immense computing power.
Since October 2022, when the first public release of ChatGPT “democratized” access to generative AI applications, it has become evident that supercomputing has yet another vital field of application.
The computational power needed to train and query the various large language models, whether for private or corporate use, is poised to become an increasingly significant factor in the overall demand for supercomputing resources.
The challenge of energy efficiency
To paraphrase a famous comic book hero: with great power come great challenges. Supercomputers are remarkable machines, but their massive energy consumption is drawing growing public scrutiny. This issue resonates particularly with younger generations, who are highly attuned to environmental concerns yet are also the primary users of digital services. The seemingly free access to those services and applications often obscures the underlying technological complexity and the substantial resources required to sustain them.
The energy consumption of supercomputers like HPC6 is a much-debated topic.
Estimates of both the current situation and future developments vary widely depending on the source, but the trend is unmistakable: the global energy consumption of supercomputers and the infrastructure that supports them is expected to rise significantly in the coming years. A single supercomputer can consume several megawatts of electricity, enough to power a small city; taken together, the machines currently in use or projected for the near future consume energy on a scale often compared to that of entire nations.
The challenge, then, is not only to build ever more powerful machines, but to make them more energy efficient as well. In this context, companies like Eni, which place innovation at the core of their strategy, must prioritize digital sustainability. For Eni, this aspect was not only fundamental but foundational in the design and construction of its Green Data Center (GDC) in Ferrera Erbognone, near Pavia, in 2013. The facility houses Eni’s supercomputers HPC4, HPC5, and the soon-to-be-operational HPC6.
The GDC stands as a model of excellence in combining advanced technology with environmental sustainability. From the outset, its primary goal has been not just operational efficiency, but also minimizing environmental impact, with sustainability at the core of its mission.
A key feature that sets the GDC apart is its focus on energy efficiency, measured using the PUE (Power Usage Effectiveness) indicator. PUE represents the ratio of a data center’s total energy consumption to the energy used specifically for powering its IT systems, such as servers, storage, and networking devices.
A PUE of 1.0 represents the maximum theoretical efficiency, meaning that all the energy consumed is used solely to power the IT systems, with no waste. The latest certified PUE value for Eni’s GDC, recorded in 2023, is 1.172—an extraordinary result that ranks the facility among the most efficient in Europe. This achievement is the result of several innovative solutions that optimize cooling and energy management, including the use of an air convection cooling system (“free cooling”) and the integration of a solar park that helps power the GDC.
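To make the arithmetic behind the indicator concrete, here is a minimal worked example; the only measured figure used is the 1.172 value quoted above, and the overhead share follows by simple subtraction.

\[
\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT}}},
\qquad
\mathrm{PUE} = 1.172 \;\Rightarrow\; E_{\text{overhead}} = \left(1.172 - 1\right) E_{\text{IT}} = 0.172\, E_{\text{IT}}
\]

In other words, for every kilowatt-hour delivered to the servers, storage, and network equipment, roughly 0.17 kWh goes to cooling, power distribution, and other auxiliary systems; a facility running at a PUE of 1.5, by comparison, would spend about 0.5 kWh on that same overhead.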
The introduction of HPC6 at the GDC marks a further advancement in the supporting systems, including the adoption of a new liquid cooling system, all with the goal of maximizing overall energy efficiency.
Supercomputing at Eni: a precious resource
Given its costs and energy demands, it’s important to consider the value that the HPC family can offer. For a global company like Eni, which produces and processes vast amounts of data, supercomputing is not just a strategic choice—it’s an operational necessity.
Every exploration activity, decarbonization project, and study of new energy sources both generates and requires vast amounts of data. Without the power of machines like HPC6, transforming this data into actionable insights for our business and our decarbonization strategy would be impossible. By rapidly processing vast amounts of data, digital tools—particularly supercomputing—enable and accelerate innovation across a wide range of areas and processes within our organization.
Data quality and robustness
In the world of supercomputing, access to vast amounts of data is not enough; the data must also be robust and of high quality. As the old computer science adage goes: “garbage in, garbage out.” Poor-quality, incomplete, or untraceable data not only makes simulations and analyses useless but can also lead to faulty decisions.
From the sensors monitoring our offshore fields to the systems analyzing our bio-refining processes, robust data forms the foundation of every decision we make, both operational and strategic.
For this reason, we have implemented, and continually refine, systems and processes that enforce a strong data policy. These cutting-edge applications “federate” and validate data across the organization, making it accessible and usable by different professional teams. This ensures that decisions are based on solid, verifiable knowledge.
The importance of data quality extends beyond numerical data. A large company like ours also holds vast amounts of textual information, which today’s AI tools allow us to search and analyze. This enables us to quickly extract insights, suggestions, and solutions from past issues that remain relevant. All of this is made possible by software for managing unstructured, distributed knowledge.
Here too, it’s essential to carefully validate and systematize input data to ensure that the output can be trusted. At Eni, we are committed to fostering a “data culture” at all levels, so we can increasingly rely on this valuable resource with greater efficiency and security.
Algorithms: the real driver of supercomputing
Algorithms are what transform data into useful energy, and they do so thanks to the skills and talent of the people who design them. Without advanced algorithms, HPC6 would be like a Formula 1 car without a skilled driver.
Over time, and thanks to the expertise of our team, we have built a strong capability in developing proprietary software that fully harnesses the power of the HPC machines. Thanks to these algorithms—and the programs that translate them into instructions optimized for our massively parallel computing systems—we can model gas deposits with extreme precision, optimize our operations for energy efficiency and safety, and explore and validate new technologies for producing energy from renewable sources and breakthroughs like magnetic confinement fusion. True strength lies not only in the sheer power of the supercomputer (even when, like Eni’s, it ranks among the best in the world) but in the ability to fully harness it through intelligent guidance.
The future: HPC6 and Quantum Computing
In the supercomputer ecosystem, time flies. The TOP500 ranking, which lists the 500 most powerful supercomputers in the world, is updated every six months. Due to new entries or upgrades, the top 10 is in constant churn. We must make the most of the machines we have, while also considering how to take advantage of the ongoing improvements in computational power, efficiency, and integration that technological progress continues to unlock. With the launch of HPC6, which will soon replace HPC4 and HPC5 (in operation since January 2018 and January 2020, respectively), Eni’s computing power is expected to increase nearly tenfold, from around 70 to approximately 600 petaFLOP/s at peak performance. HPC6 will support the next phase of our innovation process and, in particular, enable us to tackle the challenges of achieving carbon neutrality, serving as a crucial technological tool in the development of new energy solutions.
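As a quick check on the “nearly tenfold” figure, using only the peak values quoted above:

\[
\frac{P_{\text{peak, HPC6}}}{P_{\text{peak, HPC4+HPC5}}} \approx \frac{600~\text{petaFLOP/s}}{70~\text{petaFLOP/s}} \approx 8.6
\]

so the jump is roughly a factor of nine in peak performance; sustained performance on real workloads will, of course, depend on how well each application exploits the new architecture.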
As part of our commitment to the energy transition, we are also launching significant quantum computing applications through Eniquantic, the joint venture with ITQuanta established last July. With Eniquantic, we are solidifying our position at the forefront of supercomputing, particularly in this exciting and potentially transformative field. Although still in its early stages, quantum technology has the potential to completely revolutionize research and development in critical sectors such as energy and materials. With its ability to process an unimaginable number of states simultaneously, quantum computing could solve problems that would take even the most powerful traditional supercomputers years to address.
A recent study on the profit potential of quantum computers highlights the significance of adopting quantum-hybrid solutions to maximize return on investment in this field. According to the study, such solutions “harness the power of both quantum and classical computers, reducing environmental noise and costs compared to pure quantum systems. As a result, they offer an immediate commercial quantum solution for enterprises at a lower cost, even before pure quantum computers are fully commercialized.”
What’s happening in the automotive sector, where hybrid vehicles are paving the way for fully electric ones, could also happen in quantum computing. In a field as cutting-edge as quantum technology, having both powerful traditional supercomputing capabilities and quantum expertise (and, in the future, quantum machines) within the same organization opens up new possibilities. At Eni, we are cautiously exploring these paths, fully aware of the complexities involved, but also of the potentially revolutionary applications they could unlock.
For us, the concept of hybridization in supercomputing is not new: Eni was in fact among the first companies in the world (in 2014, with the HPC2 supercomputer) to recognize the importance of combining CPUs and GPUs in a hybrid configuration in order to maximize computational performance and energy efficiency. This technical approach has since become so well established as to be paradigmatic, and it underpins the fortunes of some companies that originally specialized in graphics cards for video games.
Quantum Computing on the Horizon
From Enrico Fermi’s insight into the need for computing power in physics to the evolution of supercomputing at Eni, the journey has been remarkable. Today, supercomputing is no longer a mere accessory—it is a central pillar of innovation. For companies like ours, operating in an increasingly complex and interconnected world, the ability to process data on a massive scale has become—and will continue to be—indispensable.
With HPC6 and quantum computing on the horizon, Eni not only continues to push the boundaries of technology but also reaffirms its role as a visionary leader in a sector where the future of energy and innovation depends on the power of supercomputing.
Moore’s Law¹ may not last forever, but our ability to innovate will remain a defining feature of our commitment and mission.
¹ Moore’s Law is an “empirical law that describes the development of microelectronics from the early 1970s onwards, with an essentially exponential, and therefore extraordinary, progression; the law was first stated in 1965 by Gordon Moore, one of the founders of Intel and a pioneer of microelectronics, who publicly reaffirmed it in 1974. It states that the complexity of microcircuits (…) doubles periodically, with the period originally set at 12 months, extended to 2 years towards the end of the 1970s and, from the beginning of the 1980s, settling at 18 months (…)” (Treccani Encyclopedia).
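Written out as a formula, the doubling law described in the entry above takes a standard exponential form; the symbols below are an illustrative rendering, not part of the Treccani text:

\[
N(t) \approx N_{0} \cdot 2^{\,t/T}, \qquad T \approx 18~\text{months}
\]

where \(N(t)\) is the complexity (transistor count) of a microcircuit at time \(t\), \(N_{0}\) its value at the chosen starting point, and \(T\) the doubling period, originally set at 12 months and later settled at 18.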