In the scientific world there are a great number of problems whose solution cannot be reached analytically. Such problems include the study of chaotic systems, complex systems of equations, simulations of biological systems, and so on. Their solution can be obtained by numerical simulation on a supercomputer for scientific computation: in other words, a powerful machine designed to perform high-performance mathematical calculations.
The best supercomputers: the TOP500 project
The TOP500 project aims to maintain an updated list of the 500 most powerful supercomputers on the planet. China is the dominant country by total number of systems, with 226 supercomputers. The United States follows with 114 systems, then Japan with 30, France with 18, and Germany with 16.
The TOP500 project was born in 1993 and updates its list twice a year. The first update coincides with the International Supercomputing Conference in June; the second takes place in November during the ACM/IEEE Supercomputing Conference. The project aims to provide a common reference basis for analyzing the evolution of supercomputers. System performance is evaluated with HPL, a portable implementation of the High-Performance LINPACK benchmark, written in C for distributed-memory computers.
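The idea behind the LINPACK measurement can be sketched in a few lines: solve a dense linear system and divide the conventional operation count by the elapsed time. This is only an illustration of the principle, not the real HPL code, which is a tuned, distributed LU factorization; the function name and problem size here are arbitrary.

```python
# Minimal sketch of the LINPACK idea: solve a dense A x = b and
# derive a flop/s rate from the standard count 2/3*n^3 + 2*n^2.
# (Illustrative only; the real HPL is a distributed, tuned C code.)
import time
import numpy as np

def linpack_like_rate(n: int, seed: int = 0) -> float:
    """Approximate Gflop/s rate for solving a dense n x n system."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)          # LU factorization + triangular solves
    elapsed = time.perf_counter() - start

    assert np.allclose(A @ x, b)       # sanity-check the computed solution
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # conventional LINPACK count
    return flops / elapsed / 1e9

print(f"{linpack_like_rate(2000):.1f} Gflop/s")
```

On a laptop this reports a few to a few hundred Gflop/s; TOP500 machines reach the same metric at the Pflop/s scale by distributing the factorization over many thousands of nodes.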
TOP5 of supercomputers for scientific computing
After the November 2021 update, Fugaku continues to hold the No. 1 position, which it first gained in June 2020. Its HPL benchmark score is 442 Pflop/s (petaflops: 10^15 floating-point operations performed per second). Keeping Fugaku company in the top five positions are Summit, Sierra, Sunway TaihuLight and Perlmutter.
- Fugaku is the first ARM-based computer to earn the title of fastest supercomputer in the world, with more than double the computing power of Summit, the previous record holder. All this computing power is delivered by Fujitsu's 48-core A64FX System-on-Chip (SoC), based on the ARM architecture.
- Summit is a supercomputer developed by IBM. With a performance of 148.8 Pflop/s, it remains the fastest system in the United States and the second fastest in the world. Its processing capacity comes from 4,356 nodes, each equipped with two IBM Power9 CPUs and six NVIDIA V100 Tensor Core GPUs.
- Sierra is a system at Lawrence Livermore National Laboratory, CA, USA. Its architecture is very similar to Summit's: 4,320 nodes, each with two Power9 CPUs and four NVIDIA Tesla V100 GPUs. Sierra reaches 94.6 Pflop/s.
- Sunway TaihuLight is a system developed by China's National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, in the Chinese province of Jiangsu. It holds the No. 4 spot thanks to its 93 Pflop/s.
- Perlmutter, with a performance of 70.9 Pflop/s, entered the list at number five in June 2021.
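The Pflop/s figures above can be put to work with some back-of-the-envelope arithmetic. The sketch below (hypothetical matrix size, idealized assumption that the machines sustain their HPL rate) estimates how long Fugaku and Summit would need for the roughly 2/3·n³ operations of factorizing a dense n × n matrix:

```python
# Back-of-the-envelope use of the HPL scores from the November 2021 list.
# 1 Pflop/s = 1e15 floating-point operations per second.
PFLOPS = 1e15

fugaku = 442 * PFLOPS
summit = 148.8 * PFLOPS

# Hypothetical workload: factorize a dense n x n matrix,
# which costs about 2/3 * n^3 floating-point operations.
n = 1_000_000
ops = (2.0 / 3.0) * n**3                  # ~6.7e17 operations

print(f"Fugaku: {ops / fugaku:.1f} s")    # ≈ 1.5 s
print(f"Summit: {ops / summit:.1f} s")    # ≈ 4.5 s
print(f"Ratio:  {fugaku / summit:.1f}x")  # ≈ 3.0x, consistent with
                                          # "more than double" above
```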
Some problems faced with supercomputers
According to experts, supercomputers will help us not only to overcome challenges such as pandemics and climate change, but also to tackle long-standing problems of the scientific community: studies of chaotic systems, complex systems of equations, simulations of biological systems, and simulations of the spatial distribution of plasma density in the solar corona. Furthermore, numerical computation is used in fluid dynamics for the study of turbulence, and in molecular dynamics for the solution of many-body problems.
Calculation times of supercomputers
Studies that include all the relevant physical effects require very accurate code and millions of hours of computation. Taken at face value, this figure could suggest that a lifetime would not be enough to reach the solution. In reality, the total is the sum of the working times of the individual processors running in parallel, so the wall-clock time is far shorter.
Furthermore, the computation time depends on the architecture of the compute cluster used. For example, the NAMD code simulating botulinum in water (303K atoms) takes about 756,000 hours of processor time. The architecture used for this type of simulation is Galileo, the Italian supercomputer dedicated to scientific computing. It consists of 516 compute nodes, each containing two 18-core (2.40 GHz) Intel Haswell processors and 128 GB of shared memory.
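The numbers above make the point concrete. Under the simplifying assumption of perfect parallel efficiency across the whole machine (real jobs rarely achieve this), the 756,000 processor-hours translate into a very manageable wall-clock time:

```python
# Why "756,000 hours" does not mean 756,000 hours on the wall clock.
# Machine parameters taken from the text: Galileo has 516 nodes,
# each with two 18-core Intel Haswell CPUs.
nodes = 516
cores_per_node = 2 * 18
total_cores = nodes * cores_per_node      # 18,576 cores in total

core_hours = 756_000                      # NAMD botulinum-in-water run

# Idealized wall-clock time, assuming the job uses every core
# with perfect parallel efficiency (a simplifying assumption):
wall_hours = core_hours / total_cores
print(f"{total_cores} cores -> about {wall_hours:.1f} wall-clock hours")
# ≈ 40.7 hours, i.e. under two days instead of a lifetime
```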