Modeling and simulation

From modeling to supercomputing: understanding the scientific continuum

Date: changed on 23/05/2025

How can we capture the complexity of natural phenomena, governed by the laws of mechanics, physics, chemistry or biology, and translate it into clear, useful and comprehensible information for everyone? At the Inria center at the University of Bordeaux, scientists are responding to this challenge with a triptych of skills known as the “continuum”, combining modeling, supercomputing and parallel architectures. Building on this continuum, they structure their research in applied mathematics and computer science to simulate reality by computer, in a wide variety of fields.
Simulation
© Inria / Photo B. Fourrier

Modeling reality: principles and equations

It all starts with the need to understand a major phenomenon in detail: wave propagation, fluid flow, the workings of the heart, the Sun... or even a wind turbine!

Each of these problems is governed by fundamental, universal laws of physics, such as the conservation of mass and energy, or the balance of forces. These laws can be simplified and adapted to the situation at hand by setting out working hypotheses, and then formalized as a set of mathematical equations. These so-called differential equations transpose the fundamental principles of physics into a mathematical representation that captures the phenomenon in broad outline. This lays the foundations for simulation.
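To take the example of wave propagation mentioned above, a textbook illustration of such a differential equation (not the specific model used by any of the teams cited here) is the wave equation, which links the acceleration of a quantity u (a pressure, a displacement...) at each point in space to its spatial variation:

\[
\frac{\partial^2 u}{\partial t^2} = c^2 \, \Delta u
\]

where c is the propagation speed of the wave and Δ denotes the Laplacian, i.e. the combined second derivatives in space.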

Towards digital: the discretized world

These equations are often too complex to solve directly, so approximate solutions must be sought. To do this, we use a method known as meshing, an essential tool in scientific computing. As in 3D animation, this involves dividing space into a network of points: this is known as discretization of space. Using numerical analysis techniques, the differential equations are then translated into algebraic relations that compute the associated physical quantities (velocities, temperatures, pressures, etc.) at each point: this is known as discretizing the equations. This procedure transforms continuous equations into systems that can be manipulated numerically, and thus matched to the computing capacities of computers.
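As a minimal sketch of what these two discretizations look like in practice, here is a textbook one-dimensional example, assuming the heat equation as the model (it is not the method of any particular team mentioned here): the temperature along a bar is computed at a finite set of mesh points, with the spatial derivative replaced by differences between neighbouring points.

```python
import numpy as np

# Discretization of space: divide a 1D bar into a mesh of points.
n_points = 101
length = 1.0
dx = length / (n_points - 1)
x = np.linspace(0.0, length, n_points)

# Physical parameter and a time step chosen to keep the explicit scheme stable.
alpha = 0.01                     # thermal diffusivity
dt = 0.4 * dx**2 / alpha

# Initial temperature: a hot spot in the middle of the bar.
u = np.exp(-100.0 * (x - 0.5) ** 2)

# Discretization of the equation du/dt = alpha * d2u/dx2:
# the derivative in space becomes a difference between neighbouring mesh points.
for _ in range(500):
    u[1:-1] += dt * alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    u[0] = u[-1] = 0.0           # fixed temperature at both ends of the bar

print("temperature at the middle of the bar:", u[n_points // 2])
```

Each pass through the loop advances the discretized solution by one time step; refining the mesh improves accuracy, but also multiplies the number of values to compute.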

Algorithms to optimize calculations

The resulting relationships are arranged in two- or multi-dimensional arrays called matrices and tensors. But sometimes there are hundreds of millions, or even billions, of equations to process, and the size of the resulting arrays grows accordingly! It is therefore imperative to optimize the calculation by determining a sequence of operations defined by specific algorithms and implemented on supercomputers with hundreds or thousands of computing cores, such as Plafrim, hosted by the Inria center at the University of Bordeaux. This requires expertise such as that of the Topal team, which develops solver algorithms and software libraries, such as PaStiX or Chameleon, capable of running on supercomputers.
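Schematically, once the equations have been discretized, the unknowns at all the mesh points end up in one large, mostly sparse linear system. The sketch below is a deliberately small illustration using SciPy rather than the PaStiX or Chameleon interfaces; those libraries tackle the same kind of problem, but at supercomputer scale and in parallel.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Assemble the classic sparse matrix of a discretized 1D problem:
# each row only couples a mesh point to its two neighbours.
n = 100_000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

# Right-hand side: the source term evaluated at every mesh point.
b = np.ones(n)

# Solve the sparse linear system A x = b with a direct solver.
x = spla.spsolve(A, b)
print("solution at the first mesh point:", x[0])
```

Even this toy system has 100,000 unknowns; the sparsity (most matrix entries are zero) is precisely what the specialized libraries exploit to keep such problems tractable.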

 

Plafrim
© Inria / Photo B. Fourrier

Harnessing system performance with parallel computing

Indeed, beyond the actual processing of calculations, the aim is to maximize the use of processors so as to solve complex problems at extremely high speed. This is where research teams specializing in High-Performance Computing (HPC) come in, such as the Concace project-team, which is looking at numerical and parallel composability to enable complex algorithms to adapt more easily to different architectures (CPU, GPU, etc.).

Parallel architectures make it possible to run several tasks simultaneously on different processors sharing memory. This pooling enables information to be exchanged in real time, so that computation time can be monitored precisely and the deployment of solution methods optimized as the run progresses. This real-time optimization can be achieved with software such as StarPU, developed by the Storm project-team. To achieve this goal, tools are also needed to model the hierarchical architecture of modern computing platforms. This is the aim of hwloc, an open-source software package created by researchers from the Storm and Tadaam teams and integrated into Frontier, one of the world's most powerful supercomputers.
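To give an idea of the basic principle, the sketch below distributes independent pieces of work over a machine's cores using Python's standard library. It is only a toy illustration of task parallelism, not StarPU's actual programming interface, which goes much further (task dependencies, scheduling over CPUs and GPUs, and so on).

```python
import os
from concurrent.futures import ProcessPoolExecutor

def simulate_block(block_id: int) -> float:
    """Stand-in for one independent piece of a larger computation (e.g. one mesh block)."""
    total = 0.0
    for i in range(1_000_000):
        total += (i % (block_id + 7)) * 1e-6
    return total

if __name__ == "__main__":
    n_cores = os.cpu_count() or 1
    print(f"running on {n_cores} cores")
    # Each block is an independent task: the executor schedules the tasks
    # in parallel across the available processors.
    with ProcessPoolExecutor(max_workers=n_cores) as pool:
        results = list(pool.map(simulate_block, range(4 * n_cores)))
    print("sum over all blocks:", sum(results))
```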

Data to exploit at every stage

At every stage of this continuum, from modeling to supercomputing to parallel architectures, data is omnipresent. Whether it is needed to understand the phenomenon under study, to define a formal model and working hypotheses, to build that model, or whether it is generated by the simulation or by the supercomputer's own operation, data is the key to optimization. It must therefore be interpretable at every level so that it can be fed back in and put to use.

This means the data needs to be comprehensible to all users, especially when it comes to interpreting results and reporting them to decision-makers and partners. This is the objective of data visualization, as practiced by the Bivwac project-team.
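As a minimal illustration of that final step (using Matplotlib, and not the Bivwac team's own tools), the sketch below turns the raw array of numbers produced by a simulation into an image that a non-specialist can read at a glance.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in "simulation output": a 2D field sampled on a mesh of points.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(0.0, 1.0, 200)
X, Y = np.meshgrid(x, y)
field = np.sin(4 * np.pi * X) * np.cos(3 * np.pi * Y)

# Turn the array of numbers into an image with a readable color scale.
plt.imshow(field, origin="lower", extent=(0, 1, 0, 1), cmap="viridis")
plt.colorbar(label="simulated value")
plt.title("Visualizing a simulated field")
plt.savefig("field.png", dpi=150)
```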

 

Bivwac
© Inria / Photo G. Destombes

A wide range of applications

The research teams at the Inria center at the University of Bordeaux cover all the skills required for this continuum, with strong synergies between them: specialists in applied mathematics, scientific computing and computer science, as well as solid partnerships with leading industrial players. The applications resulting from this research help to improve understanding, design, decision-making and control in a number of fields, including:

  • Aeronautics, with partners such as Airbus, CEA and ONERA. For example, the Cagire project-team is working on the development of a future numerical flow-simulation tool to better understand heat exchange inside an aircraft turbine. Another challenge, taken up by the Storm and Topal teams, concerns numerical simulations to model complex phenomena, such as blast waves, during rocket take-off and thruster separation.
  • Energy, with major collaborations (IFPEN, EDF, SAFRAN), such as with the Memphis team, which specializes in multiphysics interaction and is working on the study and numerical modeling of natural wind resources.
  • Healthcare, in partnership with CHU de Bordeaux, BPH, IHU de Bordeaux and Institut Bergonié, with specific models developed for each discipline and according to the pathologies studied (oncology, cardiology, immunology, etc.). Since 2024, the Carmen and Tadaam teams have been working together on the MicroCard 2 project, which aims to model the structure of cardiac tissue and the electrical functioning of the heart on the scale of each of the billions of cells that make it up.
  • In the field of natural and coastal hazards, the Cardamom team is working with BRGM and CEA to use simulation to study marine phenomena (tsunamis, storms, etc.) on a very large scale, and to better predict the risk of flooding and submersion for coastal towns.
  • And finally... in more specific fields such as helioseismology! The Makutu team, which specializes in the development of advanced numerical methods and mathematical models to describe physical problems, is studying gravity and acoustic waves to better understand the Sun and its internal structure!