SPH-EXA2: SMOOTHED PARTICLE HYDRODYNAMICS AT EXASCALE


PIs in PASC SPH-EXA2 and SKACH:
Florina Ciorba (University of Basel)
Lucio Mayer (University of Zurich)
Rubén Cabezón (University of Basel)

Scientific Advisory Board:
Aurélien Cavelan (University of Basel, Switzerland)
Ioana Banicescu (Mississippi State University, MS, USA)
Domingo García-Senz (Universitat Politècnica de Catalunya, Spain)
Thomas Quinn (University of Washington in Seattle, WA, USA)
Bastien Chopard (University of Geneva, Switzerland)
Romain Teyssier (University of Zurich, Switzerland)
Hans-Joachim Bungartz (Technical University of Munich, Germany)

Project members:
Osman Seckin Simsek (University of Basel)
Yiqing Zhu (University of Basel)
Lukas Schmidt (University of Basel)
Noah Kubli (University of Zurich)
Sebastian Keller (ETH Zurich/CSCS, Switzerland)
Jean-Guillaume Piccinali (ETH Zurich/CSCS, Switzerland)

Funding agency: Platform for Advanced Scientific Computing (http://www.pasc-ch.org)

Duration: 01.07.2021-30.06.2024

Software: The SPH-EXA simulation framework is publicly available here.

Project Summary

The goal of the SPH-EXA2 project is to scale the Smoothed Particle Hydrodynamics (SPH) method implemented in SPH-EXA1 to enable Tier-0 and Exascale simulations. To reach this goal, we define four concrete and interrelated objectives: physics, performance, correctness, and portability & reproducibility.

We aim to couple relevant physics modules with our SPH framework, enabling us to address both long-standing and cutting-edge problems in Cosmology and Astrophysics through beyond-state-of-the-art simulations at extreme scales. Such simulations include the formation, growth, and mergers of supermassive black holes in the early universe, which would greatly impact the scientific community (for instance, the 2020 Nobel Prize in Physics was awarded for pioneering research on supermassive black holes). Moreover, the ability to simulate planet formation with high-resolution models will play an important role in consolidating Switzerland’s position as a leader in experimental physics and observational astronomy. Additional targets relate to explosive scenarios such as core-collapse and Type Ia supernovae, fields in which Switzerland has also maintained a long record of international renown. These simulations would become possible with a Tier-0-ready SPH code and would have a large impact on projects such as the current NCCR PlanetS funded by the SNF.

The long-term and ambitious vision of the SPH-EXA consortium is to study fluid and solid mechanics problems, across a wide range of research fields, that are currently unfeasible with existing models, codes, and architectures. To this end, SPH-EXA2 builds on SPH-EXA1 and develops a scalable, bare-bones SPH simulation framework, referred to as SPH-EXA. In Switzerland, within the framework of the PASC SPH-EXA (2017-2021) project, we developed the SPH-EXA mini-app, a scalable SPH code that employs state-of-the-art parallel programming models and software engineering techniques to exploit current HPC architectures, including accelerators. The current SPH-EXA mini-app performs pure hydrodynamical simulations with up to 1 trillion SPH particles using only CPUs on 4,096 nodes of Piz Daint at CSCS. Despite the relatively limited memory per GPU, the mini-app still scales up to 250 billion SPH particles when using GPUs.

In terms of performance, the use of accelerators is necessary to meet the above SPH-EXA2 goal and objectives. Offloading computationally intensive steps, such as self-gravity evaluation and ancillary physics, to hardware accelerators will enable SPH-EXA to simulate increasingly complex cosmological and astrophysical scenarios. We envision that various types of hardware accelerators will be deployed on the supercomputers used in this project, such as NVIDIA GPUs (in Piz Daint) or AMD GPUs (in LUMI). Portability across GPUs will be ensured by using OpenACC and OpenMP target offloading, which are supported by compilers from different GPU vendors.
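
As an illustration of this directive-based offloading style, the following minimal sketch moves a per-particle loop to the GPU with an OpenMP target region. The kernel, array names, and sizes are hypothetical and not taken from the SPH-EXA code base; when no device is present, the loop simply falls back to host execution.

// Minimal sketch of directive-based GPU offloading with OpenMP target offloading.
// Illustrative only: the kernel and array names are not from the SPH-EXA code base.
#include <cstdio>
#include <vector>

// Advance particle positions by one explicit Euler step on the accelerator.
void advancePositions(std::vector<double>& x, const std::vector<double>& vx, double dt)
{
    double*       xp = x.data();
    const double* vp = vx.data();
    std::size_t   n  = x.size();

    // Map the arrays to the device, execute the loop there, and copy x back.
    #pragma omp target teams distribute parallel for map(tofrom: xp[0:n]) map(to: vp[0:n])
    for (std::size_t i = 0; i < n; ++i)
    {
        xp[i] += vp[i] * dt;
    }
}

int main()
{
    std::vector<double> x(1000, 0.0), vx(1000, 1.0);
    advancePositions(x, vx, 0.01);
    std::printf("x[0] = %g\n", x[0]);
    return 0;
}

The same loop could be annotated with OpenACC (#pragma acc parallel loop) instead; in both cases, the choice of NVIDIA or AMD GPU is made by the compiler toolchain rather than in the source code.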

Scheduling & load balancing and fault tolerance are major challenges on the way to Exascale. We will address these challenges in SPH-EXA2 by employing locality-aware data decomposition, dynamic & adaptive scheduling and load balancing, and advanced fault tolerance techniques. Specifically, we will schedule and load balance the computational work across heterogeneous CPUs, across NUMA domains (e.g., multiple sockets or memory controllers, multi-channel DRAM, and NV-RAM), and between CPUs and GPUs.
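
The adaptive part of this plan can be illustrated with a minimal sketch (the function and variable names below are hypothetical and not from SPH-EXA): each rank reports the wall-clock time it spent on its particles during the last step, and the particle counts for the next step are reassigned in proportion to the measured per-rank speed. In a code like SPH-EXA, such a reassignment would typically be realized by shifting the boundaries of the space-filling-curve ranges assigned to each rank.

// Illustrative sketch of speed-proportional rebalancing; not SPH-EXA code.
#include <cstdio>
#include <vector>

// Given the time each rank spent on its particles in the last step, redistribute
// the total particle count in proportion to measured speed (particles per second),
// so that slower ranks receive fewer particles in the next step.
std::vector<std::size_t> rebalance(const std::vector<std::size_t>& counts,
                                   const std::vector<double>& secondsPerRank)
{
    std::size_t         total    = 0;
    double              speedSum = 0.0;
    std::vector<double> speed(counts.size());

    for (std::size_t r = 0; r < counts.size(); ++r)
    {
        total    += counts[r];
        speed[r]  = counts[r] / secondsPerRank[r]; // particles per second
        speedSum += speed[r];
    }

    std::vector<std::size_t> newCounts(counts.size());
    std::size_t assigned = 0;
    for (std::size_t r = 0; r + 1 < counts.size(); ++r)
    {
        newCounts[r] = static_cast<std::size_t>(total * speed[r] / speedSum);
        assigned    += newCounts[r];
    }
    newCounts.back() = total - assigned; // last rank absorbs rounding
    return newCounts;
}

int main()
{
    // Rank 1 was twice as slow per particle, so it receives fewer particles.
    std::vector<std::size_t> next = rebalance({1000, 1000}, {1.0, 2.0});
    std::printf("%zu %zu\n", next[0], next[1]); // prints 1333 667
    return 0;
}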

To achieve correctness, we will examine and verify the effectiveness of the fault tolerance support in the MPI 4.0 standard and beyond, in addition to selective particle replication (SPR) and optimal checkpointing (to NV-RAM or SSD).
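
The checkpointing part can be sketched minimally as follows (the function and the file naming scheme are hypothetical, not SPH-EXA's actual I/O layer): each MPI rank periodically writes its particle arrays to a binary file, which can be placed on node-local NV-RAM or SSD instead of the shared file system.

// Minimal sketch of per-rank checkpointing; illustrative, not SPH-EXA's I/O layer.
#include <mpi.h>
#include <cstdio>
#include <string>
#include <vector>

// Write one rank's particle coordinates for a given step to a binary file.
// A real checkpoint would also store velocities, smoothing lengths, etc.
void writeCheckpoint(int step, const std::vector<double>& x)
{
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    std::string name = "checkpoint_step" + std::to_string(step) +
                       "_rank" + std::to_string(rank) + ".bin";

    if (std::FILE* f = std::fopen(name.c_str(), "wb"))
    {
        std::size_t n = x.size();
        std::fwrite(&n, sizeof(n), 1, f);            // particle count header
        std::fwrite(x.data(), sizeof(double), n, f); // coordinate payload
        std::fclose(f);
    }
}

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    std::vector<double> x(100, 1.0); // placeholder particle data
    writeCheckpoint(0, x);
    MPI_Finalize();
    return 0;
}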

To ensure performance portability & reproducibility, we will benchmark SPH-EXA1’s performance on a wide variety of platforms, as well as build off-the-shelf SPH-EXA containers that can easily be deployed with no additional setup required. This will also enlarge the SPH-EXA code user base. 

The primary advantage of the SPH-EXA2 project is its scientific interdisciplinarity. The project involves computer scientists, computer engineers, astrophysicists, and cosmologists. This is complemented by a holistic co-design, which involves applications (cosmology, astrophysics, CFD), algorithms (SPH, domain decomposition, load balancing, scheduling, fault tolerance, etc.), and architectures (CPUs, GPUs, etc.) as opposed to the traditional binary software-hardware co-design.

The methodology employed to achieve the goal and objectives of SPH-EXA2 is a unique combination of: 

(a) State-of-the-art SPH implementation. The SPH-EXA framework will integrate the most recent advances in the SPH technique (namely, pairing-resistant interpolation kernels, accurate evaluation of gradients via an integral formalism, adaptive generalized volume elements, and artificial viscosity switches), which are at the core of the current SPH-EXA mini-app. The density estimate that these refinements build on is recalled after this list.

(b) State-of-the-art computer science methods. SPH-EXA2 will incorporate domain decomposition for hybrid architectures (CPUs and GPUs) that preserves data locality (via locality-aware domain traversal using space-filling curves; see the sketch after this list), adaptive (not only dynamic) load balancing, and the detection and correction of failures (e.g., permanent) and errors (e.g., silent).

(c) Trillion-particle simulations. The target for SPH-EXA2 is to be the first SPH code to simulate 10¹² particles on hybrid Tier-0 computing architectures.

(d) Ease of use and adoption. A key design goal for SPH-EXA2 is to develop it as a heavily templated, header-only code and to minimize, as much as possible, dependencies on third-party software libraries. A small illustration of this header-only style follows below.
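
For reference, the density estimate that the refinements listed in (a) build on can be written, in notation common in the SPH literature (this is a generic formulation, not an equation extracted from SPH-EXA), with generalized volume elements as

\rho_i = \frac{m_i}{V_i},
\qquad
V_i = \frac{X_i}{\sum_j X_j \, W\!\left(\lvert \mathbf{r}_i - \mathbf{r}_j \rvert,\, h_i\right)},

where W is the interpolation kernel, h_i the smoothing length of particle i, and X_i a particle-carried weight; choosing X_i = m_i recovers the standard summation \rho_i = \sum_j m_j W(\lvert \mathbf{r}_i - \mathbf{r}_j \rvert, h_i).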
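
To make the role of the space-filling curve in (b) concrete, the sketch below computes a 3D Morton (Z-order) key by interleaving the bits of quantized particle coordinates; sorting particles by such keys keeps spatially nearby particles close in memory and on the same rank. This is an illustration only, not the Cornerstone octree code used in SPH-EXA.

// Minimal sketch of a 3D Morton (Z-order) key; illustrative only.
#include <cstdint>
#include <cstdio>

// Spread the lower 21 bits of v so that two zero bits separate
// consecutive input bits (standard bit-interleaving trick).
std::uint64_t expandBits(std::uint64_t v)
{
    v &= 0x1fffff; // keep 21 bits
    v = (v | v << 32) & 0x001f00000000ffff;
    v = (v | v << 16) & 0x001f0000ff0000ff;
    v = (v | v << 8)  & 0x100f00f00f00f00f;
    v = (v | v << 4)  & 0x10c30c30c30c30c3;
    v = (v | v << 2)  & 0x1249249249249249;
    return v;
}

// Interleave 21-bit x, y, z integer coordinates into a 63-bit Morton key.
std::uint64_t mortonKey(std::uint32_t x, std::uint32_t y, std::uint32_t z)
{
    return expandBits(x) | (expandBits(y) << 1) | (expandBits(z) << 2);
}

int main()
{
    // Spatially close coordinates map to numerically close keys;
    // distant coordinates map to distant keys.
    std::printf("%llx %llx %llx\n",
                (unsigned long long)mortonKey(1, 1, 1),
                (unsigned long long)mortonKey(1, 2, 1),
                (unsigned long long)mortonKey(1000, 1000, 1000));
    return 0;
}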
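
Finally, as a small illustration of the header-only, templated style targeted in (d) (the file and function names are hypothetical, not an excerpt of the SPH-EXA repository), the entire implementation of a utility lives in a header and is generic over the floating-point type:

// sph_kernel_sketch.hpp -- hypothetical header-only, templated utility in the
// spirit of the SPH-EXA design goals; not an excerpt of the actual code base.
#pragma once

// Normalized cubic B-spline smoothing kernel W(r, h) in 3D,
// generic over the floating-point type (float or double).
template<class T>
T cubicSplineKernel(T r, T h)
{
    const T pi    = T(3.14159265358979323846);
    const T sigma = T(1) / (pi * h * h * h); // 3D normalization constant
    const T q     = r / h;

    if (q < T(1)) { return sigma * (T(1) - T(1.5) * q * q + T(0.75) * q * q * q); }
    if (q < T(2)) { const T t = T(2) - q; return sigma * T(0.25) * t * t * t; }
    return T(0);
}

A downstream code can then call cubicSplineKernel<float>(r, h) or cubicSplineKernel<double>(r, h) after a single #include, with nothing to build or link.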

The impact of the SPH-EXA2 project is not limited to Cosmology and Astrophysics. The implementation of the SPH technique will target future exascale infrastructures and result in an exascale-ready hydrodynamics code to support the long-term vision of the SPH-EXA consortium. Therefore, its influence on all fields that employ SPH (e.g., geodynamics, hydraulic engineering, marine engineering, ballistics, physiology, chemistry, aeronautics, robotics, and others) is expected to be significant.

SPH-EXA2 will also have a significant impact on the computer science community. The complex computational requirements of Cosmology and Astrophysics simulations demand highly scalable algorithms that rely on (multilevel) parallel programming models that are both compute-centric and data-aware. The challenges found when scaling the current approaches for memory management, domain decomposition, global and local synchronization, scheduling & load balancing, and fault tolerance will be addressed in the SPH-EXA2 project and will yield significant contributions in the High Performance Computing (HPC) field. Moreover, SPH-EXA2 will uncover untapped performance potential as well as limitations in the current compute-centric and data-unaware parallel programming models. 

The success of SPH-EXA2 will lie in producing the first (to the best of our knowledge) PASC-funded Tier-0 SPH code. SPH-EXA2 will produce a code that is easily usable by other scientists. This accessibility is strengthened by the adoption of the newest parallel programming standards, including C++20, CUDA 11.x, OpenMP 5.1, MPI 4.x, and OpenACC 2.x. This transition guarantees a long lifetime for the SPH-EXA framework and, in particular cases, a more efficient implementation.

 

Publications

O.S. Simsek, JG. Piccinali, F.M. Ciorba. “Increasing Energy Efficiency of Astrophysics Simulations Through GPU Frequency Scaling” In Proceedings of The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC24) Workshops Sustainable Supercomputing, November 2024. (to appear)

Y. Zhu, O.S. Simsek, J. Favre, R. Cabezon, F.M. Ciorba. “Scalable In-Situ Visualization for Extreme-Scale SPH Simulations” In Proceedings of The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC24) Workshops In Situ Infrastructures for Enabling Extreme-scale Analysis and Visualization, November 2024. (to appear)

O.S. Simsek, JG. Piccinali, F.M. Ciorba. “Accurate Measurement of Application-level Energy Consumption for Energy-Aware Large-Scale Simulations.” In Proceedings of The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC23) Workshops Sustainable Supercomputing, November 2023. (to appear)

S. Keller, A. Cavelan, R. Cabezon, L. Mayer, F.M. Ciorba. “Cornerstone: Octree Construction Algorithms for Scalable Particle Simulations.” In Proceedings of the Platform for Advanced Scientific Computing Conference, June 2023.

W. Elwasif, W. Godoy, N. Hagerty, J. A. Harris, O. Hernandez, B. Joo, P. Kent, D. Lebrun-Grandie, E. Maccarthy, V. M. Vergara, B. Messer, R. Miller, S. Oral, S. Bastrakov, M. Bussmann, A. Debus, K., J. Stephan, R. Widera, S. Bryngelson, H. Le Berre, A. Radhakrishnan, J. Young, S. Chandrasekaran, F. M. Ciorba, O.S. Simsek, K. Clark, F. Spiga, J. Hammond, J. Stone, D. Hardy, S. Keller, J.G. Piccinali, and C. Trott. “Application Experiences on a GPU-Accelerated Arm-based HPC Testbed”. International Workshop on Arm-based HPC: Practice and Experience (IWAHPCE-2023), February 2023.

H. Brunst, S. Chandrasekaran, F. Ciorba, N. Hagerty, R. Henschel, G. Juckeland, J. Li, V. Vergara, S. Wienke, M. Zavala. “First Experiences in Performance Benchmarking with the New SPEChpc 2021 Suites”, 22nd IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID), May 2022.

You may also find previous publications in the list of publications from SPH-EXA (2017-2021).

Posters

Until new posters from this project appear, you may read the posters from SPH-EXA (2017-2021).

Talks

Simsek, O. S.; “Increasing Energy Efficiency of SPH Codes Through Dynamic GPU Frequency Scaling”, Talk at the 18th International SPHERIC Workshop, June 18-20, 2024, Berlin, Germany.

Ciorba, F. M.; “Sustainable and Scalable Simulations for Computational Sciences”, Invited panel talk at the Sibiu Innovation Days (SID 2023), October 5-6, 2023, Sibiu, Romania. (video)

Simsek, O. S.; “Unlocking Data Locality in Imbalanced Supercomputers: Unveiling the Energy Perspective”, Talk at the 6th Programming and Abstractions for Data Locality Workshop (PADAL), September 4-6, 2023, Istanbul, Turkey.

Simsek, O. S.; “Accurately Measuring Energy Consumption of Large Cosmological Simulations”, Talk at the PASC23 Mini-symposium on Green Computing Architectures and Tools for Scientific Computing, June 26-28, 2023, Davos, Switzerland.

Ciorba, F. M.; “SPH-EXA: A Framework for Scalable, Flexible, and Extensible Astrophysical and Cosmological Simulations”, Invited talk at the 35th Workshop on Sustained Simulation Performance, April 13-14, 2023, Stuttgart, Germany.

Ciorba, F. M.; “SPH-EXA: A Framework for Scalable, Flexible, and Extensible Astrophysical and Cosmological Simulations”, Talk at the SIAM Conference on Computational Science and Engineering (CSE23), February 2023, Amsterdam, The Netherlands.

Ciorba, F. M.; “Analysis of Load Imbalance in SPH-EXA Simulations”, Talk at the SKACH Winter Meeting, January 2023, Basel, Switzerland.

Touzet, J.; “Multiphysics/Hydrodynamical Simulations with Attached and Detached Data: Simulation of Nuclear Networks in SPH-EXA”, Talk at the SKACH Winter Meeting, January 2023, Basel, Switzerland.

Kubli, N.; “Fragmenting galactic disk simulations with SPH-EXA”, Talk at the SKACH Winter Meeting, January 2023, Basel, Switzerland.

Cabezón, R.; “SPH-EXA: Bringing Computational Astrophysics to Exascale”, Talk at the Universitat Politècnica de Catalunya, October 2022, Barcelona, Spain.

Keller, S.; Simsek, O. S.; “SPH-EXA: A Framework for Smoothed Particle Hydrodynamics and Gravity at Exascale”, Talk at the SKA Days, October 2022, Lugano, Switzerland.

You may also find previous talks from SPH-EXA (2017-2021).