OBLIMAP ice sheet model coupler parallelization and optimization

 A Use Case by

Short description

Within the ESiWACE2 project, we have parallelized and optimized OBLIMAP, a climate model to ice sheet model coupler that can be used for offline and online coupling with embeddable mapping routines. To anticipate future demand for higher-resolution and adaptive-mesh applications, a parallel implementation of OBLIMAP's Fortran code with MPI has been developed. The data-intensive nature of the mapping task required a shared-memory approach across the processes on each compute node, to prevent node memory from becoming the limiting bottleneck. The current parallel implementation also allows multi-node scaling and includes parallel NetCDF I/O as well as loop optimizations.
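As a rough illustration of this shared-memory approach, the sketch below uses MPI-3 shared-memory windows so that all ranks on one compute node read a single copy of a large mapping table instead of each holding a private replica. This is a minimal sketch in C (OBLIMAP itself is Fortran), with a hypothetical table name and size, not the actual OBLIMAP code.

    /* Minimal sketch (not OBLIMAP code): one copy of a large mapping table
     * shared by all MPI ranks on the same compute node via an MPI-3
     * shared-memory window. The table name and size are illustrative. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Group the ranks that live on the same node. */
        MPI_Comm node_comm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);
        int node_rank;
        MPI_Comm_rank(node_comm, &node_rank);

        /* Hypothetical mapping table that would otherwise be replicated
         * in every rank's private memory. */
        const MPI_Aint n = 10000000;                     /* doubles */
        MPI_Aint bytes = (node_rank == 0) ? n * sizeof(double) : 0;

        double *table = NULL;
        MPI_Win win;
        /* Only node rank 0 allocates; the other ranks attach to it. */
        MPI_Win_allocate_shared(bytes, sizeof(double), MPI_INFO_NULL,
                                node_comm, &table, &win);
        if (node_rank != 0) {
            MPI_Aint size;
            int disp_unit;
            MPI_Win_shared_query(win, 0, &size, &disp_unit, &table);
        }

        /* Node rank 0 fills the table once; every rank can then read it. */
        MPI_Win_fence(0, win);
        if (node_rank == 0)
            for (MPI_Aint i = 0; i < n; i++)
                table[i] = (double)i;
        MPI_Win_fence(0, win);

        printf("node rank %d sees table[42] = %.1f\n", node_rank, table[42]);

        MPI_Win_free(&win);
        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }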

Results & Achievements

Results show that the new parallel implementation offers better performance and scales well. On a single node, the shared-memory approach now allows all available cores to be used: in our experiments with the Antarctica 20x20 km test case, up to 128 cores of a high-end node could be exploited, whereas the original code was limited to 64 cores on that node and to 8 cores on moderate platforms. For the Greenland 2x2 km test case, the multi-node parallelization yields a speedup of 4.4 on 4 high-end compute nodes with 128 cores each, compared to the original code, which could only run on a single node. This paves the way to establishing OBLIMAP as a candidate ice sheet coupling library for large-scale, high-resolution climate modelling.

Objectives

The goal of the project is, firstly, to reduce the memory footprint of the code by improving its distribution over parallel tasks and, secondly, to resolve the I/O bottleneck by implementing parallel reading and writing. This will improve the intra-node scaling of OBLIMAP-2.0 by using all the cores of a node. A second step will be the extension of the parallelization scheme to support inter-node execution. This work will establish OBLIMAP-2.0 as a candidate ice sheet coupling library for large-scale, high-resolution climate models.
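To make the parallel-I/O objective concrete, the sketch below shows one common pattern with the parallel NetCDF-4 C API: every MPI rank opens the same file and collectively reads only its own band of a two-dimensional field. The file name, variable name and dimensions are assumptions for illustration; the real OBLIMAP I/O is implemented in Fortran.

    /* Generic parallel NetCDF read sketch (illustrative names only). */
    #include <mpi.h>
    #include <netcdf.h>
    #include <netcdf_par.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NLAT 2000
    #define NLON 4000

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int ncid, varid;
        /* Older netCDF releases may also require the NC_MPIIO flag here. */
        if (nc_open_par("topography.nc", NC_NOWRITE,
                        MPI_COMM_WORLD, MPI_INFO_NULL, &ncid) != NC_NOERR)
            MPI_Abort(MPI_COMM_WORLD, 1);
        nc_inq_varid(ncid, "surface_height", &varid);
        nc_var_par_access(ncid, varid, NC_COLLECTIVE);   /* collective reads */

        /* Contiguous band of latitude rows for this rank
         * (assumes NLAT is divisible by nprocs, for brevity). */
        size_t rows = NLAT / nprocs;
        size_t start[2] = { rank * rows, 0 };
        size_t count[2] = { rows, NLON };
        double *band = malloc(rows * NLON * sizeof(double));

        nc_get_vara_double(ncid, varid, start, count, band);
        printf("rank %d read %zu x %d values\n", rank, rows, NLON);

        free(band);
        nc_close(ncid);
        MPI_Finalize();
        return 0;
    }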

Technologies

OBLIMAP code 
MPI
NetCDF
Atos BullSequana XH2000 supercomputer

Collaborating Institutions

KNMI
Atos

GPU Optimizations for Atmospheric Chemical Kinetics

 A Use Case by

Short description

Within the ESiWACE2 project, open HPC services to the Earth system modelling community in Europe provide guidance, engineering and advice to support exascale preparations for weather and climate models. ESiWACE2 aims to improve model efficiency and to enable porting models to existing and upcoming HPC systems in Europe, with a focus on accelerators such as GPUs. In this context, the ECHAM/MESSy Atmospheric Chemistry model EMAC has been optimized through a collaboration between the Cyprus Institute, Atos and NLeSC, with the participation of Forschungszentrum Jülich. EMAC describes chemical interactions in the atmosphere, including sources from ocean biochemistry, land processes and anthropogenic emissions. This computationally intensive code had previously been ported to GPUs using CUDA, achieving speedups of a factor of 5-10, but the application had a high memory footprint, which precluded handling very large problems such as more complex chemistry.

Results & Achievements

Thanks to a series of optimizations to alleviate stack memory overflow issues, the performance of the GPU computational kernels in atmospheric chemical kinetics simulations has been improved. Overall, the memory consumption of EMAC has been reduced by a factor of 5, allowing a time-to-solution speedup of 1.82 on a benchmark representative of a real-world application, simulating one model month.

As a result, we obtained a 23% time reduction with respect to the GPU-only execution. In practice, this represents a performance boost equivalent to attaching an additional GPU per node and thus a much more efficient exploitation of the resources.
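For context on the stack memory overflow issues mentioned above, the sketch below (plain C against the CUDA runtime API) shows how the per-thread stack and device heap limits involved in such overflows can be inspected and adjusted from the host. It is a generic diagnostic pattern, not the specific optimization applied to the EMAC kernels; the 16 KiB value is an arbitrary example.

    /* Inspect and raise CUDA per-thread stack and heap limits (generic
     * diagnostic sketch; link against the CUDA runtime, e.g. -lcudart). */
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void)
    {
        size_t stack_bytes = 0, heap_bytes = 0;
        cudaDeviceGetLimit(&stack_bytes, cudaLimitStackSize);
        cudaDeviceGetLimit(&heap_bytes, cudaLimitMallocHeapSize);
        printf("per-thread stack: %zu bytes, device heap: %zu bytes\n",
               stack_bytes, heap_bytes);

        /* Raising the per-thread stack limit avoids overflows in kernels
         * with deep call chains or large local arrays, but the cost is
         * multiplied by the number of resident threads, so reducing the
         * kernels' per-thread memory use is generally preferable. */
        cudaDeviceSetLimit(cudaLimitStackSize, 16 * 1024);

        size_t free_bytes = 0, total_bytes = 0;
        cudaMemGetInfo(&free_bytes, &total_bytes);   /* remaining device memory */
        printf("device memory: %zu bytes free of %zu\n", free_bytes, total_bytes);
        return 0;
    }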

Objectives

The goal of the project service was to reduce the memory footprint of the EMAC code on the GPU device, thereby allowing more MPI tasks to run concurrently on the same hardware. This allowed the model to be optimized for high performance on current and future GPU technologies, and its computational capability to be extended later so that it can handle chemistry that is an order of magnitude more complex, such as the Mainz Organic Mechanism (MOM).
Source: Theodoros Christoudias, Timo Kirfel, Astrid Kerkweg, Domenico Taraborrelli, Georges-Emmanuel Moulard, Erwan Raffin, Victor Azizi, Gijs van den Oord, and Ben van Werkhoven. 2021. GPU Optimizations for Atmospheric Chemical Kinetics. In The International Conference on High Performance Computing in Asia-Pacific Region (HPC Asia 2021). Association for Computing Machinery, New York, NY, USA, 136–138. DOI:https://doi.org/10.1145/3432261.3439863

Technologies

EMAC (ECHAM/MESSy) code 
CUDA
MPI
NVIDIA GPU accelerator
Atos BullSequana XH2000 supercomputer


List of innovations by the CoEs, spotted by the EU innovation radar

The EU Innovation Radar aims to identify high-potential innovations and innovators. It is an important source of actionable intelligence on innovations emerging from research and innovation projects funded through European Union programmes. 
 
These are the innovations from the HPC Centres of Excellence as spotted by the EU innovation radar:
 

Title: GROMACS, a versatile package to perform molecular dynamics
Market maturity: Exploring
Project: BioExcel
Innovation Topic: Excellent Science
KUNGLIGA TEKNISKA HOEGSKOLAN - SWEDEN


Title: Urgent Computing services for the impact assessment in the immediate aftermath of an earthquake
Market maturity: Tech Ready
Market creation potential: High
Project: ChEESE
Innovation Topic: Excellent Science
EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH - SWITZERLAND
BULL SAS - FRANCE


Title: New coupled earth system model
Market maturity: Tech Ready
Project: ESiWACE
Innovation Topic: Excellent Science
BULL SAS - FRANCE
MET OFFICE - UNITED KINGDOM
EUROPEAN CENTRE FOR MEDIUM-RANGE WEATHER FORECASTS - UNITED KINGDOM
 


Title: In-Situ Analysis of CFD Simulations
Market maturity: Tech Ready
Market creation potential: High
Project: Excellerat
Innovation Topic: Excellent Science
KUNGLIGA TEKNISKA HOEGSKOLAN - SWEDEN
FRAUNHOFER GESELLSCHAFT ZUR FOERDERUNG DER ANGEWANDTEN FORSCHUNG E.V. - GERMANY

Title: Interactive in situ visualization in VR
Market maturity: Tech Ready
Market creation potential: High
Project: Excellerat
Innovation Topic: Excellent Science
UNIVERSITY OF STUTTGART - GERMANY

Title: Machine Learning Methods for Computational Fluid Dynamics (CFD) Data
Market maturity: Tech Ready
Market creation potential: Noteworthy
Project: Excellerat
Innovation Topic: Excellent Science
KUNGLIGA TEKNISKA HOEGSKOLAN - SWEDEN
FRAUNHOFER GESELLSCHAFT ZUR FOERDERUNG DER ANGEWANDTEN FORSCHUNG E.V. - GERMANY


Title: Quantum Simulation as a Service
Market maturity: Exploring
Market creation potential: Noteworthy
Project: MaX
Innovation Topic: Excellent Science
EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH - SWITZERLAND
CINECA CONSORZIO INTERUNIVERSITARIO - ITALY


DYAMOND intercomparison project for storm-resolving global weather and climate models

 A Use Case by

Short description

The growth in computational resources now enables global weather and climate models to operate on the scale of a few kilometres. At this resolution, they can explicitly resolve storm systems and ocean eddies. The DYAMOND model intercomparison is the first project to perform a systematic intercomparison of these next-generation models. The ESiWACE flagship models IFS and ICON participate in the intercomparison, and ESiWACE supports it by providing data storage at DKRZ, resources for server-side processing and support in the use of the tools.

Results & Achievements

Currently, 51 users from 30 institutions worldwide have access to the intercomparison dataset. A special edition of the Journal of the Meteorological Society of Japan is dedicated to the intercomparison, and further papers are being published in other journals as well. Two hackathons supported by ESiWACE have brought the community together and have provided guidance to junior researchers.

Objectives

By supporting the DYAMOND intercomparison of storm-resolving global weather and climate models, ESiWACE facilitates the development of these next-generation models and advances climate science. The intercomparison makes it possible to identify common features and model-specific behaviour, and thus yields new scientific discoveries and increases the robustness of our knowledge and of the models. At the same time, the intercomparison serves as an ideal test case for the high-performance data analysis and visualization workflows needed to deal with the challenging amounts of data that these models produce, allowing ESiWACE scientists to improve these workflows on real-world cases.

Technologies

CDO, ParaView, Jupyter, server-side processing

Collaborating Institutions

DKRZ, MPI-M (and many others)

Optimization of Earth System Models on the path to the new generation of Exascale high-performance computing systems

 A Use Case by

Short description

In recent years, our understanding of climate prediction has grown and deepened significantly. This has been facilitated by improvements in our global Earth System Models (ESMs). These models aim to represent our future climate and weather ever more realistically, reducing uncertainties in these chaotic systems and explicitly calculating and representing features that could previously not be resolved with coarser-resolution models.

A new generation of exascale supercomputers and massive parallelization are needed to calculate small-scale processes and features with high-resolution climate and weather models.

However, the overhead produced by this massive parallelization will be substantial, and new high-performance computing techniques will be required to rise to the challenge. These new HPC techniques will enable scientists to make efficient use of upcoming exascale machines and to set up and run ultra-high-resolution experiment configurations of ESMs. Such configurations will be used to predict climate change over the coming decades and to study extreme events such as hurricanes.

Results & Achievements

The new EC-Earth version under development is being tested for its main components (OpenIFS and NEMO) on MareNostrum IV, using a significant number of cores to test the new ultra-high horizontal resolution of 10 km: up to 2048 nodes (98,304 cores) for the NEMO component and up to 1024 nodes (49,152 cores) for the OpenIFS component.

Different optimizations included in these components (developed in the framework of the ESiWACE and ESiWACE2 projects) have been tested to evaluate the computational efficiency achieved. For example, the OpenIFS version with the new integrated parallel I/O can write hundreds of gigabytes of output while increasing the execution time by only 2% compared to an execution without I/O; the previous version produced an overhead close to 50%. Moreover, this approach will allow the same I/O server to be used for both components, facilitating more complex online computations and a common file format (netCDF).
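As a generic illustration of what integrated parallel output means in practice, the C sketch below lets every MPI rank write its own slab of a field collectively into one shared NetCDF-4 file, so output no longer has to be funnelled through a single task. It is not the actual OpenIFS/EC-Earth I/O layer; the file, variable and dimension names are assumptions.

    /* Generic collective parallel NetCDF-4 write sketch. */
    #include <mpi.h>
    #include <netcdf.h>
    #include <netcdf_par.h>
    #include <stdlib.h>

    #define NLAT 1024
    #define NLON 2048

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int ncid, dimids[2], varid;
        /* NC_NETCDF4 enables parallel access through HDF5. */
        nc_create_par("output.nc", NC_CLOBBER | NC_NETCDF4,
                      MPI_COMM_WORLD, MPI_INFO_NULL, &ncid);
        nc_def_dim(ncid, "lat", NLAT, &dimids[0]);
        nc_def_dim(ncid, "lon", NLON, &dimids[1]);
        nc_def_var(ncid, "temperature", NC_FLOAT, 2, dimids, &varid);
        nc_enddef(ncid);
        nc_var_par_access(ncid, varid, NC_COLLECTIVE);

        /* Each rank owns a contiguous band of latitude rows
         * (assumes NLAT is divisible by nprocs, for brevity). */
        size_t rows = NLAT / nprocs;
        size_t start[2] = { rank * rows, 0 };
        size_t count[2] = { rows, NLON };
        float *band = malloc(rows * NLON * sizeof(float));
        for (size_t i = 0; i < rows * NLON; i++)
            band[i] = 273.15f + (float)rank;             /* dummy data */

        nc_put_vara_float(ncid, varid, start, count, band);

        free(band);
        nc_close(ncid);
        MPI_Finalize();
        return 0;
    }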

Preliminary results using the new mixed-precision version integrated in NEMO have shown an improvement of almost 40% in execution time, without any loss of accuracy in the simulation results.
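The idea behind the mixed-precision approach can be sketched in a few lines of C (a conceptual illustration only, not NEMO code): fields that tolerate reduced precision are stored in single precision, halving their memory footprint and traffic, while sensitive operations such as global sums are still accumulated in double precision.

    /* Conceptual mixed-precision sketch: single-precision storage,
     * double-precision accumulation for the sensitive reduction. */
    #include <stdio.h>

    #define N 10000000

    static float field[N];   /* demoted working-precision field, 4 bytes/value */

    int main(void)
    {
        for (int i = 0; i < N; i++)
            field[i] = 1.0f / (float)(i + 1);

        float  fsum = 0.0f;   /* naive single-precision accumulation */
        double dsum = 0.0;    /* double-precision accumulation */
        for (int i = 0; i < N; i++) {
            fsum += field[i];
            dsum += (double)field[i];
        }

        /* The double accumulator preserves the accuracy of the sum even
         * though the stored data are single precision. */
        printf("single-precision sum: %.8f\n", (double)fsum);
        printf("double accumulation : %.8f\n", dsum);
        return 0;
    }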

Objectives

EC-Earth is one such model system; it is used in 11 different countries by up to 24 meteorological and academic institutions to produce reliable climate predictions and projections. It is composed of different components, the most important being the atmospheric model OpenIFS and the ocean model NEMO.

EC-Earth is one of the ESMs that suffer from a lack of scalability at higher resolutions, with an urgent need for improvements in capability and capacity on the path to exascale. Our main goal is to achieve good scalability of EC-Earth with extreme parallelization at horizontal resolutions of up to 10 km. To achieve this, different objectives are being pursued:

(1) Computational profiling of EC-Earth, analysing the most severe bottlenecks of the main components under extreme parallelization.

(2) Exploiting high-end architectures efficiently and reducing the energy consumption of the model to reach a minimum efficiency, in order to be ready for the new hardware. For this purpose, different high-performance computing techniques are being applied, for example the integration of fully parallel input and output (I/O), or reducing the precision of some model variables while maintaining the same accuracy in the results and improving the final execution time of the model.

(3) Evaluating whether massively parallel execution and the newly implemented methods could affect the quality of the simulations or impair reproducibility.

ESiWACE Newsletter 01/2021 published

13 January 2021
The Centre of Excellence in Simulation of Weather and Climate in Europe (ESiWACE) has published a new issue of its newsletter: learn more about upcoming virtual trainings and workshops, as well as further news.

ETP4HPC handbook 2020 released

6 November 2020

The 2020 edition of the ETP4HPC Handbook of HPC projects is available. It offers a comprehensive overview of the European HPC landscape, which currently consists of around 50 active projects and initiatives. Among these are the 14 Centres of Excellence and FocusCoE, which are also represented in this edition of the handbook.

>> Read here

ESiWACE success story: Improving weather and climate forecasting with a new NEMO configuration

Highlighted Centre of Excellence

ESiWACE, the “Centre of Excellence in Simulation of Weather and Climate in Europe”, has been funded by the European Commission to substantially improve the efficiency and productivity of numerical weather and climate simulation. Overall, the Centre of Excellence prepares the European weather and climate community to make use of future exascale systems in a co-design effort involving modelling groups, computer scientists and the HPC industry.

Organisations & Codes Involved:

The Atos Center for Excellence in Performance Programming (CEPP) provides expertise and services in High Performance Computing, Artificial Intelligence and Quantum Computing.
The LOCEAN laboratory, part of CNRS-IPSL, conducts studies on the physical and biogeochemical processes of the ocean and their role in climate, in interaction with marine ecosystems.

The CERFACS research centre specializes in modelling and numerical simulation, through its facilities and expertise in high-performance computing.

 

CHALLENGE:

A key model for understanding weather and climate is NEMO (Nucleus for European Modelling of the Ocean), a modelling framework for research activities and forecasting services in ocean and climate sciences. NEMO is used in a wide variety of applications with global or regional focus, with different resolutions, numerical schemes and parameterizations, and therefore with different performance constraints. The technical challenge was to find a way to ease the profiling and benchmarking of NEMO across these varied uses in order to increase the performance of the framework.

 

SOLUTION:

In response to this challenge, ESiWACE has developed a new configuration, adapted to HPC benchmarking, that is polymorphic and can very simply reproduce any use of NEMO.
Thanks to a close collaboration between the Atos CEPP, LOCEAN and CERFACS, this dedicated configuration for easing the profiling and benchmarking of NEMO has been set up.
Many tests of this configuration, including large-scale experiments on the Atos Bull supercomputers at Météo France and at CEA’s Very Large Computing Centre (TGCC), have been performed. This resulted in several optimisations that improve the performance of the NEMO code in this configuration by up to 38%.

The open-source NEMO 4.0, which was released in 2019, benefited from this work and included the following improvements: an automatic MPI sub-domain decomposition, and rewritten communication routines with optimised treatment of the North Pole singularity.
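To give a flavour of what an automatic MPI sub-domain decomposition involves, the sketch below (a generic MPI illustration, not NEMO's actual routines) lets MPI choose a balanced two-dimensional process grid for any core count and lets each rank discover its neighbours for halo exchanges; the periodicity in longitude is an illustrative assumption.

    /* Generic automatic 2-D domain decomposition with MPI. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int nprocs;
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Let MPI choose a balanced process grid, e.g. 128 -> 16 x 8. */
        int dims[2] = {0, 0};
        MPI_Dims_create(nprocs, 2, dims);

        /* Periodic in longitude (dimension 0), non-periodic in latitude. */
        int periods[2] = {1, 0};
        MPI_Comm cart;
        MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);

        int cart_rank, coords[2];
        MPI_Comm_rank(cart, &cart_rank);
        MPI_Cart_coords(cart, cart_rank, 2, coords);

        /* Neighbouring sub-domains for halo exchanges. */
        int west, east, south, north;
        MPI_Cart_shift(cart, 0, 1, &west, &east);
        MPI_Cart_shift(cart, 1, 1, &south, &north);

        printf("rank %d at (%d,%d) of a %d x %d grid: E=%d W=%d N=%d S=%d\n",
               cart_rank, coords[0], coords[1], dims[0], dims[1],
               east, west, north, south);

        MPI_Comm_free(&cart);
        MPI_Finalize();
        return 0;
    }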

Business impact:

Some uses of NEMO, such as weather and climate forecasting, address key challenges our society faces. Improvements in NEMO's numerical performance make it possible to refine model results, reduce forecast uncertainties and better predict high-impact extreme events, thus saving lives. The ocean has a huge impact on the atmosphere, so NEMO is widely used coupled with atmospheric models: for example, it is used to simulate the ocean eddies, temperatures and salinity that play a key role in forecasting cyclone intensity and trajectory. The business impacts of improving NEMO may therefore be indirect, but they are significant, as they concern everyone and all kinds of companies and entities.

This work benefits society by improving the efficiency and productivity of numerical weather and climate simulation and by preparing these simulations for future exascale systems. It fosters exchanges of expertise through direct and close collaborations, enabling researchers and industry to be more productive and leading to scientific excellence in Europe.

Benefits for further research:

  • Simplified profiling and benchmarking of NEMO at different scales to find the most relevant optimisation paths.
  • Up to 38% increase in efficiency and scalability of NEMO with the optimized configuration on HPC systems.
  • Reduced time and cost of ocean simulations for both research and production purposes, improving weather and climate predictions and helping to protect property, interests and lives.
Image: Example of a NEMO high-resolution simulation (ORCA 1/12°), showing sea surface salinity in the Atlantic and Pacific Oceans at the equator. Source: https://www.nemo-ocean.eu/

ESiWACE newsletter October edition published

October 12th, 2020

The October edition of the ESiWACE newsletter was recently published. It features online training courses, events, news and the latest publications from the ESiWACE CoE.

>> Read on Zenodo

>> Subscribe to newsletter here