Faster Than Real-Time Tsunami Simulations


Short description

Faster-than-real-time (FTRT) tsunami computations are crucial in the context of Tsunami Early Warning Systems (TEWS). Greatly improved, highly efficient computational methods are the first ingredient for extremely fast and effective calculations; high-performance computing facilities then push this efficiency as far as possible while drastically reducing computational times. This use case comprises both earthquake and landslide sources. Earthquake tsunami generation is to some extent simpler than landslide tsunami generation, because landslide-generated tsunamis depend on the landslide dynamics, which requires coupling dynamic landslide simulation models to the tsunami propagation. In both cases, FTRT simulations in several contexts and configurations are the final aim of this use case.

Results & Achievements

Improving HySEA codes in ChEESE:

We have improved the load balancing algorithm. In particular, it now supports heterogeneous GPUs by assigning a numerical weight to each GPU.
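The weighted partitioning idea can be sketched as follows. This is a minimal illustration of proportional domain decomposition, assuming one weight per GPU; the function name and row-based splitting are hypothetical, not the actual HySEA implementation.

```python
def partition_rows(num_rows, gpu_weights):
    """Return (start, end) row ranges, one per GPU, sized in proportion
    to each GPU's performance weight (illustrative sketch only)."""
    total = sum(gpu_weights)
    bounds, start, acc = [], 0, 0.0
    for i, w in enumerate(gpu_weights):
        acc += w
        # Cut at the cumulative proportion so rounding error cannot drift.
        end = round(num_rows * acc / total) if i < len(gpu_weights) - 1 else num_rows
        bounds.append((start, end))
        start = end
    return bounds
```

For example, two equal GPUs plus one twice as fast would split 1000 rows as `[(0, 250), (250, 500), (500, 1000)]`, giving the faster device half of the work.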

We have developed a new algorithm for nested mesh processing based on the current state values, and the processing of a nested mesh is now activated only when water movement is detected in its area.

We have implemented asynchronous file writing by creating an additional thread for each MPI process using C++11 threads (see Table 1 at the end of this document).
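The pattern is a single background writer thread draining a job queue, so the simulation loop never blocks on disk I/O. The sketch below shows the idea in Python for readability; HySEA itself does this with a C++11 thread per MPI process, and the class below is not its actual API.

```python
import queue
import threading

class AsyncWriter:
    """One background thread drains a queue of (filename, data) jobs
    (illustrative producer-consumer sketch of asynchronous output)."""

    def __init__(self):
        self._jobs = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            job = self._jobs.get()
            if job is None:          # sentinel: shut down the writer
                break
            filename, data = job
            with open(filename, "w") as f:
                f.write(data)
            self._jobs.task_done()

    def submit(self, filename, data):
        # Returns immediately; the write happens on the background thread.
        self._jobs.put((filename, data))

    def close(self):
        self._jobs.put(None)
        self._thread.join()
```

The compute thread only pays the cost of an enqueue, while the writer thread overlaps I/O with the next time steps.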

We have added the possibility of resuming a stored simulation.

We have added sponge layers for a better treatment of the boundary conditions, in order to avoid possible numerical instabilities at the borders of the domain, and we have implemented asynchronous CPU-GPU memory transfers.

We have dramatically reduced the size of the output files by compressing the data using the algorithm described in Tolkova (2008) and by saving most of the data in single precision. A new version of Tsunami-HySEA has been developed to run simultaneous simulations on the same domain, meeting the requirements of PD7 and PD8, by executing one simulation on each GPU. This new version is able to use up to 1024 GPUs simultaneously with very good weak scaling (losing only around 3% of efficiency).
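The single-precision part of the size reduction is easy to illustrate: storing samples as 32-bit floats instead of 64-bit doubles halves the raw payload at the cost of roughly seven significant digits. The sketch below only shows this precision trade-off; the Tolkova (2008) compression algorithm is a separate, additional step.

```python
import struct

def pack_single(values):
    """Serialize samples as 32-bit IEEE floats (half the size of doubles)."""
    return struct.pack(f"{len(values)}f", *values)

def pack_double(values):
    """Serialize samples as 64-bit IEEE doubles, for comparison."""
    return struct.pack(f"{len(values)}d", *values)
```

For fields like sea-surface height, single precision is typically well within the accuracy of the underlying model and bathymetry.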

With these improvements, we obtain a reduction of around 30% in computational time with respect to the previous version of the codes.

The codes have been tested on the CTE-POWER (BSC), DAVIDE and Marconi100 (CINECA), and Piz Daint (CSCS) supercomputers.

Objectives

The aim of this use case is to provide robust and very efficient numerical codes for FTRT tsunami simulations that can be run on massively parallel multi-GPU architectures.

Technologies

Workflow

The Faster-Than-Real-Time (FTRT) prototype for extremely fast and robust tsunami simulations is based on GPU/multi-GPU (NVIDIA) architectures and is able to use earthquake information from different locations and with heterogeneous content (a full Okada parameter set, or hypocenter and magnitude combined with the Wells and Coppersmith (1994) scaling relations). Using these inhomogeneous inputs, and according to the FTRT workflow (see Fig. 1), tsunami computations are launched for a single scenario or for a set of related scenarios for the same event. The automatically retrieved earthquake information is sent to the system and on-the-fly simulations are launched without manual intervention, so several scenarios are computed at the same time. As updated information about the source is provided, new simulations are launched. As output, several options are available, tailored to the end-user needs: sea surface height and its maximum, simulated isochrones and arrival times at the coastal areas, estimated tsunami coastal wave height, and time series at Points of Interest (POIs) and oceanographic sensors.
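The dispatch on heterogeneous source content can be sketched as follows. A fully specified Okada source is used as-is; otherwise, fault dimensions are estimated from the magnitude with a Wells and Coppersmith (1994)-style scaling relation. The coefficients and the length-to-width aspect ratio below are approximate illustrative values, and all function and key names are hypothetical, not the prototype's actual interface.

```python
import math

# Minimal Okada source description assumed for this sketch.
OKADA_KEYS = {"lon", "lat", "depth", "strike", "dip", "rake",
              "length", "width", "slip"}

def build_scenario(event):
    """Turn an incoming earthquake message into a simulation scenario:
    use a full Okada parameter set directly, or fall back to estimating
    fault dimensions from magnitude (approximate scaling coefficients)."""
    if OKADA_KEYS <= set(event):
        return dict(event)                        # fully specified source
    mw = event["magnitude"]
    area_km2 = 10 ** (-3.49 + 0.91 * mw)          # rupture-area scaling
    length_km = math.sqrt(2.0 * area_km2)         # assume L = 2W aspect ratio
    return {"lon": event["lon"], "lat": event["lat"], "depth": event["depth"],
            "length": length_km, "width": length_km / 2.0, "magnitude": mw}
```

In the real system, several such scenarios are generated per event and launched concurrently, one per GPU.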

A first successful implementation has been carried out for the Emergency Response Coordination Centre (ERCC), a service provided by the ARISTOTLE-ENHSP Project. The system implemented for ARISTOTLE follows the general workflow presented in Figure 1. Currently, in this system, a single source is used to assess the hazard and the computational grids are predefined. The wall-clock time is reported for each experiment, and the outputs of the simulation are the maximum water height and arrival times on the whole domain, and water-height time series at a set of selected POIs, predefined for each domain.

A library of Python codes is used to generate the input data required to run the HySEA codes, and to extract the topo-bathymetric data and construct the grids used by those codes.
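One core step of such grid construction is selecting the sub-grid of a topo-bathymetric dataset that falls inside a bounding box. The helper below is an illustrative sketch of that index computation, assuming monotonically increasing coordinate vectors; the in-house tools themselves are not reproduced here.

```python
import bisect

def bbox_slices(lons, lats, lon_min, lon_max, lat_min, lat_max):
    """Return (lon_slice, lat_slice) selecting the sub-grid inside a
    bounding box, given sorted coordinate vectors (illustrative only)."""
    i0 = bisect.bisect_left(lons, lon_min)    # first lon >= lon_min
    i1 = bisect.bisect_right(lons, lon_max)   # one past last lon <= lon_max
    j0 = bisect.bisect_left(lats, lat_min)
    j1 = bisect.bisect_right(lats, lat_max)
    return slice(i0, i1), slice(j0, j1)
```

The resulting slices can then be applied to the bathymetry array read from, e.g., a NetCDF file to cut out the computational domain.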

Software involved

Tsunami-HySEA has been successfully tested with the following tools and versions:

Compilers: GNU C++ compiler 7.3.0 or 8.4.0, OpenMPI 4.0.1, Spectrum MPI 10.3.1, CUDA 10.1 or 10.2

Management tools: CMake 3.9.6 or 3.11.4

External/third party libraries: NetCDF 4.6.1 or 4.7.3, PnetCDF 1.11.2 or 1.12.0

Pre-processing:

Nesting mesh generation tools.

In-house developed Python tools for pre-processing purposes.

Visualization tools:

In-house developed Python tools.

Use Case Owner

Jorge Macías Sanchez
Universidad de Málaga

Collaborating Institutions

UMA, INGV, NGI, IGN, PMEL/NOAA (with a role in pilot’s development).

Other institutions benefiting from use case results with which we collaborate:
IEO, IHC, IGME, IHM, CSIC, CCS, Junta de Andalucía (all Spain); Italian Civil Protection, Seismic network of Puerto Rico (US), SINAMOT (Costa Rica), SHOA and UTFSM (Chile), GEUS (Denmark), JRC (EC), University of Malta, INCOIS (India), SGN (Dominican Republic), UNESCO, NCEI/NOAA (US), ICG/NEAMTWS, ICG/CARIBE-EWS, among others.

Probabilistic Volcanic Hazard Assessment (PVHA)


Short description

PVHA methodologies provide a framework for assessing the likelihood that a given measure of intensity of different volcanic phenomena (such as tephra loading on the ground, airborne ash concentration, or pyroclastic flows) will be exceeded at a particular location within a given time period. This pilot deals with regional long- and short-term PVHA. Regional assessments are crucial for better land-use planning and for the risk-mitigation countermeasures of civil protection authorities. Because of the computational cost required to adequately simulate volcanic phenomena, PVHA is mostly based on a single or very few selected reference scenarios. Independently of the degree of approximation of the numerical model used, PVHA for tephra loading and/or airborne ash concentration requires a high number of tephra dispersion simulations (typically several thousands, in order to capture the variability in meteorological and volcanological conditions), each of which is moderately intensive. This pilot comprises both long- and short-term probabilistic hazard assessment for volcanic tephra fallout, adopting and improving a recently proposed methodology (Sandri et al., 2016) able to capture aleatory and epistemic uncertainties. Long-term probabilistic hazard assessment for pyroclastic density currents (PDCs) is also envisaged, focussing on the aleatory and epistemic uncertainties in the eruptive source parameters. Since tephra fallout models also allow a consistent treatment of spatially and temporally variable wind fields and can describe phenomena like ash aggregation, Exascale capacity will allow, for the first time, spatially extending PVHA to evaluate the potential impact of all active volcanoes in Italy on the entire national territory.
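The core probabilistic quantity described above is the exceedance probability at a site: the fraction of the many dispersion simulations in which a given intensity threshold is exceeded. The sketch below shows this empirical hazard-curve computation in its simplest form, with optional scenario weights; it is a minimal illustration, not the Sandri et al. (2016) methodology itself.

```python
def exceedance_probability(simulated_loads, thresholds, weights=None):
    """For each intensity threshold, return the (weighted) fraction of
    simulated tephra loads at one site that exceed it (hazard curve)."""
    if weights is None:
        weights = [1.0] * len(simulated_loads)
    total = sum(weights)
    return [sum(w for x, w in zip(simulated_loads, weights) if x > t) / total
            for t in thresholds]
```

Repeating this over all grid points and thresholds, for thousands of weighted scenarios, is what drives the computational cost the text refers to.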

Results & Achievements

The award of PRACE resources, in association with PD3 (High-Resolution Volcanic Plume Simulation) and PD12 (High-Resolution Volcanic Ash Dispersal Forecast), to run FALL3D simulations at the required target resolution and spatial domain.

A prototype version of PVHA_WF to process the simulations and produce hazard maps.

The application of PVHA_WF to the case of Campi Flegrei volcano, in Southern Italy, in an illustrative example for 5, 6 and 7 December 2019, as a proof of concept and feasibility demonstration.

Objectives

The objective of this use case is to provide innovative hazard maps with uncertainty quantification, overcoming the limits imposed on PVHA so far by the high computational cost required to adequately simulate complex volcanic phenomena (such as tephra dispersal) while fully exploring the natural variability associated with such phenomena, on a country-size domain (thousands of km) at high resolution (one to a few km).

Technologies

Workflow

PVHA_WF_st fetches the monitoring data (seismic and deformation) and, together with the configuration file of the volcano, computes the eruption forecast (probability curves and vent-opening positions); it then uses the output file from alphabeta_MPI.py to create the volcanic hazard probabilities and maps.

PVHA_WF_lt uses the configuration file of the volcano to compute the eruption forecast and, together with the output file from alphabeta_MPI.py, creates the volcanic hazard probabilities and maps.

The meteorological data download process is fully automated: PVHA_WF_st and PVHA_WF_lt connect to the Climate Data Store (the Copernicus data server) and download the meteorological data associated with a specified analysis grid. These data are later used by FALL3D to compute tephra deposition.
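An automated Climate Data Store download of this kind can be sketched as below. The dataset, variables and pressure levels shown are illustrative ERA5 examples, not necessarily what PVHA_WF requests; the actual retrieval additionally needs the `cdsapi` package and CDS credentials.

```python
def build_cds_request(north, west, south, east, date):
    """Assemble an illustrative Climate Data Store request covering the
    analysis grid (ERA5-style keys; not the exact PVHA_WF configuration)."""
    return {
        "product_type": "reanalysis",
        "variable": ["u_component_of_wind", "v_component_of_wind"],
        "pressure_level": ["500", "700", "850"],
        "date": date,
        "time": [f"{h:02d}:00" for h in range(0, 24, 6)],
        "area": [north, west, south, east],   # CDS order: N, W, S, E
        "format": "grib",
    }

if __name__ == "__main__":
    # The real download would look like (requires cdsapi + credentials):
    # import cdsapi
    # cdsapi.Client().retrieve("reanalysis-era5-pressure-levels",
    #                          build_cds_request(42, 13, 40, 15, "2019-12-05"),
    #                          "meteo.grib")
    pass
```

Keeping request assembly separate from the download call makes the workflow easy to re-run for new dates and domains.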

Software involved

FALL3D

Use Case Owner

Laura Sandri
INGV Bologna

Collaborating Institutions

INGV
BSC
IMO

High-Resolution Volcanic Ash Dispersal Forecast


Short description

Operational volcanic ash dispersal forecasts are routinely used to prevent aircraft encounters with volcanic ash clouds and to perform re-routings avoiding contaminated airspace areas. However, a gap exists between current operational forecast products (e.g. those issued by the Volcanic Ash Advisory Centers) and the requirements of the aviation sector and related stakeholders. Two aspects are particularly critical: 1) the time and space scales of current forecasts are coarse (for example, the current operational setup of the London VAAC at the U.K. Met Office outputs on a 40 km horizontal resolution grid with 6-hour time averages); and 2) current forecasts are not quantitative. Several studies (e.g. Kristiansen et al., 2012) have concluded that the main source of epistemic/aleatory uncertainty in ash dispersal forecasts comes from the quantification of the source term (eruption column height and strength), which, very often, is not fully constrained in real time. This limitation can be circumvented in part by integrating into the models ash cloud observations away from the source, typically satellite retrievals of fine-ash column mass load (i.e. the vertical integration of concentration). Model data assimilation has the potential to improve ash dispersal forecasts through an efficient joint estimation of the (uncertain) volcanic source parameters and the state of the ash cloud.

Results & Achievements

Implementation of ensemble forecasts in FALL3D to run different ensemble members (realizations) as a single model run.

A new workflow component has been developed to retrieve ash (and SO2) cloud column mass from last-generation satellite instrumentation.

A new satellite data assimilation module based on the Parallel Data Assimilation Framework (PDAF) has been implemented.
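The ensemble assimilation idea behind such a module can be shown with a toy stochastic ensemble Kalman filter update for a single directly observed scalar (observation operator H = identity). This is a didactic sketch of the principle only; PDAF provides the real parallel filters used in the pilot, and none of the names below are its API.

```python
import random
import statistics

def enkf_update(ensemble, obs, obs_err, seed=0):
    """One stochastic EnKF analysis step for a scalar state observed
    directly: nudge each member toward a perturbed copy of the observation,
    weighted by the Kalman gain (toy illustration of ensemble assimilation)."""
    rng = random.Random(seed)
    var = statistics.variance(ensemble)
    gain = var / (var + obs_err ** 2)            # Kalman gain
    # Perturbed observations keep the analysis ensemble spread consistent.
    return [x + gain * (obs + rng.gauss(0.0, obs_err) - x) for x in ensemble]
```

In the pilot, the "state" is the full ash-cloud field plus uncertain source parameters, and the observations are the satellite column mass retrievals, but the gain-weighted correction is the same principle.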

Objectives

Volcanic ash cloud forecasts are performed shortly before or during an eruption in order to predict expected fallout rates in the next hours or days and/or to prevent aircraft encounters with volcanic clouds. These forecasts constitute the main decision tool for flight cancellations and airplane re-routings avoiding contaminated airspace areas. However, an important gap exists between current operational products and the actual requirements of the aviation industry and related stakeholders in terms of model resolution, frequency of forecasts, and quantification of airborne ash concentration. This pilot demonstrator is implementing an ensemble-based data assimilation system (workflow) combining the FALL3D dispersal model with high-resolution geostationary satellite retrievals in order to furnish high-resolution forecasts.

Technologies

Workflow

The use case workflow includes the following components:

The download and pre-processing of the required meteorological data.

The download of raw satellite data and the quantitative cloud mass retrievals (SEVIRI retrievals at 0.1° resolution, 1-hour frequency).

The ensemble forecast execution using the FALL3D model (i.e. the HPC component of the workflow).

No WMS available yet (work in progress)
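The ensemble-generation step can be sketched as a perturbation of the uncertain source terms that the text identifies as the dominant uncertainty (eruption column height and strength). The uniform perturbation model, parameter names, and spread value below are illustrative assumptions, not the FALL3D ensemble configuration.

```python
import random

def make_ensemble(column_height_km, mer_kg_s, n_members, spread=0.2, seed=0):
    """Generate ensemble members by multiplicatively perturbing the
    eruption column height and mass eruption rate (illustrative sketch)."""
    rng = random.Random(seed)
    return [{"column_height_km": column_height_km * (1 + rng.uniform(-spread, spread)),
             "mer_kg_s": mer_kg_s * (1 + rng.uniform(-spread, spread))}
            for _ in range(n_members)]
```

Each member is then run inside the single FALL3D model execution mentioned above, and the resulting spread is what the assimilation step exploits.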

Software involved


FALL3D code

Use Case Owner

Arnau Folch
Barcelona Supercomputing Center-Centro Nacional de Supercomputación (BSC-CNS)

Collaborating Institutions

BSC
INGV
IMO

ChEESE: New open access publication on Probabilistic Tsunami Hazard Analysis

5 January 2021
Check out the new open access publication by the ChEESE CoE on Probabilistic Tsunami Hazard Analysis.

New POP CoE blog post: Speedups of a Volcanic Hazard Assessment Code

11 November 2020

Latest blog post by the POP CoE: discover how their work on the Probabilistic Volcanic Hazard Assessment Workflow package (PVHA_WF) led to speedups of around 500x in total execution time.

The package is a workflow created for the ChEESE CoE Pilot Demonstrator 6 (PD6).

>> POP CoE Blog Post
>> ChEESE Pilot Demonstrators


ETP4HPC handbook 2020 released

6 November 2020

The 2020 edition of the ETP4HPC Handbook of HPC projects is available. It offers a comprehensive overview of the European HPC landscape, which currently consists of around 50 active projects and initiatives. Among these are the 14 Centres of Excellence and FocusCoE, which are also represented in this edition of the handbook.

>> Read here

HPC Centres of Excellence @ Supercomputing '20

4 November 2020

Due to restrictions caused by the global COVID-19 pandemic, the SC20 conference, the world's leading HPC event, will take place online this year from November 9 to 19.

Find below the CoEs' contributions to the 2020 edition of the Supercomputing Conference.

From research to societal relevance: How ChEESE and urgent computing may enhance INGV's hazard forecasting

19 October 2020


ChEESE is helping to dramatically improve near-real-time hazard assessment and hazard forecasting services, which will positively impact natural hazard observatories and warning centers in Europe. Learn in this article how ChEESE and urgent computing may enhance INGV's hazard forecasting.

>> Read More
