DYAMOND intercomparison project for storm-resolving global weather and climate models

 A Use Case by

Short description

The growth in computational resources now enables global weather and climate models to operate on the scale of a few kilometres. At this resolution, they can explicitly resolve storm systems and ocean eddies. The DYAMOND model intercomparison is the first project to perform a systematic intercomparison of these next-generation models. The ESiWACE flagship models IFS and ICON participate in the intercomparison, and ESiWACE supports the project by providing data storage at DKRZ, resources for server-side processing and support in the use of the tools.

Results & Achievements

Currently, 51 users from 30 institutions worldwide have access to the intercomparison dataset. A special edition of the Journal of the Meteorological Society of Japan is dedicated to the intercomparison, and further papers are being published in other journals. Two hackathons supported by ESiWACE have brought the community together and provided guidance to junior researchers.

Objectives

By supporting the DYAMOND intercomparison of storm-resolving global weather and climate models, ESiWACE facilitates the development of these next-generation models and advances climate science. The intercomparison makes it possible to identify common features and model-specific behaviour, and thus yields new scientific discoveries and increases the robustness of our knowledge and of the models. At the same time, the intercomparison serves as an ideal test case for the high-performance data analysis and visualization workflows needed to handle the challenging amounts of data that these models produce, allowing ESiWACE scientists to improve the workflows on real-world cases.

Technologies

CDO, ParaView, Jupyter, server-side processing
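
As an illustration of what server-side processing can look like in practice, the following minimal sketch runs inside a Jupyter session close to the data. The file name and variable name are hypothetical placeholders, and xarray is used here purely for illustration alongside the listed tools.

# Minimal sketch of server-side analysis of DYAMOND output in a Jupyter
# session; the file path and variable name are hypothetical placeholders.
import xarray as xr

# Open one (hypothetical) high-resolution output file lazily (requires dask),
# so only the chunks that are actually needed are read on the server side.
ds = xr.open_dataset("dyamond_icon_example.nc", chunks={"time": 1})

# Global, time-resolved mean of a 2-D field, computed close to the data
# instead of downloading the full high-resolution output.
field = ds["pr"]
global_mean = field.mean(dim=[d for d in field.dims if d != "time"])
global_mean.to_netcdf("dyamond_global_mean.nc")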

Use Case Owner

Collaborating Institutions

DKRZ, MPI-M (and many others)

Prediction of pollutants and design of low-emission burners

 A Use Case by

Short description

The harmful effects of pollutants such as NOx and CO on human health have boosted their experimental and numerical study in recent years. However, their relatively long timescales have required increasing the complexity of turbulent combustion models in order to obtain accurate predictions, as emphasized in Valera-Medina et al. 2019 and Karagöz et al. 2019. Moreover, the use of alternative hydrogen-based fuels, even though it reduces CO and other HC emissions (Cappelletti and Martelli, 2017) and increases engine efficiency (Verhelst et al., 2009), may have dramatic effects on NOx emissions compared to conventional fuels. For these fuels, NOx levels up to three times higher than for natural gas have been measured in some gas turbine operating conditions (Riccio et al. 2009), although some studies show that the combined use of ammonia and hydrogen has the potential to reduce NOx production (Xiao and Valera-Medina, 2017). Finally, technologies with hydrogen burners still need development, as the mixing strategies have to be adjusted since H2 is much lighter than natural gas (Cappelletti and Martelli, 2017). There is therefore a need to extend the knowledge of pollutant emissions not only for the conventional fuels used in current engines but also for hydrogen blends, in order to produce innovative concepts.

Objectives

To optimize burner performance in terms of pollutant emissions by making use of large-scale simulations. Advanced combustion and soot models will be used to pursue this objective. This use case will make tangible the potential of HPC as an important tool for increasing the reliability and accuracy of numerical simulations for practical applications with a strong industrial focus.

Technologies

CLIO, Alya, Nek5000, OpenFOAM, PRECISE_UNS

Use Case Owner

Barcelona Supercomputing Center (BSC)

Collaborating Institutions

BSC, RWTH, TUE, UCAM, TUD, ETHZ, AUTH

Prediction of soot formation in practical applications

 A Use Case by

Short description

Soot formation is a complex phenomenon that requires the use of large chemical mechanisms, not only to provide a detailed description of the small gas-phase molecules but also to account for the enormous variety of PAHs. These expensive requirements have made its modelling elusive for the scientific community, although acceptable results have been obtained in recent years (Yang et al. 2019, Rodrigues et al. 2018, Hoerlle and Pereira 2019). To increase the reliability of the simulations, considerable work remains to be done on PAH chemistry and its interaction with turbulence, on the modelling of soot oxidation, and on the simulation of the particle size distribution. To address the difficulties that the solid soot phase introduces into fluid mechanics simulations, the Method of Moments (MOM) (Mueller et al. 2009, Chong et al. 2019, Salenbauch et al. 2019) and the sectional model (Rodrigues et al. 2018, Hoerlle and Pereira 2019) have been used. Both methods have been coupled with flamelet combustion models, providing state-of-the-art results (Rodrigues et al. 2018, Yang et al. 2019). Finally, another approach is the coupling of soot models with the Conditional Moment Closure (CMC) model, for which promising results have also been obtained (Giusti et al. 2018).
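
For reference, the Method of Moments works with integer moments of the soot number density function n(v), written here in terms of particle volume v as a generic textbook definition rather than the specific formulation of any of the codes listed below:

M_r = \int_0^{\infty} v^{\,r}\, n(v)\, \mathrm{d}v , \qquad r = 0, 1, 2, \dots

so that M_0 is the soot number density and M_1 the soot volume fraction. Transport equations are solved for a few moments instead of the full particle size distribution, with closure models providing the unclosed source terms.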

Objectives

The objective of this use case is to demonstrate the predictive capabilities of Exascale simulations by providing accurate results for soot formation in large-scale simulations. This ECD will narrow the gap between simulations and experiments, delivering state-of-the-art soot models and showing that satisfactory degrees of accuracy can be achieved for the prediction of an extremely complex process such as soot formation in engines.

Technologies

CLIO, OpenFOAM, PRECISE_UNS, Alya, Nek5000

Use Case Owner

Barcelona Supercomputing Center (BSC)

Collaborating Institutions

UCAM, TUD, TUE, AUTH, ETHZ

Optimization of Earth System Models on the path to the new generation of Exascale high-performance computing systems

 A Use Case by

Short description

In recent years, our understanding of climate prediction has grown and deepened significantly, facilitated by improvements in our global Earth System Models (ESMs). These models aim to represent our future climate and weather ever more realistically, reducing uncertainties in these chaotic systems and explicitly calculating and representing features that could previously not be resolved with coarser-resolution models.

A new generation of exascale supercomputers and massive parallelization are needed in order to calculate small-scale processes and features using high resolution climate and weather models.

However, the overhead produced by this new massive parallelization will be substantial, and new high-performance computing techniques will be required to rise to the challenge. These new HPC techniques will enable scientists to make efficient use of upcoming exascale machines, to set up ultra-high-resolution experiment configurations of ESMs and to run the corresponding simulations. Such experiment configurations will be used to predict climate change over the coming decades and to study extreme events such as hurricanes.

Results & Achievements

The new EC-Earth version under development is being tested for its main components (OpenIFS and NEMO) on MareNostrum IV, using a significant number of cores to test the new ultra-high horizontal resolution of 10 km, with up to 2048 nodes (98,304 cores) for the NEMO component and up to 1024 nodes (49,152 cores) for the OpenIFS component.

Different optimizations included in these components (developed in the framework of the ESiWACE and ESiWACE2 projects) have been tested to evaluate the computational efficiency achieved. For example, the OpenIFS version including the new integrated parallel I/O allows hundreds of gigabytes to be output while increasing the execution time by only 2% compared to an execution without I/O, a large improvement over the previous version, which produced an overhead close to 50%. Moreover, this approach will allow the same I/O server to be used for both components, facilitating more complex online computations and the use of a common file format (netCDF).

Preliminary results using the new mixed-precision version integrated in NEMO have shown an improvement of almost 40% in execution time, without any loss of accuracy in the simulation results.

Objectives

EC-Earth is one such model system; it is used in 11 different countries and by up to 24 meteorological and academic institutions to produce reliable climate predictions and climate projections. It is composed of different components, the most important being the atmospheric model OpenIFS and the ocean model NEMO.

EC-Earth is one of the ESMs that suffer from a lack of scalability at higher resolutions, with an urgent need for improvements in capability and capacity on the path to exascale. Our main goal is to achieve good scalability of EC-Earth at horizontal resolutions of up to 10 km with extreme parallelization. To this end, different objectives are being pursued:

(1) Computational profiling of EC-Earth, analysing the most severe bottlenecks of the main components when extreme parallelization is used.

(2) Exploiting high-end architectures efficiently and reducing the energy consumption of the model, in order to reach a minimum efficiency and be ready for the new hardware. For this purpose, different high-performance computing techniques are being applied, for example the integration of fully parallel input/output (I/O), or the reduction of the precision of some variables used by the model while maintaining the same accuracy in the results and improving the final execution time (a toy illustration of this mixed-precision idea is sketched after this list).

(3) Evaluating whether massively parallel execution and the newly implemented methods could affect the quality of the simulations or impair reproducibility.
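
The following toy sketch (in Python/NumPy, purely illustrative; NEMO itself is written in Fortran) shows the mixed-precision idea mentioned in objective (2): part of the state is stored in single precision while sensitive accumulations stay in double precision, and the resulting error can be measured directly.

# Toy illustration of the mixed-precision idea (not the actual NEMO code):
# store a field in single precision, accumulate diagnostics in double precision.
import numpy as np

rng = np.random.default_rng(0)
field64 = rng.standard_normal(1_000_000)     # reference field, double precision
field32 = field64.astype(np.float32)         # reduced-precision copy

# Accumulate a diagnostic (here a simple mean) in double precision even when
# the field itself is stored in single precision, limiting round-off drift.
mean_ref = field64.mean()
mean_mixed = field32.astype(np.float64).mean()

print(f"relative error of mixed-precision diagnostic: "
      f"{abs(mean_mixed - mean_ref) / abs(mean_ref):.2e}")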

HPC-enabled multiscale simulation helps uncover mechanistic insights of the SARS-CoV-2 infection

 A Use Case by

Short description

To be able to exploit the full potential of future Exascale systems, researchers will need an efficient and sustainable infrastructure to support the development of personalised medicine. PerMedCoE is building such an infrastructure by upscaling tools that are used in personalised medicine projects, for example to translate omics information into actionable molecular disease models. Multiscale modelling proves useful for integrating mechanisms with different time and space scales. We have enabled the use on HPC of PhysiCell, a multiscale modelling framework and one of PerMedCoE's core tools, to expand its scope and to study the dissemination and infection of the SARS-CoV-2 virus by incorporating cell- and pathway-specific Boolean models that detail the interactions of virus, drugs and human cells. These Boolean models are simulated using MaBoSS, another of PerMedCoE's core tools, allowing the study of genetic and environmental perturbations. PerMedCoE is collaborating with the COVID-19 Disease Map and PC4COVID initiatives and introducing leading-edge technologies to discover biomarkers and actionable therapeutic targets that will help against the COVID-19 pandemic.
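
For illustration only, the sketch below shows the kind of logic such Boolean models encode, using a tiny synchronous-update network with hypothetical node names. The actual PerMedCoE models are far larger, curated networks, and MaBoSS simulates them stochastically in continuous time.

# Tiny, purely illustrative Boolean model of a signalling motif, updated
# synchronously. Node names and rules are hypothetical, not the project's
# COVID-19 model.
def step(state):
    return {
        "Virus":     state["Virus"],                          # external input
        "Sensor":    state["Virus"],                          # senses the virus
        "Survival":  state["Sensor"] and not state["Stress"],
        "Stress":    state["Virus"] and not state["Survival"],
        "Apoptosis": state["Stress"] and not state["Survival"],
    }

state = {"Virus": True, "Sensor": False, "Survival": False,
         "Stress": False, "Apoptosis": False}

for _ in range(6):          # iterate until a fixed point (or cycle) is reached
    state = step(state)
print(state)                # inspect the reached attractor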

Results & Achievements

BSC has successfully implemented MPI in the multiscale modelling framework, and the use of shared memory has been an enabling technology for larger and more complex multiscale simulations. This opens up the possibility, for instance, of simulating real-sized tumours of a billion cells.

Even though this PerMedCoE use case is still at a preliminary stage, we have identified two genes, one from the SARS-CoV-2 virus and the other from human epithelial cells, whose inactivation allows the evasion of apoptosis, or programmed cell death. These two genes are therefore strong candidates as therapeutic targets against COVID-19 infection.

Objectives

In this use case, our main aim is to uncover mechanistic insights that could help in the fight against SARS-CoV-2. For this we use Boolean models of signalling pathways, and agent-based models for populations of cells and for the communication among the virus, epithelial host cells and immune cells. The multiscale scope of the modelling and the breadth and depth of the HPC-enabled simulation using MPI facilitate the discovery of therapeutic targets. In addition, the organisation of this work into building blocks and pipelines allows for an optimised orchestration and distribution of the different tasks in the project.

Finally, a global aim of PerMedCoE is to serve as an example for the upscaling of other personalised medicine tools that are part of PerMedCoE’s observatory.

Technologies

Use Case Owner

Barcelona Supercomputing Center-Centro Nacional de Supercomputación (BSC-CNS)

Collaborating Institutions

Increasing accuracy in simulations for the automotive field

 A Use Case by

Short description

For complex flow simulations, a priori knowledge of the physics and flow regimes is not always available, so generating an optimal mesh is a tedious, time-consuming process associated with a high computational cost. The use of goal-driven, a posteriori, adjoint-based error estimation can drive an adaptive process that results in a final optimal mesh. The benefits of an optimal mesh are seen in the increased accuracy of numerical simulation results, e.g. for the evaluation of drag or acoustic noise in the automotive and aeronautical fields. By using error estimation and adaptivity, a fully automated process can be established, involving an iterative workflow between mesh generation, simulation, result evaluation and CAD model morphing.

Results & Achievements

The automated simulation methods described above have been used extensively in academia and have recently gained interest from both independent software vendors (ISVs) and industry. The increasing computational complexity of industrial applications urges the scientific community to provide cutting-edge methods with solid HPC capabilities that deliver reliable solutions in affordable time. Industrial users are focused on solving engineering problems and are typically not computing experts. The challenging size of real-case problems, with meshes of several million elements, requires codes that run smoothly on Exascale systems. The coupling of Unicorn HPC and FEniCS HPC provides an Exascale-ready framework, with built-in parallelisation of the FEM (Finite Element Method) assembly phase, mesh adaptation and linear algebra solvers. Our effort focuses on improving the performance and robustness of the HPC solution and on bridging the gap between academia and industry by testing the code on real-case applications. In this context, the joint effort of core code developers and use case owners is addressing the ease of the installation process, the enrichment of the engineering-relevant quantities extracted from the solution, the improvement of code stability, the definition of an optimal meshing strategy and the introduction of drag-driven morphing capability. Preliminary solutions have been obtained so far for increasingly complex models of the car.

Objectives

The aim is to use a posteriori error estimation to drive both mesh adaptation and CAD morphing in an iterative process that produces an optimal design for a given output of interest. Our strategy is based on Unicorn HPC, a finite element CFD solver built on top of the FEniCS HPC code. It computes an approximation of a weak solution of the incompressible Navier-Stokes equations and comes with a built-in a posteriori, adjoint-based error estimation strategy used to drive adaptive mesh refinement, increasing resolution only in regions of interest. Through the adjoint method it is possible to evaluate the sensitivity of a desired scalar output to a change in the solution without explicitly recomputing the solution. The scalar quantity at hand can be a physical quantity of interest, e.g. the drag, or the error norm of the computed solution, related to the mesh size. We are thus applying the adjoint-based techniques implemented in the code for mesh adaptation to enable drag-reduction-based morphing of the geometry model.
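
As a brief illustration of the adjoint (dual-weighted residual) idea referred to above, written here in generic textbook form rather than quoted from the Unicorn HPC/FEniCS HPC sources: if R(u) = 0 denotes the discretised flow equations and J(u) the output of interest (e.g. the drag), the adjoint solution z satisfies

\left( \frac{\partial R}{\partial u} \right)^{\!\top} z = \frac{\partial J}{\partial u} ,

and the output error of an approximate solution u_h is estimated by the dual-weighted residual

J(u) - J(u_h) \approx -\, z^{\top} R(u_h) = \sum_{K} \eta_K ,

whose element-wise contributions \eta_K serve as the refinement indicators driving the mesh adaptation.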

Technologies

Use Case Owner

Collaborating Institutions

Geomagnetic forecasts

 A Use Case by

Short description

The Earth’s magnetic field is sustained by a fluid dynamo operating in the Earth’s fluid outer core. Its geometry and strength define the equivalent of the climatological mean over which the interaction of the Earth with its magnetic environment takes place. It is consequently important to make physics-based predictions of the evolution of the dynamo field over the next few decades. In addition, the geomagnetic field has the remarkable ability to reverse its polarity every now and then (the last reversal occurred some 780,000 years ago). Observations of the properties of the field during polarity transitions are sparse, and ultra-high-resolution simulations should help better define these properties.

Objectives

To simulate and analyse the consequences of geomagnetic reversals with an unprecedented level of accuracy. These events are extremely rare in the history of our planet, hence the need to resort to numerical simulations to better understand the properties of reversals and their possible consequences for society.

Technologies

Workflow

XSHELLS produces simulated reversals which are subsequently analysed and assessed using a parallel Python processing chain. Through ChEESE we are working to orchestrate this workflow using the WMS_light software developed within the ChEESE consortium.
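
As a sketch of the kind of analysis performed by the Python processing chain, the snippet below flags polarity transitions in a time series of the axial dipole coefficient. The file name and column layout are assumptions, not the actual XSHELLS output format.

# Flag polarity reversals in a (hypothetical) time series of the axial dipole
# coefficient g10; a reversal shows up as a sign change of the axial dipole.
import numpy as np

data = np.loadtxt("dipole_timeseries.dat")    # assumed columns: time, g10
time, g10 = data[:, 0], data[:, 1]

sign_change = np.where(np.sign(g10[:-1]) != np.sign(g10[1:]))[0]
for i in sign_change:
    print(f"polarity transition near t = {time[i]:.3f}")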

Software involved

XSHELLS code 

Post-processing: Python 3

External library: SHTns

Use Case Owner

Alexandre Fournier
Institut de Physique du Globe de Paris (IPGP)

Collaborating Institutions

IPGP, CNRS

Physics-Based Probabilistic Seismic Hazard Assessment (PSHA)

 A Use Case by

Short description

Physics-Based Probabilistic Seismic Hazard Assessment (PSHA) is widely established for deciding safety criteria, producing official national hazard maps, developing building code requirements, assessing the safety of critical infrastructure (e.g. nuclear power plants) and determining earthquake insurance rates by governments and industry. However, PSHA currently rests on empirical, time-independent assumptions that are known to be too simplistic and to conflict with earthquake physics. The resulting deficits become apparent when damaging earthquakes occur in regions rated as low-risk by PSHA hazard maps, and when near-fault effects from rupture on extended faults are not taken into account. Combined simulations of dynamic fault rupture and seismic wave propagation are crucial tools for shedding light on the poorly constrained processes of earthquake faulting. Realistic model setups should acknowledge topography, 3D geological structures, rheology, and fault geometries with appropriate stress and frictional parameters, all of which contribute to complex ground motion patterns. A fundamental challenge here is to model the high-frequency content of the three-dimensional wave field, since the frequency range of 0–10 Hz is of pivotal importance for engineering purposes. Multiple executions of such multi-physics simulations need to be performed to provide a probabilistic hazard estimate.

Results & Achievements

Fault models built up in both north and south Iceland

Fully non-linear dynamic simulations accounting for 3D velocity structures, topography, off-fault plasticity and model parameter uncertainties, achieving the target resolution

CyberShake implemented successfully, with a demo run for south Iceland

Rupture probabilities generated using SHERIFS

GMPE-based hazard curves and maps with OpenQuake

Regarding the SeisSol code: extended the YATeTo DSL to generate GPU GEMM kernels

Developed a Python library as a GEMM backend for YATeTo

Adapted both SeisSol and YATeTo for batched computations

Implemented the elastic solver: time, local and neighbour integrals

Both GTS and LTS schemes are working

Enabled a distributed multi-GPU setup

Implemented the plasticity kernel (still needs to be updated)

Tested performance on a multi-GPU distributed cluster (Marconi100)

Merged the first stage from the experimental into the production code

As a result, we obtained a 23% reduction in time with respect to the GPU-only execution. In practice, this represents a performance boost equivalent to attaching an additional GPU per node, and thus a much more efficient exploitation of the resources.

Objectives

The objective of this use case is to develop general concepts for enabling physics-based seismic hazard assessment with state-of-the-art multi-physics earthquake simulation software (SeisSol, SpecFEM3D, ExaHyPE, AWP-ODC), and to conduct 3D physics-based seismic simulations that improve PSHA for validation scenarios provided by IMO (Iceland) and beyond. The use case is expected to supplement the established methods used by stakeholders, for different target regions and varying degrees of complexity.

Technologies

Workflow

The workflow of this pilot is shown in Figure 1.

The SeisSol code is used to run fully non-linear dynamic rupture simulations, accounting for various fault geometries, 3D velocity structures, off-fault plasticity and model parameter uncertainties, in order to build a fully physics-based dynamic rupture database of mechanically plausible scenarios.

The linked post-processing Python codes are then used to extract ground shaking metrics (PGD, PGV, PGA and SA at different periods) from the surface output of the SeisSol simulations and to build a ground shaking database.

SHERIFS uses a logic-tree method, taking the fault-to-fault ruptures from the dynamic rupture database as input and converting slip rates into annual seismic rates given the geometry of the fault system.

With the rupture probability estimates from SHERIFS and the ground shaking from the SeisSol simulations, we can generate hazard curves for selected site locations and hazard maps for the study region.
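
As a toy illustration of this step (with synthetic numbers, not project results), a hazard curve at a single site can be assembled from scenario annual rates and the simulated shaking as follows:

# Assemble a hazard curve at one site from scenario annual rates and simulated
# ground shaking; values are synthetic placeholders, the real workflow uses the
# SHERIFS rates and the SeisSol ground shaking database.
import numpy as np

annual_rate = np.array([1e-3, 5e-4, 2e-4])    # per scenario (from SHERIFS)
pga_at_site = np.array([0.12, 0.25, 0.40])    # simulated PGA in g, per scenario

im_levels = np.linspace(0.05, 0.5, 10)        # intensity levels of interest
# Annual rate of exceedance: sum the rates of all scenarios whose simulated
# shaking exceeds each intensity level.
rate_exceed = np.array([annual_rate[pga_at_site > im].sum() for im in im_levels])

# Poisson probability of at least one exceedance in a 50-year window.
prob_50yr = 1.0 - np.exp(-rate_exceed * 50.0)
for im, p in zip(im_levels, prob_50yr):
    print(f"PGA > {im:.2f} g: {p:.4f} probability in 50 years")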

In addition, OpenQuake can use physics-based ground motion models/prediction equations established from the ground shaking database of fully dynamic rupture simulations, and CyberShake, which is based on kinematic simulations, can be used to perform PSHA and to complement the fully physics-based PSHA.

Software involved

SeisSol (LMU)

ExaHyPE (TUM)

AWP-ODC (SCEC)

SHERIFS (Fault2SHA and GEM)

sam(oa)² (TUM) 

OpenQuake (GEM): https://github.com/gem/oq-engine 

Pre-processing:

Mesh generation tools: Gmsh (open source), Simmetrix/SimModeler (free for academic institutions), PUMGen

Post-processing & visualization: ParaView, Python tools

Use Case Owner

Alice-Agnes Gabriel
Ludwig Maximilian University of Munich (LMU)

Collaborating Institutions

IMO, BSC, TUM, INGV, SCEC, GEM, FAULT2SHA, Icelandic Civil Protection, Italian Civil Protection

Faster Than Real-Time Tsunami Simulations

 A Use Case by

Short description

Faster-than-real-time (FTRT) tsunami computations are crucial in the context of Tsunami Early Warning Systems (TEWS). Greatly improved and highly efficient computational methods are the first ingredient for achieving extremely fast and effective calculations; high-performance computing facilities then push this efficiency as far as possible while drastically reducing computational times. This use case will comprise both earthquake and landslide sources. Earthquake tsunami generation is somewhat simpler than landslide tsunami generation, as landslide-generated tsunamis depend on the landslide dynamics, which necessitates coupling dynamic landslide simulation models to the tsunami propagation. In both cases, FTRT simulations in several contexts and configurations will be the final aim of this use case.

Results & Achievements

Improving the HySEA codes in ChEESE:

We have improved the load balancing algorithm. In particular, we have added support for heterogeneous GPUs by assigning a numerical weight to each GPU (a sketch of this weighted partitioning idea is given below, after this list of improvements).

We have developed a new algorithm for processing the nested meshes based on the current state values, and we have implemented the activation of nested-mesh processing when water movement is detected in their area.

Implemented asynchronous file writing by creating an additional thread for each MPI process using C++11 threads (see Table 1 at the end of this document).

Added the possibility of resuming a stored simulation.

Added sponge layers for a better treatment of the boundary conditions, in order to avoid possible numerical instabilities at the borders of the domain.

Implemented asynchronous CPU-GPU memory transfers.

We have dramatically reduced the size of the output files by compressing the data using the algorithm described in Tolkova (2008) and by saving most of the data in single precision. A new version of Tsunami-HySEA has been developed to run simultaneous simulations on the same domain, meeting the requirements of PD7 and PD8, by executing one simulation on each GPU. This new version is able to use up to 1024 GPUs simultaneously with very good weak scaling (losing around 3% of efficiency).

With these improvements, we obtain a reduction of around 30% in computational time with respect to the previous version of the codes.

The codes have been tested on CTE-POWER (BSC), DAVIDE and Marconi100 (CINECA) and Piz Daint (CSCS) supercomputers.
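
As an illustration of the weighted load-balancing idea mentioned in the list above (a sketch only, not the actual Tsunami-HySEA implementation), grid rows can be assigned to GPUs in proportion to a per-device weight:

# Weighted static load balancing: grid rows are assigned to GPUs in proportion
# to a numerical weight per device.
def partition_rows(n_rows, weights):
    total = sum(weights)
    bounds, start = [], 0
    for i, w in enumerate(weights):
        # The last device takes the remainder so every row is assigned exactly once.
        size = n_rows - start if i == len(weights) - 1 else round(n_rows * w / total)
        bounds.append((start, start + size))
        start += size
    return bounds

# Example: three GPUs, the first roughly twice as fast as the others.
print(partition_rows(1000, [2.0, 1.0, 1.0]))   # -> [(0, 500), (500, 750), (750, 1000)]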

Objectives

The aim of this use case is to provide robust and very efficient numerical codes for FTRT tsunami simulations that can be run on massively parallel multi-GPU architectures.

Technologies

Workflow

The Faster-Than-Real-Time (FTRT) prototype for extremely fast and robust tsunami simulations is based on GPU/multi-GPU (NVIDIA) architectures and is able to use earthquake information from different locations and with heterogeneous content (the full Okada parameter set, or hypocentre and magnitude combined with Wells and Coppersmith (1994)). Using these inhomogeneous inputs, and following the FTRT workflow (see Fig. 1), tsunami computations are launched for a single scenario or for a set of related scenarios for the same event. The automatically retrieved earthquake information is sent to the system and on-the-fly simulations are launched automatically, so that several scenarios are computed at the same time. As updated information about the source becomes available, new simulations are launched. As output, several options are available, tailored to the end-user needs: sea surface height and its maximum, simulated isochrones and arrival times at the coastal areas, estimated tsunami coastal wave height, and time series at Points of Interest (POIs) and oceanographic sensors.
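
As a sketch of how minimal source information can be expanded into the fault dimensions needed for an Okada-type initialisation, the snippet below applies a generic log-linear scaling relation. The coefficients are hypothetical placeholders; the operational system relies on the published Wells and Coppersmith (1994) regressions, whose values are not reproduced here.

# Turn minimal source information (hypocentre and magnitude) into fault
# dimensions for an Okada-type initial condition via a generic log-linear
# scaling relation. The coefficients a_* and b_* are placeholders, not the
# published Wells and Coppersmith (1994) values.
def fault_dimensions(magnitude, a_len, b_len, a_area, b_area):
    length_km = 10.0 ** (a_len + b_len * magnitude)    # rupture length
    area_km2 = 10.0 ** (a_area + b_area * magnitude)   # rupture area
    width_km = area_km2 / length_km                    # implied rupture width
    return length_km, width_km

# Hypothetical coefficients, for illustration only.
L, W = fault_dimensions(7.5, a_len=-2.4, b_len=0.6, a_area=-3.5, b_area=0.9)
print(f"rupture length ~{L:.0f} km, width ~{W:.0f} km")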

A first successful implementation has been carried out for the Emergency Response Coordination Centre (ERCC), a service provided by the ARISTOTLE-ENHSP project. The system implemented for ARISTOTLE follows the general workflow presented in Figure 1. Currently, in this system, a single source is used to assess the hazard and the computational grids are predefined. The wall-clock time is provided for each experiment, and the outputs of the simulation are the maximum water height and arrival times on the whole domain, and water height time series at a set of selected POIs, predefined for each domain.

A library of Python codes is used to generate the input data required to run the HySEA codes, and to extract the topo-bathymetric data and construct the grids used by the HySEA codes.

Software involved

Tsunami-HySEA has been successfully tested with the following tools and versions:

Compilers: GNU C++ compiler 7.3.0 or 8.4.0, OpenMPI 4.0.1, Spectrum MPI 10.3.1, CUDA 10.1 or 10.2

Management tools: CMake 3.9.6 or 3.11.4

External/third party libraries: NetCDF 4.6.1 or 4.7.3, PnetCDF 1.11.2 or 1.12.0

Pre-processing:

Nesting mesh generation tools.

In-house developed Python tools for pre-processing purposes.

Visualization tools:

In-house developed Python tools.

Use Case Owner

Jorge Macías Sanchez
Universidad de Málaga

Collaborating Institutions

UMA, INGV, NGI, IGN, PMEL/NOAA (with a role in the pilot's development).

Other institutions benefiting from use case results with which we collaborate:
IEO, IHC, IGME, IHM, CSIC, CCS, Junta de Andalucía (all Spain); Italian Civil Protection, Seismic network of Puerto Rico (US), SINAMOT (Costa Rica), SHOA and UTFSM (Chile), GEUS (Denmark), JRC (EC), University of Malta, INCOIS (India), SGN (Dominican Republic), UNESCO, NCEI/NOAA (US), ICG/NEAMTWS, ICG/CARIBE-EWS, among others.

Probabilistic Volcanic Hazard Assessment (PVHA)

 A Use Case by

Short description

PVHA methodologies provide a framework for assessing the likelihood that a given measure of the intensity of different volcanic phenomena, such as tephra loading on the ground, airborne ash concentration or pyroclastic flows, will be exceeded at a particular location within a given time period. This pilot deals with regional long- and short-term PVHA. Regional assessments are crucial for better land-use planning and for the risk mitigation counter-measures of civil protection authorities. Because of the computational cost required to adequately simulate volcanic phenomena, PVHA is mostly based on a single or very few selected reference scenarios. Independently of the degree of approximation of the numerical model used, PVHA for tephra loading and/or airborne ash concentration will necessitate a high number of tephra dispersion simulations (typically several thousand, in order to capture the variability in meteorological and volcanological conditions), each of which is moderately intensive. This pilot will comprise both long- and short-term probabilistic hazard assessment for volcanic tephra fallout, adopting and improving a recently proposed methodology (Sandri et al., 2016) able to capture aleatory and epistemic uncertainties. Long-term probabilistic hazard assessment for PDCs will also be envisaged, focusing on aleatory and epistemic uncertainties in the Eruptive Source Parameters. Since tephra fallout models also allow a consistent treatment of spatially and temporally variable wind fields and can describe phenomena such as ash aggregation, Exascale capacity will make it possible, for the first time, to spatially extend PVHA to evaluate the potential impact of all active volcanoes in Italy on the entire national territory.
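
In generic form (a sketch of the standard PVHA combination of terms, not quoted from Sandri et al., 2016), the hazard curve at a target location combines eruption, vent and size probabilities with the conditional exceedance probability estimated from the dispersal simulations:

P(I > I_0 \mid \Delta t) = \sum_{v} \sum_{s} P(\mathrm{eruption} \mid \Delta t)\, P(\mathrm{vent} = v \mid \mathrm{eruption})\, P(\mathrm{size} = s \mid v)\, P(I > I_0 \mid v, s) ,

where I is the intensity measure (e.g. tephra load) and \Delta t the time window; the last factor is where the thousands of dispersal simulations over the meteorological variability enter.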

Results & Achievements

The award of PRACE resources, in association with PD3 (High-resolution volcanic plume simulation) and PD12 (High-Resolution Volcanic Ash Dispersal Forecast), to run FALL3D simulations at the required target resolution and spatial domain.

A prototype version of PVHA_WF to process the simulations and produce hazard maps.

The application of PVHA_WF to the case of the Campi Flegrei volcano in southern Italy, as an illustrative example for 5, 6 and 7 December 2019, to show the proof of concept and feasibility.

Objectives

The objective of this use case is to provide innovative hazard maps with uncertainty, overcoming the current limits of PVHA imposed so far by the high computational cost required to adequately simulate complex volcanic phenomena (such as tephra dispersal) while fully exploring the natural variability associated with such phenomena, on a country-size domain (thousands of km) at high resolution (one to a few km).

Technologies

Workflow

PVHA_WF_st fetches the monitoring data (seismic and deformation) and, together with the configuration file of the volcano, calculates the eruption forecast (probability curves and vent opening positions); it then uses the output file from alphabeta_MPI.py to create the volcanic hazard probabilities and maps.

PVHA_WF_lt uses the configuration file of the volcano to calculate the eruption forecast and, together with the output file from alphabeta_MPI.py, creates the volcanic hazard probabilities and maps.

The meteorological data download process is fully automated: PVHA_WF_st and PVHA_WF_lt connect to the Climate Data Store (the Copernicus data server) and download the meteorological data associated with a specified analysis grid. These data are later used by FALL3D to compute tephra deposition.
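
A minimal sketch of such an automated download using the cdsapi client is shown below. The dataset name and request keys follow standard ERA5 usage; the variables, levels, grid and dates actually requested by PVHA_WF are configured per analysis and may differ.

# Automated meteorological download from the Copernicus Climate Data Store
# with the cdsapi client. The request shown here is illustrative; the actual
# PVHA_WF request is configured per analysis.
import cdsapi

c = cdsapi.Client()  # credentials read from ~/.cdsapirc
c.retrieve(
    "reanalysis-era5-pressure-levels",
    {
        "product_type": "reanalysis",
        "variable": ["u_component_of_wind", "v_component_of_wind",
                     "geopotential", "temperature"],
        "pressure_level": ["1000", "850", "700", "500", "300"],
        "year": "2019",
        "month": "12",
        "day": ["05", "06", "07"],
        "time": [f"{h:02d}:00" for h in range(0, 24, 6)],
        "area": [42, 13, 40, 15],   # N, W, S, E box around Campi Flegrei
        "format": "grib",
    },
    "era5_campi_flegrei.grib",
)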

Software involved

FALL3D

Use Case Owner

Laura Sandri
INGV Bologna

Collaborating Institutions

INGV
BSC
IMO