FocusCoE at EuroHPC Summit Week 2022

With the support of the FocusCoE project, almost all European HPC Centres of Excellence (CoEs) once again participated in the EuroHPC Summit Week (EHPCSW), held this year in Paris, France: the first in-person EHPCSW since the 2019 event in Poland. Hosted by the French HPC agency Grand équipement national de calcul intensif (GENCI), the conference was organised by the Partnership for Advanced Computing in Europe (PRACE), the European Technology Platform for High-Performance Computing (ETP4HPC), the EuroHPC Joint Undertaking (EuroHPC JU), and the European Commission (EC). As usual, this year’s event gathered the main European HPC stakeholders, from technology suppliers and HPC infrastructures to scientific and industrial HPC users in Europe.

At the workshop on the European HPC ecosystem on Tuesday 22 March at 14:45, where the diversity of the ecosystem was presented around the Infrastructure, Applications, and Technology pillars, project coordinator Dr. Guy Lonsdale from Scapos talked about FocusCoE and the CoEs’ common goal.

Later that day, from 16:30 until 18:00, the FocusCoE project hosted a session titled “European HPC CoEs: perspectives for a healthy HPC application eco-system and Exascale” involving most of the EU CoEs. The session discussed the key role of the CoEs in the EuroHPC application pillar, focussing on their impact on building a vibrant, healthy HPC application ecosystem and on perspectives for Exascale applications. As described by Dr. Andreas Wierse on behalf of EXCELLERAT, “The development is continuous. To prepare companies to make good use of this technology, it’s important to start early. Our task is to ensure continuity from using small systems up to the Exascale, regardless of whether the user comes from a big company or from an SME”.

Keen interest in the agenda was also evident, with attendees from HPC-related academia and industry filling the hall to standing room only. In light of the call for new EU HPC Centres of Excellence and the increasing return to in-person events like EHPCSW, this strong interest bodes well for preparing the EU for Exascale.

FocusCoE Hosts Intel OneAPI Workshop for the EU HPC CoEs

On March 2, 2022, FocusCoE hosted Intel for a workshop introducing the oneAPI development environment. In all, over 40 researchers representing the EU HPC Centres of Excellence (CoEs) were able to attend the single-day workshop to gain an overview of oneAPI. Eight presenters from Intel gave talks through the day covering the oneAPI vision, design, and toolkits, a use case with GROMACS (which is already used by some of the EU HPC CoEs), and specific tools for migration and debugging.

Launched in 2019, Intel’s oneAPI is a cross-industry, open, standards-based unified programming model designed to deliver a common developer experience across accelerator architectures. With the time saved designing for specific accelerators, oneAPI is intended to enable faster application performance, more productivity, and greater innovation. As summarised on Intel’s oneAPI website, “Apply your skills to the next innovation, and not to rewriting software for the next hardware platform.” Given the work that the EU HPC CoEs are currently doing to optimise codes for Exascale HPC systems, any tool that makes this process faster and more efficient can only boost the CoEs’ capacity for innovation and their preparedness for future heterogeneous systems.

The oneAPI industry initiative also encourages collaboration on the oneAPI specification and on compatible oneAPI implementations. To that end, Intel is investing time and expertise in events like this workshop to give researchers the knowledge they need not only to use oneAPI but also to help improve it. The presenters also make themselves available after the workshop to answer questions from attendees on an ongoing basis. Throughout our event, participants were able to ask questions and get real-time answers, as well as offers of further support, from software architects, technical consulting engineers, and the researcher who presented the use case. Lastly, the full video and slides from the presentations are available below for any CoEs who were unable to attend or would like a second look at the detailed presentations.

SIMAI 2021

11. October 2021

The 2020 edition of the biennial congress of the Italian Society of Applied and Industrial Mathematics (SIMAI) was held in Parma, hosted by the University of Parma, from August 30 to September 3, 2021, in a hybrid format (physical and online). The conference aimed to bring together researchers and professionals from academia and industry active in the study of mathematical and numerical models and their application to industrial and general real-life problems, to stimulate interdisciplinary research in applied mathematics, and to foster interactions between the scientific community and industry. Six plenary lectures covered a wide range of topics. In this edition, a poster session was also organised to broaden the opportunity to disseminate the results of interesting research. A large part of the conference was dedicated to minisymposia, organised autonomously around specific topics by their respective promoters. Furthermore, an Industrial Session gathered academic and industrial researchers, with more than 70 industry representatives, focusing on mathematical problems encountered in R&D.

Within the SIMAI conference, the minisymposium on HPC, “European High-Performance Scientific Computing: Opportunities and Challenges for Applied Mathematics”, was organised by ENEA within the framework of the FocusCoE project, and a contribution to the aforementioned Industrial Session was given through a talk on hydropower production modelling results obtained by the EoCoE project. In the minisymposium of 3 September (9:30-12:00), five presentations (20 min + 5 min Q&A) were given by CoE representatives: Ignacio Pagonabarraga (CoE: E-CAM), Alfredo Buttari (CoE: EoCoE), Francesco Buonocore (CoE: EoCoE), Pasqua D’Ambra (CoE: EoCoE; EuroHPC project: TEXTAROSSA), and Tomaso Esposti Ongaro (CoE: ChEESE). Around 25 participants attended throughout.

The FocusCoE contribution to the Industrial Session of September 1 (17:50-18:15) was a talk by Prof. Bruno Majone (CoE: EoCoE), presented by Andrea Galletti (CoE: EoCoE), titled “Detailed hydropower production modelling over large-scale domains: the HYPERstreamHS framework”.

CoEs at Teratec Forum 2021 and ISC21

15. June 2021

With the support of FocusCoE, a number of HPC CoEs will give short presentations at the virtual PRACE booth at two HPC-related events taking place towards the end of this month: Teratec Forum 2021 and ISC2021. See the schedule below for more details. Please reserve the slots in your calendars; registration details will be provided on the PRACE website soon!

“We are happy to see that FocusCoE was able to help the HPC CoEs to have a significant presence at this year’s editions of ISC and Teratec Forum, two major HPC events, enabled through our good synergies with PRACE”, says Guy Lonsdale, FocusCoE coordinator.

 

Teratec Forum 2021 schedule

Date | Time slot (CEST) | Title | Speaker | Organisation
Tue 22 June | 11:00 – 11:15 | EoCoE-II: Towards exascale for Energy | Edouard Audit, EoCoE-II coordinator | CEA (France)
Tue 22 June | 14:30 – 14:45 | POP CoE: Free Performance Assessments for the HPC Community | Bernd Mohr | Jülich Supercomputing Centre
Thu 24 June | 13:45 – 14:00 | EXCELLERAT – paving the way for the evolution towards Exascale | Amgad Dessoky / Sophia Honisch | HLRS

ISC 2021 schedule

Date | Time slot (CEST) | Title | Speaker | Organisation
Thu 24 June | 13:45 – 14:00 | EXCELLERAT – paving the way for the evolution towards Exascale | Amgad Dessoky / Sophia Honisch | HLRS
Fri 25 June | 11:00 – 11:15 | The Center of Excellence for Exascale in Solid Earth (ChEESE) | Alice-Agnes Gabriel | Geophysik, University of Munich
Fri 25 June | 15:30 – 15:45 | EoCoE-II: Towards exascale for Energy | Edouard Audit, EoCoE-II coordinator | CEA (France)
Tue 29 June | 11:00 – 11:15 | Towards a maximum utilization of synergies of HPC Competences in Europe | Bastian Koller | HLRS
Wed 30 June | 10:45 – 11:00 | CoE RAISE: Bringing AI to Exascale | Dr.-Ing. Andreas Lintermann | Jülich Supercomputing Centre, Forschungszentrum Jülich GmbH
Thu 1 July | 11:00 – 11:15 | POP CoE: Free Performance Assessments for the HPC Community | Bernd Mohr | Jülich Supercomputing Centre
Thu 1 July | 14:30 – 14:45 | TREX: an innovative view of HPC usage applied to Quantum Monte Carlo simulations | Anthony Scemama (1), William Jalby (2), Cedric Valensi (2), Pablo de Oliveira Castro (2) | (1) Laboratoire de Chimie et Physique Quantiques, CNRS-Université Paul Sabatier, Toulouse, France; (2) Université de Versailles St-Quentin-en-Yvelines, Université Paris Saclay, France

Please register for the short presentations through the PRACE event pages:

PRACE Virtual booth at Teratec Forum 2021: prace-ri.eu/event/teratec-forum-2021/
PRACE Virtual booth at ISC2021: prace-ri.eu/event/praceisc-2021/

List of innovations by the CoEs, spotted by the EU Innovation Radar

The EU Innovation Radar aims to identify high-potential innovations and innovators. It is an important source of actionable intelligence on innovations emerging from research and innovation projects funded through European Union programmes. 
 
These are the innovations from the HPC Centres of Excellence as spotted by the EU innovation radar:
 

Title: GROMACS, a versatile package to perform molecular dynamics
Market maturity: Exploring
Project: BioExcel
Innovation Topic: Excellent Science
KUNGLIGA TEKNISKA HOEGSKOLAN - SWEDEN


Title: Urgent Computing services for the impact assessment in the immediate aftermath of an earthquake
Market maturity: Tech Ready
Market creation potential: High
Project: ChEESE
Innovation Topic: Excellent Science
EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH - SWITZERLAND
BULL SAS - FRANCE


Title: New coupled earth system model
Market maturity: Tech Ready
Project: ESiWACE
Innovation Topic: Excellent Science
BULL SAS - FRANCE
MET OFFICE - UNITED KINGDOM
EUROPEAN CENTRE FOR MEDIUM-RANGE WEATHER FORECASTS - UNITED KINGDOM
 


Title: In-Situ Analysis of CFD Simulations
Market maturity: Tech Ready
Market creation potential: High
Project: Excellerat
Innovation Topic: Excellent Science
KUNGLIGA TEKNISKA HOEGSKOLAN - SWEDEN
FRAUNHOFER GESELLSCHAFT ZUR FOERDERUNG DER ANGEWANDTEN FORSCHUNG E.V. - GERMANY

Title: Interactive in situ visualization in VR
Market maturity: Tech Ready
Market creation potential: High
Project: Excellerat
Innovation Topic: Excellent Science
UNIVERSITY OF STUTTGART - GERMANY

Title: Machine Learning Methods for Computational Fluid Dynamics (CFD) Data
Market maturity: Tech Ready
Market creation potential: Noteworthy
Project: Excellerat
Innovation Topic: Excellent Science
KUNGLIGA TEKNISKA HOEGSKOLAN - SWEDEN
FRAUNHOFER GESELLSCHAFT ZUR FOERDERUNG DER ANGEWANDTEN FORSCHUNG E.V. - GERMANY


Title: Quantum Simulation as a Service
Market maturity: Exploring
Market creation potential: Noteworthy
Project: MaX
Innovation Topic: Excellent Science
EIDGENOESSISCHE TECHNISCHE HOCHSCHULE ZUERICH - SWITZERLAND
CINECA CONSORZIO INTERUNIVERSITARIO - ITALY


Watch The Presentations Of The First CoE Joint Technical Workshop

22. February 2021

Watch the recordings of the presentations from the first technical CoE workshop. The virtual event was organised by the three HPC Centres of Excellence ChEESE, EXCELLERAT and HiDALGO. The agenda for the workshop was structured into these four sessions:

Session 1: Load balancing
Session 2: In situ and remote visualisation
Session 3: Co-Design
Session 4: GPU Porting

You can also download a PDF version for each of the recorded presentations. The workshop took place on January 27 – 29, 2021. 

Session 1: Load balancing

Title: Introduction by chairperson
Speaker: Ricard Borell (BSC)

Title: Intra and inter-node load balancing in Alya
Speaker: Marta Garcia and Ricard Borell (BSC)

Title: Load balancing strategies used in AVBP
Speaker: Gabriel Staffelbach (CERFACS)

Title: Addressing load balancing challenges due to fluctuating performance and non-uniform workload in SeisSol and ExaHyPE
Speaker: Michael Bader (TUM)

Title: On Discrete Load Balancing with Diffusion Type Algorithms
Speaker: Robert Elsäßer (PLUS)

Session 2: In situ and remote visualisation

Title: Introduction by chairperson
Speaker: Lorenzo Zanon & Anna Mack (HLRS)

 

Title: An introduction to the use of in-situ analysis in HPC
Speaker: Miguel Zavala (KTH)

Title: In situ visualisation service in Prace6IP
Speaker: Simone Bnà (CINECA)

Title: Web-based Visualisation of air pollution simulation with COVISE
Speaker: Anna Mack (HLRS)

Title: Virtual Twins, Smart Cities and Smart Citizens
Speaker: Leyla Kern, Uwe Wössner, Fabian Dembski (HLRS)

Title: In-situ simulation visualisation with Vistle
Speaker: Dennis Grieger (HLRS)

Session 3: Co-Design

Title: Introduction by chairperson, and Excellerat’s Co-Design Methodology
Speaker: Gavin Pringle (EPCC)

Title: Accelerating codes on reconfigurable architectures
Speaker: Nick Brown (EPCC)

Title: Benchmarking of Current Architectures for Improvements
Speaker: Nikela Papadopoulou (ICCS)

Title: Example Co-design Approach with the Seissol and Specfem3D Practical cases
Speaker: Georges-Emmanuel Moulard (ATOS)

Title: Exploitation of Exascale Systems for Open-Source Computational Fluid Dynamics by Mainstream Industry
Speaker: Ivan Spisso (CINECA)

Session 4: GPU Porting

Title: Introduction
Speaker: Giorgio Amati (CINECA)

Title: GPU Porting and strategies by Excellerat
Speaker: Ivan Spisso (CINECA)

Title: GPU Porting and strategies by ChEESE
Speaker: Piero Lanucara (CINECA)

Title: GPU porting by third party library
Speaker: Simone Bnà (CINECA)

Title: The HySEA GPGPU development and its role in ChEESE project
Speaker: Marc de la Asunción (UMA)

Video of the Week: ChEESE Women in Science

11. February 2021
ChEESE celebrates the International Day of Women and Girls in Science 2021 by interviewing several of its women researchers. This video acknowledges their contributions and recognises their importance to earth sciences and to science in general.

Geomagnetic forecasts

A Use Case by ChEESE

Short description

The Earth’s magnetic field is sustained by a fluid dynamo operating in the Earth’s fluid outer core. Its geometry and strength define the equivalent of the climatological mean over which the interaction of the Earth with its magnetic environment takes place. It is consequently important to make physics-based predictions of the evolution of the dynamo field over the next few decades. In addition, the geomagnetic field has the remarkable ability to reverse its polarity every now and then (the last reversal occurred some 780,000 years ago). Observations of the properties of the field during polarity transitions are sparse, and ultra-high-resolution simulations should help better define these properties.

Objectives

To simulate and analyse the consequences of geomagnetic reversals with an unprecedented level of accuracy. These events are extremely rare in the history of our planet, hence the need to resort to numerical simulations to better understand the properties of reversals and their possible consequences for society.

Technologies

Workflow

XSHELLS produces simulated reversals, which are subsequently analysed and assessed using a parallel Python processing chain. Through ChEESE we are working to orchestrate this workflow using the WMS_light software developed within the ChEESE consortium.
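In the spirit of this chain, the sketch below wires a simulation step to its post-processing step in a simple sequential driver. It is purely illustrative: WMS_light's actual interface is internal to the ChEESE consortium, and the "simulation" here is a toy stand-in for an XSHELLS run, producing a fake time series of the axial dipole coefficient g10.

```python
# Toy sketch of chaining a simulated-reversal run with its Python
# post-processing. All names are illustrative stand-ins, not the real
# XSHELLS or WMS_light interfaces.

def simulate_reversal(params):
    """Stand-in for an XSHELLS run: return a fake g10 time series that
    ramps from +1 to -1, i.e. a single polarity reversal."""
    n = params["n_steps"]
    return [1.0 - 2.0 * i / (n - 1) for i in range(n)]

def detect_reversal(g10_series):
    """Post-processing step: index of the first sign change in g10,
    or None if no reversal occurs in the series."""
    for i in range(1, len(g10_series)):
        if g10_series[i - 1] > 0 >= g10_series[i]:
            return i
    return None

def run_workflow(params):
    """Sequential pipeline: simulate, then analyse."""
    return detect_reversal(simulate_reversal(params))

print(run_workflow({"n_steps": 11}))
```

A real orchestration layer would additionally handle job submission, restarts, and data movement between the simulation and analysis stages; the point here is only the simulate-then-analyse structure.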

Software involved

XSHELLS code 

Post-processing: Python 3

External library: SHTns

Use Case Owner

Alexandre Fournier
Institut de Physique du Globe de Paris (IPGP)

Collaborating Institutions

IPGP, CNRS

Physics-Based Probabilistic Seismic Hazard Assessment (PSHA)

A Use Case by ChEESE

Short description

Probabilistic Seismic Hazard Assessment (PSHA) is widely established for deciding safety criteria, making official national hazard maps, developing building code requirements, ensuring the safety of critical infrastructure (e.g. nuclear power plants), and determining earthquake insurance rates by governments and industry. However, PSHA currently rests on empirical, time-independent assumptions that are known to be too simplistic and that conflict with earthquake physics. The resulting deficits become apparent when damaging earthquakes occur in regions rated as low-risk by PSHA hazard maps, and near-fault effects from rupture on extended faults are not taken into account. Combined simulations of dynamic fault rupture and seismic wave propagation are crucial tools to shed light on the poorly constrained processes of earthquake faulting. Realistic model setups should acknowledge topography, 3D geological structures, rheology, and fault geometries with appropriate stress and frictional parameters, all of which contribute to complex ground motion patterns. A fundamental challenge here is to model the high-frequency content of the three-dimensional wave field, since the frequency range of 0–10 Hz is of pivotal importance for engineering purposes. Multiple executions of such multi-physics simulations need to be performed to provide a probabilistic hazard estimation.

Results & Achievements

Fault models built for both north and south Iceland.

Fully non-linear dynamic simulations accounting for 3D velocity structures, topography, off-fault plasticity, and model parameter uncertainties, achieving the target resolution.

CyberShake implemented successfully, with a demo run for south Iceland.

Rupture probabilities generated using SHERIFS.

GMPE-based hazard curves and maps produced with OpenQuake.

About the SeisSol code:

Extended the YATeTo DSL to generate GPU GEMM kernels.

Developed a Python library as a GEMM backend for YATeTo.

Adapted both SeisSol and YATeTo for batched computations.

Implemented the elastic solver: time, local, and neighbour integrals.

Both the GTS and LTS schemes are working.

Enabled a distributed multi-GPU setup.

Implemented the plasticity kernel (still to be updated).

Tested performance on a multi-GPU distributed cluster (Marconi100).

Merged the first stage from the experimental to the production code.

As a result, we obtained a 23% time reduction with respect to the GPU-only execution. In practice, this represents a performance boost equivalent to attaching an additional GPU per node and thus a much more efficient exploitation of the resources.

Objectives

The objective of this use case is to develop general concepts for enabling physics-based seismic hazard assessment with state-of-the-art multi-physics earthquake simulation software (SeisSol, SpecFEM3D, ExaHyPE, AWP-ODC) and to conduct 3D physics-based seismic simulations that improve PSHA for validation scenarios provided by IMO (Iceland) and beyond. This use case is expected to supplement the methods established by stakeholders, for different target regions and varying degrees of complexity.

Technologies

Workflow

The workflow of this pilot is shown in Figure 1.

The SeisSol code is used to run fully non-linear dynamic rupture simulations, accounting for various fault geometries, 3D velocity structures, off-fault plasticity, and model parameter uncertainties, in order to build a fully physics-based dynamic rupture database of mechanically plausible scenarios.

Then the linked post-processing Python codes are used to extract ground shaking measures (PGD, PGV, PGA, and SA at different periods) from the surface output of the SeisSol simulations to build a ground-shaking database.

SHERIFS uses a logic-tree method: taking the fault-to-fault ruptures from the dynamic rupture database as input, it converts slip rates into annual seismic rates given the geometry of the fault system.

With the rupture probability estimates from SHERIFS and the ground shakings from the SeisSol simulations, we can generate hazard curves for selected site locations and hazard maps for the study region.

In addition, OpenQuake can use physics-based ground motion models and prediction equations established from the ground-shaking database of fully dynamic rupture simulations, and CyberShake, which is based on kinematic simulations, can be used to perform PSHA and complement the fully physics-based PSHA.
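The final aggregation step of this chain, combining annual rupture rates with the simulated ground shakings at a site to obtain a hazard curve, can be sketched as follows. This is a minimal toy illustration of the standard PSHA summation, not ChEESE code, and the numbers are invented for the example.

```python
# Toy hazard-curve aggregation: for each ground-motion level, sum the
# annual rates of all scenarios whose simulated shaking at the site
# exceeds that level. Real PSHA additionally integrates over ground
# motion variability; this sketch uses one deterministic PGA per scenario.

def hazard_curve(scenarios, pga_levels):
    """scenarios: list of (annual_rate, pga_at_site) pairs, with rate in
    1/yr and PGA in g. Returns the annual exceedance rate per level."""
    return [sum(rate for rate, pga in scenarios if pga > level)
            for level in pga_levels]

# Invented example database: three scenarios with decreasing likelihood
# and increasing shaking.
scenarios = [(0.01, 0.35), (0.002, 0.60), (0.0005, 1.10)]  # (1/yr, g)
levels = [0.1, 0.5, 1.0]
print(hazard_curve(scenarios, levels))
```

Plotting the exceedance rates against the levels gives the familiar monotonically decreasing hazard curve for the site.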

Software involved

SeisSol (LMU)

ExaHyPE (TUM)

AWP-ODC (SCEC)

SHERIFS (Fault2SHA and GEM)

sam(oa)² (TUM) 

OpenQuake (GEM): https://github.com/gem/oq-engine 

Pre-processing:

Mesh generation tools: Gmsh (open source), Simmetrix/SimModeler (free for academic institutions), PUMGen

Post-processing & Visualization: Paraview, python tools 

Use Case Owner

Alice-Agnes Gabriel
Ludwig Maximilian University of Munich (LMU)

Collaborating Institutions

IMO, BSC, TUM, INGV, SCEC, GEM, FAULT2SHA, Icelandic Civil Protection, Italian Civil Protection

Faster Than Real-Time Tsunami Simulations

A Use Case by ChEESE

Short description

Faster-than-real-time (FTRT) tsunami computations are crucial in the context of Tsunami Early Warning Systems (TEWS). Greatly improved, highly efficient computational methods are the first ingredient for extremely fast and effective calculations; high-performance computing facilities then bring this efficiency to the maximum possible while drastically reducing computational times. This use case comprises both earthquake and landslide sources. Earthquake tsunami generation is somewhat simpler than landslide tsunami generation, since landslide-generated tsunamis depend on the landslide dynamics, which necessitates coupling dynamic landslide simulation models to the tsunami propagation. In both cases, FTRT simulations in several contexts and configurations are the final aim of this use case.

Results & Achievements

Improvements to the HySEA codes in ChEESE:

We improved the load balancing algorithm. In particular, we added support for heterogeneous GPUs by assigning a numerical weight to each GPU.

We developed a new algorithm for processing the nested meshes based on the current state values, and implemented activation of nested-mesh processing when water movement is detected in their area.

We implemented asynchronous file writing by creating an additional thread for each MPI process using C++11 threads (see Table 1 at the end of this document).

We added the possibility of resuming a stored simulation.

We added sponge layers for better treatment of the boundary conditions, in order to avoid possible numerical instabilities at the borders of the domain.

We implemented asynchronous CPU-GPU memory transfers.

We dramatically reduced the size of the output files by compressing the data with the algorithm described in Tolkova (2008) and saving most of the data in single precision. A new version of Tsunami-HySEA has been developed to run simultaneous simulations on the same domain, meeting the requirements of PD7 and PD8, by executing one simulation on each GPU. This new version can use up to 1024 GPUs simultaneously with very good weak scaling (losing around 3% efficiency).

With these improvements, we obtain around a 30% reduction in computational time with respect to the previous version of the codes.

The codes have been tested on CTE-POWER (BSC), DAVIDE and Marconi100 (CINECA) and Piz Daint (CSCS) supercomputers.
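The heterogeneous-GPU load balancing idea described above can be sketched as follows: split the rows of the computational domain among GPUs in proportion to a per-GPU weight (e.g. measured throughput). This is a minimal illustration assuming a 1D row decomposition; the actual Tsunami-HySEA balancer is more elaborate (nested meshes, runtime-measured weights).

```python
# Weighted 1D domain decomposition: each GPU gets a contiguous block of
# rows proportional to its weight, so a GPU twice as fast gets twice
# the work.

def partition_rows(n_rows, weights):
    """Return a (start, count) pair per GPU, proportional to weights."""
    total = sum(weights)
    counts = [int(n_rows * w / total) for w in weights]
    # Distribute rows lost to integer truncation to the heaviest GPUs.
    remainder = n_rows - sum(counts)
    order = sorted(range(len(weights)), key=lambda i: -weights[i])
    for i in order[:remainder]:
        counts[i] += 1
    starts, s = [], 0
    for c in counts:
        starts.append(s)
        s += c
    return list(zip(starts, counts))

# One GPU twice as fast as the other two: it receives half the rows.
print(partition_rows(1000, [2.0, 1.0, 1.0]))
```

Every row is assigned exactly once, so the decomposition covers the domain with no overlap regardless of how the weights divide the row count.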

Objectives

The aim of this use case is to provide robust and very efficient numerical codes for FTRT Tsunami simulations that can be run in massively parallel multi-GPU architectures.

Technologies

Workflow

The Faster-Than-Real-Time (FTRT) prototype for extremely fast and robust tsunami simulations is based upon GPU/multi-GPU (NVIDIA) architectures and is able to use earthquake information from different locations and with heterogeneous content (full Okada parameter set, or hypocenter and magnitude plus Wells and Coppersmith (1994)). Using these inhomogeneous inputs, and according to the FTRT workflow (see Fig. 1), tsunami computations are launched for a single scenario or a set of related scenarios for the same event. Basically, the automatically retrieved earthquake information is sent to the system and on-the-fly simulations are launched, so several scenarios are computed at the same time. As updated information about the source is provided, new simulations are launched. As output, several options are available, tailored to the end-user needs: sea surface height and its maximum, simulated isochrones and arrival times at the coastal areas, estimated tsunami coastal wave height, time series at Points of Interest (POIs), and oceanographic sensors.
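The fan-out step, turning one incoming alert with only hypocentre and magnitude into a set of candidate fault scenarios, can be sketched as below. The scaling relations are the commonly cited all-slip-type regressions of Wells and Coppersmith (1994) for subsurface rupture length and downdip width; treat the exact coefficients as illustrative and verify them against the paper before any real use. The dispatch structure (one scenario per simulation/GPU) is likewise a simplified illustration, not the HySEA code.

```python
# Illustrative FTRT fan-out: complete a minimal source description into
# several candidate scenarios spanning the magnitude uncertainty.

def rupture_dims(magnitude):
    """Subsurface rupture length and width (km) from moment magnitude,
    via Wells & Coppersmith (1994) all-slip-type regressions (quoted
    from the literature; coefficients are illustrative here)."""
    length_km = 10 ** (-2.44 + 0.59 * magnitude)
    width_km = 10 ** (-1.01 + 0.32 * magnitude)
    return length_km, width_km

def build_scenarios(hypocentre, magnitude, dm=0.2):
    """One scenario per magnitude sample; each would be launched as a
    separate on-the-fly simulation."""
    scenarios = []
    for m in (magnitude - dm, magnitude, magnitude + dm):
        length_km, width_km = rupture_dims(m)
        scenarios.append({"hypocentre": hypocentre, "magnitude": m,
                          "length_km": length_km, "width_km": width_km})
    return scenarios

# Hypothetical alert: (lat, lon, depth_km) and magnitude 7.0.
print(len(build_scenarios((63.9, -20.5, 10.0), 7.0)))
```

When updated source information arrives, the same fan-out is simply re-run with the revised parameters and a fresh batch of simulations is launched.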

A first successful implementation has been carried out for the Emergency Response Coordination Centre (ERCC), a service provided by the ARISTOTLE-ENHSP Project. The system implemented for ARISTOTLE follows the general workflow presented in Figure 1. Currently, in this system, a single source is used to assess the hazard and the computational grids are predefined. The computed wall-clock time is provided for each experiment, and the outputs of the simulation are the maximum water height and arrival times over the whole domain, and water-height time series at a set of selected POIs, predefined for each domain.

A library of Python codes is used to generate the input data required to run the HySEA codes, and to extract the topo-bathymetric data and construct the grids used by them.
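The grid-construction step can be sketched as follows: cut a sub-grid for the simulation domain out of a larger topo-bathymetric grid. This is a generic illustration, not the actual HySEA pre-processing code; real inputs are NetCDF files, while plain lists are used here to keep the example self-contained.

```python
# Crop a rectangular sub-grid (and its coordinate axes) out of a larger
# topo-bathymetric grid, given longitude and latitude ranges.

def crop_grid(grid, lons, lats, lon_range, lat_range):
    """grid[i][j] is the depth/elevation at (lats[i], lons[j]).
    Return (sub_grid, sub_lons, sub_lats) inside the given ranges."""
    ii = [i for i, la in enumerate(lats) if lat_range[0] <= la <= lat_range[1]]
    jj = [j for j, lo in enumerate(lons) if lon_range[0] <= lo <= lon_range[1]]
    sub = [[grid[i][j] for j in jj] for i in ii]
    return sub, [lons[j] for j in jj], [lats[i] for i in ii]

# Small synthetic grid: value encodes its (row, column) position.
grid = [[r * 10 + c for c in range(5)] for r in range(4)]
sub, slons, slats = crop_grid(grid, [0, 1, 2, 3, 4], [10, 11, 12, 13],
                              (1, 3), (11, 12))
print(slons, slats)
```

The same indexing pattern applies when the grid lives in a NetCDF variable: the selected index lists become slices passed to the variable's read call.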

Software involved

Tsunami-HySEA has been successfully tested with the following tools and versions:

Compilers: GNU C++ compiler 7.3.0 or 8.4.0, OpenMPI 4.0.1, Spectrum MPI 10.3.1, CUDA 10.1 or 10.2

Management tools: CMake 3.9.6 or 3.11.4

External/third party libraries: NetCDF 4.6.1 or 4.7.3, PnetCDF 1.11.2 or 1.12.0

Pre-processing:

Nesting mesh generation tools.

In-house developed python tools for pre-processing purposes. 

Visualization tools:

In-house developed python tools.

Use Case Owner

Jorge Macías Sanchez
Universidad de Málaga

Collaborating Institutions

UMA, INGV, NGI, IGN, PMEL/NOAA (with a role in the pilot's development).

Other institutions benefiting from use case results with which we collaborate:
IEO, IHC, IGME, IHM, CSIC, CCS, Junta de Andalucía (all Spain); Italian Civil Protection, Seismic network of Puerto Rico (US), SINAMOT (Costa Rica), SHOA and UTFSM (Chile), GEUS (Denmark), JRC (EC), University of Malta, INCOIS (India), SGN (Dominican Republic), UNESCO, NCEI/NOAA (US), ICG/NEAMTWS, ICG/CARIBE-EWS, among others.