Increasing the accuracy of simulations in the automotive field


Short description

For complex flow simulations, a priori knowledge of the physics and the flow regimes is not always available, so generating an optimal mesh is a tedious, time-consuming process associated with a high computational cost. The use of goal-driven, a posteriori, adjoint-based error estimation can drive an adaptive process resulting in a final optimal mesh. The benefits of an optimal mesh are seen in the increased accuracy of numerical simulation results, e.g. for the evaluation of drag or acoustic noise in the automotive and aeronautical fields. By using error estimation and adaptivity, a fully automated process can be established, involving an iterative workflow between mesh generation, simulation, result evaluation and CAD model morphing.

Results & Achievements

The automated simulation methods described above have been extensively used in academia and have recently gained interest from both independent software vendors (ISVs) and industry. The increasing computational complexity of industrial applications urges the scientific community to provide cutting-edge methods, backed by solid HPC capabilities, that deliver reliable solutions in affordable time. Industrial users are focused on solving engineering problems and are typically not computing experts. The challenging size of real-case problems, with meshes of several million elements, requires that codes run smoothly on Exascale systems. The coupling of Unicorn HPC and FEniCS HPC provides an Exascale-ready framework, with built-in parallelisation of the FEM (Finite Element Method) assembly phase, mesh adaptation and linear algebra solvers. Our effort focuses on improving the performance and robustness of the HPC solution and on closing the gap between academia and industry by testing the code on real-case applications. In this context, the joint effort of core code developers and use case owners is addressing the ease of the installation process, the enrichment of the engineering-relevant quantities extracted from the solution, the improvement of code stability, the definition of an optimal meshing strategy and the introduction of a drag-driven morphing capability. Preliminary solutions have been obtained so far for increasingly complex models of the car.

Objectives

The aim is to use a posteriori error estimation to drive both mesh adaptation and CAD morphing in an iterative process that produces an optimal design for a given output of interest. Our strategy is based on Unicorn HPC, a finite element CFD solver built on top of the FEniCS HPC code. It computes an approximation of a weak solution of the incompressible Navier-Stokes equations and comes with a built-in a posteriori, adjoint-based error estimation strategy used to drive adaptive mesh refinement, increasing resolution only in regions of interest. Through the adjoint method it is possible to evaluate the sensitivity of a desired scalar output to a change in the solution without explicitly recomputing the solution. The scalar quantity at hand can be a physical quantity of interest, e.g. the drag, or the norm of the error of the computed solution, related to the mesh size. We are thus applying the adjoint-based techniques implemented in the code for mesh adaptation to enable drag-reduction-based morphing of the geometry model.
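To make this concrete, a schematic dual-weighted residual estimate of the kind such adjoint-based frameworks build on (written here in generic LaTeX form, not as the exact expression implemented in Unicorn HPC) is

    J(u) - J(u_h) \;\approx\; \sum_{K \in \mathcal{T}_h} \big( R(u_h),\, z - \pi_h z \big)_K

where J is the output of interest (e.g. the drag), u_h the computed solution, R(u_h) the Navier-Stokes residual on cell K, z the adjoint (dual) solution associated with J, and \pi_h z its finite element interpolant. Cells contributing most to the sum are marked for refinement, so resolution is added only where it actually reduces the error in J.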

Technologies

Use Case Owner

Collaborating Institutions

EXCELLERAT: Enabling parallel mesh adaptation with Treeadapt


Short description

CFD remains highly dependent on mesh quality. Advanced meshing software is generally limited to sequential or shared-memory architectures. As a result, generating highly complex grids takes tens of hours and depends strongly on user experience. Refinement zones are also bounded by standard geometrical shapes. To bypass these bottlenecks, codes have turned to mesh adaptation as a solution, but massively parallel mesh adaptation workflows remain scarce and require efficient load balancing, interpolation and remeshing techniques.

Results & Achievements

So far, simulations using AVBP have relied on static meshes generated by commercial software. This tied the quality of the results to the experience and insight of the user performing the simulation. Furthermore, these mesh generation tools are mostly non-parallel and require days to mesh complex cases.

Recent work on efficient load balancing within the EPEEC project allowed CERFACS to use the open-source library Treepart to create a new tool/library called Treeadapt that enables massively parallel mesh adaptation. Treeadapt uses Treepart's mesh partitioning and load balancing algorithms, based on the ZOLTAN library, to decompose the domain efficiently while taking into account the architectural intricacies of the system it is running on. These load balancing algorithms, coupled with MMG, allow faster and more efficient remeshing.

Treeadapt has since been used to improve the simulation of BKD, a 42-injector rocket engine demonstrator, within the PRACE project on the prediction of combustion instabilities in liquid rocket engines (RockDyn). In this simulation, carried out in collaboration with EM2C Centrale Supelec (T. Schmitt), a mesh of one billion tetrahedra was generated in less than 30 minutes using 4,096 AMD Epyc 2 cores, compared to 70 hours with a standard meshing tool.

Objectives

Building on previous experience with the INRIA MMG library and on a partnership with the EPEEC project, during which an efficient mesh partitioning and load balancing library called Treepart was developed, EXCELLERAT is developing a new application/library called Treeadapt for massively parallel mesh adaptation. Currently operating on fully tetrahedral grids, Treeadapt generates a first partitioned domain on which MMG can be called independently per subdomain while freezing the parallel interfaces. An iterative rebalancing and adaptation process then runs until the whole domain is within a user-provided tolerance with regard to the error estimator (for example, the gradient of the density or the Hessian of the velocity); a sketch of such an estimator follows below.
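As an illustration of the kind of Hessian-based error estimator mentioned above, the sketch below (our own illustrative code, not Treeadapt's implementation) converts per-vertex Hessians of a flow quantity into the symmetric positive definite metric tensors that anisotropic remeshers such as MMG accept as a size prescription:

    import numpy as np

    def hessian_metric(hessians, eps, h_min=1e-4, h_max=1.0):
        """Build one SPD metric tensor per vertex from the Hessian of the solution.
        eps is the target interpolation error; h_min/h_max bound the edge sizes."""
        metrics = []
        for H in hessians:
            lam, vec = np.linalg.eigh((H + H.T) / 2)            # symmetrise, diagonalise
            lam = np.clip(np.abs(lam) / eps, 1.0 / h_max**2, 1.0 / h_min**2)
            metrics.append(vec @ np.diag(lam) @ vec.T)          # reassemble SPD tensor
        return np.array(metrics)                                # hand over to the remesher

    # Example: metric tensors for two vertices of a 3D mesh
    H = np.random.default_rng(1).normal(size=(2, 3, 3))
    print(hessian_metric(H, eps=0.01).shape)                    # (2, 3, 3)

Eigenvalues of the metric prescribe short edges where the solution curves strongly and long edges where it is smooth, which is what makes the adapted mesh anisotropic.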

Technologies

AVBP code
Treeadapt

Use Case Owner

Collaborating Institutions

EXCELLERAT blog post: Screening the coding style of Large Fortran HPC Codes

3 February 2021
Want to learn more about how to scan large legacy Fortran HPC codes to improve their coding style? Read the latest EXCELLERAT blog post about Flinter, a tool developed by EXCELLERAT for maintaining its core codes on the way to the Exascale level.

Smart platform for predicting COVID-19 healthcare system demands

4 January 2021
EXCELLERAT has developed an intelligent data transfer platform that helps researchers predict the demand for intensive care units during the coronavirus pandemic.

EXCELLERAT Newsletter #4 published

18 December 2020
Read the latest issue of the EXCELLERAT CoE newsletter. It features a success story on improved simulations in the automotive field and a report on the digital CAE conference.

How EU projects work on supercomputing applications to help contain the coronavirus pandemic

The Centres of Excellence in high-performance computing are working to improve supercomputing applications in many different areas: from life sciences and medicine to materials design, from weather and climate research to global system science. A hot topic that affects many of the above-mentioned areas is, of course, the fight against the coronavirus pandemic.

There are rather obvious challenges for those EU projects that are developing HPC applications for simulations in medicine or the life sciences, like CompBioMed (Biomedicine), BioExcel (Biomolecular Research) and PerMedCoE (Personalized Medicine). But other projects, from scientific areas that you would at first sight not directly relate to research on the pandemic, are also developing and using appropriate applications to model the virus and its spread, and to support policy makers with computing-heavy simulations. For example, did you know that researchers can simulate the possible spread of the virus on a local level, taking into account measures like closing shops or quarantining residents?

This article gives an overview of the various ways in which EU projects are using supercomputing applications to tackle the global challenge of containing the pandemic.

Simulations for better and faster drug development

CompBioMed is an EU-funded project working on applications for computational biomedicine. It is part of a vast international consortium across Europe and the USA working on urgent coronavirus research. “Modelling and simulation is being used in all aspects of medical and strategic actions in our fight against coronavirus. In our case, it is being harnessed to narrow down drug targets from billions of candidate molecules to a handful that can be clinically trialled”, says Peter Coveney from University College London (UCL), who is heading CompBioMed’s efforts in this collaboration. The goal is to accelerate the development of antiviral drugs by modelling proteins that play critical roles in the virus life cycle in order to identify promising drug targets.

Secondly, for drug candidates already in use or in trials, the CompBioMed scientists are modelling and analysing the toxic effects that these drugs may have on the heart, using the supercomputing resources required to run simulations at such scales. The goal is to assess drug dosages and potential interactions between drugs to provide guidance for their use in the clinic.

Finally, the project partners analysed a model used to inform the UK Government’s response to the pandemic. It was found to contain a large degree of uncertainty in its predictions, leading it to seriously underestimate the first wave. “Epidemiological modelling has been and continues to be used for policy-making by governments to determine healthcare interventions”, says Coveney. “We have investigated the reliability of such models using the HPC methods required to truly understand the uncertainty and sensitivity of these models.” The researchers conclude that a better public understanding of the inherent uncertainty of models predicting COVID-19 mortality rates is necessary: such models should be regarded as “probabilistic” rather than relied upon to produce a particular, specific outcome.
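To illustrate what “probabilistic” means in practice, the sketch below (an illustrative toy of ours, not code from the UCL study) propagates uncertainty in the reproduction number R0 through the classic epidemic final-size relation z = 1 - exp(-R0 z) by Monte Carlo sampling, so the model's answer is an interval rather than a single number:

    import numpy as np

    rng = np.random.default_rng(0)
    r0 = rng.normal(2.5, 0.5, 10_000).clip(1.01, None)   # uncertain R0 (illustrative values)

    z = np.full_like(r0, 0.5)
    for _ in range(200):                                 # fixed-point iteration of z = 1 - exp(-R0 z)
        z = 1.0 - np.exp(-r0 * z)

    print(f"final attack rate: median {np.median(z):.2f}, "
          f"90% interval [{np.quantile(z, 0.05):.2f}, {np.quantile(z, 0.95):.2f}]")

Even this toy shows how a modest spread in one input parameter translates into a wide range of predicted outcomes.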

Image of SuperMUC-NG, supercomputer at the Leibniz Supercomputing Centre of the Bavarian Academy of Sciences, consortium member in the CompBioMed project. (c) MMM/LRZ

BioExcel is an EU-funded project developing some of the most popular applications for modelling and simulations of biomolecular systems. Along with code development, the project builds training programmes to address competence gaps in extreme-scale scientific computing for beginners, advanced users and system maintainers.

When COVID-19 struck, BioExcel launched a series of actions to support the community in SARS-CoV-2 research, with an extensive focus on facilitating collaborations, user support and access to HPC resources at partner centres. BioExcel partnered with the Molecular Sciences Software Institute to establish the COVID-19 Molecular Structure and Therapeutics Hub, which allows researchers to deposit their data and review other groups’ submissions.

During this period there was an urgent demand for diagnostics, and sharing data for COVID-19 applications became more vital than ever. A dedicated BioExcel-CV19 web-server interface was launched to provide access to studies of molecules involved in the COVID-19 disease. This made the project part of the open-access initiative promoted by the scientific community to make research accessible.

Recently, BioExcel endorsed the EU manifesto for COVID-19 research launched by the European Commission as part of its response to the coronavirus outbreak.

Modelling the electronic structure of the protease

MaX (MAterials design at the eXascale) is a European Centre of Excellence aiming at materials modelling, simulation, discovery and design on exascale supercomputing architectures.

Though the main interest of the MaX flagship codes is centred on materials science, the CoE is participating in the fight against SARS-CoV-2. Given the critical pandemic situation the world is currently facing, an unprecedented effort is being devoted to the study of SARS-CoV-2 by researchers from different scientific communities and groups worldwide. From the biomolecular standpoint, particular focus is being devoted to the main protease, as well as to the spike protein. The main protease is an important potential antiviral drug target: if its function is inhibited, the virus remains immature and non-infectious. Using fragment-based screening, researchers have identified a number of small compounds that bind to the active site of the protease and can be used as a starting point for the development of protease inhibitors.

SARS-CoV-2 main protease monomer, in green, with the N3 3-mer peptide inhibitor bound in the enzyme’s active site (from PDB crystal structure 6lu7). Structures like this one can be simulated with a full DFT calculation and automatically decomposed into fragments whose interaction network can be characterised and analysed.

Among other quantities, MaX researchers can now model the electronic structure of the protease in contact with a potential docked inhibitor and provide new insights into the interactions between them, by selecting the specific amino acids involved in the interaction and characterising their polarities. This new approach proposed by the MaX scientists is complementary to the docking methods used up to now, which are based on in-silico screening of the inhibitor. Biological systems are naturally composed of fragments, such as amino acids in proteins or nitrogenous bases in DNA.

With this approach, it is possible to evaluate whether the amino-acid-based fragmentation is consistent with the electronic structure resulting from the QM computation. This is an important indicator for the end user, as it makes it possible to assess the quality of the information associated with a given fragment. QM observables for the system’s fragments can then be obtained, based on a population analysis of the system’s electronic density projected onto each amino acid.
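As a rough illustration of such a projection, the sketch below performs a generic Mulliken-style population analysis (a simplified stand-in of ours, not the actual scheme implemented in the MaX codes), summing the gross population of each basis function over the fragment it belongs to:

    import numpy as np

    def fragment_populations(D, S, frag_of_basis, n_frag):
        """Mulliken-style gross electron populations per fragment.
        D: density matrix, S: overlap matrix (both n_basis x n_basis);
        frag_of_basis[mu] gives the fragment index of basis function mu."""
        gross = np.diag(D @ S)                 # population carried by each basis function
        pops = np.zeros(n_frag)
        for mu, f in enumerate(frag_of_basis):
            pops[f] += gross[mu]
        return pops

    # Toy example: 4 basis functions split over 2 fragments
    rng = np.random.default_rng(0)
    C = rng.normal(size=(4, 2))                # two doubly occupied orbitals
    D = 2 * C @ C.T                            # density matrix
    S = np.eye(4)                              # orthonormal basis for simplicity
    print(fragment_populations(D, S, [0, 0, 1, 1], 2))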

A novelty that this approach enables is the possibility of quantifying the strength of the chemical interaction between the different fragments: it is possible to select a target region and identify which fragments of the system interact with this region by sharing electrons with it.

“We can reconstruct the fragmentation of the system in such a way as to focus on an active site in a specific portion of the protein”, says Luigi Genovese from CEA (Commissariat à l’énergie atomique et aux énergies alternatives), who is heading MaX’s efforts on this topic. “We think this modelling approach could inform efforts in protein design by granting access to variables otherwise impervious to observation.”

Various EU projects are using supercomputing applications to tackle and support the global challenge of containing the pandemic (c)CDC on Unsplash

Improving drug design and biosensors

The project E-CAM supports HPC simulations in industry and academia through software development, training and discussion in simulation and modelling. Project members are currently pursuing two approaches to add to the research on the coronavirus.

Firstly, the SARS-CoV-2 virus that causes COVID-19 needs its main protease to be functional. One of the drug targets currently under investigation is an inhibitor of this protease. While efforts to simulate binding stability and dynamics are under way, little is known about the dynamical transitions of the binding-unbinding reaction. Yet this knowledge is crucial for improved drug design. E-CAM aims to shed light on these transitions, using a software package developed by project teams at the University of Amsterdam and the Ecole Normale Superieure in Lyon.

Secondly, E-CAM contributes to the development of the software required to design a protein-based sensor for the quick detection of COVID-19. The sensor, developed at the partner University College Dublin initially to target influenza, is now being adapted to SARS-CoV-2. This adaptation needs DNA sequences of suitable antibody-epitope pairs as input. High-performance computing is required to identify these DNA sequences and to design and simulate the sensors prior to their expression in cell lines, purification and validation.

Studying COVID-19 infections on the cell level

The project PerMedCoE aims to optimise codes for cell-level simulations on high-performance computing systems and to bridge the gap between organ-level and molecular-level simulations. The project started in October 2020.

“Multiscale modelling frameworks prove useful in integrating mechanisms that have very different time and space scales, as in the study of viral infection, human host cell demise and immune cells response. Our goal is to provide such a multiscale modelling framework that includes infection mechanisms, virus propagation and detailed signalling pathways,” says Alfonso Valencia, PerMedCoE project coordinator at the Barcelona Supercomputing Center.

The project team has developed a use case that focuses on studying COVID-19 infections using single-cell data. The work was presented to the research community at a specialised virtual conference in November, the Disease Map Community Meeting. “This use case is a priority in the first months of the project”, says Valencia.

On the technical level, disease map networks will be converted into models of COVID-19 and of human cells from the lung epithelium and the immune system. The team will then use omics data to personalise the models for different patient groups, differentiated for example by age or gender. These data-tailored models will be incorporated into a COVID-focused version of the open-source cell-level simulator PhysiCell.

Supporting policy makers and governments

The HiDALGO project focuses on modelling and simulating the complex processes that arise in connection with major global challenges. The researchers have developed the Flu and Coronavirus Simulator (FACS) to support decision makers in providing an appropriate response to the current pandemic, taking health and care capacities into account.

FACS is guided by the outcomes of SEIR (Susceptible-Exposed-Infectious-Recovered) models operating at the national level. It uses geospatial data from OpenStreetMap to approximate viral spread in crowded places, while tracing the potential routes people take to reach them.
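For reference, the SEIR dynamics that guide FACS can be written down in a few lines. The sketch below is a generic textbook SEIR model with illustrative parameter values, not FACS's calibrated implementation:

    import numpy as np
    from scipy.integrate import solve_ivp

    def seir(t, y, beta, sigma, gamma, N):
        """S -> E -> I -> R; beta: transmission rate,
        sigma: 1/incubation period, gamma: 1/infectious period."""
        S, E, I, R = y
        dS = -beta * S * I / N
        dE = beta * S * I / N - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        return [dS, dE, dI, dR]

    N = 330_000                                  # borough-sized population (illustrative)
    y0 = [N - 10, 10, 0, 0]                      # seed with 10 exposed individuals
    sol = solve_ivp(seir, (0, 180), y0, args=(0.3, 1 / 5.1, 1 / 7.0, N),
                    t_eval=np.linspace(0, 180, 181))
    print(f"peak infectious: {sol.y[2].max():,.0f}")

FACS goes beyond such compartment models by resolving where infections happen locally, but the national-level SEIR outputs provide the guiding trends.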

In this way, the simulator can model the spread of COVID-19 at the local level and provide estimates of infections and hospital arrivals under a range of public health interventions, from no intervention to full lockdowns. Public authorities can use the results of the simulations to identify peaks of contagion, set appropriate measures to reduce spread and provide hospitals with the means necessary to prevent collapse. “FACS has enabled us to forecast the spread of COVID-19 in regions such as the London Borough of Brent. These forecasts have helped local National Health Service Trusts to plan health and care services more effectively in response to the pandemic,” says Derek Groen from the HiDALGO project partner Brunel University London.

Scientists from the HiDALGO project use simulations to predict the spread of the coronavirus in certain areas of London. (c) HiDALGO

EXCELLERAT is a project that usually focuses on supercomputing applications in the area of engineering. Nevertheless, a group of researchers from EXCELLERAT’s consortium partner SSC-Services GmbH, an IT service provider in Böblingen, Germany, and the High-Performance Computing Center Stuttgart (HLRS) is also helping to contain the pandemic by supporting the German Federal Institute for Population Research (Bundesinstitut für Bevölkerungsforschung, BiB).

The scientists have developed an intelligent data transfer platform that enables the BiB to upload data, perform computing-heavy simulations on HLRS’s supercomputer Hawk, and download the results. The platform supports the work of BiB researchers in predicting the demand for intensive care units during the COVID-19 pandemic. “Nowadays, organisations face various issues while dealing with HPC calculations, HPC in general or even access to HPC resources,” said Janik Schüssler, project manager at SSC-Services. “In many cases, calculations are too complex and users do not have the required know-how of HPC technologies. This is the challenge that we have taken on. The BiB’s researchers previously had to access HLRS’s Hawk in a very complex way. With the help of our new platform, they can easily access Hawk from anywhere and run their simulations remotely.”

“This platform is part of EXCELLERAT’s overall strategy and tools development, which not only addresses the simulation part of engineering workflows but also provides users with the necessary means to optimise their work”, said Bastian Koller, Project Coordinator of EXCELLERAT and HLRS’s Managing Director. “Extending the applicability of this platform to further use cases outside the engineering domain is a huge benefit and increases the impact of the work performed in EXCELLERAT.”

EXCELLERAT, MaX and POP at the International CAE Conference 2020

4 December 2020

The three HPC centres of excellence EXCELLERAT, POP and MaX participated in the 36th edition of the International CAE Conference 2020, which was held online from 30 November to 3 December 2020.

Under the motto “At the epicentre of the digital transformation of industry”, the conference presented high-performance computing as a key enabler of this digital transformation, with a dedicated collateral event on Wednesday, 2 December at 14:00 CET.

In this session, Amgad Dessoky, technical director of EXCELLERAT, gave a presentation titled “EXCELLERAT: paving the way for the evolution towards Exascale”. EXCELLERAT brings together European experts to establish a Centre of Excellence (CoE) in engineering applications on HPC with a broad service portfolio, paving the way for the evolution towards Exascale. The aim is to solve highly complex and costly engineering problems and to create enhanced technological solutions as early as the development stage.

In the exhibition, MaX and EXCELLERAT shared a joint virtual booth to show their latest results. The virtual format made it possible to interact with both CoEs via video and chat. The booth remained accessible for three months after the event.

POP CoE was also present at the event with a virtual booth to exhibit its latest research results. 

>> CAE Conference Website

Full airplane simulations on heterogeneous architectures

A solution based on Dynamic Load Balancing


Short description

Many of the future Exascale systems will be heterogeneous and include accelerators such as GPUs. With the explosion of parallelism, we also expect the performance of the various computing devices to be more variable and, therefore, the performance of the system components to be less predictable. Leading-edge engineering simulation codes need to be malleable enough to adapt to this new environment. This use case employs Alya, one of only two CFD codes included in both the Unified European Applications Benchmark Suite (UEABS) and the PRACE accelerator benchmark suite. When Alya, EXCELLERAT’s reference code, is used for modelling complex systems such as full airplanes, dynamic load balancing mechanisms are required to adjust the workload distribution to the measured performance of each component of the system.

Results & Achievements

The EXCELLERAT software, based on the SFC method described below, can partition a 250-million-element mesh of an airplane within 0.08 seconds using 128 nodes (6,144 CPU cores) of the MareNostrum 4 supercomputer. Consequently, mesh partitions can be recomputed at runtime for load balancing without significant overhead. This approach was applied to perform full airplane simulations on the heterogeneous POWER9 cluster installed at the Barcelona Supercomputing Center, where we demonstrated a well-balanced co-execution using both the CPUs and GPUs simultaneously. As a result, we obtained a 23% time reduction with respect to the GPU-only execution. In practice, this represents a performance boost equivalent to attaching an additional GPU per node, and thus a much more efficient exploitation of the resources.

Objectives

In EXCELLERAT we use dynamic load balancing (DLB) to increase the parallel efficiency of airplane simulations, minimising the idle time of underloaded devices at synchronisation points. Alya has been provisioned with a distributed-memory DLB mechanism, complementary to the node-level parallel performance strategy already in place. The kernel parts of the method are an efficient in-house Space Filling Curve (SFC)-based mesh partitioner and an online redistribution module to migrate the simulation between two different partitions. These are used to correct the partition according to runtime measurements. We have focused on maximising the parallel performance of the mesh partitioning process to minimise the load balancing overhead.
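To give a flavour of how an SFC-based partitioner can rebalance work across devices of different speeds, here is a minimal 2D sketch (illustrative only; Alya's in-house partitioner is parallel, three-dimensional and far more elaborate). Elements are ordered along a Z-order curve, and the curve is cut into contiguous chunks whose total weight is proportional to each device's measured speed:

    import numpy as np

    def morton2d(x, y, bits=16):
        """Interleave the bits of two integer coordinates into a Z-order key."""
        key = 0
        for i in range(bits):
            key |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
        return key

    def sfc_partition(centroids, weights, speeds):
        """Cut the space-filling curve into one contiguous chunk per device,
        with chunk weights proportional to the devices' measured speeds."""
        lo, hi = centroids.min(0), centroids.max(0)
        q = ((centroids - lo) / (hi - lo) * (2**16 - 1)).astype(np.int64)
        order = np.argsort([morton2d(x, y) for x, y in q])   # curve ordering
        cum = np.cumsum(weights[order])
        targets = np.cumsum(speeds) / np.sum(speeds) * cum[-1]
        cuts = np.searchsorted(cum, targets[:-1])
        return np.split(order, cuts)                         # element ids per device

    # Example: 10,000 elements split across a CPU and a GPU measured to be 3x faster
    rng = np.random.default_rng(0)
    parts = sfc_partition(rng.random((10_000, 2)), np.ones(10_000), np.array([1.0, 3.0]))
    print([len(p) for p in parts])                           # roughly [2500, 7500]

Because the cut positions depend only on prefix sums along the curve, the partition can be recomputed cheaply at runtime whenever the measured device speeds change.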

Technologies

Alya CFD code

Use Case Owner

Barcelona Supercomputing Center-Centro Nacional de Supercomputación (BSC-CNS)

Collaborating Institutions

Barcelona Supercomputing Center (BSC)

ETP4HPC handbook 2020 released

6 November 2020

The 2020 edition of the ETP4HPC Handbook of HPC projects is available. It offers a comprehensive overview of the European HPC landscape, which currently consists of around 50 active projects and initiatives. Among these are the 14 Centres of Excellence and FocusCoE, which are also represented in this edition of the handbook.

>> Read here

HPC Centres of Excellence @ Supercomputing '20

4 November 2020

Due to restrictions caused by the global COVID-19 pandemic, the SC20 conference – the world’s leading HPC event – will take place online this year from November 9-19. 

Find below the CoEs’ contributions to the 2020 edition of the Supercomputing Conference.