The International CAE Conference & Exhibition has a 36-year track record that makes it unique in the industry. It is recognized as the richest, most intense and stimulating annual meeting of minds from all parts of the engineering simulation community, spanning industry and academia, the public and private sectors.

A dedicated HPC session titled “High Performance Computing, a key enabler for digital transformation” has been included in the programme. The session will explore the most advanced algorithms and applications available for leveraging the huge computational capabilities of modern HPC systems.

EXCELLERAT and MaX are participating, and FocusCoE has booked a common booth for them in the exhibition area. In addition to the booth, EXCELLERAT will give a talk during the HPC session of the conference.

The goal of this workshop is to present the latest updates on the services provided by several HPC Centres of Excellence (CoEs), including EXCELLERAT, EoCoE, ChEESE, HiDALGO, BioExcel and CompBioMed. Organized and supported by FocusCoE, the session will bring these updates to the HiPEAC community.

Agenda

9:30 – 9:45 Welcome and FocusCoE introduction, Guy Lonsdale (Scapos)
9:45 – 10:00 FPGAs and scientific computing: A match made in heaven?, Nick Brown (EPCC), EXCELLERAT
10:00 – 10:15 Renewable energy in the exascale era, Edouard Audit (CEA), EoCoE
10:15 – 10:30 The HPC synergy for Solid Earth Science, Arnau Folch (BSC), ChEESE
10:30 – 10:45 HPC and Big Data Technologies for Global Systems, Javi Nieto (ATOS), HiDALGO
10:45 – 11:00 Meeting User Needs in HPC: Why, What and How in the Life Sciences, Rossen Apostolov, BioExcel

11:00 – 11:30 Coffee break (30 min)
11:30 – 11:45 Developing HPC services for the biomedical community, Marco Verdicchio (SURFsara), CompBioMed
11:45 – 12:45 Joint Co-Design panel

Chaired by: Guy Lonsdale, Scapos

* CompBioMed: David Wifling (LRZ)
* EXCELLERAT: Gavin Pringle (EPCC)
* ChEESE: Soline Laforet (Bull Atos)
* HiDALGO: Javi Nieto (Atos)
* BioExcel: Berk Hess (KTH)
* EoCoE: Edouard Audit (CEA)

12:45 – 13:00 Wrap-up and conclusions, Guy Lonsdale (Scapos)

Register for this event here: https://www.hipeac.net/2021/spring-virtual/#/program/sessions/7858/

Life science research has become increasingly digital, with a direct influence on our daily lives in areas such as health and medical applications, drug discovery, agriculture, and the food industry. It is one of the largest and fastest-growing communities in need of high-end computing, which means a growing number of life science researchers who are not computing experts must use complicated, computationally intensive biomolecular modelling tools. High-quality training is required to enable these researchers to use such computational tools effectively.

A competency framework can be used to define the areas of training need and to develop a training programme based on them. A competency is an observable ability of a professional, integrating multiple components such as knowledge, skills and behaviours. BioExcel has developed a competency framework that lists the competencies for professionals in the field of computational biomolecular research. The framework also enables the definition of different profiles within its field of application, which can help people identify the abilities they need for a specific role, e.g. computational chemist, and thereby inform their career choices and professional development.
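As a rough illustration of the structure described above, the minimal sketch below shows one way such a framework could be represented in software: competencies bundle knowledge and skills, and a profile maps a role to the competencies it requires. All role names, competencies and field names here are hypothetical illustrations, not the actual BioExcel framework.

```python
# Minimal sketch of a competency framework as a data structure.
# Every competency and role listed here is a hypothetical example.
from dataclasses import dataclass, field

@dataclass
class Competency:
    name: str
    knowledge: list[str] = field(default_factory=list)  # what you must know
    skills: list[str] = field(default_factory=list)     # what you must be able to do

# A profile maps a specific role to the competencies it requires.
framework = {
    "computational chemist": [
        Competency(
            name="Biomolecular simulation",
            knowledge=["force fields", "statistical mechanics"],
            skills=["set up and run a molecular dynamics simulation"],
        ),
        Competency(
            name="HPC usage",
            knowledge=["batch schedulers"],
            skills=["submit and monitor parallel jobs"],
        ),
    ],
}

# Identify the abilities needed for a specific role.
for comp in framework["computational chemist"]:
    print(f"{comp.name}: {', '.join(comp.skills)}")
```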

The EMBL-EBI training team is involved in a number of projects that use competencies as the basis of their training programmes. To provide a central and sustainable home for the resulting competency framework, the team has developed the Competency Hub, which facilitates access to competency frameworks and to the training resources and career profiles associated with them. The BioExcel framework is one of the driving use cases for the Competency Hub. We will demonstrate how users can obtain information from the website to guide their professional development.

The 2020 edition of the EGI conference will take place virtually, from the 2nd until the 4th of November.

With our theme “Federated infrastructures for connected communities”, we aim to bring together science, computing, and (international) collaboration through a diverse and interactive programme.

This workshop, organised by VI-HPS and CINECA, will:

  • give an overview of the VI-HPS programming tools suite
  • explain the functionality of individual tools, and how to use them effectively
  • offer hands-on experience and expert assistance using the tools

Psi-k 2020 will be the 6th general conference for the worldwide Psi-k community, following very successful events held in San Sebastián (2015), Berlin (2010), and Schwäbisch Gmünd (2005, 2000, 1996).

The conference program will be structured around 3.5 days (from the afternoon of September 14, 2020 to the end of September 17, 2020), that will include 7 plenary talks, 42 symposia in 6 parallel sessions (~135 invited talks and ~170 contributed talks), and a MAD evening.

All up-to-date information can be found at https://www.psik2020.net

The event focuses on the diverse challenges that industrial users of HPC face, including complex programming codes, new technologies, licensing models, user support, and the organisation of workflows. The Industrial HPC User Round Table provides information about current developments in high-performance computing and a forum for discussion between HPC users.

Parallel CFD is an annual international conference devoted to the discussion of recent developments and applications of parallel computing in the field of Computational Fluid Dynamics and related disciplines.

Since the establishment of the ParCFD conference series, many new developments and technologies have emerged. The rise of multi-core and heterogeneous architectures in parallel computers has created new challenges and opportunities for applied research and performance optimization in advanced CFD technology.

Over the years, the conference sessions have featured papers on parallel algorithms, numerical methods, and challenging applications in science, engineering and industry.

ParCFD 2020 will include contributed and invited papers, panel discussions and mini-symposia with specific themes. Topics of interest include, but are not limited to:

  • Parallel Algorithms and Solvers

  • Extreme-Scale Computing

  • Mechanical and Aerospace Engineering Applications

  • Atmospheric and Ocean Modeling

  • Medical and Biological Applications

  • Fluid-Structure Interaction

  • Turbulence

  • Combustion

  • Acoustics

  • Multi-disciplinary Design Optimization

  • Multi-Scale and Multi-Physics Applications

  • Software Frameworks and CPU/GPU Computing

HPC (High Performance Computing) and Big Data technologies are revolutionizing how computational materials science is done. Within a few years, the new generation of supercomputers will be capable of delivering computational power on the order of 10¹⁸ floating-point operations per second. The availability of this tremendous computational power opens new ways to face challenges in nanotechnology research. Materials science will be greatly affected, since a new kind of dynamic between theory and experiment will be established, with the potential to accelerate materials discovery to meet the increased demand for task-specific materials. Moreover, HPC will be able to analyze very large amounts of data (Big Data), giving access to unforeseen interpretations of both experimental and computational data. The heightened demand for automation, advanced analysis and predictive capabilities inherent to these new methods puts the field at an especially exciting crossroads between chemistry, mathematics and computational science. In the European sphere, this transversal, multidisciplinary approach is the key ingredient of the Horizon 2020 Energy-oriented Centre of Excellence (EoCoE), which aims to accelerate the European transition to a reliable low-carbon energy supply by exploiting the ever-growing computational power of HPC. This session aims to bring together researchers in materials science and computer science to discuss new approaches and explore new collaborations in the theoretical discovery of materials.

The tools and techniques of High Performance Computing (HPC) have gained broad acceptance in wide areas of research and industry thanks to sustained progress in computational hardware and software technologies, ranging from hybrid CPU/GPU systems, multicore and distributed architectures, and virtualization to relatively new paradigms such as cloud computing, the explosive growth of Artificial Intelligence (AI) techniques in myriad applications, and advances in quantum computer realizations. At the same time, the extremely fast pace of the field introduces new challenges in technological, intellectual, ethical and even political areas that must be addressed to enable wider acceptance, implementation, and ultimately societal impact of high-performance computing technologies, applications, and paradigms.

The main aim of this workshop is to present and debate advanced topics, open questions, current and future developments, and challenging applications related to advanced high-performance distributed computing and data systems, encompassing implementations ranging from traditional clusters to warehouse-scale data centers, and architectures including hybrid, multicore, distributed, and cloud models, as well as systems targeted at AI applications. In addition, quantum computing has captured intense and widespread interest in the last two years, in large part due to the deployment of several systems with diverse architectures. The workshop will provide a forum for exploring both the challenges and the synergies that might arise from an exchange of ideas across the many aspects of HPC and its applications.

The rapid uptake of AI methods across myriad applications has prompted a rethinking of the relevant algorithms and of the microarchitectures of computers optimized for such applications. Although machine and deep learning are the AI technologies that make headlines daily and flood submissions to conferences and journals, other aspects of AI are also maturing and in some cases require HPC resources.

Similarly, the growing deployment of quantum computers, some of which are accessible to the open research community, is spurring experimentation with the reformulation of problems, algorithms, and programming techniques for such computers. Quantum sensing and quantum communication are also beginning to have physical instantiations.

The importance of cloud computing in HPC continues to grow. We are seeing more and more cloud testbeds and production facilities used by government agencies, industry and academia. Commercial cloud service providers such as Amazon Web Services, Bull extreme factory, Fujitsu TC Cloud, Gompute, Microsoft Azure, Nimbix, Nimbula, Penguin on Demand, UberCloud, and many more now offer HPC-focused infrastructure, platform, and application services. However, careful application benchmarking of the different cloud infrastructures still has to be performed to find out which HPC cloud architecture is best suited to a specific application.
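In its simplest form, such application benchmarking means running the same application on each candidate infrastructure and comparing wall-clock times. The sketch below illustrates this, assuming SSH access to the candidate instances; the host names and the benchmark command are hypothetical placeholders, not real endpoints.

```python
# Minimal sketch: time the same application benchmark on several cloud
# instances to compare HPC cloud architectures. Host names and the
# benchmark command below are hypothetical placeholders.
import subprocess
import time

# Candidate HPC cloud instances (hypothetical addresses).
hosts = ["aws-hpc.example.com", "azure-hpc.example.com"]

# The application benchmark to run remotely (hypothetical command).
benchmark_cmd = "mpirun -np 64 ./my_solver --case benchmark_case"

results = {}
for host in hosts:
    start = time.perf_counter()
    proc = subprocess.run(
        ["ssh", host, benchmark_cmd],
        capture_output=True, text=True,
    )
    elapsed = time.perf_counter() - start
    # Record the wall-clock time only if the run succeeded.
    results[host] = elapsed if proc.returncode == 0 else None

for host, seconds in results.items():
    status = f"{seconds:.1f} s" if seconds is not None else "failed"
    print(f"{host}: {status}")
```

A production benchmark would of course repeat runs, control for node placement and I/O variability, and normalize by cost, but the comparison logic is the same.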

From an application standpoint, many of the most widely used application codes have undergone many generations of adaptation as new architectures have emerged, from vector to MPP to cluster to cloud, and more recently to multicore and hybrid. As exascale systems move toward millions of processing units, the interplay between system and user software, between compilers and middleware, and even between programmer and run-time environment must be reconsidered. For example, how much resilience and fault tolerance can, or should, be embedded transparently in the system versus exposed to the programmer? Perhaps even greater challenges arise from the complexity of applications, which are increasingly multi-scale and multi-physics and are built from hundreds of building blocks, and from the difficulty of achieving portability across traditional architectures.
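As one illustration of the programmer-exposed end of that spectrum, the sketch below shows application-level checkpoint/restart in its simplest form: the application periodically saves its own state and resumes from the last checkpoint after a failure. The state layout and file name are hypothetical; a real solver would checkpoint domain data, not a single counter.

```python
# Minimal sketch of programmer-exposed fault tolerance:
# application-level checkpoint/restart. State layout and file
# name are hypothetical illustrations.
import os
import pickle

CHECKPOINT = "solver_state.pkl"

def load_state():
    """Resume from the last checkpoint if one exists."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"step": 0, "value": 0.0}

def save_state(state):
    """Write the checkpoint atomically so a crash cannot corrupt it."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)  # atomic rename on POSIX

state = load_state()
for step in range(state["step"], 1000):
    state["value"] += 0.001 * step   # stand-in for one solver iteration
    state["step"] = step + 1
    if state["step"] % 100 == 0:     # checkpoint every 100 steps
        save_state(state)
```

System-level approaches move this logic out of the application entirely, at the cost of larger, less selective checkpoints; that trade-off is exactly the question posed above.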

Finally, discussions and presentations related to emerging and strategically challenging application areas will also be an important part of the workshop. Special emphasis will be given to the potential of computational modeling and advanced analytics for urban systems, including the associated diverse data sources and streams. Similarly, the challenges of integrating and using new types of data sources, such as the Internet of Things, will be examined. These and other new application areas enabled by new sources of data, including IoT and sensor networks, represent an interesting new set of HPC challenges.

In summary, the aim of this special workshop is to shed light on key topics in advanced high-performance computing systems and, in particular, to address the aforementioned contemporary scheduling, scaling, fault-tolerance, and emerging application topics. The four-and-a-half-day program of the workshop will include roughly fifty invited talks and associated panels by experts in the field.