Psi-k 2020 will be the 6th general conference for the worldwide Psi-k community, following very successful events held in San Sebastián (2015), Berlin (2010), and Schwäbisch Gmünd (2005, 2000, 1996).

The conference program will be structured around 3.5 days (from the afternoon of September 14, 2020 to the end of September 17, 2020) and will include 7 plenary talks, 42 symposia in 6 parallel sessions (~135 invited talks and ~170 contributed talks), and a MAD evening.

All up-to-date information can be found at https://www.psik2020.net

The event focuses on the diverse challenges that industrial users of HPC face, including complex programming codes, new technologies, licensing models, user support, and the organisation of workflows, among others. The Industrial HPC User Round Table provides information about current developments in high-performance computing and a forum for discussions between HPC users.

Parallel CFD is an annual international conference devoted to the discussion of recent developments and applications of parallel computing in the field of Computational Fluid Dynamics and related disciplines.

Since the establishment of the ParCFD conference series, many new developments and technologies have emerged. The emergence of multi-core and heterogeneous architectures in parallel computers has created new challenges and opportunities for applied research and performance optimization in advanced CFD technology.

Over the years, the conference sessions have featured papers on parallel algorithms, numerical methods, and challenging applications in science, engineering and industry.

ParCFD 2020 will include contributed and invited papers, panel discussions and mini-symposia with specific themes. Topics of interest include, but are not limited to:

  • Parallel Algorithms and Solvers

  • Extreme-Scale Computing

  • Mechanical and Aerospace Engineering Applications

  • Atmospheric and Ocean Modeling

  • Medical and Biological Applications

  • Fluid-Structure Interaction

  • Turbulence

  • Combustion

  • Acoustics

  • Multi-disciplinary Design Optimization

  • Multi-Scale and Multi-Physics Applications

  • Software Frameworks and CPU/GPU Computing

HPC (High Performance Computing) and Big Data technologies are revolutionising how computational materials science is done. Within a few years, the new generation of supercomputers will be capable of delivering computational power on the order of 10^18 floating-point operations per second. The availability of this tremendous computational power opens new ways to face challenges in nanotechnology research. Materials science will be greatly affected, since a new kind of dynamic between theory and experiment will be established, with the potential to accelerate materials discovery to meet the increased demand for task-specific materials. Moreover, HPC will be able to analyse very large amounts of data (Big Data), giving access to unforeseen interpretations of both experimental and computational data. The heightened demand for automation, advanced analysis and predictive capabilities inherent to these new methods places them at an especially exciting crossroads between chemistry, mathematics and computational science. In the European sphere, this transversal multidisciplinary approach is the key ingredient of the Horizon 2020 Energy-oriented Centre of Excellence (EoCoE), which aims to accelerate the European transition to a reliable low-carbon energy supply by exploiting the ever-growing computational power of HPC. This session aims to bring together researchers in materials science and computer science to discuss new approaches and explore new collaborations in the theoretical discovery of materials.

The tools and techniques of High Performance Computing (HPC) have gained broad acceptance across wide areas of research and industry thanks to sustained progress in computational hardware and software technologies, ranging from hybrid CPU/GPU systems, multicore and distributed architectures, and virtualization to relatively new paradigms such as cloud computing, the explosive growth of Artificial Intelligence (AI) techniques in myriad applications, and advances in quantum computer realizations. At the same time, the extremely fast pace of the field introduces new challenges in technological, intellectual, ethical and even political areas that must be addressed to continue to enable wider acceptance, implementation, and ultimately societal impact of high performance computing technologies, applications, and paradigms.

The main aim of this workshop is to present and debate advanced topics, open questions, current and future developments, and challenging applications related to advanced high-performance distributed computing and data systems, encompassing implementations ranging from traditional clusters to warehouse-scale data centers, and with architectures including hybrid, multicore, distributed, cloud models, and systems targeted for AI applications. In addition, quantum computing has captured intense and widespread interest in the last two years, in large part due to the deployment of several systems with diverse architectures. This workshop will provide a forum for exploration of both challenges and synergies that might arise from exchange of ideas across the many aspects of HPC and its applications.

The rapid uptake of AI methods to tackle myriad applications has led to rethinking of the relevant algorithms and of the microarchitectures of computers that are optimized for such applications. Although machine and deep learning are the AI technologies that are in the headlines daily and flood submissions to conferences and journals, other aspects of AI are also maturing and in some cases require HPC resources.

Similarly, the growing deployment of quantum computers, some of which are accessible to the open research community, is spurring experimentation with reformulation of problems, algorithms, and programming techniques for such computers. Quantum sensing and quantum communication are also beginning to have physical instantiations.

The importance of Cloud Computing in HPC continues to grow. We are seeing more and more cloud testbeds and production facilities used by government agencies, industry and academia. Commercial cloud service providers like Amazon Web Services, Bull extreme factory, Fujitsu TC Cloud, Gompute, Microsoft Azure, Nimbix, Nimbula, Penguin on Demand, UberCloud, and many more are now offering HPC-focused infrastructure, platform, and application services. However, careful application benchmarking of different cloud infrastructures still has to be performed to find out which HPC cloud architecture is best suited for a specific application.
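
To illustrate what such application benchmarking can look like at its simplest, the sketch below (hypothetical, pure Python so it runs anywhere) times a stand-in compute kernel over repeated runs and reports the median, which is less sensitive to the noisy neighbours common on shared cloud nodes. A real comparison would run the actual application code on each candidate infrastructure.

```python
import time
import statistics

def time_kernel(kernel, repeats=5):
    """Run a kernel several times and return the median wall-clock time,
    so a single noisy run on a shared cloud node does not skew the result."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        kernel()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def dense_matmul(n=64):
    """Tiny pure-Python dense matrix multiply used as a stand-in kernel;
    a real benchmark would time the application itself."""
    a = [[1.0] * n for _ in range(n)]
    b = [[2.0] * n for _ in range(n)]
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

if __name__ == "__main__":
    print(f"median wall time: {time_kernel(dense_matmul):.4f} s")
```

Running the same harness on each candidate cloud instance type yields directly comparable numbers for the kernel in question.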

From an application standpoint, many of the most widely used application codes have undergone many generations of adaptation as new architectures have emerged, from vector to MPP to cluster to cloud, and more recently to multicore and hybrid. As exascale systems move toward millions of processing units, the interplay between system and user software, compilers and middleware, even programmer and run-time environment must be reconsidered. For example, how much resilience and fault tolerance can, or should, be embedded transparently in the system versus exposed to the programmer? Perhaps even greater challenges arise from the complexity of applications, which are increasingly multi-scale and multi-physics and are built from hundreds of building blocks, and from the difficulty of achieving portability across traditional architectures.
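
The trade-off between transparent, system-level resilience and programmer-visible fault tolerance can be made concrete with a minimal application-level checkpoint/restart sketch (hypothetical file name and toy solver): here the programmer, not the system, decides what state to save, how often, and how to resume.

```python
import json
import os

CHECKPOINT = "state.json"  # hypothetical checkpoint file name

def run(steps=100, every=10):
    """Toy iterative solver with application-level checkpointing."""
    # Resume from the last checkpoint if one exists.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            state = json.load(f)
    else:
        state = {"step": 0, "value": 0.0}

    while state["step"] < steps:
        state["value"] += 1.0           # stand-in for one solver iteration
        state["step"] += 1
        if state["step"] % every == 0:  # periodic checkpoint
            tmp = CHECKPOINT + ".tmp"
            with open(tmp, "w") as f:
                json.dump(state, f)
            os.replace(tmp, CHECKPOINT)  # atomic rename avoids torn files
    return state
```

If the process is killed mid-run, a subsequent call resumes from the last completed checkpoint rather than from scratch; a system-level scheme would instead snapshot the whole process image transparently, at higher cost.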

Finally, discussions and presentations related to emerging and strategically challenging application areas will also be an important part of the workshop. Special emphasis will be given to the potential of computational modeling and advanced analytics for urban systems, including the associated diverse data sources and streams. Similarly, the challenges of integrating and using new types of data sources, such as the Internet of Things (IoT), will be examined. These and other application areas enabled by new sources of data, including IoT and sensor networks, represent an interesting new set of HPC challenges.

Summarizing, the aim of this special workshop is to shed light on key topics in advanced high performance computing systems and, in particular, to address the aforementioned contemporary scheduling, scaling, fault-tolerance, and emerging application topics. The four-and-a-half-day program of this workshop will include roughly fifty invited talks and associated panels by experts in the field.

Two CoEs, PerMedCoE and CompBioMed, will participate in an expert panel. Mariano Vazquez from the Barcelona Supercomputing Center (BSC) will take part in the expert panel titled “Biomedical platforms” on Wednesday, 17 November 2021, from 13:00 to 14:00 CET, as part of the MEDICA Health IT Forum, which will be held in Düsseldorf (Germany) from 15 to 18 November 2021. The panel brings together numerous medical IT experts and is moderated by Prof. Dr. Christoph Brochhausen-Delius, University of Regensburg, Faculty of Medicine, Institute for Pathology. The full list of panel experts:

  • Alexander van der Mey, CEO, Healex GmbH
  • Pierre Cholet, Head of Business Development – Europe, Decentriq
  • Dr. Philipp Mann, Senior Alliance Manager, Owkin
  • Mariano Vazquez, Research Group Leader, Barcelona Supercomputing Center BSC-CNS (virtual speaker)
  • Georges de Feu, CEO, LynxCare


The HiPEAC conference 2021 will take place in January 2021 in Budapest, Hungary.

The HiPEAC Computing Systems Week will take place on 14-16 October 2020 in Tampere, Finland.

The ICEI/Fenix project is excited to announce the 2nd free-of-charge Fenix Infrastructure Webinar “How to exploit ICEI scalable computing services” to take place on Tuesday 10 December at 15:00 CET. Read all the details below and register at: https://zoom.us/webinar/register/WN_0bm1cWWSS_O_4ihNf6EZ6w

Date and Time: Tuesday 10 December 15:00-16:00 CET

Cost: Free of charge

Speaker: Sadaf Alam, Swiss National Supercomputing Centre (CSCS)

Description: The goal of this webinar is to introduce participants to the Fenix Scalable Computing Services (SCC). It will provide details on the ICEI infrastructure for SCC at the Swiss National Supercomputing Centre (CSCS). A dedicated Q&A slot will allow for questions at the end of the webinar. Information on the available resources can be found at: https://fenix-ri.eu/infrastructure/resources/available-resources

Who should attend?

  • HPC infrastructure users
  • Neuroscientists
  • Application and platform developers
  • Workflow engineers

Main takeaways

  • Usage details of large-scale hybrid and heterogeneous scalable computing services
  • Opportunities for tuning and optimisation for users and platform developers

Agenda

  • Overview of the Piz Daint ecosystem (10 min)

        computing, storage & networking

        programming environment (compiler, math libraries, MPI, debugging and performance tools)

        resource management and scheduling

        data transfer and management

        HPC and data science tools and frameworks

  • Selected examples and how-tos (10 min)

        job submission, querying and user level tools

        compiling, profiling and debugging

        storage orchestration

  • Itemised list of projects from the Human Brain Project highlighting usage of Piz Daint in workflows (5 min)
  • Questions & Answers (20 min)
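
As a taste of the "job submission" how-to above: Piz Daint is scheduled with SLURM, so submission revolves around a small batch script like the sketch below (the job name, resource values and application binary are placeholders, not taken from the webinar).

```shell
#!/bin/bash -l
# Minimal SLURM batch script sketch; values below are illustrative only.
#SBATCH --job-name=example
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=12
#SBATCH --time=00:30:00
#SBATCH --constraint=gpu     # request Piz Daint's GPU nodes

srun ./my_app                # srun launches the parallel ranks
```

A script like this is submitted with `sbatch job.sh` and monitored with `squeue -u $USER`, both standard SLURM commands; the webinar's live examples cover these tools in more detail.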

The webinar will be recorded and the full recording will be available on the Fenix Webinars page soon after it takes place. Register at: https://zoom.us/webinar/register/WN_0bm1cWWSS_O_4ihNf6EZ6w

The Mesoscopic Simulation Models and High-Performance Computing ESDW (Extended Software Development Workshop) is organised by CECAM-FI (Aalto University, Finland) in collaboration with the CSC IT Center for Science and will mix three ingredients: (1) a workshop on state-of-the-art challenges in computational science and software, (2) a CSC-run school, and (3) coding sessions supported by CSC facilities and expertise. The plan is to contact and involve a number of European groups interested in topics where this methodology is essential and/or where code development and usage is actively ongoing.