Current Projects and Research Directions

(Main) Research Directions

HPC Software for Coupled Molecular-Continuum Flow Simulation

The Macro-Micro-Coupling Tool (MaMiCo) is meant to ease the development and to increase the flexibility and maintainability of coupled molecular-continuum flow simulations. These multiscale simulations resolve only parts of the considered computational domain by computationally expensive molecular dynamics (MD) simulations; other parts are treated by a coarse-grained continuum solver (e.g., Lattice Boltzmann or Navier-Stokes).

MaMiCo strictly separates the MD solver, the continuum solver, and the coupling components. This makes it easy to exchange solver implementations, and it enables coupling algorithms, once implemented within the tool, to be reused immediately with one’s favorite MD/continuum solver combination on a supercomputer. The tool currently interfaces eight different solver frameworks (four LB simulation codes, including Palabos and OpenLB, and four MD packages, including ESPResSo and LAMMPS) and supports Single-LB-Multi-MD coupling: a single LB simulation is coupled to multiple quasi-identical MD simulations. Running several MD instances simultaneously is favorable in terms of parallelism; it enables the evaluation of quantities averaged over independent MD samples and thus allows the coupling time intervals between the LB and MD solvers to be reduced accordingly. Besides, MaMiCo incorporates noise filters to remove potentially unwanted thermodynamic fluctuations from the MD data before they are coupled to the continuum solver.
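
Conceptually, one Single-LB-Multi-MD coupling cycle can be pictured as follows. The snippet is a minimal Python sketch with hypothetical solver objects and method names; it is not MaMiCo’s actual C++ interface.

    # Schematic Single-LB-Multi-MD coupling cycle; all objects and methods are hypothetical.
    import numpy as np

    def coupling_cycle(lb_solver, md_instances, noise_filter, coupling_steps, md_steps_per_cycle):
        for _ in range(coupling_steps):
            # Impose the current continuum state on every MD instance in the overlap region.
            macro_state = lb_solver.extract_overlap_data()
            for md in md_instances:
                md.impose_state(macro_state)
                md.run(md_steps_per_cycle)
            # Average the sampled MD data over the quasi-identical instances to reduce
            # thermal noise, then filter remaining fluctuations before coupling back.
            samples = np.array([md.sample_overlap_data() for md in md_instances])
            filtered = noise_filter(samples.mean(axis=0))
            # Feed the filtered molecular data back into the continuum solver and advance it.
            lb_solver.insert_micro_data(filtered)
            lb_solver.advance(1)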

HPC, Simulation and Machine Learning

Today’s demand for machine learning has a great impact on high performance computing, in particular in terms of hardware development. In addition, data analytics and machine learning have found their way into numerical simulation, e.g., to replace expensive non-linear kernels or simply to evaluate statistics in uncertainty quantification and related fields.

Some research at the chair is dedicated to detecting synergies between HPC, numerical simulation and machine learning, for example

  • in the context of performance prediction for simulations whose run time depends on a large number of parameters (corresponding to a high-dimensional parameter space exploration; see the sketch below), or
  • in coupled molecular-continuum simulations, where machine learning methods are used to model molecular behavior.
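
As an illustration of the first direction, a surrogate model can be trained on a limited number of measured run times and then queried across the high-dimensional parameter space. The sketch below uses scikit-learn and entirely synthetic data; the parameters and timings are made up.

    # Performance-prediction sketch with synthetic data: learn a surrogate that maps
    # simulation parameters to measured run times, then query it instead of running
    # every parameter combination.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Hypothetical training set: 200 parameter vectors (e.g., domain size, particle
    # count, cut-off radius, core count) with their measured run times.
    X_train = rng.uniform(0.0, 1.0, size=(200, 4))
    t_train = 2.0 + 5.0 * X_train[:, 0] * X_train[:, 1] + rng.normal(0.0, 0.1, size=200)

    surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
    surrogate.fit(X_train, t_train)

    # Explore 10,000 unseen configurations at negligible cost.
    X_query = rng.uniform(0.0, 1.0, size=(10_000, 4))
    t_pred = surrogate.predict(X_query)
    print("cheapest predicted configuration:", X_query[np.argmin(t_pred)])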

Bioinformatics and HPC for Tumor Diagnostics

In collaboration with researchers from the University Medical Center Hamburg-Eppendorf (UKE), we work on novel high-performance approaches to analyze large and heterogeneous data sets that are relevant for the classification and diagnostics of brain tumors (omics data, gene expression data, NGS data, etc.), with a focus on pediatric tumors.

Funded Projects

SmartShip: Digital Twins for Intelligent Ships & Fleets (dtec.bw; 2020-2024)

In this project, ships (e.g., sea rescue vessels) shall be equipped with novel sensor/camera systems and IT/AI systems. Using the data generated and processed by these systems, digital twins of these ships shall evolve and be further enriched over the life cycle of the ship. In a second step, this shall be extended to fleets of ships. Verification is planned in terms of anomaly detection cases and fleet optimization. HPC challenges arise from the real-time requirements of the corresponding digital system.

MaST: Macro/Micro-Simulation of Phase Decomposition in the Transcritical Regime (dtec.bw; 2021-2024)

In the interdisciplinary project MaST, innovative simulation software technology will be developed to assess phase decomposition processes, which are of great relevance in various engineering applications (e.g., fuel injection in engines). The simulation software to be developed will span a wide range of scales, including molecular, statistical and continuum considerations. Insights generated through the simulations will be augmented by experimental investigations.

More information on the project can be found here.

hpc.bw: Competence Platform for Software Efficiency and Supercomputing (dtec.bw; 2021-2024)

hpc.bw will leverage synergies and establish a joint competence platform on various aspects of high performance computing at the universities of the armed forces. To this end, a container-based HPC cluster and an interactive scientific computing cloud are to be established at HSU. Compute-intensive applications are to be supported through performance engineering, and tutorials and workshops will be developed and offered for HPC beginners as well as more advanced users. Besides, software and hardware sustainability will be addressed, amongst others with regard to efficient solvers and benchmark development, to define a strategy for future HPC procurements at the universities of the armed forces.
These efforts will be complemented by outreach to industry with regard to HPC in industrial applications.

Project website: https://www.hsu-hh.de/hpccp/

SuMo: Sustainability for Molecular Simulation in Process Engineering (DFG; 2021-2024)

ls1 mardyn is a highly efficient molecular dynamics software for process engineering applications. The overall objective of this project is to leverage and elaborate on recent efforts in ls1 mardyn’s software development and software infrastructure to make the code sustainable and widely usable.
This implies

  • making existing feature-specific implementations and extensions of ls1 mardyn sustainable: this shall be achieved by porting these implementations to the recently introduced plugin concept and by extending and improving the plugin concept, respectively (see the sketch below),
  • improving ls1 mardyn with regard to functionality required by current users and by an extended user base in the future: this shall be addressed by incorporating new, mostly technical, methodology such as additional particle-particle interaction potentials and improved methods for particle insertion and deletion in non-equilibrium MD,
  • enhancing the software development infrastructure, that is, introducing tests that provide application-driven reproducibility in process engineering as well as performance reproducibility, and improving the documentation,
  • disseminating the software to ensure its uptake by a wider user base.
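
The plugin idea from the first item can be pictured as a set of callbacks that the MD main loop invokes at fixed points. The Python sketch below only illustrates this principle; the hook names and the simulation state are invented and do not correspond to ls1 mardyn’s actual C++ plugin interface.

    # Schematic plugin mechanism: feature-specific code lives outside the core time loop
    # and is attached via hooks. Hook names and the simulation state are invented.
    class Plugin:
        def before_forces(self, state): pass
        def end_step(self, state): pass

    class TemperatureLogger(Plugin):
        def end_step(self, state):
            print(f"step {state['step']}: T = {state['temperature']:.3f}")

    def run(state, plugins, num_steps):
        for step in range(num_steps):
            state["step"] = step
            for p in plugins:
                p.before_forces(state)
            # ... force computation and time integration of the core MD code ...
            for p in plugins:
                p.end_step(state)

    run({"temperature": 1.08}, [TemperatureLogger()], num_steps=3)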

Advanced Simulation Methodology for Optimizing Aerodynamic Lenses used for Single-Particle Diffractive Imaging (DASHH; 2021-2024)

The objective of the present project is to bundle innovative multiscale simulation methodology and data analysis, both supported by high-performance computing, to enable physically more reliable predictions of the processes taking place in injection systems for single-particle diffractive imaging. For this purpose, particle tracking and flow simulations are to be combined, considering the flow in aerodynamic lenses, which spans various Knudsen number ranges.
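
The Knudsen number Kn, the ratio of the molecular mean free path to a characteristic length of the lens geometry, determines which flow description is appropriate. The small sketch below uses the commonly cited regime bounds; the example values are purely illustrative.

    # Classify the flow regime from the Knudsen number Kn = mean free path / length scale.
    def flow_regime(mean_free_path: float, characteristic_length: float) -> str:
        kn = mean_free_path / characteristic_length
        if kn < 0.01:
            return f"Kn = {kn:.3g}: continuum regime (Navier-Stokes)"
        if kn < 0.1:
            return f"Kn = {kn:.3g}: slip-flow regime"
        if kn < 10.0:
            return f"Kn = {kn:.3g}: transitional regime (e.g., particle-based methods)"
        return f"Kn = {kn:.3g}: free-molecular regime"

    # Illustrative values: ~68 nm mean free path of air at atmospheric pressure,
    # 0.1 mm characteristic lens dimension.
    print(flow_regime(mean_free_path=68e-9, characteristic_length=1e-4))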

This is a collaborative project with the groups of Prof. Breuer/Fluid Dynamics at HSU and Prof. Küpper/DESY and Univ. Hamburg. It is funded within the scope of DASHH – Data Science in Hamburg Helmholtz Graduate School for the Structure of Matter.

EUMaster4HPC: HPC European Consortium Leading Education Activities (H2020, EuroHPC; 2022-2025)

The objective of EUMaster4HPC is to establish a joint Master’s degree in High Performance Computing across Europe.

We contribute to the project as a non-beneficiary, with the aim of participating in curriculum discussions and of joining the European HPC Master’s programme in the future.

WindHPC: Wind Power Plant-Integrated Second Life Cycle Clusters for High Performance Computing (BMBF; 2022-2025)

The goal of the project is to develop software and hardware for more energy-efficient high performance computing and to explore the use of second life cycle HPC clusters. On the software side, the energy consumption of individual simulations and simulation workflows shall be investigated and optimized using digital twin technology. The idea here is to identify how much energy is required per “scientific insight”. For this purpose, novel methods and software components are to be developed to choose, depending on the accuracy requirements, the most efficient simulation method. On the hardware side, the target platform will be an innovative combination of classical HPC computing center sites and second life cycle clusters. The latter shall be hosted in wind power plants and shall rely solely on locally generated electricity. The consortium will base all of its work on open-source software.
More information on the project can be found here.
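
The idea of choosing the most efficient simulation method under a given accuracy requirement can be reduced to a simple selection rule, sketched below; all method names, error estimates and energy figures are made up for illustration.

    # Pick the simulation method that meets the accuracy requirement at the lowest
    # predicted energy consumption; all entries are invented for illustration.
    def pick_method(candidates, max_error):
        """candidates: list of (name, predicted_error, predicted_energy_kWh) tuples."""
        feasible = [c for c in candidates if c[1] <= max_error]
        if not feasible:
            raise ValueError("no candidate meets the accuracy requirement")
        return min(feasible, key=lambda c: c[2])

    candidates = [
        ("coarse surrogate model", 0.05, 1.5),
        ("fine-grid CFD",          0.01, 40.0),
        ("molecular-continuum",    0.005, 250.0),
    ]
    print(pick_method(candidates, max_error=0.02))  # -> ('fine-grid CFD', 0.01, 40.0)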

3xa: Simulation Software for Exascale Systems to Calculate Three-Body Interactions (BMBF; 2022-2025)

The project aims to develop scalable methods for particle systems. In an interdisciplinary, holistic approach,

  1. vectorized kernels and auto-tuning-based, system-independent multi- and many-core algorithms at the intra-node level,
  2. novel dynamic load balancing approaches, innovative zonal methods for optimal strong scaling, and improved inter-GPU communication at the inter-node level, as well as
  3. adaptive methods for particle representation, so-called adaptive resolution schemes,

shall be explored to establish a scalable three-body-potential-based particle simulation on exascale systems. The evolving prototypes will be demonstrated on examples from thermodynamics and process engineering.
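
To make the notion of three-body interactions concrete, the sketch below evaluates the Axilrod-Teller-Muto potential, a commonly used three-body dispersion potential, with a naive triple loop over all particle triplets. Exactly this kind of O(N³) kernel is what the vectorization, auto-tuning and load-balancing efforts target at scale; the implementation here is only a reference illustration.

    # Naive O(N^3) evaluation of the Axilrod-Teller-Muto three-body potential:
    # E_ijk = nu * (1 + 3*cos(g_i)*cos(g_j)*cos(g_k)) / (r_ij * r_jk * r_ki)^3,
    # where g_i is the interior angle of the triangle (i, j, k) at particle i.
    import itertools
    import numpy as np

    def axilrod_teller_energy(positions: np.ndarray, nu: float = 1.0) -> float:
        energy = 0.0
        for i, j, k in itertools.combinations(range(len(positions)), 3):
            rij = positions[j] - positions[i]
            rik = positions[k] - positions[i]
            rjk = positions[k] - positions[j]
            dij, dik, djk = np.linalg.norm(rij), np.linalg.norm(rik), np.linalg.norm(rjk)
            cos_i = np.dot(rij, rik) / (dij * dik)
            cos_j = np.dot(-rij, rjk) / (dij * djk)
            cos_k = np.dot(rik, rjk) / (dik * djk)
            energy += nu * (1.0 + 3.0 * cos_i * cos_j * cos_k) / (dij * djk * dik) ** 3
        return energy

    positions = np.random.default_rng(42).random((6, 3))  # six particles in a unit box
    print(axilrod_teller_energy(positions))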

Finished Projects

TaLPas: Task-based Load Balancing and Auto-Tuning in Particle Simulations (BMBF; 2016-2020)

The main goal of TaLPas is to provide a solution for the fast and robust simulation of many, potentially dependent, particle systems in a distributed environment. This is required in many applications, including, but not limited to,

  • sampling in molecular dynamics: so-called “rare events”, e.g. droplet formation, require a multitude of molecular dynamics simulations to investigate the actual conditions of phase transition,
  • uncertainty quantification: various simulations are performed using different parametrizations to investigate the sensitivity of the solution with respect to the parameters,
  • parameter identification: given, e.g., a set of experimental data and a molecular model, an optimal set of model parameters needs to be found to fit the model to the experiment.

For this purpose, TaLPas targets

  • the development of innovative auto-tuning based particle simulation software in the form of an open-source library to leverage optimal node-level performance (a minimal sketch of the auto-tuning idea follows below). This will guarantee an optimal time-to-solution for small- to mid-sized particle simulations,
  • the development of a scalable task scheduler to yield an optimal distribution of potentially dependent simulation tasks on available HPC compute resources,
  • the combination of the auto-tuning based particle simulation and the scalable task scheduler, augmented by an approach to resilience. This will guarantee robust, that is fault-tolerant, sampling evaluations on peta- and future exascale platforms.
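
The auto-tuning idea of the first item can be sketched in a few lines: time a set of candidate kernel configurations on the actual particle system and keep the fastest one. The snippet below is a simplified, self-contained illustration, not the interface of any particular library.

    # Minimal auto-tuning loop: measure each candidate configuration (e.g., a combination
    # of cell traversal, data layout and vectorization variant) and keep the fastest.
    import time

    def tune(candidate_configs, run_iteration, samples_per_config=3):
        best_config, best_time = None, float("inf")
        for config in candidate_configs:
            start = time.perf_counter()
            for _ in range(samples_per_config):
                run_iteration(config)  # one force-computation iteration with this config
            elapsed = (time.perf_counter() - start) / samples_per_config
            if elapsed < best_time:
                best_config, best_time = config, elapsed
        return best_config

    # Toy usage with dummy configuration names and a dummy iteration.
    print(tune(["linked-cells/AoS", "linked-cells/SoA", "verlet-lists/SoA"],
               run_iteration=lambda cfg: sum(i * i for i in range(50_000))))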

Resilience and Dynamic Noise Reduction at Exascale for Multiscale Simulation Coupling (IFF; 2020-2022)

In this project, we investigate how to control errors in molecular-continuum flow simulations, that is

  • physical errors due to thermal fluctuations,
  • errors due to failing hardware/OS.

The to-be-developed methodology will be incorporated into the Macro-Micro-Coupling Tool (MaMiCo) and will form the foundation for further research at the interface of data science and high performance computing for multiscale simulation.
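
Regarding the physical errors, the principle of a noise filter can be illustrated with the simplest conceivable variant, a moving average over the last few coupling cycles. MaMiCo’s actual filters are more elaborate; the data shapes and values below are purely illustrative.

    # Simplest conceivable noise filter for coupled MD data: a moving average over the
    # last few coupling cycles.
    from collections import deque
    import numpy as np

    class MovingAverageFilter:
        def __init__(self, window: int):
            self._history = deque(maxlen=window)

        def __call__(self, noisy_cell_data: np.ndarray) -> np.ndarray:
            self._history.append(noisy_cell_data)
            return np.mean(self._history, axis=0)

    # Usage: smooth the instance-averaged MD velocities sampled in each coupling cycle.
    flt = MovingAverageFilter(window=10)
    rng = np.random.default_rng(0)
    for cycle in range(50):
        noisy_velocities = 1.0 + 0.1 * rng.normal(size=(8, 8, 8, 3))  # fluctuating MD data
        smoothed = flt(noisy_velocities)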
