TY - JOUR
AB - While FPGA accelerator boards and their respective high-level design tools are maturing, there is still a lack of multi-FPGA applications, libraries, and not least, benchmarks and reference implementations towards sustained HPC usage of these devices. As in the early days of GPUs in HPC, for workloads that can reasonably be decoupled into loosely coupled working sets, multi-accelerator support can be achieved by using standard communication interfaces like MPI on the host side. However, for performance and productivity, some applications can profit from a tighter coupling of the accelerators. FPGAs offer unique opportunities here when extending the dataflow characteristics to their communication interfaces.
In this work, we extend the HPCC FPGA benchmark suite with multi-FPGA support and three previously missing benchmarks that particularly characterize or stress inter-device communication: b_eff, PTRANS, and LINPACK. With all benchmarks implemented for current boards with Intel and Xilinx FPGAs, we establish a baseline for multi-FPGA performance. Additionally, for the communication-centric benchmarks, we explore the potential of direct FPGA-to-FPGA communication with a circuit-switched inter-FPGA network that is currently only available for one of the boards. The evaluation with parallel execution on up to 26 FPGA boards makes use of one of the largest academic FPGA installations.
AU - Meyer, Marius
AU - Kenter, Tobias
AU - Plessl, Christian
ID - 38041
JF - ACM Transactions on Reconfigurable Technology and Systems
KW - General Computer Science
SN - 1936-7406
TI - Multi-FPGA Designs and Scaling of HPC Challenge Benchmarks via MPI and Circuit-Switched Inter-FPGA Networks
ER -
TY - CHAP
AU - Hansmeier, Tim
AU - Kenter, Tobias
AU - Meyer, Marius
AU - Riebler, Heinrich
AU - Platzner, Marco
AU - Plessl, Christian
ED - Haake, Claus-Jochen
ED - Meyer auf der Heide, Friedhelm
ED - Platzner, Marco
ED - Wachsmuth, Henning
ED - Wehrheim, Heike
ID - 45893
T2 - On-The-Fly Computing -- Individualized IT-services in dynamic markets
TI - Compute Centers I: Heterogeneous Execution Environments
VL - 412
ER -
TY - CONF
AU - Opdenhövel, Jan-Oliver
AU - Plessl, Christian
AU - Kenter, Tobias
ID - 46190
T2 - Proceedings of the 13th International Symposium on Highly Efficient Accelerators and Reconfigurable Technologies
TI - Mutation Tree Reconstruction of Tumor Cells on FPGAs Using a Bit-Level Matrix Representation
ER -
TY - CONF
AU - Faj, Jennifer
AU - Kenter, Tobias
AU - Faghih-Naini, Sara
AU - Plessl, Christian
AU - Aizinger, Vadym
ID - 46188
T2 - Proceedings of the Platform for Advanced Scientific Computing Conference
TI - Scalable Multi-FPGA Design of a Discontinuous Galerkin Shallow-Water Model on Unstructured Meshes
ER -
TY - CONF
AU - Prouveur, Charles
AU - Haefele, Matthieu
AU - Kenter, Tobias
AU - Voss, Nils
ID - 46189
T2 - Proceedings of the Platform for Advanced Scientific Computing Conference
TI - FPGA Acceleration for HPC Supercapacitor Simulations
ER -
TY - CONF
AB - The computation of electron repulsion integrals (ERIs) over Gaussian-type orbitals (GTOs) is a challenging problem in quantum-mechanics-based atomistic simulations. In practical simulations, several trillions of ERIs may have to be computed for every time step. In this work, we investigate FPGAs as accelerators for the ERI computation. We use template parameters, here within the Intel oneAPI tool flow, to create customized designs for 256 different ERI quartet classes, based on their orbitals. To maximize data reuse, all intermediates are buffered in FPGA on-chip memory with a customized layout. The pre-calculation of intermediates also helps to overcome data dependencies caused by multi-dimensional recurrence relations. The involved loop structures are partially or even fully unrolled for high throughput of the FPGA kernels. Furthermore, a lossy compression algorithm utilizing arbitrary-bitwidth integers is integrated into the FPGA kernels. To the best of our knowledge, this is the first work on ERI computation on FPGAs that supports more than just the single most basic quartet class. Also, the integration of ERI computation and compression is a novelty that is not yet covered by CPU or GPU libraries. Our evaluation shows that, using 16-bit integers for the ERI compression, the fastest FPGA kernels exceed a performance of 10 GERIS ($10 \times 10^9$ ERIs per second) on one Intel Stratix 10 GX 2800 FPGA, with maximum absolute errors around $10^{-7}$ - $10^{-5}$ Hartree. The measured throughput can be accurately explained by a performance model. The FPGA kernels deployed on 2 FPGAs outperform similar computations using the widely used libint reference on a two-socket server with 40 Xeon Gold 6148 CPU cores of the same process technology by factors of up to 6.0x, and on a newer two-socket server with 128 EPYC 7713 CPU cores by up to 1.9x.
AU - Wu, Xin
AU - Kenter, Tobias
AU - Schade, Robert
AU - Kühne, Thomas
AU - Plessl, Christian
ID - 43228
T2 - 2023 IEEE 31st Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM)
TI - Computing and Compressing Electron Repulsion Integrals on FPGAs
ER -
TY - JOUR
AB - The non-orthogonal local submatrix method applied to electronic structure–based molecular dynamics simulations is shown to exceed 1.1 EFLOP/s in FP16/FP32-mixed floating-point arithmetic when using 4400 NVIDIA A100 GPUs of the Perlmutter system. This is enabled by a modification of the original method that pushes the sustained fraction of the peak performance to about 80%. Example calculations are performed for SARS-CoV-2 spike proteins with up to 83 million atoms.
AU - Schade, Robert
AU - Kenter, Tobias
AU - Elgabarty, Hossam
AU - Lass, Michael
AU - Kühne, Thomas
AU - Plessl, Christian
ID - 45361
JF - The International Journal of High Performance Computing Applications
KW - Hardware and Architecture
KW - Theoretical Computer Science
KW - Software
SN - 1094-3420
TI - Breaking the exascale barrier for the electronic structure problem in ab-initio molecular dynamics
ER -
TY - CHAP
AU - Alt, Christoph
AU - Kenter, Tobias
AU - Faghih-Naini, Sara
AU - Faj, Jennifer
AU - Opdenhövel, Jan-Oliver
AU - Plessl, Christian
AU - Aizinger, Vadym
AU - Hönig, Jan
AU - Köstler, Harald
ID - 46191
SN - 0302-9743
T2 - Lecture Notes in Computer Science
TI - Shallow Water DG Simulations on FPGAs: Design and Comparison of a Novel Code Generation Pipeline
ER -
TY - GEN
AB - This preprint claims the computation of the $9^{th}$ Dedekind number. This was done by building an efficient FPGA accelerator for the core operation of the process and parallelizing it on the Noctua 2 supercluster at Paderborn University. The resulting value is 286386577668298411128469151667598498812366. This value can be verified in two steps: we have made the data file containing the 490M results available, each of which can be verified separately on CPU, and the whole file sums to our proposed value.
AU - Van Hirtum, Lennart
AU - De Causmaecker, Patrick
AU - Goemaere, Jens
AU - Kenter, Tobias
AU - Riebler, Heinrich
AU - Lass, Michael
AU - Plessl, Christian
ID - 43439
T2 - arXiv:2304.03039
TI - A computation of D(9) using FPGA Supercomputing
ER -
TY - THES
AU - Lass, Michael
ID - 32414
TI - Bringing Massive Parallelism and Hardware Acceleration to Linear Scaling Density Functional Theory Through Targeted Approximations
ER -
TY - GEN
AB - Electronic structure calculations have been instrumental in providing many important insights into a range of physical and chemical properties of various molecular and solid-state systems. Their importance to various fields, including materials science, chemical sciences, computational chemistry and device physics, is underscored by the large fraction of available public supercomputing resources devoted to these calculations. As we enter the exascale era, exciting new opportunities to increase simulation numbers, sizes, and accuracies present themselves. In order to realize these promises, the community of electronic structure software developers will however first have to tackle a number of challenges pertaining to the efficient use of new architectures that will rely heavily on massive parallelism and hardware accelerators. This roadmap provides a broad overview of the state-of-the-art in electronic structure calculations and of the various new directions being pursued by the community. It covers 14 electronic structure codes, presenting their current status, their development priorities over the next five years, and their plans towards tackling the challenges and leveraging the opportunities presented by the advent of exascale computing.
AU - Gavini, Vikram
AU - Baroni, Stefano
AU - Blum, Volker
AU - Bowler, David R.
AU - Buccheri, Alexander
AU - Chelikowsky, James R.
AU - Das, Sambit
AU - Dawson, William
AU - Delugas, Pietro
AU - Dogan, Mehmet
AU - Draxl, Claudia
AU - Galli, Giulia
AU - Genovese, Luigi
AU - Giannozzi, Paolo
AU - Giantomassi, Matteo
AU - Gonze, Xavier
AU - Govoni, Marco
AU - Gulans, Andris
AU - Gygi, François
AU - Herbert, John M.
AU - Kokott, Sebastian
AU - Kühne, Thomas
AU - Liou, Kai-Hsin
AU - Miyazaki, Tsuyoshi
AU - Motamarri, Phani
AU - Nakata, Ayako
AU - Pask, John E.
AU - Plessl, Christian
AU - Ratcliff, Laura E.
AU - Richard, Ryan M.
AU - Rossi, Mariana
AU - Schade, Robert
AU - Scheffler, Matthias
AU - Schütt, Ole
AU - Suryanarayana, Phanish
AU - Torrent, Marc
AU - Truflandier, Lionel
AU - Windus, Theresa L.
AU - Xu, Qimen
AU - Yu, Victor W. -Z.
AU - Perez, Danny
ID - 33493
T2 - arXiv:2209.12747
TI - Roadmap on Electronic Structure Codes in the Exascale Era
ER -
TY - CONF
AU - Karp, Martin
AU - Podobas, Artur
AU - Kenter, Tobias
AU - Jansson, Niclas
AU - Plessl, Christian
AU - Schlatter, Philipp
AU - Markidis, Stefano
ID - 46193
T2 - International Conference on High Performance Computing in Asia-Pacific Region
TI - A High-Fidelity Flow Solver for Unstructured Meshes on Field-Programmable Gate Arrays: Design, Evaluation, and Future Challenges
ER -
TY - GEN
AB - The CP2K program package, which can be considered the Swiss army knife of atomistic simulations, is presented with a special emphasis on ab-initio molecular dynamics using the second-generation Car-Parrinello method. After outlining current and near-term development efforts with regard to massively parallel low-scaling post-Hartree-Fock and eigenvalue solvers, novel approaches on how we plan to take full advantage of future low-precision hardware architectures are introduced. Our focus here is on combining our submatrix method with the approximate computing paradigm to address the imminent exascale era.
AU - Kühne, Thomas
AU - Plessl, Christian
AU - Schade, Robert
AU - Schütt, Ole
ID - 32404
T2 - arXiv:2205.14741
TI - CP2K on the road to exascale
ER -
TY - JOUR
AB - A parallel hybrid quantum-classical algorithm for the solution of the quantum-chemical ground-state energy problem on gate-based quantum computers is presented. This approach is based on the reduced density-matrix functional theory (RDMFT) formulation of the electronic structure problem. For that purpose, the density-matrix functional of the full system is decomposed into an indirectly coupled sum of density-matrix functionals for all its subsystems using the adaptive cluster approximation to RDMFT. The approximations involved in the decomposition and the adaptive cluster approximation itself can be systematically converged to the exact result. The solutions for the density-matrix functionals of the effective subsystems involve a constrained minimization over many-particle states that are approximated by parametrized trial states on the quantum computer, similar to the variational quantum eigensolver. The independence of the density-matrix functionals of the effective subsystems introduces a new level of parallelization and allows for the computational treatment of much larger molecules on a quantum computer with a given qubit count. In addition, techniques are presented for the proposed algorithm to reduce the qubit count, the number of quantum programs, and their depth. The evaluation of a density-matrix functional as the essential part of our approach is demonstrated for Hubbard-like systems on IBM quantum computers based on superconducting transmon qubits.
AU - Schade, Robert
AU - Bauer, Carsten
AU - Tamoev, Konstantin
AU - Mazur, Lukas
AU - Plessl, Christian
AU - Kühne, Thomas
ID - 33226
JF - Phys. Rev. Research
TI - Parallel quantum chemistry on noisy intermediate-scale quantum computers
VL - 4
ER -
TY - JOUR
AU - Schade, Robert
AU - Kenter, Tobias
AU - Elgabarty, Hossam
AU - Lass, Michael
AU - Schütt, Ole
AU - Lazzaro, Alfio
AU - Pabst, Hans
AU - Mohr, Stephan
AU - Hutter, Jürg
AU - Kühne, Thomas
AU - Plessl, Christian
ID - 33684
JF - Parallel Computing
KW - Artificial Intelligence
KW - Computer Graphics and Computer-Aided Design
KW - Computer Networks and Communications
KW - Hardware and Architecture
KW - Theoretical Computer Science
KW - Software
SN - 0167-8191
TI - Towards electronic structure-based ab-initio molecular dynamics simulations with hundreds of millions of atoms
VL - 111
ER -
TY - JOUR
AU - Meyer, Marius
AU - Kenter, Tobias
AU - Plessl, Christian
ID - 27364
JF - Journal of Parallel and Distributed Computing
SN - 0743-7315
TI - In-depth FPGA Accelerator Performance Evaluation with Single Node Benchmarks from the HPC Challenge Benchmark Suite for Intel and Xilinx FPGAs using OpenCL
ER -
TY - JOUR
AB - N-body methods are one of the essential algorithmic building blocks of high-performance and parallel computing. Previous research has shown promising performance for implementing n-body simulations with pairwise force calculations on FPGAs. However, to avoid challenges with accumulation and memory access patterns, the presented designs calculate each pair of forces twice, along with both force sums of the involved particles. Also, they require large problem instances with hundreds of thousands of particles to reach their respective peak performance, limiting the applicability for strong scaling scenarios. This work addresses both issues by presenting a novel FPGA design that uses each calculated force twice and overlaps data transfers and computations in a way that allows reaching peak performance even for small problem instances, outperforming previous single-precision results even in double precision, and scaling linearly over multiple interconnected FPGAs. For a comparison across architectures, we provide an equally optimized CPU reference, which for large problems actually achieves higher peak performance per device; however, given the strong scaling advantages of the FPGA design, in parallel setups with a few thousand particles per device, the FPGA platform achieves the highest performance and power efficiency.
AU - Menzel, Johannes
AU - Plessl, Christian
AU - Kenter, Tobias
ID - 28099
IS - 1
JF - ACM Transactions on Reconfigurable Technology and Systems
SN - 1936-7406
TI - The Strong Scaling Advantage of FPGAs in HPC for N-body Simulations
VL - 15
ER -
TY - CONF
AU - Kenter, Tobias
AU - Shambhu, Adesh
AU - Faghih-Naini, Sara
AU - Aizinger, Vadym
ID - 46194
T2 - Proceedings of the Platform for Advanced Scientific Computing Conference
TI - Algorithm-hardware co-design of a discontinuous Galerkin shallow-water model for a dataflow architecture on FPGA
ER -
TY - CONF
AU - Karp, Martin
AU - Podobas, Artur
AU - Jansson, Niclas
AU - Kenter, Tobias
AU - Plessl, Christian
AU - Schlatter, Philipp
AU - Markidis, Stefano
ID - 46195
T2 - 2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS)
TI - High-Performance Spectral Element Methods on Field-Programmable Gate Arrays: Implementation, Evaluation, and Future Projection
ER -
TY - CHAP
AB - Solving partial differential equations on unstructured grids is a cornerstone of engineering and scientific computing. Nowadays, heterogeneous parallel platforms with CPUs, GPUs, and FPGAs enable energy-efficient and computationally demanding simulations. We developed the HighPerMeshes C++-embedded Domain-Specific Language (DSL) for bridging the abstraction gap between the mathematical and algorithmic formulation of mesh-based algorithms for PDE problems on the one hand and an increasing number of heterogeneous platforms with their different parallel programming and runtime models on the other hand. Thus, the HighPerMeshes DSL aims at higher productivity in the code development process for multiple target platforms. We introduce the concepts as well as the basic structure of the HighPerMeshes DSL, and demonstrate its usage with three examples, a Poisson and monodomain problem, respectively, solved by the continuous finite element method, and the discontinuous Galerkin method for Maxwell’s equation. The mapping of the abstract algorithmic description onto parallel hardware, including distributed memory compute clusters, is presented. Finally, the achievable performance and scalability are demonstrated for a typical example problem on a multi-core CPU cluster.
AU - Alhaddad, Samer
AU - Förstner, Jens
AU - Groth, Stefan
AU - Grünewald, Daniel
AU - Grynko, Yevgen
AU - Hannig, Frank
AU - Kenter, Tobias
AU - Pfreundt, Franz-Josef
AU - Plessl, Christian
AU - Schotte, Merlind
AU - Steinke, Thomas
AU - Teich, Jürgen
AU - Weiser, Martin
AU - Wende, Florian
ID - 21587
KW - tet_topic_hpc
SN - 0302-9743
T2 - Euro-Par 2020: Parallel Processing Workshops
TI - HighPerMeshes – A Domain-Specific Language for Numerical Algorithms on Unstructured Grids
ER -