Research on the intersection of Machine Learning, High-Performance Computing and Hardware

The Computing Systems Group (CSG) at the Institute of Computer Engineering (ZITI) at Ruprecht-Karls University of Heidelberg focuses on vertically integrated research (thus considering the complete computing system) that bridges demanding applications, such as deep neural networks (DNNs), high-performance computing (HPC) and data analytics (HPDA), with various forms of specialized computer hardware.

[Group photo: ZITI in Neuenheimer Feld 368]

Today, research in computing systems is most concerned with specialized forms of computing in combination with seamless integration into existing systems. Specialized computing, for instance based on GPUs (as known from gaming), FPGAs (field-programmable gate arrays) or ASICs (not the shoe brand but “application-specific integrated circuits”), is motivated by diminishing returns from CMOS technology scaling and hard power constraints. Notably, for a given fixed power budget, energy efficiency defines performance:

performance [Ops/s] = power [W] × energy efficiency [Ops/J]
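This relation (performance = power × energy efficiency, i.e. Ops/s = W × Ops/J) can be sketched in a few lines; the function name and the numbers below are purely illustrative assumptions, not measured values for any particular device:

```python
def attainable_performance(power_budget_w: float, efficiency_ops_per_j: float) -> float:
    """Peak attainable performance in Ops/s under a fixed power budget.

    With the power budget fixed, performance scales linearly with
    energy efficiency: Ops/s = W * Ops/J.
    """
    return power_budget_w * efficiency_ops_per_j

# Hypothetical accelerator: 300 W power budget at 50 GOps/J
perf = attainable_performance(300.0, 50e9)
print(f"{perf:.2e} Ops/s")  # prints 1.50e+13 Ops/s, i.e. 15 TOps/s
```

The sketch makes the key consequence explicit: at a fixed power budget, the only way to gain performance is to improve energy efficiency, which motivates architectural specialization.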

As energy efficiency is usually improved by specialized architectures (processor, memory, network), our research aims to bring emerging technologies and architectures to demanding applications.

Particular research fields include

  • Embedded Machine Learning, which includes bringing state-of-the-art DNNs to resource-constrained embedded devices, as well as embedding DNNs in the real world, requiring a treatment of uncertainty
  • Advanced hardware architectures and technologies, covering specialized forms such as GPU and FPGA accelerators, analog electrical and photonic processors, as well as resistive memory

To close the semantic gap between demanding applications and the various specializations of hardware, we are most concerned with creating abstractions, models, and associated tools that facilitate reasoning about optimizations and design decisions. Overall, this results in vertically integrated approaches to fast and efficient ML, HPC, and HPDA.

We gratefully acknowledge the generous sponsorship we receive. Current and recent sponsors include DFG, Carl-Zeiss Stiftung, FWF, SAP, Helmholtz, BMBF, NVIDIA, and XILINX.

On this website you will find information about our team members, research projects, publications, teaching and tools. For administrative questions, please contact Andrea Seeger; for questions about research and teaching, Holger Fröning.

Latest news

Public talk by Prof. Dr. Grace Li Zhang, TU Darmstadt

ZITI is very happy to welcome Prof. Dr. Grace Li Zhang from TU Darmstadt for a public talk on “Efficient Hardware for Neural Networks”. The talk will take place July 29, 16:00 in the ZITI lecture room (INF350, room U014).

Invited talk at National Supercomputing Center in Shenzhen, on “On Accelerating Deep and Bayesian Neural Architectures”!

Also seeing the well-known Nebulae supercomputer, previously ranked number 2 in the TOP500 list and apparently still (with some upgrades) in operation!

ECML article on Walking Noise accepted for publication!

Conference article on “Walking Noise: On Layer-Specific Robustness of Neural Architectures against Noisy Computations and Associated Characteristic Learning Dynamics” accepted for publication at European Conference on Machine Learning in Vilnius! Preprint: link

Invited talk at Tsinghua University, Beijing on “On Accelerating Deep and Bayesian Neural Architectures”, sponsored by Yu Wang!

JMLR article on “Resource-Efficient Neural Networks for Embedded Systems” accepted for publication!

Journal contribution on “Resource-Efficient Neural Networks for Embedded Systems” accepted at the Journal of Machine Learning Research, jointly with colleagues from Graz University of Technology! Read more: link or link

Older news can be found in the News Archive.