4th Workshop on Embedded Machine Learning - WEML2023
Co-organizers
- Holger Fröning, ZITI, Heidelberg University, Germany (holger.froening (at) ziti.uni-heidelberg.de)
- Gregor Schiele, University of Duisburg-Essen, Germany (gregor.schiele (at) uni-due.de)
- Franz Pernkopf, Graz University of Technology, Austria (pernkopf (at) tugraz.at)
- Manfred Mücke, Materials Center Leoben GmbH, Leoben, Austria (Manfred.Muecke (at) mcl.at)
Overview
The workshop series on embedded machine learning (WEML) is jointly organized by Heidelberg University, the University of Duisburg-Essen, Graz University of Technology, and the Materials Center Leoben, and reflects our shared interest in bringing complex machine learning models and methods to resource-constrained devices such as edge, embedded, and IoT devices. The workshop is rather informal, without proceedings, and is organized around a set of invited talks on topics related to this interest.
Topics of interest include:
- Compression of neural networks for inference deployment, including quantization (and binarization), pruning, knowledge distillation, structural efficiency, and neural architecture search
- Hardware support for novel ML architectures beyond CNNs, e.g., transformer models
- Tractable models beyond neural networks
- Learning on edge devices, including federated and continuous learning
- Trade-offs among prediction quality (accuracy), efficiency of representation (model parameters, data types for arithmetic operations, and memory footprint in general), and computational efficiency (complexity of computations)
- Automatic code generation from high-level descriptions, including linear algebra and stencil codes, targeting existing and future instruction set extensions
- New and emerging applications that require ML on resource-constrained hardware
- Security/privacy of embedded ML
- New benchmarks suited to edge and embedded devices
In this regard, the workshop aims to bring together experts from various domains, from both academia and industry, to stimulate discussions on recent advances in this area.
Schedule
Registration opens 09:00-ish
- 09:15 - 09:30 Workshop opening [slides]
Session 1: Model Architectures
- 09:30 - 10:15 Martin Andraud et al. (Aalto University), Energy-efficient probabilistic edge AI [slides]
- 10:15 - 10:45 Alexander Fuchs (TU Graz), Physics-Constrained Neural Networks [slides]
Coffee break
Session 2: Model Efficiency
- 11:15 - 12:00 Mark Deutel, Frank Hannig, and Jürgen Teich (FAU Erlangen-Nürnberg), Multi-Objective Bayesian Optimization of Deep Neural Networks for Deployment on Microcontrollers [slides]
- 12:00 - 12:30 Bernhard Klein (Heidelberg University), Galen: HW-specific Automatic Compression [slides]
Lunch break
Session 3: Model Embedding & Applications
- 13:30 - 14:15 Jose Cano (University of Glasgow), Moving Deep Learning to the Edge [slides]
- 14:15 - 14:45 Andreas Erbslöh (University of Duisburg-Essen), Sp:AI:ke - Deep Learning Support for Next Generation Medical Neuro-Implants [slides]
- 14:45 - 15:15 Christian Oswald (TU Graz), Neural Networks for Automotive Radar Denoising [slides]
Coffee break
Session 4: Model Sparsity
- 15:45 - 16:30 Heiko Schick (HiSilicon), Huawei Ascend AI architecture and acceleration for sparse matrix-matrix multiplication [slides]
- 16:30 - 17:00 Zeqi Zhu (GrAI Matter Labs), Inducing activation sparsity for fast and energy-efficient neural network inference [slides]
- 17:00 - 17:15 Closing remarks