This webpage gathers publications and software produced as part of a joint project of Fraunhofer HHI, TU Berlin, and SUTD Singapore on developing new methods to understand nonlinear predictions of state-of-the-art machine learning models.

Machine learning models usually have very high predictive power but, in many cases, are not easily interpretable by a human. Interpreting a nonlinear classifier is important to gain trust in its predictions and to identify potential data selection biases or artefacts.

The project studies in particular techniques that decompose the prediction in terms of contributions of individual input variables, such that the resulting decomposition (i.e. the explanation) can be visualized in the same way as the input data. These visualizations are called "heatmaps".
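One such decomposition technique developed in this project is layer-wise relevance propagation (LRP). The sketch below illustrates the idea on a tiny fully-connected ReLU network with random weights, using the epsilon stabilizer; the network, shapes, and helper name are illustrative, not the project's toolbox API.

```python
import numpy as np

# Minimal LRP sketch (epsilon rule) for a small fully-connected ReLU net.
# Weights are random; this only illustrates the backward relevance pass.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 6))   # input -> hidden
W2 = rng.standard_normal((6, 3))   # hidden -> output
x = rng.standard_normal(4)

# Forward pass
a1 = np.maximum(0, x @ W1)
out = a1 @ W2

# Start from the score of the predicted class
c = out.argmax()
R2 = np.zeros_like(out)
R2[c] = out[c]

def lrp_linear(a, W, R, eps=1e-6):
    """Redistribute relevance R from a linear layer's output to its input a."""
    z = a @ W                         # pre-activations
    s = R / (z + eps * np.sign(z))    # stabilized output relevance / activation
    return a * (W @ s)                # contribution of each input unit

R1 = lrp_linear(a1, W2, R2)
R0 = lrp_linear(x, W1, R1)   # per-input-variable relevance (the "heatmap")
```

The key property is conservation: the relevance assigned to the input variables in `R0` sums (up to the small stabilizer term) to the output score `out[c]` being explained.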

- **2017-12-04** Tutorial on "Understanding Deep Neural Networks and their Predictions" at WIFS 2017 in Rennes, France
- **2017-09-12** Tutorial on "Interpretable Machine Learning" at GCPR 2017 in Basel, Switzerland

- **2017-03-20** Demonstration at CeBIT 2017 in Hannover, Germany
- **2017-03-05** Tutorial at ICASSP 2017 on "Methods for Interpreting and Understanding Deep Neural Networks" in New Orleans, USA
- **2016-12-09** Presentation at the NIPS 2016 Workshop on Interpretable ML for Complex Systems in Barcelona, Spain
- **2016-11-24** ACCV 2016 Workshop on Interpretation and Visualization of Deep Neural Nets in Taipei, Taiwan
- **2016-09-26** The LRP Toolbox presented at the ICIP Visual Technology Showcase in Phoenix, Arizona
- **2016-09-06** ICANN 2016 Workshop on Machine Learning and Interpretability in Barcelona, Spain

A heatmap is a visualization of an input example (e.g. an image) that highlights which features a trained machine learning model (e.g. a deep neural network) considers important for its classification decision. An example of an image classified as "matches" by the GoogleNet neural network is shown here:

| Input image | Heatmap |

Check out our heatmap gallery for more examples. Heatmaps can also be produced for text, or scientific data such as EEG.
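As a rough illustration of how such a visualization can be rendered, the sketch below maps a 2-D relevance array to colors, red for positive evidence and blue for negative; this particular colormap is an assumption for illustration, not necessarily the one used in the gallery.

```python
import numpy as np

def relevance_to_rgb(R):
    """Map a 2-D relevance array to an (H, W, 3) uint8 heatmap image:
    positive relevance -> red, negative -> blue, zero -> white."""
    v = R / (np.abs(R).max() + 1e-12)     # normalize to [-1, 1]
    rgb = np.ones(R.shape + (3,))         # start from white
    rgb[..., 1] -= np.abs(v)              # remove green everywhere
    rgb[..., 2] -= np.clip(v, 0, None)    # positive relevance removes blue
    rgb[..., 0] -= np.clip(-v, 0, None)   # negative relevance removes red
    return (255 * rgb).astype(np.uint8)

# Toy relevance map: strong positive, moderate negative, neutral, weak positive
R = np.array([[1.0, -0.5],
              [0.0, 0.25]])
img = relevance_to_rgb(R)
```

The resulting array can be saved or displayed with any image library; the maximally relevant pixel comes out pure red and irrelevant pixels stay white.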

Draw a handwritten digit and see the heatmap form in real time. Create your own heatmap for natural images or text.

- A Tutorial for Implementing Layer-Wise Relevance Propagation
- GitHub project page for the LRP Toolbox
- TensorFlow LRP Wrapper

- Slides from our ICASSP 2017 tutorial on interpreting deep neural networks [part1] [part2] [part3]
- A Quick Introduction to Deep Taylor Decomposition

- L Arras, F Horn, G Montavon, KR Müller, W Samek. "What is Relevant in a Text Document?": An Interpretable Machine Learning Approach. arXiv, 2016 [preprint, bibtex]

- G Montavon, S Lapuschkin, A Binder, W Samek, KR Müller. Explaining NonLinear Classification Decisions with Deep Taylor Decomposition. Pattern Recognition, 65:211–222, 2017 [preprint, bibtex]
- I Sturm, S Bach, W Samek, KR Müller. Interpretable Deep Neural Networks for Single-Trial EEG Classification. Journal of Neuroscience Methods, 274:141–145, 2016 [preprint, bibtex]
- W Samek, A Binder, G Montavon, S Bach, KR Müller. Evaluating the Visualization of What a Deep Neural Network has Learned. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2016 [preprint, bibtex]
- S Lapuschkin, A Binder, G Montavon, KR Müller, W Samek. The Layer-wise Relevance Propagation Toolbox for Artificial Neural Networks. Journal of Machine Learning Research (JMLR), 17(114):1−5, 2016 [preprint, bibtex]
- S Bach, A Binder, G Montavon, F Klauschen, KR Müller, W Samek. On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation. PLOS ONE, 10(7):e0130140, 2015 [preprint, bibtex]

- V Srinivasan, S Lapuschkin, C Hellge, KR Müller, W Samek. Interpretable Human Action Recognition in Compressed Domain. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017 [preprint, bibtex]
- S Lapuschkin, A Binder, G Montavon, KR Müller, W Samek. Analyzing Classifiers: Fisher Vectors and Deep Neural Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2912-2920, 2016 [preprint, bibtex]
- F Arbabzadah, G Montavon, KR Müller, W Samek. Identifying Individual Facial Expressions by Deconstructing a Neural Network. Pattern Recognition - 38th German Conference, GCPR 2016, Lecture Notes in Computer Science, 9796:344-354, 2016 [preprint, bibtex]
- A Binder, G Montavon, S Lapuschkin, KR Müller, W Samek. Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers. Artificial Neural Networks and Machine Learning – ICANN 2016, Part II, Lecture Notes in Computer Science, Springer-Verlag, 9887:63-71, 2016 [preprint, bibtex]
- S Bach, A Binder, KR Müller, W Samek. Controlling Explanatory Heatmap Resolution and Semantics via Decomposition Depth. Proceedings of the IEEE International Conference on Image Processing (ICIP), 2271-75, 2016 [preprint, bibtex]
- A Binder, S Bach, G Montavon, KR Müller, W Samek. Layer-wise Relevance Propagation for Deep Neural Network Architectures. Proceedings of the 7th International Conference on Information Science and Applications (ICISA), 6679:913-22, Springer Singapore, 2016 [preprint, bibtex]

- W Samek, G Montavon, A Binder, S Lapuschkin, KR Müller. Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation. Interpretable ML for Complex Systems NIPS 2016 Workshop, 2016 [preprint, bibtex]
- L Arras, F Horn, G Montavon, KR Müller, W Samek. Explaining Predictions of Non-Linear Classifiers in NLP. ACL Workshop on Representation Learning for NLP, 1-7, 2016 [preprint, bibtex]
- G Montavon, S Bach, A Binder, W Samek, KR Müller. Deep Taylor Decomposition of Neural Networks. ICML Workshop on Visualization for Deep Learning, 2016 [preprint, bibtex]
- A Binder, W Samek, G Montavon, S Bach, KR Müller. Analyzing and Validating Neural Networks Predictions. ICML Workshop on Visualization for Deep Learning, 2016 [preprint, bibtex]

- Pascal VOC 2012 Multilabel Model: [caffemodel] [prototxt]