Upcoming Events
2017-12-09 Workshop on "Interpreting, Explaining and Visualizing Deep Learning" at NIPS 2017 in Long Beach, CA
Past Events
2017-12-04 Tutorial on "Understanding Deep Neural Networks and their Predictions" at WIFS 2017 in Rennes, France
2017-03-05 Tutorial on "Methods for Interpreting and Understanding Deep Neural Networks" at ICASSP 2017 in New Orleans, USA
2016-09-26 The LRP Toolbox presented at the ICIP Visual Technology Showcase in Phoenix, Arizona

This webpage gathers publications and software produced as part of a joint project of Fraunhofer HHI, TU Berlin and SUTD Singapore on developing new methods to understand the nonlinear predictions of state-of-the-art machine learning models.

Machine learning models, in particular deep neural networks (DNNs), are characterized by very high predictive power, but in many cases they are not easily interpretable by a human. Interpreting a nonlinear classifier is important to gain trust in its predictions and to identify potential data selection biases or artefacts.

The project studies in particular techniques that decompose the prediction into contributions of individual input variables, such that the resulting decomposition (i.e. the explanation) can be visualized in the same way as the input data.
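In symbols, using generic notation assumed here for illustration: for a prediction f(x), such a decomposition assigns each input variable x_p (e.g. each pixel) a relevance score R_p that sums, approximately, to the prediction itself:

$$ f(x) \approx \sum_p R_p $$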

Interactive LRP Demos

Draw a handwritten digit and see the heatmap form in real time, or create your own heatmaps for natural images or text. These demos are based on the LRP technique by Bach et al. (2015).


MNIST: A simple LRP demo based on a neural network that predicts handwritten digits and was trained using the MNIST data set.

Caffe: A more complex LRP demo based on a neural network implemented using Caffe. The neural network predicts the contents of the picture.

Text: An LRP demo that explains classification on natural language. The neural network predicts the type of document.

How and Why LRP?

LRP is a method that identifies important pixels by running a backward pass through the neural network. The backward pass is a conservative relevance redistribution procedure, in which neurons that contribute most to a higher-layer neuron receive the most relevance from it. The LRP procedure is shown graphically in the figure below.
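As an illustration, in generic notation (assumed here for exposition, not taken from the figure): if a_j denotes the activation of neuron j and w_{jk} the weight connecting it to neuron k in the layer above, the basic redistribution rule of Bach et al. (2015) reads

$$ R_j = \sum_k \frac{a_j w_{jk}}{\sum_{j'} a_{j'} w_{j'k}} R_k $$

which conserves relevance between layers, i.e. \sum_j R_j = \sum_k R_k.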

The method can be easily implemented in most programming languages and integrated into existing neural network frameworks. When applied to deep ReLU networks, LRP can be understood as a Deep Taylor Decomposition of the prediction function.
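To illustrate how compact such an implementation can be, below is a minimal sketch of the epsilon-stabilized variant of the rule above, applied to a small fully-connected ReLU network in NumPy. The architecture, the random weights and the function names are illustrative assumptions, not the LRP Toolbox API:

import numpy as np

# Minimal LRP-epsilon sketch for a small fully-connected ReLU network.
# The layer sizes, random weights and epsilon value are illustrative
# assumptions; this is not the LRP Toolbox API.

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 8)), rng.standard_normal(8)),
          (rng.standard_normal((8, 3)), rng.standard_normal(3))]

def forward(x):
    # Forward pass, storing the input activations of every layer.
    activations = [x]
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:            # ReLU on hidden layers only
            x = np.maximum(0.0, x)
        activations.append(x)
    return activations

def lrp_epsilon(activations, relevance, eps=1e-6):
    # Backward pass: each neuron redistributes its relevance to its inputs
    # in proportion to their contribution a_j * w_jk to its pre-activation.
    # The epsilon term only stabilizes the division, so relevance is
    # approximately conserved from layer to layer.
    for (W, b), a in zip(reversed(layers), reversed(activations[:-1])):
        z = a @ W + b                               # pre-activations
        z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilizer
        s = relevance / z
        relevance = a * (s @ W.T)                   # redistribute downward
    return relevance

x = rng.standard_normal(4)
acts = forward(x)
R_out = np.zeros(3)
k = int(np.argmax(acts[-1]))
R_out[k] = acts[-1][k]        # explain the top-scoring output neuron
R_in = lrp_epsilon(acts, R_out)
print("input relevances:", R_in, "sum:", R_in.sum())

Because each layer only redistributes the relevance it received, the printed input relevances sum approximately to the explained output score.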

Software

Publications

Preprints

Journal Publications

Conference Publications

Workshop Papers


Downloads

BVLC Model Zoo Contributions