This page shows heatmaps produced by relevance propagation for images classified by the GoogleNet neural network, and for images classified and filtered by a network trained on CIFAR-10.

Heatmapping GoogleNet

Below are some heatmaps obtained by decomposing the predictions of the GoogleNet deep neural network for a selection of images. The network has excellent prediction performance (6.67% error) on the top-5 task of the ILSVRC2014 classification challenge. From left to right, we show the image given as input to the neural network, the heatmap (with color-coded values) resulting from applying deep Taylor decomposition, and the filtered image, where pixels that the heatmap marks as irrelevant are set to gray. These filtered images can be used to verify that the objects to be detected have been preserved and that most of the background has been removed.

Input image Heatmap Filtered image
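The filtering step described above can be sketched as follows. This is a minimal illustration, not the original implementation: the fraction of pixels kept and the gray value are illustrative choices.

```python
import numpy as np

def filter_image(image, heatmap, keep_fraction=0.2, gray=0.5):
    """Set the least relevant pixels (per the heatmap) to gray.

    image:   H x W x 3 array with values in [0, 1]
    heatmap: H x W array of non-negative relevance scores
    keep_fraction and gray are illustrative, not taken from the
    original experiments.
    """
    threshold = np.quantile(heatmap, 1.0 - keep_fraction)
    mask = heatmap >= threshold           # pixels deemed relevant
    filtered = np.full_like(image, gray)  # start from a gray canvas
    filtered[mask] = image[mask]          # copy back the relevant pixels
    return filtered
```

Varying `keep_fraction` trades off how much of the object is retained against how much background is removed.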

Heatmapping CIFAR-10

In these experiments, we train a CIFAR-10 network to a reasonable accuracy (approx. 75%), ensuring that the network strictly adheres to the requirements of the deep Taylor decomposition method. In particular, we constrain biases to be negative and use sum-pooling instead of max-pooling.

Input image Heatmap Filtered image

For images with detected class-relevant structure, the heatmap highlights the relevant portions of the image, such as the object to be classified or parts of it. The filtered images retain only this class-relevant structure.

Using larger inputs

Here, we apply the convnet to larger images, which yields label maps as output that can also be redistributed onto the pixels using deep Taylor decomposition. We perform the decomposition of the label maps for the classes "horse" and "ship" respectively. The resulting heatmaps clearly identify the relevant structure to be detected. Interestingly, they focus on the contour of the object and on regions of interest such as the legs and head of the horses, while ignoring most of the surface of the objects to be detected.

Input image Heatmap Filtered image
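A label map as described above can be obtained by applying a fixed-size classifier convolutionally across the larger image. The sketch below assumes a hypothetical scoring function `score_fn` that maps one window to a class score; the window and stride values are illustrative.

```python
import numpy as np

def label_map(image, score_fn, window=32, stride=8):
    """Slide a fixed-size classifier over a larger image to obtain a
    spatial map of class scores. Summing this map for one class gives
    the quantity that deep Taylor decomposition redistributes to pixels.
    """
    H, W = image.shape[:2]
    rows = (H - window) // stride + 1
    cols = (W - window) // stride + 1
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = image[i * stride:i * stride + window,
                          j * stride:j * stride + window]
            out[i, j] = score_fn(patch)  # classifier score for this window
    return out
```

In practice the same effect is achieved more efficiently by running the convolutional layers once over the whole image, but the sliding-window view makes the resulting label map easy to picture.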

Decomposition vs. sensitivity

Here, we compare heatmaps of the same image produced by sensitivity analysis and by relevance propagation. Relevance propagation clearly focuses better on the relevant structure in the image and assigns the right amount of heat to each pixel.
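The difference between the two approaches can be seen already on a toy linear model (an illustration, not the experiment shown above): sensitivity analysis scores each pixel by its squared gradient, which ignores the actual input values, whereas relevance decomposition distributes the prediction score itself over the inputs.

```python
import numpy as np

# Toy linear model f(x) = w . x with non-negative inputs.
w = np.array([1.0, 1.0, 0.0])
x = np.array([2.0, 0.0, 5.0])  # only the first feature carries evidence

# Sensitivity analysis: squared gradient. The second feature gets heat
# even though it is zero in this input.
sensitivity = w ** 2           # [1, 1, 0]

# Relevance decomposition: shares of the prediction f(x) itself.
# Only the first feature is heated, and the heat sums to f(x) = 2.
relevance = x * w              # [2, 0, 0]
```

This conservation property ("the right amount of heat") is what distinguishes decomposition-based heatmaps from purely gradient-based ones.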