0. Introduction
In this lesson, Prof. Lee introduces a field that is completely new to me: explainable deep learning. In these notes I mainly want to introduce one particular method from this field, known as LIME (Local Interpretable Model-agnostic Explanations).
1. LIME
The main idea of LIME is to use a model that is easy to understand, that is, interpretable, to explain the neural network we actually use, which by itself is essentially a black box. We want the output of the interpretable model, usually a linear model, to be as close as possible to the output of the neural network. Doing this over the whole input space is hopeless, however, because a linear model does not have that much expressive power.
But if we focus on only a small region of the data, a linear model can resemble the neural network's behavior there. Take the following figure as an illustration: the linear model cannot fit the whole blue curve, but it fits the orange points, which lie in a small neighborhood, quite well.
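To make this concrete, here is a minimal one-dimensional sketch of the idea; the `black_box` function, the point `x0`, and the sampling scheme are illustrative choices of mine, not details from the lesson.

```python
import numpy as np

def black_box(x):
    # Stand-in for an uninterpretable model such as a neural network.
    return np.sin(3 * x) + 0.5 * x

x0 = 1.0                                # the point we want to explain
xs = x0 + 0.1 * np.random.randn(200)    # sample only a small neighborhood of x0
ys = black_box(xs)

# Fit y ~ w*x + b by least squares on the local samples alone.
w, b = np.polyfit(xs, ys, deg=1)
print(f"local surrogate near x0: y = {w:.3f}*x + {b:.3f}")
```

Globally this line would be a terrible fit, but near x0 it tracks the black box closely, and its slope w is something we can actually read and understand.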
So, for our targets, images, how does LIME work? First, we divide the complete image into different segments (conveniently, Python libraries such as scikit-image provide functions for this). Then we randomly delete some segments and feed each incomplete image into our black box (the neural network) to predict the probability of its being a tree frog.
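Below is a rough sketch of these two steps, assuming scikit-image's `slic` for segmentation; `black_box_prob` is a hypothetical stand-in for the real network, and a random array replaces the actual photo.

```python
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(0)

def black_box_prob(images):
    # Hypothetical stand-in for the network's "tree frog" probability.
    return rng.random(len(images))

image = rng.random((64, 64, 3))              # stand-in for the real photo
segments = slic(image, n_segments=20, start_label=0)

# Each row of `masks` records which segments are kept (1) or deleted (0).
masks = rng.integers(0, 2, size=(100, segments.max() + 1))
perturbed = []
for mask in masks:
    img = image.copy()
    img[~np.isin(segments, np.flatnonzero(mask))] = 0  # black out deleted segments
    perturbed.append(img)
probs = black_box_prob(perturbed)            # P(tree frog) for each incomplete image
```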
Next, we present the same incomplete images to our linear model and train it so that its outputs match the black box's as closely as possible. The input to the linear model is a binary vector recording which segments survived: x_m = 1 if segment m is kept and x_m = 0 if it was deleted, so the surrogate has the form y = w_0 + w_1*x_1 + ... + w_M*x_M, with one weight per segment. The input of the linear model is shown in the following ppt.
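Continuing the previous snippet (it reuses `masks` and `probs` from there), here is a sketch of the fitting step; the choice of ridge regression is my assumption, and any linear regressor would illustrate the point equally well.

```python
from sklearn.linear_model import Ridge

# masks: 0/1 segment-indicator vectors; probs: black-box outputs for them.
surrogate = Ridge(alpha=1.0)
surrogate.fit(masks, probs)
weights = surrogate.coef_      # one learned weight per segment
```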
The trained linear model is interpretable: a large positive weight means the corresponding segment pushes the prediction toward "tree frog", a weight near zero means that segment hardly matters, and a negative weight means the segment argues against the class. The result is shown in the following ppt.
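As a final sketch (again reusing `weights`, `segments`, and `image` from above), one common way to visualize the explanation is to keep only the most influential segments; the cutoff of four segments is an arbitrary choice of mine.

```python
import numpy as np

top = np.argsort(weights)[-4:]              # ids of the 4 highest-weight segments
highlight = np.isin(segments, top)          # boolean mask over pixels
explanation = image * highlight[..., None]  # black out everything else
```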