Design improvement

In artificial intelligence, neural networks are often described as black boxes. Explainability is therefore a fundamental issue.

Saimple: Detecting anomalies and studying behavior through relevance

Indeed, for an engineer, the results produced by these algorithms cannot be directly explained. This is a real problem, raising security concerns about the reliability of AI and the trust placed in this technology.

Numalis addresses this problem by using two mathematical notions that provide elements reinforcing confidence in the predictions of neural networks: dominance (whether or not one class takes primacy over the others) and relevance (the positive or negative influence of each input on the result of the neural network), which we will explore in this case study.

These criteria are used with Saimple, the platform developed by Numalis to evaluate neural networks in terms of robustness and explainability, in order to improve their quality and thus their acceptability.

1. Presentation of the use case

Objectives:

This use case aims to present the usefulness of relevance for detecting anomalies and giving explainability elements to neural networks using Saimple.

Dataset:

This dataset contains different types of flying vehicles, such as drones, helicopters, airliners and fighter planes (cf. https://www.kaggle.com/eabdul/flying-vehicles).

Representation of the classes in the dataset

Prerequisites:

What is relevance?

For an image classifier, relevance represents the influence of each pixel on the model's computation of the output class.

In the example below, the red relevance pixels correspond to pixels in the input image that have a positive effect on classifying the input as belonging to output class 1. The blue relevance pixels are those that have a negative effect on this classification (i.e. they decrease the score of class 1).

This relevance changes as the model is trained. In general, a poorly trained model will have a fuzzy relevance, while a better-trained model will have a sharper relevance, capable of drawing understandable shapes.
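
How such a per-pixel influence can be approximated is illustrated below with a gradient-based saliency map in PyTorch. This is only a common, simplified stand-in for illustration; Saimple computes relevance with its own analysis, and the function below is a hypothetical sketch.

import torch

def relevance_map(model, image, target_class):
    # image: tensor of shape (1, C, H, W); model: any classifier returning logits
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]   # score of the class of interest
    score.backward()                        # d(score) / d(pixel)
    # gradient x input: positive values push towards the class (red pixels),
    # negative values push against it (blue pixels)
    return (image.grad * image).sum(dim=1).squeeze(0).detach()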

 

What is an anomaly?

In what follows, we will speak of an anomaly when the model predicts the correct class but based on the wrong characteristics of the image, that is, when the strongest pixels of interest (which can be visualized via the relevance) are not positioned on the element to be detected.

Note that even if the relevance does not depict the object to be detected as we perceive it, this does not always mean that there is an anomaly: some models simply work differently. The relevance gives an indication of how the model works.
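
This criterion can be turned into a rough automatic check, sketched below: given a relevance map and a bounding box around the object (assumed to be annotated by hand; this check is not part of Saimple itself), measure what fraction of the strongest relevance pixels falls outside the box.

import torch

def top_pixels_outside_box(relevance, box, k=100):
    # relevance: (H, W) tensor; box: (row_min, row_max, col_min, col_max)
    flat = relevance.abs().flatten()
    idx = torch.topk(flat, k).indices
    rows, cols = idx // relevance.shape[1], idx % relevance.shape[1]
    inside = (rows >= box[0]) & (rows < box[1]) & (cols >= box[2]) & (cols < box[3])
    return 1.0 - inside.float().mean().item()   # fraction of top-k pixels off the object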

 

How to detect an anomaly in the dataset?

The easiest way is to evaluate a sample of the dataset. Then, look at the relevance of each evaluation, regardless of the classification.

To detect these anomalies, Saimple provides two modes of relevance visualization:

The first visualization mode is an overlay with adjustable transparency that displays the relevance of the input pixels.

The second visualization mode is an adjustable threshold that displays the relevance pixels, from the most influential to the least influential according to the threshold value.
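
The two modes can be mimicked with standard plotting tools, as in the sketch below (Saimple provides its own interactive viewer; the alpha and threshold parameters here are only stand-ins for its controls).

import matplotlib.pyplot as plt
import numpy as np

def show_relevance(image, relevance, alpha=0.5, threshold=None):
    # image: (H, W, 3) array in [0, 1]; relevance: (H, W) signed array
    rel = relevance / (np.abs(relevance).max() + 1e-12)
    if threshold is not None:
        rel = np.where(np.abs(rel) >= threshold, rel, 0.0)  # keep only the strongest pixels
    plt.imshow(image)
    plt.imshow(rel, cmap="bwr", vmin=-1, vmax=1, alpha=alpha)  # semi-transparent overlay
    plt.axis("off")
    plt.show()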

In the examples below, we can see that the most intense relevance pixels are located on the contour of the mountain. However, the aim of this model is not to classify mountains but flying objects like an airliner. The recognition of this pattern does not therefore seem relevant for this application case. This case can be considered as an anomaly whose source we will try to understand.

2. First study

We are going to look at this image of an airliner against a background of mountains and clouds.

First, the model has correctly classified this image as an airliner (recall: class 4 corresponds to the label "airliner"). With Saimple we observe that the dominance score is close to 1 for class 4, while the scores for the other classes are close to 0.
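
As a point of comparison, the sketch below shows the analogous check on the raw model output for a single input: the score of class 4 should stay far above all the others. This is only an illustration of what a dominance score near 1 expresses; Saimple computes dominance with its own analysis.

import torch

def class_scores(model, image):
    # image: tensor of shape (1, C, H, W); returns one probability per class
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)[0]
    return probs   # e.g. probs[4] close to 1, all other entries close to 0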

Now, let's look at the relevance image, which presents an anomaly. Indeed, there are strongly relevant pixels on the plane, but also on the clouds and the mountains.

Has the model learned to recognize the clouds and mountains in addition to the plane?

A study of the dataset might help us to understand.

Study of the airliner dataset:

There are some similarities between the images:

- White aircraft;

- In flight or taking off;

- Blue sky, white cloud or vegetation background;

- Land/sky or cloud/sky boundary.

This dataset therefore appears to be biased.

To check whether the model is biased, we perform two evaluations on Saimple: one with the bottom of the image made uniform, and the other with the top of the image made uniform.
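
The modified inputs can be built with a simple masking step, sketched below: half of the image is replaced by a uniform colour before being re-evaluated (the sky-blue fill value is an arbitrary choice for illustration).

import numpy as np

def make_uniform(image, part="top", colour=(135, 206, 235)):
    # image: (H, W, 3) uint8 array; replaces the chosen half with a uniform colour
    out = image.copy()
    h = image.shape[0] // 2
    if part == "top":
        out[:h] = colour
    else:
        out[h:] = colour
    return out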

The evaluation with the uniform top of the image still shows an anomaly: the model still classifies the image as an airliner, even though the aircraft is no longer visible. In addition, the relevance shows a strong interest in the boundary between the real bottom of the image and the uniform blue part.

 

What would happen if both parts of the image were uniform?

With this last evaluation, we still observe that the model classifies the image as an airliner. Moreover, there is relevance only on the border between the two areas.

The hypothesis we can make is that the model has learned to detect the sky/cloud and sky/land boundary in addition to the aircraft. The use of Saimple has made it possible to identify this anomaly, which can be corrected: the backgrounds of the images of flying objects should be standardized. It will then be possible to carry out further evaluations to ensure that the biases are eliminated.

3. Second study

For this second study we will look at rockets.

Which criteria does the model rely on to classify a rocket image?

At first sight, this rocket image has been correctly classified. Indeed, Saimple reveals that the dominance score for class 5, which corresponds to the label "rocket", is higher than 0.9. Furthermore, the relevance image does not show strong anomalies: the majority of the pixels of interest are concentrated on the object to be classified.

We will first examine the dataset.

 

Study of the rocket dataset:

There are some similarities between the images:

- The rockets are white;

- The background is a blue sky;

- The flame is white;

- The rockets are fired vertically.

Now, if we create an image using these 4 criteria, will the model be able to generalize and understand that it is a rocket?

The answer is yes. The model relies on the presence of a vertical white trace on a blue background to determine if the image is a rocket.
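
Such a synthetic test image can be generated directly from the four criteria, as in the sketch below: a white vertical trace on a uniform blue-sky background (sizes and colours are arbitrary choices for illustration).

import numpy as np

def synthetic_rocket(size=224):
    img = np.zeros((size, size, 3), dtype=np.uint8)
    img[:, :] = (135, 206, 235)                           # uniform blue sky
    mid = size // 2
    img[size // 8:, mid - 4: mid + 4] = (255, 255, 255)   # white vertical trace
    return img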

To verify that the presence of a vertical white flame contributes to the model's decision-making, we will evaluate the same rocket without a flame.

The results show that class 5 is still dominant, but its score drops to 0.6. Moreover, class 3 (which corresponds to the label 'missile') appears to be a serious adversarial candidate (an adversarial example deceives the neural network into classifying an input as a certain object when it is not).

To check whether the verticality of the rocket affects the decision, we perform a final evaluation with the same flameless rocket tilted by 45 degrees.
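
This last input can be produced with a simple rotation, sketched below with Pillow (the sky-blue fill colour used for the corners is an arbitrary choice for illustration).

from PIL import Image

def tilt(image_path, angle=45):
    img = Image.open(image_path).convert("RGB")
    # expand=True keeps the whole rotated image; fillcolor fills the corners
    return img.rotate(angle, expand=True, fillcolor=(135, 206, 235))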

The model classifies the image as a missile. So the verticality of the rocket counts in the classification. But why?

A final study of the dataset may give us a clue.

Study of the missile dataset:

All the missiles in the dataset are tilted. It is therefore likely that this characteristic is what differentiates a missile from a rocket for the model.

4. Conclusion

A model, even with high accuracy (percentage of correct classifications), may present anomalies or learning biases. Accuracy should therefore not be the only criterion for judging that a neural network works properly. As we have seen, some images are correctly classified even though the model relies on several criteria, some of which are undesirable (e.g. the land/sky border in the example of the plane against a mountainous background) and others simply not relevant (e.g. the tilt for missiles). The model may also not have learned all the characteristics of an object, which can be problematic for its recognition. This type of information is important in order to decide whether the model is satisfactory in its current state and can therefore be deployed.

Providing additional indicators of the explainability of neural networks is therefore necessary. In this case study, we have seen that the relevance that can be visualized with Saimple is one such useful indicator.

 

 

If you are interested in Saimple and want to know more about this use case, or if you would like access to a Saimple demo environment,

contact us: support@numalis.com

 

Picture credit: Markus Winkler (Unsplash)

Numalis

We are an innovative French software publisher providing tools and services to make your neural networks reliable and explainable.
