AI is not secure!

Written by Wilfried Kirschenmann, on 06 February 2018

On July 28th, 2015, an open letter was addressed to world leaders by leading researchers in artificial intelligence and robotics, along with public figures including Elon Musk and Stephen Hawking. It calls for a ban on autonomous weapons. The most significant arguments are ethical in nature: an autonomous weapon cannot weigh ethical considerations, such as the risk of collateral damage, before it fires. Here, however, I would like to delve into a technical argument.

From the perspective of economic and societal impact, the most significant advance in AI in recent years is the rise of learning systems. These systems are trained to recognize, classify, and even make decisions. There are various ways to teach them to recognize what is presented to them. The most popular involves training a so-called "deep" neural network, meaning one made of multiple layers stacked on top of each other.

Let's take the example of an image classifier. The most commonly used ones today are based on "deep convolutional neural networks" (CNNs): neural networks containing multiple convolutional layers. While developing them was not trivial, the idea is relatively simple. The first layer learns to recognize noteworthy elements in the image, such as vertical, horizontal, or diagonal edges, or corners. To do this, a convolutional filter is slid across the entire image. These are very local descriptors. The second layer looks for noteworthy elements in the output of the first; in the original image, these correspond to curves, lines, or more complex shapes. Each layer thus searches for particular, very local elements in the output of the previous layer, and these elements correspond to increasingly complex shapes or textures.
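To make the first-layer idea concrete, here is a minimal sketch in plain NumPy (not from the original article) of a single convolutional filter detecting vertical edges. The image, the Sobel-style kernel, and the `convolve2d` helper are illustrative choices; a trained network learns its own filter values.

```python
import numpy as np

# A tiny 5x5 grayscale "image": dark left half, bright right half,
# so it contains one vertical edge.
image = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)

# A 3x3 vertical-edge filter: it responds strongly where intensity
# changes from left to right, and not at all on uniform regions.
kernel = np.array([
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
], dtype=float)

def convolve2d(img, k):
    """Slide the filter over every valid position of the image."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

feature_map = convolve2d(image, kernel)
print(feature_map)  # strong responses near the edge, zero on flat areas
```

The resulting feature map is exactly the "very local descriptor" described above: each output value only depends on a 3x3 neighborhood of the input.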

Two things should be noted:

  • Several different images can produce the same results in the first layer. This is also true for subsequent layers.
  • It is an accumulation of details that allows the neural network to recognize an element in a photo, and it is only in the higher layers that global information emerges.
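The first point above can be demonstrated in a few lines: because an edge-detecting filter's weights sum to zero, two visibly different patches (one a uniformly brighter copy of the other) produce exactly the same first-layer response. A minimal NumPy sketch, with illustrative kernel and patch values:

```python
import numpy as np

# A 3x3 vertical-edge filter; note its weights sum to zero.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

# Two visibly different 3x3 patches: the second is a uniformly
# brighter copy of the first.
patch_a = np.array([[0.0, 0.0, 1.0],
                    [0.0, 0.0, 1.0],
                    [0.0, 0.0, 1.0]])
patch_b = patch_a + 0.3  # every pixel brighter: a different image

# The filter only measures local contrast, so both patches
# produce the same response.
resp_a = np.sum(patch_a * kernel)
resp_b = np.sum(patch_b * kernel)
print(resp_a, resp_b)
```

Distinct inputs collapsing onto identical intermediate representations is precisely what an attacker can exploit.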

Therefore, we can see how an image might be altered imperceptibly to us and yet mislead a neural network: it is enough to modify a few very local details so that the results in the first layers, and thus in the higher layers, are different. Researchers in the field call these modified images "adversarial examples," and the added noise a "perturbation." The following figure illustrates how the addition of a cleverly generated perturbation can lead to an error. It is interesting to note that this perturbation resembles white noise and is therefore difficult to detect or remove with classical image-processing techniques.
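To sketch how such a perturbation can be generated, here is a toy version of a fast-gradient-sign attack in NumPy. A real attack uses the gradient of a deep network with respect to its input pixels; here a random logistic-regression "classifier" stands in for it, and all names and values (`w`, `eps`, the 64-pixel image) are illustrative assumptions, not anything from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "classifier": logistic regression on a flattened 8x8 image.
# (A stand-in for a deep network; the attack principle is the same.)
w = rng.normal(size=64)
b = 0.0

def predict(x):
    """Probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.uniform(0, 1, size=64)  # the original "image"
eps = 0.05                      # imperceptibly small per-pixel step

# Gradient of the pre-sigmoid score with respect to the input
# pixels; for this linear model it is simply w.
grad = w

# Fast-gradient-sign step: nudge every pixel by at most eps in the
# direction that lowers the class-1 score, staying in [0, 1].
x_adv = np.clip(x - eps * np.sign(grad), 0, 1)

print(predict(x), predict(x_adv))  # the score drops after the attack
```

Each pixel moves by at most `eps`, so the two images look essentially identical, yet the classifier's confidence changes; on a deep network the same idea can flip the predicted class outright.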

Another example, closer to the security theme of this article: these methods can be used to deceive a facial recognition system. Thus, a pair of glasses of questionable taste could allow a person to impersonate someone else: the glasses' local features induce enough confusion across the layers that the neural network believes it is recognizing the other person.

Top: researchers wearing the deceptive glasses. Bottom: the people whose identities they have taken. Source: Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter.

These few examples illustrate the security concerns that may arise with the widespread use of systems based on neural networks.

In the case of autonomous weapons, what would happen if enemies decided to make a kindergarten look, to the weapon's vision system, like a military camp? It could be enough to stretch a tarpaulin over it, printed with patterns similar to those on the glasses above.