
Erasing Stop Signs: ShapeShifter Shows Self-Driving Cars Can Still Be Manipulated


Georgia Tech researchers have confirmed that state-of-the-art image detection systems used in self-driving cars are vulnerable to attack.

According to the new research, these systems are particularly vulnerable to a type of attack known as an adversarial perturbation. In this type of attack, an object in the real world – like a stop sign – is intentionally altered to trick a machine learning system into identifying it as something else entirely.
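The general mechanics behind such a perturbation can be sketched with a small, purely digital toy example. The code below illustrates the basic idea of gradient-based targeted misclassification against a made-up linear model; it is not ShapeShifter's actual method, which attacks the far more complex Faster R-CNN detector and accounts for real-world printing and viewing conditions.

    # Illustrative toy example only: craft a small, bounded perturbation that pushes
    # a tiny linear softmax "classifier" toward a class of the attacker's choosing.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 16))           # toy model: 3 classes, 16-dimensional input

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def predict(x):
        return softmax(W @ x)

    x = rng.normal(size=16)                # the "clean" input (stand-in for an image)
    target = 2                             # class the attacker wants the model to output
    eps = 1.0                              # bound that keeps the perturbation modest
    delta = np.zeros_like(x)               # the adversarial perturbation being crafted

    for _ in range(200):
        p = predict(x + delta)
        grad = W.T @ (p - np.eye(3)[target])   # gradient of cross-entropy toward target
        delta -= 0.1 * grad                    # step that raises the target class's score
        delta = np.clip(delta, -eps, eps)      # keep each change within the bound

    print("clean prediction:      ", int(predict(x).argmax()))
    print("adversarial prediction:", int(predict(x + delta).argmax()))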

The vulnerability was confirmed using ShapeShifter, an attack tool developed by Shang-Tse Chen, a Ph.D. student in the School of Computational Science and Engineering (CSE), and fellow researchers from CSE and Intel. ShapeShifter is the first targeted physical adversarial attack on Faster R-CNN object detectors.

“Our motivation comes from vandalism on traffic signs. Although real vandalism does not affect DNNs (deep neural networks) much, in our work we show that we can craft adversarial perturbations that look like normal vandalism. These perturbations can drastically change the output of a DNN model, causing it to malfunction and identify things incorrectly,” said Chen.

The goal in creating this attack system is to reveal the weaknesses of image recognition systems that rely on object detectors and to figure out how to defend against real attacks in the future.

“ShapeShifter tells us that self-driving cars that depend purely on vision-based input are not safe until we can defend against this kind of attack,” said Chen. “ShapeShifter was created to, and has succeeded in, attacking self-driving cars that use the state-of-the-art Faster R-CNN object detection algorithm.”

There are many different types of object detectors, and the current leading-edge detectors happen to use deep neural networks (DNNs) internally. These detectors can recognize which objects are in an image and where they are located – unlike their simpler counterparts, image classifiers, which output a single label for an entire image.

“For example, for an input image of a park, an image classifier will say it’s a park. But, an object detector will tell us there are trees, people, and benches, and use bounding boxes to show their locations,” explained Chen. 
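As a rough sketch of that difference, the snippet below contrasts the two kinds of output. The labels, scores, and box coordinates are invented example values, not output from any real model.

    # A classifier returns one label for the whole image ...
    classifier_output = "park"

    # ... while an object detector returns a list of (label, confidence, bounding box)
    # entries, one per object it finds in the image.
    detector_output = [
        {"label": "tree",   "score": 0.97, "box": (12, 40, 180, 310)},    # (x1, y1, x2, y2)
        {"label": "person", "score": 0.91, "box": (200, 120, 260, 330)},
        {"label": "bench",  "score": 0.88, "box": (150, 260, 340, 330)},
    ]

    for det in detector_output:
        print(f"{det['label']:<7} {det['score']:.2f}  box={det['box']}")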

“In our work, we only consider manipulating things that are outside of the computer vision system – which, in this case, means the physical environment. Therefore, we craft physical adversarial objects that, after being captured by a camera, go through a sequence of pre-processing steps, are fed to the DNN model, and ultimately cause the system to make an incorrect decision,” said Chen.
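That chain of steps can be sketched as follows. The function names here (capture_frame, preprocess, detector, plan_action) are hypothetical stand-ins used only to illustrate the flow, not ShapeShifter or Faster R-CNN APIs; the key point is that the attacker touches nothing after the camera – only the physical object is altered.

    # Hypothetical sketch of the perception pipeline described above. Only the physical
    # scene is under the attacker's control; every stage after the camera runs unmodified.
    def perceive_and_act(capture_frame, preprocess, detector, plan_action):
        frame = capture_frame()          # camera captures the (possibly perturbed) scene
        batch = preprocess(frame)        # resizing, normalization, etc.
        detections = detector(batch)     # DNN returns labels, scores, and boxes
        return plan_action(detections)   # a wrong detection leads to a wrong decision

    # Toy usage with stub components standing in for real hardware and models:
    decision = perceive_and_act(
        capture_frame=lambda: "frame_with_perturbed_stop_sign",
        preprocess=lambda frame: frame,
        detector=lambda batch: [{"label": "person", "score": 0.95}],  # fooled detector
        plan_action=lambda dets: "brake" if any(d["label"] == "stop sign" for d in dets)
                                 else "keep driving",
    )
    print(decision)   # "keep driving" -- the altered stop sign was never detected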

An example of this can be seen in a video posted online. It shows a stop sign altered by ShapeShifter causing the detection system to misclassify it as a person.

Chen presented his ShapeShifter research at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2018) in Dublin, Ireland, on Sept. 13.

The code repository for the paper can be found here.
