Georgia Tech Teams up with Intel to Protect Artificial Intelligence from Malicious Attacks Using SHIELD

What if a self-driving car read a stop sign as a yield sign, or worse yet, could not see the sign altogether? The consequences could be catastrophic.

A team of Georgia Tech researchers has developed a fast and practical way to protect artificial intelligence (AI) systems – such as those used in self-driving cars – from malicious attacks on image recognition software that could lead to such a disastrous event.

According to School of Computational Science and Engineering (CSE) Ph.D. student Nilaksh Das, who leads this investigation, the deep neural networks (DNNs) that power AI systems are highly vulnerable to maliciously crafted images and pixel manipulation.

This type of manipulation could result in targeted attacks, such as misleading machines into reading stop signs as yield signs, or untargeted attacks, which could render the system unable to see the stop sign at all. Simply put, pixel manipulation keeps the machine from performing the right task, such as stopping, because the system cannot read the sign correctly.

To combat these types of potential attacks, Das and his collaborators created SHIELD, an AI defense framework that stands for Secure Heterogeneous Image Ensemble with Local Denoising.

SHIELD, set to be presented in August at KDD 2018, the most prestigious data mining conference, offers a novel and efficient approach that uses JPEG compression to vaccinate DNNs against malicious pixel manipulation.

“The threat of adversarial attack casts a shadow over deploying DNNs in security and safety-critical applications. There is an urgent need to resolve this threat with fast, practical approaches, for which we leverage JPEG compression in this work, which is already a widely-used and mature technique,” said Das.

JPEG compression is a commonly used method of compressing digital images, particularly in digital photography. It gives users the option to adjust the quality of an image by discarding information that the human eye cannot see: higher-quality images are less compressed, and lower-quality images are more compressed. That same discarding of imperceptible detail also removes much of an attacker's malicious pixel manipulation.
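For readers curious what this looks like in practice, the following is a minimal Python sketch of the general idea (not the team's released code), using the Pillow imaging library to re-encode an image through JPEG at a chosen quality; the quality value shown is illustrative.

    from io import BytesIO

    import numpy as np
    from PIL import Image

    def jpeg_preprocess(image: np.ndarray, quality: int = 75) -> np.ndarray:
        """Re-encode an image array through JPEG at the given quality.

        The near-invisible, high-frequency perturbations an attacker adds
        tend to be discarded along with the detail JPEG already throws away.
        """
        buffer = BytesIO()
        Image.fromarray(image).save(buffer, format="JPEG", quality=quality)
        buffer.seek(0)
        return np.array(Image.open(buffer))

Lower quality values compress more aggressively, removing more of the perturbation at the cost of more legitimate image detail.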

“This is a fast and practical way to protect AI. There has been a lot of research into coming up with methods to harm or attack AI, but much less on how to protect them, and even less on designing fast and practical methods – and SHIELD targets this need,” said CSE Associate Professor and SHIELD researcher Polo Chau.

Chau continued, “To immunize a DNN model from artifacts introduced by compression, SHIELD vaccinates a model by re-training it with compressed images.”
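A rough sketch of that vaccination step, reusing the jpeg_preprocess helper above, would compress the training images before they are fed back to the model for retraining; the quality level and training details here are assumptions, not the paper's exact settings.

    import numpy as np

    def vaccinate_batch(images: np.ndarray, quality: int = 75) -> np.ndarray:
        """Compress every image in a training batch so the model is
        retrained on inputs that carry JPEG compression artifacts."""
        return np.stack([jpeg_preprocess(img, quality=quality) for img in images])

    # During retraining, each batch would be passed through vaccinate_batch
    # before the usual forward and backward pass of the chosen framework.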

SHIELD also provides a further layer of protection by applying randomization at test time, making it harder for an adversary to estimate the transformation that will be performed.
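One way to picture that randomization, again as an illustrative sketch rather than the exact ensemble described in the paper, is to draw a fresh compression quality for every prediction so an attacker cannot optimize against a fixed transformation.

    import random

    QUALITIES = [20, 40, 60, 80]  # illustrative set of JPEG quality levels

    def randomized_defense(image, classify, qualities=QUALITIES):
        """Compress with a randomly chosen quality, then classify.

        Because the quality is drawn at test time, an adversary cannot
        know in advance which transformation the input will undergo.
        """
        quality = random.choice(qualities)
        return classify(jpeg_preprocess(image, quality=quality))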

The team plans to explore the feasibility of their approach on more hardware platforms and with other types of compression, such as audio compression for voice recognition software.

Chau and Das are joined in this research by fellow CSE Ph.D. students Shang-Tse Chen and Fred Hohman; School of Computer Science M.S. student Madhuri Shanbhogue; undergraduate researcher Siwei Li; Li Chen, co-principal investigator and lead at the Intel Science and Technology Center for Adversary-Resilient Security Analytics; and Intel Research Scientist Michael E. Kounavis.
