PhD Defense by Nilaksh Das

Title: Understanding, Fortifying and Democratizing AI Security

 

Date: Wednesday, April 13, 2022

Time: 11am - 1pm ET

Location (virtual): https://gatech.zoom.us/j/95345008046?pwd=ZE5QMHlvbmpXS1BWL2dvdXMzUWNpQT09

 

Nilaksh Das

PhD Candidate

School of Computational Science and Engineering

College of Computing

Georgia Institute of Technology

nilakshdas.com

 

Committee

——————————

Dr. Duen Horng Chau (Advisor, School of Computational Science & Engineering, Georgia Tech)

Dr. Wenke Lee (School of Computer Science, Georgia Tech)

Dr. Xu Chu (School of Computer Science, Georgia Tech)

Dr. Srijan Kumar (School of Computational Science & Engineering, Georgia Tech)

Dr. Ponnurangam Kumaraguru (International Institute of Information Technology, Hyderabad, India)

Dr. Oliver Brdiczka (Adobe)

 

Abstract

——————————

As we steadily move towards an AI-powered utopia that, until recently, could only be imagined in lofty fiction, a formidable threat is emerging that endangers the safe adoption of AI in our everyday lives. Recent adversarial machine learning research has revealed that deep neural networks, the workhorse of modern AI applications, are extremely vulnerable to adversarial examples: malicious inputs crafted by an attacker to mislead a network into making incorrect predictions. Therefore, for people to have full confidence in AI applications, there is not only an urgent need to develop strong, practical defenses for real-world AI systems; there is an equally pressing need to help people interpret AI vulnerabilities and understand how and why adversarial attacks and defenses work. It is also critical that AI security technologies be brought to the masses, and that AI security research be as accessible and pervasive as AI itself. After all, AI impacts people from all walks of life.
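To make the threat concrete, here is a minimal sketch of one standard attack from the adversarial ML literature, the Fast Gradient Sign Method (FGSM). It is an illustrative example rather than a method from this thesis, and it assumes a PyTorch image classifier named model, a batched input tensor image, and its true label.

    # Minimal FGSM sketch (illustrative; assumes a PyTorch classifier).
    import torch
    import torch.nn.functional as F

    def fgsm_example(model, image, label, epsilon=0.03):
        # Perturb the input slightly so the model is nudged toward a wrong prediction.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that increases the loss, bounded by epsilon.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

Even a perturbation small enough to be imperceptible to humans is often sufficient to flip a model's prediction, which is what makes such attacks so concerning.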

This thesis addresses these fundamental challenges by creating holistic interpretation techniques for better understanding attacks and defenses, developing effective and principled defenses that protect AI across input modalities, and building tools that enable scalable, interactive experimentation with AI security and adversarial ML research. The overarching vision is to enhance trust in AI by making AI security more accessible and adversarial ML education more equitable, through three complementary research thrusts:

1. Exposing AI Vulnerabilities through Visualization & Interpretable Representations. We develop intuitive interpretation techniques for deciphering adversarial attacks.

2. Mitigating Adversarial Examples Across Modalities & Tasks. We develop robust defenses that generalize across diverse AI tasks and input modalities (an illustrative defense sketch follows this list).

3. Democratizing AI Security Research & Pedagogy with Scalable Interactive Experimentation. We enable researchers, practitioners and students to perform in-depth security testing of AI models through interactive experimentation.
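As a counterpart to the attack sketch above, and related to Thrust 2, the following is a minimal sketch of one common class of defenses from the literature: an input-transformation defense that re-encodes an image through lossy JPEG compression before classification, discarding small adversarial perturbations. It is a generic baseline shown for illustration, not necessarily the specific defense developed in this thesis; the image format and quality setting are assumptions.

    # Minimal input-transformation defense sketch (illustrative baseline).
    import io
    import numpy as np
    from PIL import Image

    def jpeg_preprocess(image_array, quality=75):
        # Round-trip a uint8 HxWx3 image through lossy JPEG encoding so that
        # small, high-frequency adversarial perturbations are removed before
        # the image is passed to the classifier.
        buffer = io.BytesIO()
        Image.fromarray(image_array).save(buffer, format="JPEG", quality=quality)
        buffer.seek(0)
        return np.array(Image.open(buffer))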

Our work has made a significant impact on industry and society: our research has produced novel defenses that have been transferred to industry; our interactive visualization systems have significantly expanded intuitive understanding of AI vulnerabilities; and our scalable AI security framework and research tools, now available to thousands of students, are transforming AI education at scale.
