PhD Proposal by Nilaksh Das


Title: Understanding, Fortifying and Democratizing AI Security

 

Nilaksh Das

School of Computational Science & Engineering

College of Computing

Georgia Tech

nilakshdas.com

 

Date: Wednesday, April 14, 2021

Time: 11am - 1pm ET (other time zones: shorturl.at/jqtP7)

Location (virtual): https://bluejeans.com/627016581

 

Committee

Dr. Duen Horng (Polo) Chau - Advisor, Associate Professor, School of Computational Science & Engineering, Georgia Tech

Dr. Wenke Lee - Professor, School of Computer Science, Georgia Tech

Dr. Xu Chu - Assistant Professor, School of Computer Science, Georgia Tech

Dr. Srijan Kumar - Assistant Professor, School of Computational Science & Engineering, Georgia Tech

Dr. Ponnurangam Kumaraguru ("PK") - Professor, Indraprastha Institute of Information Technology, Delhi, India

Dr. Oliver Brdiczka - Principal ML Scientist & Director, Adobe

 

Abstract

As we steadily move towards an AI-powered utopia that, until recently, could only be imagined in lofty fiction, a formidable threat is emerging that endangers our ability to fully capitalize on AI in our everyday lives. Recent adversarial machine learning research has revealed that deep neural networks, the workhorse of modern AI applications, are extremely vulnerable to adversarial examples: malicious inputs crafted by an attacker that can completely confuse deep neural networks into making incorrect predictions! Therefore, for people to have complete confidence in using AI applications, there is not only an urgent need to develop strong, practical solutions that defend real-world AI systems, but also an equally pressing need to enable people to interpret AI vulnerabilities and understand how and why adversarial attacks and defenses work. It is also critical that AI security technologies be brought to the masses, and that AI security research be as accessible and as pervasive as AI itself. After all, AI impacts people from all walks of life.
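
To make the notion of an adversarial example concrete, below is a minimal Python sketch of one standard, well-known attack, the fast gradient sign method (FGSM). It illustrates the general idea only and is not necessarily one of the attacks studied in this thesis; the model, inputs, and perturbation budget epsilon are placeholders.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Craft adversarial examples from a batch of inputs x with labels y.

        Each pixel is perturbed by +/- epsilon in the direction that
        increases the model's loss the most (the sign of the gradient).
        """
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()    # one signed gradient step
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in [0, 1]

For images scaled to [0, 1], a budget as small as epsilon = 0.03 is typically imperceptible to a human viewer, yet often suffices to flip a model's prediction.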

 

This thesis aims to address these substantial challenges by developing new tools, techniques and paradigms that mitigate the threat of adversarial examples, enhance people’s understanding of AI vulnerabilities, and democratize AI security by making it accessible and available to everyone. Specifically, this thesis focuses on three complementary research thrusts:

 

(1) Practical Mitigation & Harnessing of Adversarial Examples across Modalities - developing fast, robust defenses (e.g., SHIELD, ADAGIO) that generalize across diverse AI tasks, and harnessing adversarial attacks to benefit society instead of causing harm (e.g., AdvAccAug); a sketch of SHIELD's core idea appears after these thrusts.

 

(2) Exposing AI Vulnerabilities through Visualization & Interpretable Representations - developing intuitive interpretation techniques for deciphering adversarial attacks (e.g., Bluff).

 

(3) Democratizing AI Security Research & Pedagogy with Scalable Interactive Experimentation - enabling researchers, practitioners and students to perform in-depth security testing of AI models through interactive experimentation (e.g., MLsploit).
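
To ground thrust (1), here is a minimal Python sketch of the core idea behind SHIELD: re-encoding each input image as a JPEG at a randomly chosen quality before classification, so that high-frequency adversarial perturbations are compressed away and the attacker cannot anticipate the exact preprocessing. This shows only the central preprocessing step; the full SHIELD defense also "vaccinates" models by retraining on compressed images and ensembles them, and the quality levels below are illustrative.

    import io
    import random
    from PIL import Image

    def shield_preprocess(image, qualities=(20, 40, 60, 80)):
        """Defensively re-encode an image as JPEG at a random quality level.

        Lossy compression removes much of the high-frequency noise that
        adversarial attacks rely on; randomizing the quality makes the
        transformation harder for an adaptive attacker to invert.
        """
        buffer = io.BytesIO()
        image.convert("RGB").save(buffer, format="JPEG",
                                  quality=random.choice(qualities))
        buffer.seek(0)
        return Image.open(buffer).convert("RGB")

The re-encoded image would then be passed to the classifier in place of the raw input.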

 

Our work has made a significant impact on industry and society: our research has produced novel defenses that have been tech-transferred to industry (SHIELD); our interactive visualization systems have significantly expanded intuitive understanding of AI vulnerabilities (Bluff); and our unified AI security framework and research tools, now available to thousands of students, are transforming AI education at scale (MLsploit).
