PhD Defense by Wenqi Wei


Title: Adversarial Resilient Deep Learning with Privacy Enhancing Optimization 

 

Date: Tuesday, September 28th, 2021 

Time: 9:00am – 11:00am (EDT) 

Location: https://bluejeans.com/659828858/6139

 

Wenqi Wei

Ph.D. Student

School of Computer Science

College of Computing 

Georgia Institute of Technology 

 

 

Committee 

———————

Dr. Ling Liu (Advisor, School of Computer Science, Georgia Institute of Technology)

Dr. Calton Pu (School of Computer Science, Georgia Institute of Technology)

Dr. Shamkant Navathe (School of Computer Science, Georgia Institute of Technology)

Dr. Margaret Loper (Georgia Tech Research Institute)

Dr. James Caverlee (Department of Computer Science and Engineering, Texas A&M University)

Dr. Balaji Palanisamy (School of Computing and Information, University of Pittsburgh)

 

 

Abstract

———————

Deep learning is being deployed in the cloud and on edge devices for a wide range of domain-specific applications, from healthcare, cyber-manufacturing, and autonomous vehicles to smart cities and smart planet initiatives. While deep learning creates new opportunities for business, engineering, and scientific discovery, it also introduces new attack surfaces into modern computing systems that incorporate deep learning as a core component for algorithmic decision making and cognitive machine intelligence. These attacks range from data poisoning and gradient leakage during the training phase to adversarial evasion attacks during the model inference phase, and they aim to cause a well-trained model to misbehave randomly or purposefully. This dissertation research addresses these problems with a dual focus: First, it aims to provide a fundamental understanding of the security and privacy vulnerabilities inherent in deep neural network training and inference. Second, it develops an adversarial resilient framework and a set of optimization techniques to safeguard deep learning systems, services, and applications against adversarial manipulations and model-inversion-induced privacy leakage, while maintaining the accuracy and convergence performance of deep learning systems. 

In this proposal exam, I will focus on the gradient leakage problems and our mitigation approaches in federated deep learning settings. 
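To make the gradient leakage threat concrete, here is a minimal, illustrative sketch (not taken from the dissertation) of why sharing raw gradients in federated learning can expose private training data. For a single fully-connected layer y = Wx + b, the weight gradient is the outer product of the output gradient and the input, so a server receiving the client's gradients can recover the input exactly by dividing a row of the weight gradient by the corresponding bias gradient. All names and the MSE loss below are assumptions chosen for the example.

```python
import numpy as np

# Illustrative sketch: gradient leakage from one fully-connected layer.
# For y = W x + b, the gradients satisfy dL/dW = (dL/dy) x^T and
# dL/db = dL/dy, so the private input x can be recovered exactly as
# dL/dW[i, :] / dL/db[i] for any row i with dL/db[i] != 0.

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # private training input on the client
W = rng.normal(size=(3, 4))       # layer weights
b = rng.normal(size=3)            # layer bias

y = W @ x + b                     # forward pass
target = np.zeros(3)
dL_dy = 2.0 * (y - target)        # gradient of an MSE loss w.r.t. the output

# Gradients a federated client would share with the server
dL_dW = np.outer(dL_dy, x)
dL_db = dL_dy

# Server-side reconstruction of the private input from the shared gradients
i = int(np.argmax(np.abs(dL_db)))  # pick a row with a nonzero bias gradient
x_recovered = dL_dW[i] / dL_db[i]

assert np.allclose(x, x_recovered)
```

Deeper networks do not admit such a closed-form inversion, which is why practical gradient leakage attacks instead optimize a dummy input so that its gradients match the observed ones.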
