
PhD Defense by Wenqi Wei


Title: Adversarial Resilient and Privacy Preserving Deep Learning

 

Date: Tuesday, April 12th, 2022 

Time: 3:30pm – 5:30pm (EDT) 

Location: https://bluejeans.com/132127209/6249?src=calendarLink

 

Wenqi Wei

PhD Candidate

School of Computer Science

College of Computing 

Georgia Institute of Technology 

 

 

Committee 

———————

Dr. Ling Liu (Advisor, School of Computer Science, Georgia Institute of Technology)

Dr. Calton Pu (School of Computer Science, Georgia Institute of Technology)

Dr. Shamkant Navathe (School of Computer Science, Georgia Institute of Technology)

Dr. Margaret Loper (Georgia Tech Research Institute)

Dr. James Caverlee (Department of Computer Science and Engineering, Texas A&M University)

Dr. Balaji Palanisamy (School of Computing and Information, University of Pittsburgh)

 

 

Abstract

———————

Deep learning is being deployed in the cloud and on edge devices for a wide range of domain-specific applications, from healthcare, cyber-manufacturing, and autonomous vehicles to smart cities and smart planet initiatives. While deep learning creates new opportunities for business, engineering, and scientific discovery, it also introduces new attack surfaces to modern computing systems that incorporate deep learning as a core component for algorithmic decision making and cognitive machine intelligence, ranging from data poisoning and model inversion during the training phase to adversarial evasion attacks during the model inference phase, all aiming to cause a well-trained model to misbehave randomly or purposefully. This dissertation research addresses these problems with a dual focus: first, it aims to provide a fundamental understanding of the security and privacy vulnerabilities inherent in deep neural network training and inference; second, it develops an adversarial resilient framework and a set of optimization techniques to safeguard deep learning systems, services, and applications against adversarial manipulations and gradient leakage induced privacy risks, while maintaining the accuracy and convergence performance of deep learning systems. This dissertation research makes three unique contributions towards advancing the knowledge and technological foundation for privacy-preserving deep learning with adversarial robustness against deceptions.

 

The first main contribution is an in-depth investigation into the security and privacy threats inherent in deep learning, represented by gradient leakage attacks during both centralized and distributed training, and by deception queries to well-trained models at the inference phase, represented by adversarial examples and out-of-distribution inputs. By introducing a principled approach to investigating gradient leakage attacks and different attack optimization methods in both centralized model training and federated learning environments, we provide a comprehensive risk assessment framework for an in-depth analysis of the different attack mechanisms and attack surfaces that an adversary may leverage to reconstruct the private training data. Similarly, we take a holistic approach to developing an in-depth understanding of both adversarial examples and out-of-distribution examples in terms of their adversarial transferability and their inherent divergence. Our research exposes the root causes of such adversarial vulnerabilities and provides transformative insights for designing mitigation strategies and effective countermeasures.
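
To make the gradient leakage threat concrete, the sketch below illustrates the well-known gradient-matching reconstruction idea (in the spirit of deep leakage from gradients): an attacker who observes a shared gradient optimizes a dummy input until its gradient matches the observed one. The toy model, input shapes, and optimization settings are illustrative assumptions and not the dissertation's specific attack formulation.

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy victim model (illustrative)
params = tuple(model.parameters())
criterion = nn.CrossEntropyLoss()

# The victim's private example and the gradient it shares (e.g., one federated update).
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(criterion(model(x_true), y_true), params)

# The attacker optimizes a dummy input and a soft label so that their gradient
# matches the observed gradient; the dummy input converges toward x_true.
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

for _ in range(50):
    def closure():
        optimizer.zero_grad()
        dummy_loss = torch.sum(
            torch.softmax(y_dummy, dim=-1) * -torch.log_softmax(model(x_dummy), dim=-1))
        dummy_grads = torch.autograd.grad(dummy_loss, params, create_graph=True)
        grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    optimizer.step(closure)
# x_dummy now approximates the private input, recovered purely from the shared gradient.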

 

The second main contribution of this dissertation is a cross-layer strategic ensemble verification methodology (XEnsemble) for enhancing the adversarial robustness of DNN model inference in the presence of adversarial examples and out-of-distribution examples. XEnsemble by design has three unique capabilities. (i) XEnsemble builds diverse input denoising verifiers by leveraging different data cleaning techniques. (ii) XEnsemble develops a disagreement-diversity ensemble learning methodology for guarding the output of the prediction model against deception. (iii) XEnsemble provides a suite of algorithms that combine input verification and output verification to protect DNN prediction models from both adversarial examples and out-of-distribution inputs.
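
As a rough illustration of the input/output verification idea, the sketch below applies two simple denoising verifiers to a single input and rejects the prediction when the ensemble disagrees too much. The denoisers, threshold, and function names are illustrative assumptions, not XEnsemble's actual algorithms.

import torch
import torch.nn.functional as F

def denoise_blur(x):
    # Input-denoising verifier 1: 3x3 average blur (depthwise convolution).
    k = torch.ones(x.size(1), 1, 3, 3) / 9.0
    return F.conv2d(x, k, padding=1, groups=x.size(1))

def denoise_quantize(x, levels=16):
    # Input-denoising verifier 2: color-depth reduction (assumes inputs in [0, 1]).
    return torch.round(x * (levels - 1)) / (levels - 1)

def verified_predict(model, x, denoisers=(denoise_blur, denoise_quantize), max_disagree=0):
    """Predict on a single input x (batch size 1) and accept the prediction only
    when at most max_disagree denoised variants disagree with the raw prediction."""
    with torch.no_grad():
        base_pred = model(x).argmax(dim=1).item()
        disagreements = sum(
            int(model(d(x)).argmax(dim=1).item() != base_pred) for d in denoisers)
    return base_pred, disagreements <= max_disagree

A rejected input would be flagged as a likely adversarial example or out-of-distribution query rather than silently misclassified.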

 

The third contribution is the development of gradient leakage attack resilient deep learning for both centralized and distributed model training systems with privacy-enhancing optimizations. To circumvent gradient leakage attacks, we investigate different strategies for adding noise to the intermediate model parameter updates during model training (centralized or federated learning), with dual optimization goals: (i) the amount of noise added should be sufficient to remove the privacy leakage of the private training data, and (ii) the amount of noise added should not be so large that it hurts the overall accuracy and convergence of the trained model. We extend the conventional deep learning with differential privacy approach, which uses fixed privacy parameters for DP-controlled noise injection, by introducing adaptive privacy parameters to both centralized and federated deep learning with differential privacy. We also provide a theoretical formalization to certify the robustness provided by differential privacy noise injection against gradient leakage attacks.
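
The sketch below illustrates the underlying noise-injection mechanics in a simplified DP-SGD style: per-example gradients are clipped and Gaussian noise is added before the update, with a noise scale that varies over training rather than staying fixed. The linear decay schedule, function names, and default values are illustrative assumptions, not the dissertation's specific adaptive privacy-parameter method or its privacy accounting.

import torch

def dp_noisy_update(params, per_example_grads, lr, clip_norm, sigma):
    """Apply one differentially private SGD-style step.
    per_example_grads: one list of per-parameter gradients for each example."""
    batch = len(per_example_grads)
    summed = [torch.zeros_like(p) for p in params]
    for grads in per_example_grads:
        # Clip each example's gradient to bound its influence (sensitivity).
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale
    for p, s in zip(params, summed):
        # Gaussian noise calibrated to the clipping bound.
        noise = torch.randn_like(p) * sigma * clip_norm
        p.data -= lr * (s + noise) / batch

def adaptive_sigma(step, total_steps, sigma_start=1.2, sigma_end=0.6):
    # Illustrative adaptive schedule: inject more noise early in training, when
    # gradients tend to leak more about raw inputs, and less noise as the model converges.
    frac = step / max(1, total_steps)
    return sigma_start + (sigma_end - sigma_start) * frac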
