PhD Defense by Wenqi Wei

Event Details
  • Date/Time: Tuesday, September 28, 2021, 9:00 am – 11:00 am (EDT)
  • Location: Atlanta, GA; REMOTE
  • URL: BlueJeans

Summary Sentence: Adversarial-Resilient Deep Learning with Privacy-Enhancing Optimization


Title: Adversarial-Resilient Deep Learning with Privacy-Enhancing Optimization


Date: Tuesday, September 28th, 2021 

Time: 9:00am – 11:00am (EDT) 



Wenqi Wei

Ph.D. Student

School of Computer Science

College of Computing 

Georgia Institute of Technology 





Dr. Ling Liu (Advisor, School of Computer Science, Georgia Institute of Technology)

Dr. Calton Pu (School of Computer Science, Georgia Institute of Technology)

Dr. Shamkant Navathe (School of Computer Science, Georgia Institute of Technology)

Dr. Margaret Loper (Georgia Tech Research Institute)

Dr. James Caverlee (Department of Computer Science and Engineering, Texas A&M University)

Dr. Balaji Palanisamy (School of Computing and Information, University of Pittsburgh)





Deep learning is being deployed in the cloud and on edge devices for a wide range of domain-specific applications, from healthcare, cyber-manufacturing, and autonomous vehicles to smart-city and smart-planet initiatives. While deep learning creates new opportunities for business, engineering, and scientific discovery, it also introduces new attack surfaces to modern computing systems that incorporate deep learning as a core component for algorithmic decision making and cognitive machine intelligence. These threats range from data poisoning and gradient leakage during the training phase to adversarial evasion attacks during the model inference phase, all aiming to cause a well-trained model to misbehave randomly or purposefully. This dissertation research addresses these problems with a dual focus: first, it aims to provide a fundamental understanding of the security and privacy vulnerabilities inherent in deep neural network training and inference; second, it develops an adversarial-resilient framework and a set of optimization techniques to safeguard deep learning systems, services, and applications against adversarial manipulations and model-inversion-induced privacy leakage, while maintaining the accuracy and convergence performance of deep learning systems.
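To make the evasion threat concrete, here is a minimal illustrative sketch (my own, not a method from the dissertation) of an FGSM-style evasion attack against a toy logistic-regression "model": a small, sign-based perturbation of the input flips the model's decision. All names and values are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier (hypothetical parameters).
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.3, -0.2, 0.4])       # benign input

p_clean = sigmoid(w @ x + b)          # > 0.5, so predicted class 1

# For a linear model, the gradient of the score w.r.t. the input is w.
# FGSM step: move the input against the sign of that gradient.
eps = 0.5
x_adv = x - eps * np.sign(w)          # push the score toward class 0

p_adv = sigmoid(w @ x_adv + b)        # now < 0.5: the decision flips
```

The perturbation is bounded in the L-infinity norm by eps, yet it changes the prediction; deeper models are attacked the same way using the gradient of the loss with respect to the input.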

In this defense, I will focus on the gradient leakage problem and our mitigation approaches in federated deep learning settings.
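The gradient leakage threat can be illustrated with a minimal sketch (my own construction, not the dissertation's technique): for a linear model with a bias term, the gradient a federated client shares for a single example reveals the private input exactly, since dL/dw = err * x and dL/db = err, so x = (dL/dw) / (dL/db).

```python
import numpy as np

rng = np.random.default_rng(0)
x_private = rng.normal(size=4)        # client's private training input
y_private = 1.0                       # client's private label
w = rng.normal(size=4)                # shared model weights
b = 0.0                               # shared bias

# Squared-error loss L = 0.5 * (w.x + b - y)^2 on one example.
err = w @ x_private + b - y_private   # prediction error
grad_w = err * x_private              # gradient the client would upload
grad_b = err

# Server-side "attack": divide the weight gradient by the bias gradient
# to recover the client's private input exactly.
x_reconstructed = grad_w / grad_b
```

Deep networks do not leak inputs in closed form like this, but iterative gradient-matching attacks achieve a similar reconstruction, which motivates the mitigation techniques studied in this work.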

Additional Information
  • In Campus Calendar: Graduate Studies
  • Invited Audience: Faculty/Staff, Public, Undergraduate students
  • PhD Defense
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Sep 22, 2021 - 10:30am
  • Last Updated: Sep 22, 2021 - 10:30am