PhD Proposal by Yanzhao Wu

Event Details
  • Date/Time:
    • Monday September 20, 2021
      3:30 pm - 4:30 pm
  • Location: Atlanta, GA; REMOTE
  • URL: Bluejeans

Summary Sentence: Towards Deep Learning System and Algorithm Co-design


Title: Towards Deep Learning System and Algorithm Co-design


Date: Monday, September 20th, 2021

Time: 3:30-5:30pm (ET)



Yanzhao Wu

Ph.D. Student

School of Computer Science

Georgia Institute of Technology




Dr. Ling Liu (Advisor, School of Computer Science, Georgia Institute of Technology)

Dr. Calton Pu (Co-Advisor, School of Computer Science, Georgia Institute of Technology)

Dr. Greg Eisenhauer (School of Computer Science, Georgia Institute of Technology)

Dr. Shamkant Navathe (School of Computer Science, Georgia Institute of Technology)

Dr. Lakshmish Ramaswamy (Department of Computer Science, University of Georgia)





Big data powered deep learning (DL) systems and applications have blossomed in recent years. In addition to the demand for more accurate deep neural network (DNN) models, we have witnessed growing interest in deploying model inference and model learning at the edge of the Internet, where data are generated. These trends demand deep learning system and algorithm co-design for performance optimization of deep learning systems and deep learning as a service. This dissertation research takes a holistic approach to deep learning system and algorithm co-design, with three original contributions.


First, we develop a methodical approach to configuration management of deep learning frameworks by exploring the intrinsic correlations between system-level parameters and algorithm-specific hyperparameters, and how different combinations may impact the performance of deep learning models. The core system parameters include configurations of CPU, memory, GPU, parallel processing, and multi-thread management; the DNN algorithm-specific hyperparameters include learning rate policies, loss functions, optimizers, batch size, and so forth. For example, we characterize CPU/GPU resource usage patterns under different configurations and different DL frameworks to obtain an in-depth understanding of how varying batch sizes and learning rate policies may impact model performance. We also provide a set of metrics for evaluating and selecting learning rate policies, covering classification confidence, variance, cost, and robustness. Two benchmarking tools, GTDLBench and LRBench, are made publicly available.
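To make the notion of a learning rate policy concrete, the following is a minimal sketch of three common policies of the kind compared in this line of work; the function names, signatures, and default values are illustrative assumptions, not LRBench's actual API.

```python
import math

# Illustrative learning rate policies; names and parameters are
# hypothetical, chosen only to show how policies differ over iterations.
def fixed_lr(base_lr, step):
    # A constant rate throughout training.
    return base_lr

def step_decay(base_lr, step, gamma=0.1, step_size=1000):
    # Multiply the base rate by gamma every step_size iterations.
    return base_lr * (gamma ** (step // step_size))

def cosine_lr(base_lr, step, total_steps=10000):
    # Cosine annealing from base_lr down toward zero.
    return 0.5 * base_lr * (1 + math.cos(math.pi * step / total_steps))

# Comparing the rates each policy yields at iteration 2500:
for policy in (fixed_lr, step_decay, cosine_lr):
    print(policy.__name__, policy(0.1, 2500))
```

A benchmarking tool in this spirit would train the same model under each policy and score the outcomes with metrics such as accuracy, cost, and robustness.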


Second, we develop a systematic framework for creating ensembles of failure-independent models by leveraging system and algorithm co-design for prediction fusion through diversity-based hierarchical ensemble optimizations. We introduce a hierarchical diversity concept that captures diversity through high failure independence and low negative correlation. We develop focal-model based ensemble diversity metrics to compose high-quality ensembles from complementary member models, which effectively boosts the overall accuracy of ensemble learning. We also develop ensemble selection algorithms based on a suite of ensemble pruning strategies, which select ensemble teams of high diversity and remove low-diversity ensembles. Our formal analysis and empirical results demonstrate the effectiveness of our system and algorithm co-design for high-diversity ensemble learning. Our EnsembleBench tool has been used in adversarial learning to improve robustness for single-task learners, such as image classifiers, and multi-task learners, such as real-time video object detectors.
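The idea of pruning low-diversity ensembles can be sketched as follows. This is a simplified illustration, assuming binary per-sample correctness records for each model (1 = correct); the pairwise failure-independence score and exhaustive team search below are illustrative assumptions, not EnsembleBench's exact metrics or algorithms.

```python
from itertools import combinations

def failure_diversity(a, b):
    # Fraction of samples on which at most one of the two models fails:
    # a high value means the models' failures are largely independent.
    both_fail = sum(1 for x, y in zip(a, b) if x == 0 and y == 0)
    return 1 - both_fail / len(a)

def select_team(records, team_size):
    # Score every candidate team by mean pairwise diversity and keep
    # the most diverse one, pruning low-diversity ensembles.
    def score(team):
        pairs = list(combinations(team, 2))
        return sum(failure_diversity(records[i], records[j])
                   for i, j in pairs) / len(pairs)
    return max(combinations(range(len(records)), team_size), key=score)

records = [
    [1, 1, 0, 0, 1],  # model 0
    [1, 1, 0, 0, 1],  # model 1: fails exactly where model 0 fails
    [0, 1, 1, 1, 1],  # model 2: failures independent of model 0
    [1, 0, 1, 1, 0],  # model 3
]
print(select_team(records, 2))  # prefers a failure-independent pair
```

Models 0 and 1 always fail together, so any team pairing them scores low; the selection favors pairs whose failures do not overlap.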


Last but not least, we leverage system and algorithm co-design through a suite of optimization techniques to enable DNN inference and DNN learning at the edge. For example, edge video analytics is a core component of many real-time deep learning systems, such as autonomous driving, video surveillance, and the Internet of smart cameras. Edge server load surges and Wi-Fi network bandwidth saturation can further aggravate the mismatch between the incoming video streaming rate, in frames per second (FPS), and the detection processing rate, which often results in random frame dropping. We explore both a detection-model parallel execution approach and utility-aware data reduction techniques. The former exploits multi-model, multi-device detection parallelism for fast object detection at the edge to meet runtime performance requirements. The latter leverages adaptive importance sampling techniques to improve both the throughput and the accuracy of edge video analytics.
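To illustrate why importance sampling beats random frame dropping, the following is a minimal sketch of utility-aware frame selection under an edge processing budget. It assumes each frame already has a utility score (e.g., inter-frame change); the scoring, budget policy, and weighted-sampling scheme are illustrative assumptions, not the dissertation's actual method.

```python
import random

def sample_frames(utilities, budget, seed=0):
    # Keep at most `budget` frames, favoring high-utility frames rather
    # than dropping frames at random when the detector falls behind.
    rng = random.Random(seed)
    total = sum(utilities)
    weights = [u / total for u in utilities]
    # Weighted sampling without replacement (Efraimidis-Spirakis keys):
    # each frame draws a key r**(1/w); the largest keys are kept.
    keys = [(rng.random() ** (1.0 / w), i) for i, w in enumerate(weights)]
    kept = sorted(i for _, i in sorted(keys, reverse=True)[:budget])
    return kept

# Frames 0, 2, and 4 carry most of the scene change, so they are far
# more likely to survive the reduction than frames 1, 3, and 5.
utilities = [0.9, 0.1, 0.8, 0.05, 0.7, 0.1]
print(sample_frames(utilities, budget=3))
```

With a fixed seed the selection is deterministic, which makes the behavior easy to inspect; in a deployed pipeline the budget would track the detector's current throughput.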


In this proposal exam, I will present the design and development of diversity based ensemble algorithms, the implementation of EnsembleBench, and the extensive evaluation of our ensemble methods.


Additional Information

In Campus Calendar

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
PhD proposal
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Sep 9, 2021 - 11:48am
  • Last Updated: Sep 9, 2021 - 11:48am