Ph.D. Proposal Oral Exam - Afshin Abdi
Title: Distributed Learning and Inference in Deep Models
Committee:
Dr. Fekri, Advisor
Dr. AlRegib, Chair
Dr. Romberg
Abstract:
The objective of the proposed research is to develop new methods for distributed learning and inference of deep models, and to analyze their performance, especially on nodes with limited computational power. We consider two related classes of problems: 1) distributed training of deep models, and 2) compression and restructuring of deep models for efficient deployment and reduced inference times on devices with limited resources. First, we argue that distributed training can be recast as the central estimation officer (CEO) problem in information theory, and on that basis we will develop a framework for the compression, communication, and parameter estimation of deep models. Next, for efficient implementation, we observe that neural networks with sparse and locally connected structures are better suited to large-scale distribution and parallel implementation because of their lower communication requirements. Hence, we propose to restructure the neural network by rearranging the neurons in each layer and partitioning the model into sub-models such that the number of connections among the sub-models is minimized.
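To make the first direction concrete: in a CEO-style setup, each worker sends a lossily compressed version of its local gradient, and the server estimates the underlying gradient from the noisy reports. The sketch below shows one standard ingredient such schemes rely on, an unbiased stochastic quantizer; the function name, grid design, and level count are illustrative assumptions, not the rate-distortion-derived scheme of the proposal.

```python
import numpy as np

def stochastic_quantize(x, levels=16, rng=None):
    """Round x stochastically onto a uniform grid of `levels` points
    spanning [x.min(), x.max()], so that E[q(x)] = x (unbiased).
    Illustrative only; the proposal derives its compression from
    CEO-problem (rate-distortion) arguments."""
    rng = rng or np.random.default_rng()
    lo, hi = x.min(), x.max()
    if hi == lo:                      # constant vector: nothing to quantize
        return x.copy()
    step = (hi - lo) / (levels - 1)
    scaled = (x - lo) / step
    floor = np.floor(scaled)
    # round up with probability equal to the fractional part
    q = floor + (rng.random(x.shape) < (scaled - floor))
    return lo + q * step
```

A server can then average many such quantized gradients; unbiasedness means the quantization noise averages out across workers, at the cost of extra variance per step.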
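The second direction, rearranging neurons so that few connections cross sub-model boundaries, is a graph-partitioning problem on the network's connectivity. The toy sketch below uses pairwise label swaps (Kernighan-Lin flavor) on a symmetric neuron-adjacency matrix; the function names and the O(n^4) brute-force refinement are assumptions for illustration, not the proposal's algorithm.

```python
import numpy as np

def cut_weight(W, labels):
    """Total magnitude of connections crossing partition boundaries
    (each unordered pair is counted twice since W is symmetric)."""
    cross = labels[:, None] != labels[None, :]
    return np.abs(W)[cross].sum()

def swap_refine(W, labels, passes=5):
    """Exchange the group labels of two neurons whenever the swap lowers
    the cut; swapping (rather than moving) keeps group sizes fixed.
    Brute-force toy: recomputes the full cut for every candidate swap."""
    labels = labels.copy()
    n = len(labels)
    best = cut_weight(W, labels)
    for _ in range(passes):
        improved = False
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    continue
                labels[i], labels[j] = labels[j], labels[i]
                c = cut_weight(W, labels)
                if c < best:
                    best, improved = c, True
                else:                 # revert a swap that did not help
                    labels[i], labels[j] = labels[j], labels[i]
        if not improved:
            break
    return labels, best
```

On a network whose connectivity has two dense blocks joined by weak links, the refinement recovers the blocks as the two sub-models, leaving only the weak links as inter-sub-model communication.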
Status
- Workflow Status: Published
- Created By: Daniela Staiculescu
- Created: 10/29/2019
- Modified By: Daniela Staiculescu
- Modified: 11/18/2019