Ph.D. Proposal Oral Exam - Devon Janke

Title: Overcoming Process Variations and Noise in Analog Neural Networks

Committee: 

Dr. Anderson, Advisor   

Dr. Lanterman, Chair

Dr. Raychowdhury

Abstract: The objective of the proposed research is to develop training algorithms and network architectures that limit the impact of device variations on the performance of neural network classifiers implemented in analog hardware. Traditional machine learning algorithms and neural networks run on powerful digital computational architectures such as GPUs, TPUs, and FPGAs, achieving high performance and completing previously impossible tasks. Unfortunately, the power required to train these networks and to generate predictions with them is too high for energy-constrained systems such as implants and edge devices. Implementing neural networks in analog hardware is one possible solution to this challenge, but analog devices suffer from process, voltage, and temperature (PVT) variations, among other non-idealities, that limit their precision. This work expects that some neural network architectures will suppress noise better than others, and it seeks to discover how to prepare a neural network model bound for hardware implementation so that it is resilient to these non-idealities.
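
To make the problem setting concrete, the sketch below illustrates one generic technique for modeling analog non-idealities during training: injecting random weight and activation perturbations on every forward pass so the learned parameters tolerate device variation. This is an illustrative assumption, not the method of the proposal; it assumes PyTorch, and the layer sizes (784, 128, 10) and noise levels are arbitrary placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Linear):
    # Fully connected layer whose weights are perturbed on every training
    # forward pass, so each step sees a different simulated "device instance".
    def __init__(self, in_features, out_features, weight_sigma=0.05, act_sigma=0.01):
        super().__init__(in_features, out_features)
        self.weight_sigma = weight_sigma  # relative weight variation (process mismatch)
        self.act_sigma = act_sigma        # additive output noise (thermal / readout)

    def forward(self, x):
        if self.training:
            # Multiplicative perturbation: each weight gets a random relative error
            w = self.weight * (1.0 + self.weight_sigma * torch.randn_like(self.weight))
            out = F.linear(x, w, self.bias)
            # Additive noise on the analog output node
            return out + self.act_sigma * torch.randn_like(out)
        return super().forward(x)

# Hypothetical classifier: training must find weights that tolerate the injected variation.
model = nn.Sequential(NoisyLinear(784, 128), nn.ReLU(), NoisyLinear(128, 10))
x = torch.randn(32, 784)              # dummy input batch
labels = torch.randint(0, 10, (32,))  # dummy labels
loss = F.cross_entropy(model(x), labels)
loss.backward()                       # gradients reflect the simulated non-idealities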
