Ph.D. Proposal Oral Exam - Xiaoyu Sun

Title: Computing-in-Memory for Accelerating Deep Neural Networks

Committee: 

Dr. Yu, Advisor   

Dr. Lim, Chair

Dr. Raychowdhury

Abstract:

The objective of the proposed research is to accelerate deep neural networks (DNNs) with a computing-in-memory (CIM) architecture based on emerging non-volatile memories (eNVMs). The research first focuses on DNN inference and proposes a resistive random access memory (RRAM) based architecture with a customized synaptic weight cell design for implementing binary neural networks (BNNs), showing great potential in terms of area and energy efficiency. A prototype chip that monolithically integrates an RRAM array and CMOS peripheral circuits was then fabricated in a commercial 90 nm process, which not only demonstrates the feasibility of the proposed CIM operation but also validates the advantages shown in benchmarking. Moreover, to overcome the challenges posed by the nonlinearity and asymmetry of eNVM conductance tuning, this research proposes a novel 2-transistor-1-FeFET (ferroelectric field-effect transistor) synaptic weight cell that exploits hybrid precision for in-situ training and inference, achieving software-comparable classification accuracy on the MNIST and CIFAR-10 datasets. The remaining part of this research will focus on two topics: 1) a thorough investigation of the impact of eNVMs' non-ideal characteristics on DNN training and inference; 2) a second-generation RRAM-based inference chip with configurable 1/2/4/8-bit activation/weight precision.
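To illustrate why BNNs map well onto a CIM array, the sketch below shows the standard BNN arithmetic identity: with weights and activations binarized to {-1, +1}, a dot product reduces to an XNOR followed by a popcount, which is the kind of bitwise-and-accumulate operation an RRAM crossbar can evaluate in place along its bitlines. This is a minimal software sketch of the general BNN technique, not the proposal's actual cell or array design; the function names are illustrative.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} (sign binarization, a common BNN choice)."""
    return np.where(x >= 0, 1, -1)

def xnor_popcount_dot(a_bits, w_bits):
    """Dot product of two {-1, +1} vectors via XNOR + popcount.

    Encoding -1 -> 0 and +1 -> 1, XNOR marks positions where the
    operands agree; the dot product is (#matches) - (#mismatches),
    i.e. 2 * popcount(XNOR) - n.
    """
    a = (np.asarray(a_bits) > 0).astype(np.uint8)
    w = (np.asarray(w_bits) > 0).astype(np.uint8)
    matches = np.logical_not(np.logical_xor(a, w)).sum()
    n = a.size
    return 2 * int(matches) - n

# The XNOR-popcount result matches the ordinary +/-1 dot product.
rng = np.random.default_rng(0)
a = binarize(rng.standard_normal(64))
w = binarize(rng.standard_normal(64))
assert xnor_popcount_dot(a, w) == int(np.dot(a, w))
```

In a CIM implementation, the XNOR happens at each weight cell and the popcount is the analog current summation on the bitline, so the multiply-accumulate never leaves the memory array.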

Status

  • Workflow Status: Published
  • Created By: Daniela Staiculescu
  • Created: 12/02/2019
  • Modified By: Daniela Staiculescu
  • Modified: 12/02/2019
