
PhD Proposal by Amir Yazdanbakhsh


Title:

Neuro-General Computing: An Acceleration-Approximation Approach

Amir Yazdanbakhsh
Ph.D. Student
School of Computer Science
College of Computing
Georgia Institute of Technology

Date: Wednesday, November 29, 2017
Time: 9:00 AM - 11:00 AM (EST)
Location: Klaus 1123

Committee:

Dr. Hadi Esmaeilzadeh (Advisor, School of Computer Science, Georgia Institute of Technology)
Dr. Milos Prvulovic (School of Computer Science, Georgia Institute of Technology)
Dr. Hyesoon Kim (School of Computer Science, Georgia Institute of Technology)
Dr. Sudhakar Yalamanchili (School of Electrical and Computer Engineering, Georgia Institute of Technology)
Dr. Nam Sung Kim (External, Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign)


Abstract:

A growing number of commercial and enterprise systems rely on compute- and power-intensive tasks. While the demand for these tasks is growing, the performance gains delivered by general-purpose platforms are diminishing. Without continuous performance improvements, grand-challenge applications such as computer vision, machine learning, and big-data analytics may stay out of reach due to their need for significantly higher compute capacity. Addressing these intertwined challenges requires moving beyond traditional techniques and exploring unconventional computing paradigms. This thesis leverages approximate computing---one of the unconventional yet promising paradigms---to mitigate these challenges and sustain traditional performance improvements in general-purpose platforms.

In this proposal, I will introduce a novel computing paradigm, called neuro-general computing, that trades off application accuracy in return for increased performance and energy efficiency. In this new paradigm, I leverage the approximability of various emerging applications, such as computer vision, machine learning, financial analysis, and scientific computing, to integrate neuro-computing models within conventional von Neumann general-purpose computing platforms. I then propose how to utilize the complementary specialization techniques of acceleration and approximation in the domain of deep convolutional neural networks. First, I propose SnaPEA, a hardware-software co-design that leverages a unique algorithmic property of deep convolutional neural networks to cut the computation cost of the convolution operation. Second, I study the rich and largely unexplored domain of unsupervised learning acceleration and propose GANAX, a unified SIMD-MIMD architecture that aims to accelerate generative adversarial networks (GANs)---one of the leading classes of unsupervised learning.
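The kind of algorithmic property SnaPEA exploits can be illustrated with a small sketch (my illustration, not the proposal's actual design): in a convolution followed by a ReLU, any output whose pre-activation sum is non-positive becomes zero anyway, so if the partial sum can no longer turn positive, the remaining multiply-accumulates can be skipped without changing the result. The function name and the positive-weights-first ordering below are assumptions for illustration; the sketch also assumes non-negative inputs (e.g., activations coming out of a previous ReLU).

```python
import numpy as np

def relu_dot_early_exit(weights, inputs):
    """One ReLU'd dot product (one convolution output) with early termination.

    Assumes inputs >= 0. Processing positive weights first means that once
    the partial sum drops to zero or below, only non-positive contributions
    remain, so the ReLU output is guaranteed to be zero and we can stop.
    Returns (output, number of multiply-accumulates actually performed).
    """
    order = np.argsort(-weights)            # positive weights first
    acc = 0.0
    ops = 0
    for i in order:
        if weights[i] <= 0 and acc <= 0:    # sum can only stay non-positive
            return 0.0, ops                 # ReLU would zero it out anyway
        acc += weights[i] * inputs[i]
        ops += 1
    return max(acc, 0.0), ops

# Example: the full sum is -21, so ReLU yields 0; the early exit
# reaches that conclusion after 2 of the 3 multiply-accumulates.
out, ops = relu_dot_early_exit(np.array([-5.0, 1.0, -3.0]),
                               np.array([2.0, 1.0, 4.0]))
```

The exactness of this particular skip (no accuracy loss) is what makes it attractive; an approximate variant could terminate even earlier by predicting that the sum is unlikely to turn positive, trading a small amount of accuracy for further savings, in the spirit of the acceleration-approximation approach described above.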

