PhD Defense by Divya Mahajan

Event Details
  • Date/Time:
    • Friday March 15, 2019
      12:00 pm - 2:00 pm
  • Location: Klaus 2100

Summary Sentence: Balancing Generality and Specialization for Machine Learning in the Post ISA Era


Title: Balancing Generality and Specialization for Machine Learning in the Post ISA Era


Divya Mahajan

PhD Candidate

School of Computer Science

College of Computing

Georgia Institute of Technology




Date: Friday, March 15, 2019

Time: Noon - 2:00 PM

Location: Klaus 2100 






Committee:
Dr. Hadi Esmaeilzadeh (Advisor), Department of Computer Science and Engineering, University of California, San Diego

Dr. Hyesoon Kim, School of Computer Science, Georgia Institute of Technology

Dr. Milos Prvulovic, School of Computer Science, Georgia Institute of Technology

Dr. Doug Burger, Microsoft Corporation

Dr. Dean Tullsen, Department of Computer Science and Engineering, University of California, San Diego






A growing number of commercial and enterprise systems rely on compute-intensive machine learning algorithms. While demand for these applications is growing, the performance gains from general-purpose platforms are diminishing. This challenge has coincided with an explosion of data, where the rate of data generation has reached a level beyond the capabilities of current computing systems. The ever-increasing compute needs of applications such as machine learning and robotics can therefore benefit from hardware acceleration.


Traditionally, to accelerate a set of workloads, we profile code optimized for CPUs and offload the hot functions to compute units designed specifically for those functions, thereby providing higher performance and energy efficiency. Instead, in this work we take a revolutionary approach: we delve into the algorithmic properties of an application domain and couple them with our hardware acceleration solutions. We leverage the fact that a wide range of machine learning algorithms can be modeled as stochastic optimization problems, and use this property to devise comprehensive stacks that are built independently of the CPU. These stacks expose a high-level mathematical programming interface and can automatically generate accelerators for users who have limited knowledge of hardware design but can benefit from large performance and efficiency gains for their programs.
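To illustrate the stochastic-optimization view described above, the sketch below (plain Python, not TABLA's actual programming interface) specifies linear regression purely by its per-example gradient; a generic stochastic gradient descent loop then drives the learning. In a TABLA-style stack, it is this specification that the compiler would map onto a generated accelerator rather than a CPU.

```python
import random

def gradient(w, x, y):
    """Gradient of the squared loss (w.x - y)^2 / 2 for one example.
    This per-example gradient is the whole algorithmic specification."""
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [err * xi for xi in x]

def sgd(data, dim, lr=0.1, epochs=100, seed=0):
    """Generic stochastic gradient descent solver, independent of the model."""
    random.seed(seed)
    w = [0.0] * dim
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            g = gradient(w, x, y)
            w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

# Learn y = 2*x + 1 (bias folded in as a constant feature of 1.0).
data = [([x, 1.0], 2.0 * x + 1.0) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
w = sgd(data, dim=2)  # w converges toward [2.0, 1.0]
```

The point of the separation is that only `gradient` changes from one learning algorithm to another, while the solver loop (and hence the accelerator template it compiles to) stays fixed.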


Keeping these ambitious goals in mind, our work (1) strikes a balance between generality and specialization by breaking the long-held abstraction of the Instruction Set Architecture (ISA) in favor of a more algorithm-centric approach; (2) develops hardware acceleration frameworks by co-designing the language, compiler, runtime system, and hardware to provide high performance and efficiency in addition to flexibility and programmability; (3) separates algorithmic specification from implementation to shield programmers from continual hardware/software changes while allowing them to benefit from the emerging heterogeneity of modern compute platforms; and (4) builds real cross-stack prototypes to evaluate these solutions in a real-world setting and releases them as open source to maximize community engagement and industry impact. Our work Tabla is public and defines the very first open-source hardware platform for machine learning and artificial intelligence.

Additional Information

In Campus Calendar

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
PhD Defense
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Mar 11, 2019 - 2:04pm
  • Last Updated: Mar 11, 2019 - 2:04pm