PhD Proposal by Sana Damani

Title: Instruction Reordering and Work Scheduling for Thread-Parallel Architectures

Sana Damani

Ph.D. Student

School of Computer Science

Georgia Institute of Technology


Date: Wednesday, December 1, 2021

Time: 11:00 AM - 1:00 PM EST

Location (Remote via BlueJeans): https://gatech.bluejeans.com/sdamani6


Committee:

Dr. Vivek Sarkar (Advisor), School of Computer Science, Georgia Institute of Technology

Dr. Hyesoon Kim, School of Computer Science, Georgia Institute of Technology

Dr. Tom Conte, School of Computer Science, Georgia Institute of Technology

Dr. Santosh Pande, School of Computer Science, Georgia Institute of Technology


Abstract:

While accelerators such as GPUs and near-memory processors deliver significant performance improvements for applications with high data parallelism and regular memory accesses, they incur synchronization and memory access overheads in applications with irregular control flow and memory access patterns, resulting in reduced efficiency. Examples include graph applications, Monte Carlo simulations, ray tracing applications, and sparse matrix computations. This proposal identifies inefficiencies in executing irregular programs on thread-parallel architectures and recommends compiler transformations and architecture enhancements to address them. In particular, we describe instruction reordering and thread scheduling techniques that avoid serialization, reduce pipeline stalls, and minimize redundant thread migrations, thereby reducing overall program latency and improving processor utilization.


Contributions:

  1. Common Subexpression Convergence, a compiler transformation that identifies and removes redundant code in divergent regions of GPU programs.
  2. Speculative Reconvergence, a compiler transformation that identifies new thread reconvergence points in divergent GPU programs to improve SIMT efficiency.
  3. Subwarp Interleaving, an architecture feature that schedules threads at a subwarp granularity on GPUs to reduce pipeline stalls in divergent regions of the program.
  4. Memory Access Scheduling, a software instruction scheduling approach that groups together co-located memory accesses to minimize thread migrations on migratory-thread architectures.
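To give a flavor of the first contribution, the sketch below is a hypothetical illustration (not code from the proposal): on a SIMT machine, when both sides of a divergent branch compute the same subexpression, each side executes it with only a subset of the warp's lanes active. Hoisting the shared computation above the branch lets the full warp execute it once, convergently. The function names and the per-thread scalar framing are assumptions made for this example.

```cpp
// Divergent form: each branch of the if recomputes x * x, so on a SIMT
// machine the multiply is issued twice, each time with partial lane masks.
int divergent(int x) {
    if (x % 2 == 0)
        return x * x + 1;  // taken by "even" lanes only
    else
        return x * x - 1;  // taken by "odd" lanes only
}

// Hoisted form: the common subexpression is computed once, before the
// branch, while all lanes are still converged; only the cheap +1/-1
// remains inside the divergent region.
int hoisted(int x) {
    int sq = x * x;        // executed convergently by the full warp
    return (x % 2 == 0) ? sq + 1 : sq - 1;
}
```

The two forms are semantically identical per thread; the win is purely in how many dynamic instructions the SIMT pipeline issues under divergence.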
