PhD Proposal by Sana Damani
Title: Instruction Reordering and Work Scheduling for Thread-Parallel Architectures
Sana Damani
Ph.D. Student
School of Computer Science
Georgia Institute of Technology
Date: Wednesday, December 1, 2021
Time: 11:00 AM - 1:00 PM EST
Location (remote via BlueJeans): https://gatech.bluejeans.com/sdamani6
Committee:
Dr. Vivek Sarkar (Advisor), School of Computer Science, Georgia Institute of Technology
Dr. Hyesoon Kim, School of Computer Science, Georgia Institute of Technology
Dr. Tom Conte, School of Computer Science, Georgia Institute of Technology
Dr. Santosh Pande, School of Computer Science, Georgia Institute of Technology
Abstract:
While accelerators such as GPUs and near-memory processors deliver significant performance improvements for applications with high data parallelism and regular memory accesses, they incur synchronization and memory access overheads in applications with irregular control flow and memory access patterns, resulting in reduced efficiency. Examples include graph applications, Monte Carlo simulations, ray tracing applications, and sparse matrix computations. This proposal aims to identify inefficiencies in executing irregular programs on thread-parallel architectures, and to recommend compiler transformations and architecture enhancements that address these inefficiencies. In particular, we describe instruction reordering and thread scheduling techniques that avoid serialization, reduce pipeline stalls, and minimize redundant thread migrations, thereby reducing overall program latency and improving processor utilization.
Contributions:
- Common Subexpression Convergence, a compiler transformation that identifies and removes redundant code in divergent regions of GPU programs.
- Speculative Reconvergence, a compiler transformation that identifies new thread reconvergence points in divergent GPU programs to improve SIMT efficiency.
- Subwarp Interleaving, an architecture feature that schedules threads at a subwarp granularity on GPUs to reduce pipeline stalls in divergent regions of the program.
- Memory Access Scheduling, a software instruction scheduling approach that groups together co-located memory accesses to minimize thread migrations on migratory-thread architectures.