PhD Defense by Caleb Voss

Event Details
  • Date/Time: Thursday, November 5, 2020, 9:00 am - 11:00 am (ET)
  • Location: Remote, via BlueJeans
  • URL: http://bluejeans.com/cvoss9
Summaries

Summary Sentence: Addressing Logic Deadlocks through Task-Parallel Language Design

Title: Addressing Logic Deadlocks through Task-Parallel Language Design

 

Caleb Voss

School of Computer Science

Georgia Institute of Technology

 

Date: Thursday, November 5, 2020

Time: 9:00 am - 11:00 am (ET)

Location: remote, via BlueJeans at http://bluejeans.com/cvoss9

 

Committee:

Dr. Vivek Sarkar (advisor), School of Computer Science, Georgia Institute of Technology

Dr. Alessandro Orso, School of Computer Science, Georgia Institute of Technology

Dr. David Devecsery, School of Computer Science, Georgia Institute of Technology

Dr. Qirun Zhang, School of Computer Science, Georgia Institute of Technology

Dr. Tiago Cogumbreiro, College of Science and Mathematics, University of Massachusetts Boston

 

Abstract:

Task-parallel programming languages offer a variety of high-level synchronization mechanisms that trade off flexibility against deadlock safety. Certain approaches, such as spawn-sync and async-finish task organization, are deadlock-free by construction but support only limited synchronization patterns. More powerful approaches, such as promises and phasers, are trivially easy to deadlock. Viewing high-level task-parallel programming as the successor to low-level concurrent programming, it is imperative that language features offer both the flexibility to avoid over-synchronization and sufficient protection against logical deadlock bugs. A lack of flexibility leads to code that does not exploit the full parallelism available in the computation. A lack of deadlock protection leads to error-prone code in which a single bug can involve arbitrarily many tasks, making the program difficult to reason about. We make advances in both flexibility and deadlock protection for existing task-parallel synchronization mechanisms by carefully designing dynamically verifiable usage policies and language constructs.

 

We first define a deadlock-freedom policy for futures. The rules of the policy follow naturally from the semantics of asynchronous task closures and correspond to a preorder traversal of the task tree. The policy admits an additional class of deadlock-free programs compared to past work. Each blocking wait for a future can be verified by a stateless, lock-free algorithm, resulting in low time and memory overheads at runtime.
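The deadlock that such a policy rules out can be illustrated with a generic wait-for graph check. This is only an illustrative sketch, not the stateless per-wait verification algorithm described above: it assumes a global view in which each blocked task waits on at most one future's producing task.

```python
def find_wait_cycle(waits_for):
    """Return True if the wait-for graph contains a cycle.

    `waits_for` maps each blocked task to the single task whose
    future it is waiting on (None if the task is runnable).
    A cycle of blocking waits is exactly a deadlock.
    """
    for start in waits_for:
        seen = set()
        task = start
        while task is not None and task not in seen:
            seen.add(task)
            task = waits_for.get(task)
        if task is not None:  # the chain revisited a task: cycle found
            return True
    return False
```

A deadlock-freedom policy like the one above avoids any such global analysis by checking, locally at the moment of each blocking wait, that the wait cannot close a cycle.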

 

In order to define and identify deadlocks for promises, we introduce a mechanism for promises to be owned by tasks. Simple annotations make it possible to ensure that each promise is eventually fulfilled by the responsible task or handed off to another task. Ownership semantics allows us to formally define two kinds of promise bugs: omitted sets and deadlock cycles. We present novel detection algorithms for both bugs. We further introduce an approximate deadlock-freedom policy for promises that, instead of precisely detecting cycles, raises an alarm when synchronization dependences occurring between trees of tasks are at risk of deadlocking. To establish both the safety and the flexibility of the approach, we prove that this over-approximation safely identifies all deadlocks, and we prove that deadlock-free programs can be made to comply with the policy without loss of parallelism through the use of a novel language feature, the guard block, which acts as a hint to the verifier.
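To make the ownership idea concrete, here is a minimal single-threaded sketch; the class names and methods are hypothetical illustrations, not the thesis's actual annotations. Each promise is owned by exactly one task, ownership can be handed off, and a task that terminates while still owning an unfulfilled promise exhibits an omitted-set bug.

```python
class Task:
    """A task that may own promises it is responsible for fulfilling."""
    def __init__(self, name):
        self.name = name
        self.owned = set()

class Promise:
    """A write-once promise with a single owning task."""
    def __init__(self, owner):
        self.fulfilled = False
        self.owner = owner
        owner.owned.add(self)

    def fulfill(self):
        # Fulfilling the promise discharges the owner's obligation.
        self.fulfilled = True
        self.owner.owned.discard(self)

    def hand_off(self, new_owner):
        # Transfer the obligation to fulfill this promise to another task.
        self.owner.owned.discard(self)
        self.owner = new_owner
        new_owner.owned.add(self)

def omitted_set(task):
    """Promises the task still owns at termination. These are bugs:
    no task remains responsible for fulfilling them, so any task
    waiting on them blocks forever."""
    return {p for p in task.owned if not p.fulfilled}
```

In this model, checking `omitted_set` when a task terminates catches the first kind of bug; detecting deadlock cycles additionally requires tracking which tasks are blocked on which unfulfilled promises.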

 

Finally, we identify a lack of flexibility in the phaser, a synchronization primitive that, under certain restrictions, is deadlock-free by construction. The traditional restrictions cause undesirable interaction between unrelated phasers and tasks, leading to poor program design and unnecessary synchronization. We extend the semantics of phasers by introducing the concept of a subphase. By organizing phasers and their phases more carefully, we can eliminate some over-synchronization and anti-modularity that occurs in phaser programs to recover performance while still enjoying deadlock freedom.
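For readers unfamiliar with phasers, the core phase-advance behavior can be sketched as follows. This is a toy single-threaded model: a real phaser blocks arriving tasks until the phase advances, and the subphase extension described above is not modeled here.

```python
class MiniPhaser:
    """Toy phaser: parties register, then repeatedly arrive; the
    phase number advances only once every registered party has
    arrived, synchronizing all parties at each phase boundary."""
    def __init__(self):
        self.parties = 0
        self.arrived = 0
        self.phase = 0

    def register(self):
        self.parties += 1

    def arrive(self):
        self.arrived += 1
        if self.arrived == self.parties:
            self.arrived = 0
            self.phase += 1  # all parties synchronized; advance the phase
```

Because every registered party gates every phase advance, two logically unrelated groups of tasks sharing one phaser end up synchronizing with each other unnecessarily; this is the over-synchronization and anti-modularity that subphases aim to eliminate.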

Additional Information

In Campus Calendar
No
Groups

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
Categories
Other/Miscellaneous
Keywords
PhD Defense
Status
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Oct 23, 2020 - 10:41am
  • Last Updated: Oct 23, 2020 - 10:41am