PhD Proposal by Sanidhya Kashyap

Title: Scaling Synchronization Primitives

 

Sanidhya Kashyap

Ph.D. student

School of Computer Science

College of Computing

Georgia Institute of Technology

https://gts3.org/~sanidhya/

 

Date: Wednesday, December 5th, 2018

Time: 10 AM to 12 PM (EST)

Location: KACB 3100

 

Committee:

 

Dr. Taesoo Kim (Advisor, School of Computer Science, Georgia Institute of Technology)

Dr. Ada Gavrilovska (School of Computer Science, Georgia Institute of Technology)

Dr. Changwoo Min (The Bradley Department of Electrical and Computer Engineering, Virginia Tech)


Abstract:

 

For the past decade, we have seen tremendous growth in the adoption and commoditization of multicore machines that have up to 500 hardware threads. A fundamental question thus arises: how efficient are the existing synchronization primitives---timestamping and locking---that developers use for designing concurrent, scalable, and performant applications? In this thesis, I focus on understanding the performance of these primitives and on devising new algorithms and approaches that improve it.

As a part of this thesis, I develop a scalable ordering primitive that overcomes the timestamping overhead in timestamp-based concurrent algorithms.

In my second line of work, I venture into the realm of locking primitives. In particular, we first analyze and understand the multicore scalability of file systems. We find that some of these bottlenecks require a file-system redesign, while others always contend on blocking locks, regardless of design decisions. We design and implement two new blocking locks that efficiently use the task scheduler to scale in both under- and over-subscribed scenarios. We further observe that the task scheduler also affects the scalability of applications in virtualized scenarios, as it leads to the convoy effect and the double-scheduling problem. We mitigate these issues by bridging the missing scheduling information between the hypervisor and VMs with almost no overhead.

Finally, I look at two extreme forms of lock design: shared memory and message passing. In the first part, I explore the set of design principles and constraints that govern the scalability of shared-memory locking primitives. I then examine the notion of reader parallelism in message-passing-based lock design, which allows a writer to exploit higher cache locality on a single core while readers exploit hardware parallelism.

 

