Ph.D. Thesis Proposal: Hrishikesh Amur

Event Details
  • Date/Time: Friday August 31, 2012, 3:00 pm - 5:00 pm
  • Location: KACB 3402

Title: Memory-Efficient Distributed Parallel Frameworks using Compressed Buffer Trees

Hrishikesh Amur
School of Computer Science
College of Computing
Georgia Institute of Technology

Date: Friday August 31st, 2012
Time: 3:00PM - 5:00PM (EST) - UPDATED
Location: KACB 3402


Committee:

  • Dr. Karsten Schwan (Advisor, School of Computer Science, Georgia Tech)
  • Dr. David Andersen (School of Computer Science, Carnegie Mellon University)
  • Dr. Greg Ganger (School of Computer Science, Carnegie Mellon University)
  • Dr. Ada Gavrilovska (School of Computer Science, Georgia Tech)
  • Dr. Matthew Wolf (School of Computer Science, Georgia Tech)

Memory is a valuable commodity in datacenters. DRAM is expensive to provision and consumes significant power. With the number of cores per socket growing faster than memory capacity per socket, memory is increasingly scarce. Given the rise of data-intensive computing, this scarcity gains increased relevance. Data-intensive computing systems are primarily designed to operate on large amounts of data from storage. However, to overcome the high latencies of disk access, applications commonly keep performance-sensitive data in memory. Scarcity of memory can therefore significantly impact the performance of distributed applications.

In this thesis we introduce techniques for memory efficiency that do not compromise performance. We introduce a novel data structure called the Compressed Buffer Tree (CBT), which stores data in a memory-efficient form and allows computation to be executed on that data with high throughput. The CBT achieves memory efficiency through the efficient application of data compression and the offloading of state to disk. We demonstrate the utility of the CBT through implementations of high-performance, memory-efficient runtimes for the following programming models, listed in order of increasing complexity:

  • MapReduce aggregation
  • Graph processing: MapReduce cannot naturally handle dependencies in data or support iterative execution; such dependencies are captured naturally by graphs. In distributed graph-processing libraries, communication can be handled either synchronously or asynchronously and can be message-passing-based or use shared memory. We show that the CBT can be used to implement runtimes for:

        – a synchronous, message-passing model (Pregel)

        – an asynchronous, shared-memory model (GraphLab)
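To make the core idea concrete, the following is a minimal sketch (not the thesis's actual implementation) of compression-based buffering for MapReduce-style aggregation: key-value pairs accumulate in a small in-memory buffer, full buffers are compressed in place of the CBT's compressed node buffers and on-disk state, and aggregation folds all buffered pairs with a reduce function. The class and parameter names (`CompressedBuffer`, `capacity`) are illustrative, not from the thesis.

```python
import pickle
import zlib


class CompressedBuffer:
    """Illustrative sketch of compression-based buffered aggregation.

    Pairs are appended to a small uncompressed buffer; when it fills,
    the buffer is serialized and zlib-compressed (standing in for the
    CBT's compressed buffers / disk-resident state), freeing memory
    for new inserts.
    """

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.buffer = []   # uncompressed (key, value) pairs
        self.chunks = []   # compressed, spilled buffers

    def insert(self, key, value):
        self.buffer.append((key, value))
        if len(self.buffer) >= self.capacity:
            self._spill()

    def _spill(self):
        # Compress the full buffer and start a fresh one.
        self.chunks.append(zlib.compress(pickle.dumps(self.buffer)))
        self.buffer = []

    def aggregate(self, reduce_fn):
        """Decompress all chunks and fold values per key."""
        result = {}
        chunks = [pickle.loads(zlib.decompress(c)) for c in self.chunks]
        for pairs in [self.buffer] + chunks:
            for k, v in pairs:
                result[k] = reduce_fn(result[k], v) if k in result else v
        return result
```

For example, a word count: inserting `("a", 1)` for each occurrence of `"a"` and aggregating with `lambda x, y: x + y` yields per-word totals, with most of the intermediate state held compressed rather than as raw pairs.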

Additional Information

In Campus Calendar

College of Computing, School of Computer Science

  • Created On: Aug 27, 2012 - 5:16am
  • Last Updated: Oct 7, 2016 - 9:59pm