PhD Dissertation Defense by Ramakrishnan Kannan

Event Details
  • Date/Time:
    • Monday February 22, 2016
      12:00 pm - 2:00 pm
  • Location: Klaus 1212

Summary Sentence: Scalable and Distributed Constrained Low Rank Approximation




Committee:

Haesun Park (Advisor, School of Computational Science and Engineering, Georgia Tech)

Rich Vuduc (School of Computational Science and Engineering, Georgia Tech)

Polo Chau (School of Computational Science and Engineering, Georgia Tech)

Charu Aggarwal (IBM Research)

Grey Ballard (Sandia National Laboratories)


Abstract :

Low rank approximation is the problem of finding two low rank factors W and H such that rank(WH) << rank(A), where the product WH approximates the given input matrix A. When these low rank factors W and H are constrained for meaningful physical interpretation, the problem is referred to as Constrained Low Rank Approximation (CLRA). Like most constrained optimization problems, CLRA is computationally more expensive than its unconstrained counterpart. A widely used CLRA is Non-negative Matrix Factorization (NMF), which enforces non-negativity constraints on each of the low rank factors W and H. In this talk, we focus on scalable, distributed CLRA algorithms for constraints such as boundedness and non-negativity, applied to large real-world matrices arising in text, High Definition (HD) video, social networks, and recommender systems.
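To make the NMF setting concrete, here is a minimal single-machine sketch using the classic Lee–Seung multiplicative updates. This is an illustration of NMF in general, not the dissertation's algorithms; the function name and parameters are my own choices.

```python
import numpy as np

def nmf(A, k, iters=200, eps=1e-9):
    """Approximate a non-negative matrix A (m x n) as W @ H with W (m x k),
    H (k x n) non-negative, via Lee-Seung multiplicative updates."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        # Updates keep factors non-negative because every term is non-negative;
        # eps guards against division by zero.
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ (H @ H.T) + eps)
    return W, H
```

The constraint makes the problem non-convex in (W, H) jointly, which is why alternating schemes like this one are the standard approach.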

We begin with the Bounded Matrix Low Rank Approximation (BMA), which imposes a lower and an upper bound on every element of the lower rank matrix. BMA is more challenging than NMF because it imposes bounds on the product WH rather than on each of the low rank factors W and H individually. For very large input matrices, we extend our BMA algorithm to Block BMA, which scales to a large number of processors. In applications such as HD video, where the input matrix to be factored is extremely large, distributed computation is inevitable and network communication becomes a major performance bottleneck. Towards this end, we propose a novel distributed Communication Avoiding NMF (CANMF) algorithm that communicates only the right low rank factor to its neighboring machine. Finally, we present a general distributed HPC-NMF framework that applies HPC techniques to communication-intensive NMF operations and is suitable for a broader class of NMF algorithms.
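The communication structure behind row-distributed NMF frameworks can be sketched as follows: if each process owns a row block of A and the corresponding row block of W, then updating H only requires the small k-by-n and k-by-k Gram products, which are summed across processes (an all-reduce) rather than shipping A or W themselves. Below is a hedged single-process simulation of that idea, with the all-reduce mimicked by a Python `sum` over blocks; the function name is hypothetical and this is not the dissertation's implementation.

```python
import numpy as np

def distributed_H_update(A_blocks, W_blocks, H, eps=1e-9):
    """Simulate one multiplicative H-update where A and W are split into
    row blocks across 'processes'. Only k x n and k x k partial Gram
    matrices are combined (the all-reduce), never the large blocks."""
    WtA = sum(Wb.T @ Ab for Wb, Ab in zip(W_blocks, A_blocks))  # k x n
    WtW = sum(Wb.T @ Wb for Wb in W_blocks)                     # k x k
    return H * (WtA / (WtW @ H + eps))
```

Because the reduced quantities scale with the rank k rather than with the matrix dimensions, communication volume stays small even as A grows, which is the point of the communication-avoiding designs described in the abstract.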

Additional Information
  • In Campus Calendar: Graduate Studies
  • Invited Audience: PhD Dissertation Defense
  • Created By: Jacquelyn Strickland
  • Workflow Status: Published
  • Created On: Feb 10, 2016 - 10:50am
  • Last Updated: Oct 7, 2016 - 10:16pm