PhD Proposal by Benjamin Cobb
Title: Novel Tensor Factorization Algorithms and Applications
Date: Monday, December 1, 2025
Time: 10:30am-12:30pm ET
Location: Coda C1015 Vinings
Zoom: https://gatech.zoom.us/j/92964368885?pwd=LVQ933vmAz1BX0bk2DMgtSb3S5QBD4.1
Benjamin Cobb
Computer Science Ph.D. Student
School of Computational Science and Engineering
Georgia Institute of Technology
Committee:
Dr. Richard W. Vuduc (co-advisor), CSE, Georgia Institute of Technology
Dr. Haesun Park (co-advisor), CSE, Georgia Institute of Technology
Dr. Edmond Chow, CSE, Georgia Institute of Technology
Dr. Grey M. Ballard, CS, Wake Forest University
Dr. Ramakrishnan Kannan, Discrete Algorithms, Oak Ridge National Laboratory
Abstract:
As our capacity to collect and generate data continues to outpace our capacity to store and analyze it, low-rank tensor factorizations offer a way to mitigate this gap for tensorized data. Computing low-rank tensor factorizations is computationally expensive, however, and requires efficient use of computational resources to remain feasible. This thesis proposes several novel algorithms for low-rank tensor factorization that enable interpretable analysis and compression of massive multiway datasets. To this end, we focus on three overarching themes: efficiently leveraging computational resources to compute large-scale tensor decompositions, efficiently enforcing nonnegativity constraints to improve interpretability, and incorporating additional information into the factorization to improve solution quality.
We start by proposing the Fused In-place Sequentially Truncated Higher Order Singular Value Decomposition (FIST-HOSVD) algorithm, the first in-place method for computing the Tucker decomposition, which increases the problem size that can be factorized by up to 3x. We demonstrate the effectiveness of the proposed FIST-HOSVD algorithm on two combustion simulation compression applications. We then provide an in-depth study of several state-of-the-art methods for Nonnegative Matrix Factorization (NMF). As part of this, we provide a comprehensive survey of Nonnegative Least Squares (NNLS) solvers used to enforce the nonnegativity of the factors. In doing so, we propose a Fast Active-Set Thresholding NNLS (FAST-NNLS) solver that outperforms existing NNLS methods for broad classes of problems. We then propose the Low Rank Approximations with Constraints at Exascale (LORACX) framework. We demonstrate that LORACX yields unprecedented performance and scalability, achieving 0.67 exaflops in double precision on 8,192 nodes of the Frontier supercomputer. We then extend the NMF objective function to incorporate additional multiway data, culminating in a Joint Nonnegative Coupled Matrix-Tensor Factorization (Joint-NCMTF) framework. We demonstrate that the proposed Joint-NCMTF method yields improved clustering quality and additional dimensions of insight relative to traditional matrix-based methods.
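For readers unfamiliar with the baseline that FIST-HOSVD builds on, the following is a minimal sketch of a standard sequentially truncated HOSVD (ST-HOSVD) for the Tucker decomposition. It is illustrative only: the function name, fixed target ranks, and dense NumPy arrays are assumptions for the sketch, and it does not perform the in-place fusion that distinguishes the proposed FIST-HOSVD algorithm.

```python
import numpy as np

def st_hosvd(X, ranks):
    """Sketch of a standard sequentially truncated HOSVD (ST-HOSVD).

    Illustrative only: FIST-HOSVD fuses steps and works in place to
    reduce memory, which this dense NumPy version does not attempt.
    X     : dense ndarray (the input tensor)
    ranks : target Tucker rank along each mode
    Returns the core tensor and the list of factor matrices.
    """
    core = X
    factors = []
    for mode, r in enumerate(ranks):
        # Mode-n matricization of the current (already partially truncated) core.
        unfolded = np.moveaxis(core, mode, 0).reshape(core.shape[mode], -1)
        # Leading left singular vectors give the mode-n factor matrix.
        U, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        U = U[:, :r]
        factors.append(U)
        # Truncate the core immediately (the "sequential" truncation),
        # which shrinks all subsequent unfoldings.
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    return core, factors

# Example: compress a small synthetic 3-way tensor to Tucker rank (5, 5, 5).
X = np.random.rand(40, 30, 20)
core, factors = st_hosvd(X, (5, 5, 5))
print(core.shape)  # (5, 5, 5)
```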
In this thesis proposal, we outline several future avenues of research, including preliminary work on a numerically stable QR-factorization-based NNLS algorithm and an application of Hierarchical NMF to large-scale protein k-mer family clustering.
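As background on how NNLS solvers enforce nonnegativity in NMF, the sketch below shows a generic alternating nonnegative least squares (ANLS) loop using SciPy's standard active-set NNLS solver. It is an assumption-laden illustration of the general technique, not the FAST-NNLS or QR-based solvers discussed above; the function name, rank, and iteration count are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

def nmf_anls(A, k, iters=20, seed=0):
    """Sketch of NMF via alternating nonnegative least squares (ANLS).

    Illustrative only: uses SciPy's generic active-set NNLS solver
    column by column, not the FAST-NNLS or QR-based solvers from the
    proposal. Approximates A (m x n) by W (m x k) @ H (k x n), W, H >= 0.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))
    for _ in range(iters):
        # Fix W, solve min ||W H - A||_F with H >= 0, one column at a time.
        H = np.column_stack([nnls(W, A[:, j])[0] for j in range(n)])
        # Fix H, solve the symmetric problem for the rows of W.
        W = np.column_stack([nnls(H.T, A[i, :])[0] for i in range(m)]).T
    return W, H

# Example on a small random nonnegative matrix.
A = np.abs(np.random.rand(60, 40))
W, H = nmf_anls(A, k=5)
print(np.linalg.norm(A - W @ H) / np.linalg.norm(A))  # relative error
```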