
PhD Defense by Yongchun Li


Title: Sparse and Low-rank Constrained Optimization: Formulations, Theory, and Algorithms

Committee:

Dr. Diego Cifuentes, H. Milton Stewart School of Industrial and Systems Engineering, Georgia Tech

Dr. Santanu Dey, H. Milton Stewart School of Industrial and Systems Engineering, Georgia Tech

Dr. Robert Hildebrand, Grado Department of Industrial and Systems Engineering, Virginia Tech

Dr. Mohit Singh, H. Milton Stewart School of Industrial and Systems Engineering, Georgia Tech

Dr. Weijun Xie (advisor), H. Milton Stewart School of Industrial and Systems Engineering, Georgia Tech


Time: Thursday, April 11, 2024, 12:00–2:00 pm ET

Location: Zoom (https://gatech.zoom.us/j/97560475443)


Sparsity and low rank are fundamental concepts in data reduction. Sparse and low-rank constrained optimization has benefited a variety of application areas, including machine learning, finance, cloud computing, and healthcare, by effectively taming the rapid growth of big data. Despite its versatility, sparse and low-rank constrained optimization is known to be NP-hard. This thesis aims to mitigate this challenge by developing novel formulations, theory, and algorithms.


The first part of this thesis (Chapters 2 and 3) focuses on two classic sparse constrained optimization problems in optimization and interpretable machine learning. These problems select a fixed-size subset of matrix rows and/or columns to maximize a metric such as entropy or accuracy, with the goal of minimizing the gap relative to their non-sparse counterparts. The submatrix selection problem is naturally formulated as a discrete optimization problem, since the selection of each candidate element corresponds to a binary variable; a representative formulation is sketched below. This motivates us to develop discrete optimization methods. First, we derive equivalent mixed-integer convex programming formulations for both problems, which pave the way for designing efficient branch-and-cut algorithms that solve them to optimality. In addition, we develop scalable approximation algorithms, establish the first known theoretical guarantees for them, and provide efficient implementations for approximately solving these sparse constrained optimization problems.
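
To make the binary-variable viewpoint concrete, the following is a minimal sketch using maximum entropy sampling as a representative instance; the covariance matrix C, the subset size k, and the indicator variables z are introduced here purely for illustration and abstract away the exact models treated in the thesis. Selecting the size-k principal submatrix of C with maximum log-determinant (i.e., maximum differential entropy under a Gaussian model) can be written as

\[
\max_{z \in \{0,1\}^n} \; \log\det\big( C_{S(z),\,S(z)} \big)
\quad \text{s.t.} \quad \sum_{i=1}^{n} z_i = k,
\qquad S(z) := \{\, i \in [n] : z_i = 1 \,\},
\]

where each binary variable z_i indicates whether row/column i of the n-by-n covariance matrix C is selected.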


In the second part of this thesis (Chapters 4 and 5), we propose a general low-rank optimization framework. We show that various problems in optimization, statistics, and machine learning fall into this framework, including quadratically constrained quadratic programs, fair machine learning, and recommendation systems. Low-rank constrained optimization is generally NP-hard and not even mixed-integer convex representable. Hence, we leverage partial convexification to obtain a tractable convex relaxation, as sketched below. While partial convexification often performs well in practice, its solution quality has lacked theoretical guarantees. We fill this gap by developing a new theory in convex geometry, which offers a geometric perspective for analyzing the performance of partial convexification. Specifically, (i) we establish a necessary and sufficient condition under which the partial convexification matches the original low-rank constrained optimization problem, and (ii) we derive an upper bound on the minimum rank among all optimal solutions of the partial convexification and prove its tightness. To solve the partial convexification efficiently, we develop a column generation algorithm combined with a rank-reduction algorithm; this combination ensures that the output solution satisfies the theoretical guarantees.
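
As a schematic illustration of partial convexification (the symbols below are generic placeholders, not the precise constraint system of any particular chapter), consider a low-rank constrained problem

\[
v^{*} \;=\; \min_{X} \;\langle C, X \rangle
\quad \text{s.t.} \quad \langle A_i, X \rangle \le b_i, \;\; i \in [m],
\qquad X \in \mathcal{D},
\]

where \(\mathcal{D}\) is a nonconvex low-rank set, e.g., \(\mathcal{D} = \{ X \in \mathbb{S}^{n}_{+} : \operatorname{rank}(X) \le k \}\), possibly intersected with simple spectral bounds. Partial convexification keeps the m side constraints and replaces \(\mathcal{D}\) by its convex hull,

\[
v^{\mathrm{conv}} \;=\; \min_{X} \;\langle C, X \rangle
\quad \text{s.t.} \quad \langle A_i, X \rangle \le b_i, \;\; i \in [m],
\qquad X \in \operatorname{conv}(\mathcal{D}),
\]

which is a convex relaxation, so \(v^{\mathrm{conv}} \le v^{*}\). The theory developed in this part characterizes when the two optimal values coincide and bounds the rank of optimal solutions to the relaxation.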


Yongchun Li

PhD Student

H. Milton Stewart School of Industrial and Systems Engineering
Georgia Institute of Technology 
