PhD Defense by Yiling Luo

Thesis Title: Stochastic Methods in Model Estimation: New Algorithms and New Properties

 

Thesis Committee:

1. Dr. Xiaoming Huo (Advisor, ISyE, Georgia Tech)
2. Dr. Yajun Mei (Co-advisor, ISyE, Georgia Tech)
3. Dr. Arkadi Nemirovski (ISyE, Georgia Tech)
4. Dr. Vladimir Koltchinskii (Mathematics, Georgia Tech)
5. Dr. Kai Zhang (Department of Statistics and Operations Research, University of North Carolina, Chapel Hill)

 

Date and Time: Wednesday, December 21, 2022, 11:00 am (EST)

 

In-Person Location: Groseclose 303

Meeting Link: Click here to join the meeting

Meeting ID: 295 937 083 365

Passcode: tXvJLq

 

 

Abstract:

We study the properties of stochastic algorithms applied to optimization problems arising in model estimation. In particular, Chapters 2-5 investigate the statistical properties of estimators produced by stochastic algorithms, and Chapter 6 proposes a new stochastic algorithm and studies its optimization properties.

 

We summarize the main content of each chapter as follows.

 

In Chapter 2, we explore a directional bias phenomenon in both stochastic gradient descent (SGD) and gradient descent (GD) and examine its implications for the resulting estimators. We argue that this directional bias may give the SGD estimator a better generalization error bound.
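To make the directional contrast concrete, here is a minimal, self-contained sketch, not taken from the thesis, that runs GD and SGD on the same least-squares objective; all names, step sizes, and problem dimensions are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 200, 5
    A = rng.standard_normal((n, d))
    b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

    def full_grad(x):
        # gradient of F(x) = (1/2n) ||Ax - b||^2
        return A.T @ (A @ x - b) / n

    def stoch_grad(x, i):
        # gradient of the i-th summand (1/2) (a_i^T x - b_i)^2
        return (A[i] @ x - b[i]) * A[i]

    eta, T = 0.1, 1000
    x_gd = np.zeros(d)
    x_sgd = np.zeros(d)
    for t in range(T):
        x_gd = x_gd - eta * full_grad(x_gd)
        x_sgd = x_sgd - eta * stoch_grad(x_sgd, rng.integers(n))

    # Both iterates approach the least-squares solution, but along
    # different directions; comparing the two paths illustrates the
    # kind of directional bias studied in Chapter 2.
    print(np.linalg.norm(x_gd - x_sgd))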

 

In Chapter 3, we study the implicit regularization property of a variance-reduced version of the stochastic mirror descent algorithm. Implicit regularization induced by optimization algorithms has attracted considerable attention, and establishing its existence for a variance-reduced stochastic algorithm is new.
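As an illustrative instance of this class of algorithms (assuming SVRG-style variance reduction combined with the entropic mirror map on the probability simplex, which is not necessarily the thesis's exact setup), the following sketch minimizes a least-squares loss over the simplex.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d = 100, 10
    A = rng.standard_normal((n, d))
    b = rng.standard_normal(n)

    def grad_i(x, i):
        # gradient of the i-th summand (1/2) (a_i^T x - b_i)^2
        return (A[i] @ x - b[i]) * A[i]

    def full_grad(x):
        return A.T @ (A @ x - b) / n

    def mirror_step(x, g, eta):
        # entropic mirror map on the simplex: multiplicative update,
        # then renormalize back onto the simplex
        y = x * np.exp(-eta * g)
        return y / y.sum()

    eta, epochs, m = 0.05, 20, n
    x = np.full(d, 1.0 / d)            # start at the simplex center
    for _ in range(epochs):
        x_ref = x.copy()
        mu = full_grad(x_ref)          # full gradient at the snapshot
        for _ in range(m):
            i = rng.integers(n)
            # SVRG-style variance-reduced stochastic gradient
            g = grad_i(x, i) - grad_i(x_ref, i) + mu
            x = mirror_step(x, g, eta)

    print(x)  # approximate simplex-constrained least-squares solution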

 

In Chapter 4, we establish the equivalence between variance-reduced stochastic mirror descent and a technique developed in information theory: variance-reduced stochastic natural gradient descent. The value of this equivalence is that properties established for either method automatically carry over to the other.
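For reference, the two updates being related can be written in their standard single-sample forms, with stochastic gradient g_t, mirror map \psi (convex conjugate \psi^*), and Fisher information matrix F; the chapter's equivalence concerns the variance-reduced counterparts of these updates:

    x_{t+1} = \nabla\psi^{*}\big( \nabla\psi(x_t) - \eta\, g_t \big)    \quad \text{(stochastic mirror descent)}

    x_{t+1} = x_t - \eta\, F(x_t)^{-1} g_t                              \quad \text{(stochastic natural gradient descent)}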

 

In Chapter 5, we study a recent algorithm, ROOT-SGD, for the online learning problem, and we estimate the covariance of the estimator computed via ROOT-SGD. Our covariance estimators quantify the uncertainty in the ROOT-SGD iterates, which is useful for statistical inference.
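For context, and as a standard asymptotic fact about stochastic-approximation estimators rather than a statement of the thesis's specific construction: for a population objective F(x) = \mathbb{E}[f(x;\xi)] with minimizer x^\ast, estimators in this family have a limit covariance of the sandwich form

    \Sigma = A^{-1} S A^{-1}, \qquad A = \nabla^2 F(x^\ast), \qquad S = \mathbb{E}\big[ \nabla f(x^\ast;\xi)\, \nabla f(x^\ast;\xi)^{\top} \big],

and a plug-in covariance estimator replaces A and S with sample averages evaluated along the iterates.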

 

In Chapter 6, we study a constrained strongly convex problem, entropic optimal transport (OT), and propose a primal-dual stochastic algorithm with variance reduction to solve it. We show that the computational complexity of our algorithm improves on that of other first-order algorithms for entropic OT.
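For reference, given marginal histograms a and b, a cost matrix C, and a regularization parameter \varepsilon > 0, the entropic OT problem is commonly written as

    \min_{P \in U(a,b)} \; \langle C, P \rangle + \varepsilon \sum_{i,j} P_{ij} \log P_{ij}, \qquad U(a,b) = \{ P \ge 0 : P\mathbf{1} = a, \; P^{\top}\mathbf{1} = b \},

where the entropy term makes the objective strongly convex over the transportation polytope, matching the constrained strongly convex structure described above.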

 
