PhD Defense by Yi Cheng

Thesis Title: Risk Neutral and Risk Averse Stochastic Optimization

 

Thesis Committee:

Dr. Alexander Shapiro (advisor), School of Industrial and Systems Engineering, Georgia Institute of Technology

Dr. Guanghui Lan (co-advisor), School of Industrial and Systems Engineering, Georgia Institute of Technology

Dr. Alp Muharremoglu, Amazon.com

Dr. Lauren N. Steimle, School of Industrial and Systems Engineering, Georgia Institute of Technology

Dr. Enlu Zhou, School of Industrial and Systems Engineering, Georgia Institute of Technology

 

Date and Time: Wednesday, November 23rd, 2022, 8:30 am (EST)

 

Location (Meeting Link): https://gatech.zoom.us/j/95280360071?pwd=WmZ4L1ozaCtUOEpKMDZ2VHVnK2xZdz09

Meeting ID: 952 8036 0071

Passcode: 817954

 

 

Abstract:

 

In this thesis, we focus on the modeling, computational methods, and applications of multistage and single-stage stochastic optimization, which in certain settings entail risk aversion. Chapters 2-4 concentrate on multistage stochastic programming, while Chapters 5-6 deal with a class of single-stage functional constrained stochastic optimization problems.

 

In Chapter 2, we investigate deterministic upper bounds for Multistage Stochastic Linear Programs (MSLPs) and propose the Dual SDDP algorithm, which computes a sequence of nonincreasing deterministic upper bounds on the optimal value of the problem, even in the absence of relatively complete recourse. We provide a sensitivity analysis with respect to the problem parameters and an application to an inventory problem. We also propose the Periodical Dual SDDP algorithm, which handles infinite-horizon MSLPs, and show its effectiveness for upper bound construction when the traditional statistical upper bound is not available. As a proof of concept of the developed tools, extensive numerical experiments are presented on the inventory problem and on Brazilian hydro-thermal planning problems under both finite-horizon and infinite-horizon settings.
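
For orientation, a generic nested formulation of a $T$-stage MSLP (standard notation, not necessarily that of the thesis) is
\[
\min_{\substack{A_1 x_1 = b_1 \\ x_1 \ge 0}} c_1^\top x_1 \;+\; \mathbb{E}\Big[\min_{\substack{B_2 x_1 + A_2 x_2 = b_2 \\ x_2 \ge 0}} c_2^\top x_2 \;+\; \mathbb{E}\big[\cdots + \mathbb{E}\big[\min_{\substack{B_T x_{T-1} + A_T x_T = b_T \\ x_T \ge 0}} c_T^\top x_T\big]\cdots\big]\Big].
\]
Roughly speaking, primal SDDP builds cutting-plane lower approximations of the cost-to-go functions and estimates an upper bound statistically by simulating the resulting policy, whereas Dual SDDP applies the cutting-plane machinery to a dual formulation, which is what makes the resulting upper bounds deterministic.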

 

In Chapter 3, we propose a construction of a statistical upper bound for the optimal value of risk-averse Stochastic Optimal Control (SOC) problems. This outlines an approach to the solution of a long-standing problem in that area of research. The bound holds for a large class of convex and monotone conditional risk mappings. We demonstrate the validity of the statistical upper bound on a real-world stochastic hydro-thermal planning problem.
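
The problems in question can be written in one standard risk-averse nested form (which may differ in notation from the thesis) as
\[
\min_{\pi}\; c_1(x_1,u_1) \;+\; \rho_{|\xi_{[1]}}\Big[c_2(x_2,u_2) + \rho_{|\xi_{[2]}}\big[\cdots + \rho_{|\xi_{[T-1]}}\big[c_T(x_T,u_T)\big]\cdots\big]\Big],
\qquad x_{t+1} = F_t(x_t,u_t,\xi_t),
\]
where the minimization is over policies $u_t = \pi_t(x_t)$ and each $\rho_{|\xi_{[t]}}$ is a convex and monotone conditional risk mapping (for instance, a conditional average value-at-risk); the statistical upper bound of the chapter is constructed for mappings of this type.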

 

In Chapter 4, we address the sample complexity of solving stationary stochastic programs by the Sample Average Approximation (SAA) method, working in the Stochastic Optimal Control (SOC) framework. In particular, we derive Central Limit Theorem-type asymptotics for the optimal values of the SAA problems. The main conclusion is that the sample size required to attain a given relative error of the SAA solution is not sensitive to the discount factor, even when the discount factor is very close to one. We consider both the risk-neutral and risk-averse settings. The presented numerical experiments confirm the theoretical analysis.
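
In the generic static SAA setting, which conveys the flavor of the analysis even though the chapter works with discounted stationary programs, the true problem $v^\ast = \min_{x\in X}\mathbb{E}[F(x,\xi)]$ is replaced by
\[
\hat v_N \;=\; \min_{x\in X}\; \frac{1}{N}\sum_{i=1}^{N} F(x,\xi^i),
\]
and, under standard regularity conditions with a unique optimal solution $x^\ast$, the classical Central Limit Theorem-type result states that $\sqrt{N}\,(\hat v_N - v^\ast)$ converges in distribution to a normal random variable with variance $\mathrm{Var}[F(x^\ast,\xi)]$. The chapter derives analogous asymptotics for the SOC problems and examines how the implied sample size depends on the discount factor.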

 

Chapter 5 is motivated by the need, in many important applications, to enforce risk control and sparsity simultaneously, and by the computational challenges this entails. To address these issues, we propose a novel projection-free method, referred to as the Level Conditional Gradient (LCG) method, for solving convex functional constrained optimization. It leverages a level-set framework to update the estimate of the optimal value and an inner conditional gradient oracle (CGO) for solving min-max subproblems. In particular, the proposed CGO is capable of solving a general convex saddle point problem and provides computable lower and upper bounds on its optimal value. We show that the method achieves an $\mathcal{O}\big(\frac{1}{\epsilon^2}\log\frac{1}{\epsilon}\big)$ iteration complexity in both the smooth and nonsmooth cases, without dependence on the possibly large magnitude of the optimal dual Lagrange multiplier.
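
A generic template for this problem class (notation ours, not necessarily the thesis's) is
\[
\min_{x\in X} f(x) \quad \text{subject to} \quad g_i(x)\le 0,\; i=1,\dots,m,
\]
with $X$ convex and compact and $f, g_1, \dots, g_m$ convex. "Projection-free" means that the method accesses $X$ only through a linear minimization oracle, i.e., it solves $\min_{y\in X}\langle c, y\rangle$ for given vectors $c$, which is often much cheaper than computing projections onto $X$ (for example, when $X$ is a nuclear-norm ball or a structured polytope).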

 

In Chapter 6, to cope with nonconvex functional constrained optimization, we introduce the Inexact Proximal Point Level Conditional Gradient (IPP-LCG) method and the Direct Nonconvex Conditional Gradient (DNCG) method. The first approach taps into the advantages of LCG by transforming the problem into a series of convex subproblems and exhibits an $\mathcal{O}\big(\frac{1}{\epsilon^3}\log\frac{1}{\epsilon}\big)$ iteration complexity for finding an approximate KKT point. DNCG is the first single-loop projection-free method, with iteration complexity bounded by $\mathcal{O}\big(\frac{1}{\epsilon^4}\big)$ for computing a so-called $\epsilon$-Wolfe point. A numerical study on a portfolio selection problem and an intensity-modulated radiation therapy treatment planning problem shows that all of the proposed methods are efficient at jointly minimizing risk while promoting sparsity on real-world, large-scale datasets.
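
For background, the classical Frank-Wolfe (Wolfe) gap for minimizing a smooth nonconvex $f$ over a compact convex set $X$ is
\[
\mathrm{gap}(x) \;=\; \max_{y\in X}\,\langle \nabla f(x),\, x - y\rangle,
\]
and a point with $\mathrm{gap}(x)\le \epsilon$ is commonly called an $\epsilon$-Wolfe point. The thesis's notion for the functionally constrained setting is in this spirit but also accounts for the constraint functions, so the display above should be read only as a reference point.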

 
