PhD Thesis Defense - Liujia Hu

TITLE: Convergent Algorithms in Simulation Optimization

ABSTRACT:

Deterministic optimization models can frequently be made more practical by explicitly incorporating uncertainty. The resulting stochastic optimization problems are generally more difficult to solve than their deterministic counterparts, because the objective function cannot be evaluated exactly and/or because there is no explicit relation between the objective function and the decision variables. This thesis develops random search algorithms for solving optimization problems with continuous decision variables when the objective function values can only be estimated, with noise, via simulation. Our algorithms maintain a set of sampled solutions and use the simulation results at these solutions to guide the search for better ones.
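To make the setting concrete, the sketch below is a generic illustration of this kind of search, not one of the algorithms developed in the thesis: each candidate solution is evaluated by averaging several noisy simulation replications, and the estimates guide the choice of the returned solution. The objective function, the number of replications, and the sampling range are all hypothetical placeholders.

```python
# Minimal sketch: random search when the objective can only be estimated
# with noise via simulation.  All functions and parameters are illustrative.
import random

def noisy_objective(x, noise_sd=0.5):
    """Simulation stand-in: true objective (x - 2)^2, observed with Gaussian noise."""
    return (x - 2.0) ** 2 + random.gauss(0.0, noise_sd)

def random_search(n_iter=1000, reps=10, lo=-10.0, hi=10.0):
    """Maintain a set of sampled solutions; average replications to reduce noise."""
    estimates = {}                        # solution -> estimated objective value
    for _ in range(n_iter):
        x = random.uniform(lo, hi)        # sample a new candidate solution
        obs = [noisy_objective(x) for _ in range(reps)]
        estimates[x] = sum(obs) / reps
    best = min(estimates, key=estimates.get)
    return best, estimates[best]

if __name__ == "__main__":
    x_star, f_star = random_search()
    print(f"estimated optimum near x = {x_star:.3f}, value = {f_star:.3f}")
```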

In the first part of the thesis, we propose an Adaptive Search with Resampling and Discarding (ASRD) approach for solving continuous stochastic optimization problems. ASRD is a framework for designing provably convergent algorithms that are adaptive both in seeking new solutions and in deciding whether to keep or discard already sampled solutions. The framework improves on the Adaptive Search with Resampling (ASR) method of Andradottir and Prudius, which never discards sampled solutions, by spending less effort on inferior solutions. We present conditions under which the ASRD method converges almost surely and carry out numerical studies comparing the algorithms. Moreover, we show that whether resampling is beneficial depends on the problem, and we analyze when resampling is desirable. Our numerical results show that ASRD substantially improves on ASR, especially for difficult problems with many local optima.
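The following sketch illustrates the resampling-and-discarding idea in its simplest form. It is not the ASRD algorithm itself: the sampling rule, the resampling schedule, and the discarding threshold are assumptions made for illustration only.

```python
# Rough sketch of resampling and discarding: kept solutions are re-simulated to
# sharpen their estimates, and solutions whose estimates trail the current best
# by a wide margin are discarded.  All rules and thresholds are illustrative.
import random

def noisy_objective(x, noise_sd=0.5):
    return (x - 2.0) ** 2 + random.gauss(0.0, noise_sd)

def adaptive_search(n_iter=500, discard_gap=5.0, lo=-10.0, hi=10.0):
    pool = {}                                    # x -> (sum of observations, count)
    for _ in range(n_iter):
        x = random.uniform(lo, hi)               # sample a new candidate
        pool[x] = (noisy_objective(x), 1)
        for s in list(pool):                     # resample every kept solution once
            total, n = pool[s]
            pool[s] = (total + noisy_objective(s), n + 1)
        means = {s: total / n for s, (total, n) in pool.items()}
        best_val = min(means.values())
        # discard solutions whose estimate falls far behind the current best
        pool = {s: pool[s] for s in pool if means[s] <= best_val + discard_gap}
    means = {s: total / n for s, (total, n) in pool.items()}
    best = min(means, key=means.get)
    return best, means[best]
```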

In traditional simulation optimization problems, noise appears only in the objective function. However, many real-world problems also involve stochastic constraints, which make the problem harder to solve because of the added uncertainty about feasibility. The second part of the thesis presents an Adaptive Search with Discarding and Penalization (ASDP) method for solving continuous simulation optimization problems with stochastic constraints. Rather than addressing feasibility separately, ASDP uses the penalty function method from deterministic optimization to convert the original problem into a sequence of simulation optimization problems without stochastic constraints. We present conditions under which the ASDP algorithm converges almost surely from inside the feasible region, and conditions under which it converges to the optimal solution but without a feasibility guarantee. We also conduct numerical studies assessing the efficiency of, and the tradeoff between, these two convergence modes.
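As a rough illustration of the penalty conversion (with an invented objective, constraint, and penalty schedule rather than those analyzed in the thesis), a problem of minimizing E[f(x)] subject to E[g(x)] <= 0 can be replaced by a sequence of unconstrained problems that minimize E[f(x)] plus a penalty proportional to the estimated constraint violation, with the penalty coefficient increasing over the sequence:

```python
# Sketch of a penalty-function conversion for a stochastically constrained
# problem.  The objective, constraint, and penalty schedule are illustrative.
import random

def noisy_objective(x):
    return (x - 2.0) ** 2 + random.gauss(0.0, 0.5)

def noisy_constraint(x):
    """Stochastic constraint g(x) <= 0; here E[g(x)] = 1 - x, observed with noise."""
    return (1.0 - x) + random.gauss(0.0, 0.5)

def penalized_objective(x, penalty, reps=20):
    """Average replications, then add penalty * (estimated constraint violation)."""
    f_hat = sum(noisy_objective(x) for _ in range(reps)) / reps
    g_hat = sum(noisy_constraint(x) for _ in range(reps)) / reps
    return f_hat + penalty * max(0.0, g_hat)

def solve_with_penalties(penalties=(1.0, 10.0, 100.0), n_iter=2000):
    """Solve a sequence of unconstrained problems with increasing penalties.
    (A fresh random search per penalty is a simplification made for brevity.)"""
    best = None
    for penalty in penalties:
        candidates = [random.uniform(-10.0, 10.0) for _ in range(n_iter)]
        best = min(candidates, key=lambda x: penalized_objective(x, penalty))
    return best
```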

Finally, in the third part of the thesis, we propose a random search method named Gaussian Search with Resampling and Discarding (GSRD) for solving simulation optimization problems with continuous decision spaces. The method combines the ASRD framework with a sampling distribution based on a Gaussian process that not only exploits the current best estimate of the optimal solution but also learns from previously sampled solutions and their observed objective function values. We prove that the GSRD algorithm converges almost surely and carry out numerical studies of the effects of the Gaussian sampling strategy. Our numerical results show that GSRD performs well when the underlying objective function is multi-modal, although generating new candidate solutions takes considerably longer, especially in higher dimensions.
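The sketch below gives a schematic illustration of Gaussian-process-guided candidate generation: a GP is fit to previously sampled solutions and their noisy observations, and new candidates are preferred where the posterior mean is low. The kernel, noise level, and selection rule here are placeholders and should not be read as the GSRD sampling distribution.

```python
# Schematic sketch: use a Gaussian-process posterior mean, fit to past noisy
# observations, to rank uniformly drawn candidate solutions in one dimension.
import numpy as np

def rbf_kernel(a, b, length=1.0):
    """Squared-exponential kernel between two 1-D arrays of solutions."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_posterior_mean(x_new, x_obs, y_obs, noise_var=0.25):
    """Posterior mean of a zero-mean GP given noisy observations."""
    k_oo = rbf_kernel(x_obs, x_obs) + noise_var * np.eye(len(x_obs))
    k_no = rbf_kernel(x_new, x_obs)
    return k_no @ np.linalg.solve(k_oo, y_obs)

def propose_candidates(x_obs, y_obs, n_candidates=200, n_keep=5, lo=-10.0, hi=10.0):
    """Draw uniform candidates and keep those the GP predicts are best."""
    cand = np.random.uniform(lo, hi, size=n_candidates)
    mu = gp_posterior_mean(cand, x_obs, y_obs)
    return cand[np.argsort(mu)[:n_keep]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_obs = rng.uniform(-10.0, 10.0, size=30)
    y_obs = (x_obs - 2.0) ** 2 + rng.normal(0.0, 0.5, size=30)   # noisy observations
    print(propose_candidates(x_obs, y_obs))
```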
