Sample Average Approximation Methods for stochastic MINLPs


One approach to process design under uncertain parameters is to formulate a stochastic MINLP. When there are many uncertain parameters, the number of samples becomes unmanageably large, and computing the solution to the MINLP can be difficult and very time consuming. In this talk, two new algorithms, the optimality gap method (OGM) and the confidence level method (CLM), will be presented for solving convex stochastic MINLPs. At each iteration, the sample average approximation (SAA) method is applied to the NLP sub-problem and the MILP master problem: a smaller sample size problem is solved multiple times with different batches of i.i.d. samples to make decisions, and a larger sample size problem (with the continuous/discrete decision variables fixed) is solved to re-evaluate the objective values. In the first algorithm, the sample sizes are increased iteratively until the optimality-gap intervals of the upper and lower bounds are within a pre-specified tolerance. Instead of requiring a small optimality gap, the second algorithm uses tight bounds for comparing the objective values of the NLP sub-problems and weak bounds for cutting off solutions in the MILP master problems; the confidence of finding the optimal discrete solution can therefore be adjusted through the parameter used to tighten and weaken the bounds. The case studies show that the algorithms can significantly reduce the computational time required to find a solution with a given degree of confidence.
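The SAA bounding scheme the abstract describes (a statistical lower bound from repeated small-sample problems, and an upper bound from re-evaluating a fixed candidate on a larger independent sample) can be sketched on a toy problem. The stochastic program, sample sizes, and function names below are illustrative assumptions, not the talk's actual process-design formulation or algorithm:

```python
import random
import statistics

# Toy stochastic program (an assumption for illustration, not the talk's
# process-design model):  min_x E[(x - xi)^2],  xi ~ Uniform(0, 1).
# The SAA problem over one batch has the closed-form solution
# x_hat = batch mean, so the true optimum is x* = 0.5 with value 1/12.

def solve_saa_batch(samples):
    """Solve the sample average approximation over one small batch."""
    x_hat = statistics.fmean(samples)
    value = statistics.fmean((x_hat - s) ** 2 for s in samples)
    return x_hat, value

def optimality_gap(m_batches=30, n_small=100, n_large=20000, seed=0):
    """Estimate statistical lower/upper bounds on the true optimal value."""
    rng = random.Random(seed)

    # Lower bound: the expected optimal value of an N-sample SAA problem
    # understates the true optimum in a minimization, so averaging the
    # optima of M independent small-sample batches gives a lower-bound
    # estimate (its spread across batches would yield a confidence interval).
    candidates, batch_values = [], []
    for _ in range(m_batches):
        batch = [rng.random() for _ in range(n_small)]
        x_hat, v_hat = solve_saa_batch(batch)
        candidates.append(x_hat)
        batch_values.append(v_hat)
    lower = statistics.fmean(batch_values)

    # Upper bound: fix the decision variable at the most promising
    # candidate and re-evaluate it on a much larger independent sample.
    x_best = candidates[batch_values.index(min(batch_values))]
    big_sample = [rng.random() for _ in range(n_large)]
    upper = statistics.fmean((x_best - s) ** 2 for s in big_sample)

    return lower, upper, upper - lower
```

Iteratively growing the sample sizes until `upper - lower` falls below a tolerance mirrors, in miniature, the OGM stopping rule described above; in the actual algorithms the small-sample problems are NLP sub-problems and MILP master problems rather than a closed-form minimization.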




