Ambiguous chance constrained problems and robust optimization

Event Details
  • Date/Time:
    • Wednesday November 17, 2004 - Thursday November 18, 2004
      10:00 am - 11:00 pm
  • Location: Executive Classroom 228 Main Bldg
  • Fee(s):
    N/A
Contact
Barbara Christopher
Industrial and Systems Engineering
404.385.3102
Summaries

Summary Sentence: Ambiguous chance constrained problems and robust optimization

Full Summary: Ambiguous chance constrained problems and robust optimization

Let x denote a vector of n decision variables and Y denote a random vector. A constraint of the form P(f(x,Y) > 0) < eps
is called a chance constraint on the decision variable x. Optimization problems with chance constraints are notoriously hard to solve.

A simple strategy for solving chance constrained problems is to generate N samples Y_1, ..., Y_N according to the distribution P and then impose the constraints

f(x, Y_i) <= 0, for all i = 1, ..., N. (*)

Since the Y_i are random samples, there is no hope that every decision variable x feasible for (*) is also feasible for the chance constraint with probability 1. Therefore, one has to allow a probability of error delta. Question: how large should N be as a function of eps and delta?
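The sampled approximation (*) can be sketched on a toy problem. Below, the chance constraint is P(a^T x > 1) < eps with a Gaussian random vector a; the problem data, the choice N = 200, and the objective are all hypothetical choices made purely for illustration, not from the talk.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

n = 2    # decision dimension
N = 200  # number of sampled scenarios (hypothetical choice)

# Toy chance constraint: P(a^T x > 1) < eps with a ~ N(mu, 0.3^2 I).
# Scenario approximation (*): impose a_i^T x <= 1 for every sample a_i.
mu = np.array([1.0, 0.5])
A = rng.normal(loc=mu, scale=0.3, size=(N, n))  # rows are the samples a_i
b = np.ones(N)

# Maximize x_1 + x_2 (linprog minimizes, so negate the objective).
res = linprog(c=-np.ones(n), A_ub=A, b_ub=b, bounds=[(0, None)] * n)
x = res.x

# x satisfies every sampled constraint; for N large enough it is feasible
# for the chance constraint except with probability delta over the samples.
print(res.success, x)
```

The question in the abstract is exactly how large N must be for the solution of this sampled program to be eps-feasible with probability at least 1 - delta.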

Recently, Calafiore and Campi showed that when f(x,Y) is a convex function of x for every fixed Y, only N = O(n/delta log(1/eps)) samples are needed. Nemirovski and Shapiro have shown that if the function f(x,Y) is bi-affine and the distribution P has a "concentration-of-measure" property, the number of samples required drops to N = O(n log(1/(eps*delta))).
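The qualitative gap between the two bounds can be seen numerically. The snippet below evaluates both expressions exactly as quoted above, taking the constants hidden by O(.) to be 1, which is an assumption made purely for illustration.

```python
import math

# Sample-size bounds as quoted in the abstract, with O(.) constants set to 1.
def n_calafiore_campi(n, eps, delta):
    return n / delta * math.log(1 / eps)

def n_nemirovski_shapiro(n, eps, delta):
    return n * math.log(1 / (eps * delta))

n, eps, delta = 10, 0.05, 0.01
print(n_calafiore_campi(n, eps, delta))     # grows like 1/delta
print(n_nemirovski_shapiro(n, eps, delta))  # grows only logarithmically in 1/delta
```

With these (illustrative) constants, the second bound is smaller by more than an order of magnitude at delta = 0.01, reflecting its logarithmic rather than linear dependence on 1/delta.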

In many applications of chance constrained problems the distribution P is not completely known, i.e., the distribution is ambiguous. The natural constraint to impose in this setting is the ambiguous chance constraint

max_{P in M} P(f(x,Y) > 0) < eps,

where M is an uncertainty set of distributions. In this talk we discuss how to extend many results known for chance constrained problems to this more general setting. Robust deterministic optimization arises naturally in these extensions by way of the Strassen-Dudley representation theorem.
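For intuition, the ambiguous constraint can be checked by Monte Carlo when M is a finite set of candidate distributions. The function f, the two candidate Gaussians, and the candidate x below are all hypothetical toy choices; the talk's M is a general uncertainty set, not a finite list.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo estimate of the worst-case violation probability
#   max_{P in M} P(f(x, Y) > 0)
# for a toy f(x, Y) = Y*x - 1 and a finite set M of candidate distributions.
def violation_prob(x, sampler, trials=100_000):
    Y = sampler(trials)
    return np.mean(Y * x - 1.0 > 0)

x = 0.4
M = [lambda m: rng.normal(1.0, 0.5, m),  # candidate distribution P1
     lambda m: rng.normal(1.2, 0.3, m)]  # candidate distribution P2

worst = max(violation_prob(x, P) for P in M)
print(worst)  # x satisfies the ambiguous chance constraint if worst < eps
```

The decision x must keep the violation probability below eps under every distribution in M simultaneously, which is what connects this setting to robust optimization.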

(Joint work with Emre Erdogan)

Brief Bio: Garud Iyengar received his PhD in Electrical Engineering from Stanford University in 1998. Since then he has been with the Industrial Engineering and Operations Research (IEOR) department at Columbia University, where he is currently an Associate Professor.

Additional Information

Groups

H. Milton Stewart School of Industrial and Systems Engineering (ISYE)

Categories
Seminar/Lecture/Colloquium
Status
  • Created By: Barbara Christopher
  • Workflow Status: Published
  • Created On: Oct 8, 2010 - 7:39am
  • Last Updated: Oct 7, 2016 - 9:52pm