PhD Proposal by Kaivalya Bakshi

Event Details
  • Date/Time:
    • Thursday November 2, 2017 - Friday November 3, 2017
      9:15 am - 10:59 am
  • Location: Montgomery Knight Building, Room 317

Summary Sentence: PDE Based Stochastic Control: Sampling Algorithms, Optimality Principles and Stability Analysis


Ph.D. Thesis Proposal by


Kaivalya Bakshi

(Advisor: Prof. Evangelos Theodorou)


Thursday 2 Nov 2017 @ 9:15 a.m.

Montgomery Knight Building, Room 317


“PDE Based Stochastic Control: Sampling Algorithms, Optimality Principles and Stability Analysis”

Abstract: Continuous-time nonlinear stochastic differential equations (SDEs) with control-multiplicative noise are used to model biological motion and sensorimotor systems. Applying the Dynamic Programming Principle (DPP) to the stochastic control problem for such SDEs leads to a fully nonlinear parabolic Hamilton-Jacobi-Bellman (HJB) partial differential equation (PDE). Prior work on solving HJB PDEs, in particular the linearly solvable framework of Todorov et al., is fundamentally unequipped to handle this fully nonlinear PDE.
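For concreteness, the generic structure of such a problem can be sketched as follows (the notation here is illustrative and not taken from the proposal): for dynamics with control-multiplicative noise and a value function V(t,x), the DPP yields an HJB equation of the form

```latex
% Dynamics with control-multiplicative noise (illustrative notation):
%   dx_t = f(x_t, u_t)\,dt + \Sigma(x_t, u_t)\,dw_t
% The DPP gives the HJB PDE for the value function V(t,x):
\partial_t V + \min_{u}\Big[ L(x,u) + f(x,u)^{\top}\nabla_x V
  + \tfrac{1}{2}\,\mathrm{tr}\big(\Sigma(x,u)\Sigma(x,u)^{\top}\,\nabla_x^2 V\big)\Big] = 0,
\qquad V(T,x) = \phi(x).
% Because \Sigma depends on u, the minimization couples the control with the
% Hessian \nabla_x^2 V, so the PDE is fully nonlinear rather than semilinear,
% which is why linearly solvable (exponential transformation) methods fail.
```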

In this doctoral thesis, a nonlinear Feynman-Kac lemma is presented that represents the fully nonlinear PDE by a second-order forward-backward SDE (2FBSDE) model. The assumptions necessary for this construction are stated. We propose a data-efficient sampling algorithm for solving 2FBSDEs, which yields the control.
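As a rough sketch of the kind of representation involved (following the general second-order BSDE formulation of Cheridito et al.; the symbols below are illustrative, not the proposal's own), the value function and its spatial derivatives evaluated along a forward process satisfy a system of the form

```latex
% Forward process (illustrative drift b and diffusion \sigma):
dX_t = b(X_t)\,dt + \sigma(X_t)\,dw_t
% Backward processes, with Y_t = V(t, X_t), Z_t = \nabla_x V(t, X_t),
% and \Gamma_t = \nabla_x^2 V(t, X_t):
dY_t = -h(t, X_t, Y_t, Z_t, \Gamma_t)\,dt + Z_t^{\top}\sigma(X_t)\,dw_t
dZ_t = A_t\,dt + \Gamma_t\,\sigma(X_t)\,dw_t
% Sampling forward trajectories of X_t and solving the backward pair
% (e.g., by regression over the samples) recovers Y, Z, \Gamma, and
% hence the value function and the optimal control.
```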

If the goal is optimal control of a large ensemble of stochastic agents, the problem is inherently infinite dimensional. The control in these PDE control problems may be open loop or closed loop with respect to the state of each agent. In the former case, the control feedback for each agent is the density of the ensemble; in the latter, it is the state of the agent, which corresponds to Mean Field Games (MFG) control. Practical examples of MFG control problems include air traffic management, crowd dynamics, code division multiple access power control, frequency synchronization in microgrids, and unmanned aerial vehicle swarms. The stability properties of multiagent feedback MFG controllers have recently garnered interest in the physics, economics, and control communities. Stability of simple single-integrator MFG control systems has recently been treated by Lions et al. and Caines et al.
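For reference, a standard MFG model takes the form of the Lasry-Lions coupled PDE system (notation illustrative): a backward HJB equation for a representative agent's value function u coupled with a forward Fokker-Planck equation for the population density m,

```latex
% Backward HJB for the representative agent's value function u(t,x):
-\partial_t u - \nu\,\Delta u + H(x, \nabla u) = F(x, m)
% Forward Fokker-Planck equation for the population density m(t,x):
\partial_t m - \nu\,\Delta m - \mathrm{div}\big(m\,\nabla_p H(x, \nabla u)\big) = 0
% with m(0,\cdot) prescribed and u(T,x) = G(x, m(T)).
% Each agent applies the state feedback \alpha^*(t,x) = -\nabla_p H(x, \nabla u),
% coupled to the rest of the population only through the density m.
```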

In this work, we derive a sampling algorithm to compute the open-loop PDE control of ensembles governed by a general class of SDEs with jumps, using the Infinite Dimensional Minimum Principle (IDMP) and results from statistical physics. The relationship between the IDMP and the DPP is explained quantitatively by relating the costate function to the infinite-dimensional value function. Further, we propose a study to develop a generalized methodology for the stability analysis of feedback MFG controllers for systems more general than single integrators. These methods would have direct applications in control problems involving nontrivial passive dynamics, as well as in the study of second-order synchronization phenomena such as flocking.

Committee Members:

Prof. Evangelos Theodorou (Advisor)

Dr. Piyush Grover (Principal Research Scientist, MERL)

Prof. Eric Feron (AE)

