PhD Defense by Yuhao Wang
Title: Simulation-Based Decision Making with Streaming Data
Candidate: Yuhao Wang
Affiliation: H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology
Date and Time: Friday, April 17, 2026, 9:00 AM – 11:00 AM
Location: Groseclose 404
Microsoft Teams: https://teams.microsoft.com/l/meetup-join/19%3ameeting_N2E3MjIxZGYtOTk4MS00MzhjLWI4YjctODA5OGM5YjdiZjFk%40thread.v2/0?context=%7b%22Tid%22%3a%22482198bb-ae7b-4b25-8b7a-6d7f32faa083%22%2c%22Oid%22%3a%2205b99bdc-ec60-4153-8a6d-ce83fbf47cd7%22%7d
Thesis Committee
Dr. Enlu Zhou (advisor), H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology
Dr. Seong-Hee Kim, H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology
Dr. Eunhye Song, H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology
Dr. Guanghui Lan, H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology
Dr. David Eckman, Industrial and Systems Engineering, Texas A&M University
Abstract
This dissertation studies simulation-based decision making in the presence of streaming data and model uncertainty. The central objective is to develop methodologies that integrate simulation, optimization, and learning in dynamic environments where data arrive sequentially over time.
The first part of the dissertation studies the ranking and selection problem under input uncertainty with streaming data. A data-driven framework is developed to simultaneously allocate resources to collect input data and conduct simulation experiments, while providing statistical guarantees on selection performance.
The second part focuses on simulation-based bilevel optimization problems, where each alternative system involves an internal continuous optimization problem. Both multi-stage and fully sequential procedures are proposed to integrate pruning of suboptimal systems with optimization of decision variables, significantly improving computational efficiency.
The third part investigates reinforcement learning under model uncertainty through a Bayesian risk framework. A Bayesian risk-averse formulation of Markov decision processes is developed, together with a Q-learning algorithm that updates posterior beliefs using streaming observations.
The final part extends this framework to online reinforcement learning, where policies are adaptively updated as new data arrive. The proposed methods achieve sublinear regret and demonstrate how Bayesian risk formulations naturally adjust the degree of risk aversion over time.
Overall, this dissertation contributes flexible and adaptive methodologies for simulation-based decision making in modern data-rich stochastic systems.