PhD Defense by Sheng Zhang
Title: Finite-Sample Guarantees of Stochastic Approximation Algorithms in Average-Reward Reinforcement Learning and Decentralized Stochastic Optimization
Date: April 18th, 2025
Time: 1:00 PM – 3:00 PM (EDT)
Location: https://gatech.zoom.us/j/7545146010
Sheng Zhang
Machine Learning PhD Student
H. Milton Stewart School of Industrial and Systems Engineering
Georgia Institute of Technology
Committee
1. Prof. Justin Romberg (ECE, Advisor)
2. Prof. Ashwin Pananjady (ISyE & ECE, Advisor)
3. Prof. Siva Maguluri (ISyE)
4. Prof. Guanghui (George) Lan (ISyE)
5. Prof. Thinh Doan (Department of Aerospace Engineering and Engineering Mechanics, The University of Texas at Austin)
Abstract
Many problems in operations research and machine learning can be formulated as root-finding or fixed-point equations, often solved using iterative algorithms such as stochastic approximation (SA). This thesis focuses on the theoretical analysis of SA methods, particularly in the context of reinforcement learning (RL) and decentralized optimization. We first study the finite-sample behavior of average-reward RL algorithms, addressing the unique challenges posed by the absence of contraction properties. We then develop a general non-asymptotic theory for SA under seminorm-contractive operators, extending classical stability results and offering sharper performance guarantees. Finally, we propose and analyze stochastic dual accelerated algorithms for decentralized stochastic optimization, showing optimal convergence under communication and data constraints. Together, these contributions advance the understanding and design of efficient SA algorithms for large-scale, data-driven decision-making problems.
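To make the root-finding setting in the abstract concrete, here is a minimal sketch (not taken from the thesis) of the classical Robbins-Monro stochastic approximation iterate: given only noisy evaluations of an operator g, the update x_{k+1} = x_k - alpha_k * g(x_k) with diminishing step sizes converges to the root of g under standard conditions. The target function and step-size schedule below are illustrative assumptions.

```python
import random

def stochastic_approximation(noisy_g, x0, num_iters=10000, seed=0):
    """Robbins-Monro iteration x_{k+1} = x_k - alpha_k * noisy_g(x_k)
    with the classical diminishing step sizes alpha_k = 1 / (k + 1)."""
    random.seed(seed)
    x = x0
    for k in range(num_iters):
        alpha = 1.0 / (k + 1)
        x = x - alpha * noisy_g(x)
    return x

# Toy example: g(x) = x - 2 observed under zero-mean Gaussian noise.
# The root is x* = 2; the noise averages out as the step sizes shrink.
def noisy_g(x):
    return (x - 2.0) + random.gauss(0.0, 1.0)

estimate = stochastic_approximation(noisy_g, x0=0.0)
```

With alpha_k = 1/(k + 1) and this linear g, the iterate reduces to a running average of the noisy targets, so the error shrinks at the O(1/sqrt(k)) rate typical of finite-sample SA guarantees.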