Center for Signal and Information Processing (CSIP) Seminar

Center for Signal and Information Processing (CSIP) Seminar Series presents:
A Decentralized Policy Gradient Approach to Multi-task Reinforcement Learning

Introducing Research from Sihan Zeng

Date: Friday, March 12, 2021
Time: 3:00pm
BlueJeans link: https://bluejeans.com/621436705

Sihan Zeng is a PhD student at Georgia Tech working with Dr. Justin Romberg. His research interests lie in reinforcement learning, distributed optimization, and inverse problems. Prior to coming to Georgia Tech, he received his B.S. in Electrical Engineering and B.A. in Statistics in 2017 from Rice University in Houston, Texas.


ABSTRACT: We develop a mathematical framework for solving multi-task reinforcement learning (MTRL) problems based on a type of policy gradient method. The goal in MTRL is to learn a common policy that operates effectively in different environments; these environments have similar (or overlapping) state spaces, but have different rewards and dynamics. We highlight two fundamental challenges in MTRL that are not present in its single-task counterpart and illustrate them with simple examples. We then develop a decentralized entropy-regularized policy gradient method for solving the MTRL problem and study its finite-time convergence rate. We demonstrate the effectiveness of the proposed method using a series of numerical experiments. These experiments range from small-scale "GridWorld" problems that readily demonstrate the trade-offs involved in multi-task learning to large-scale problems, where common policies are learned to navigate an airborne drone in multiple (simulated) environments.
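
As background for attendees, the sketch below is a minimal illustration of the general idea named in the abstract, not the speaker's implementation: each task runs a local entropy-regularized policy gradient step on its own copy of a softmax policy and then averages parameters with its neighbors. The toy one-step tasks, the ring mixing matrix W, and the hyperparameters tau and lr are assumptions made only for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    n_tasks, n_actions = 3, 4
    tau = 0.1   # entropy-regularization temperature (illustrative choice)
    lr = 0.5    # step size (illustrative choice)

    # Each agent/task keeps its own copy of the policy parameters.
    theta = np.zeros((n_tasks, n_actions))
    # Same action space, different reward functions: a toy stand-in for MTRL
    # environments with overlapping state spaces but different rewards.
    rewards = rng.normal(size=(n_tasks, n_actions))

    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()

    # Doubly stochastic mixing matrix for a ring of three agents.
    W = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])

    for _ in range(200):
        grads = np.zeros_like(theta)
        for i in range(n_tasks):
            pi = softmax(theta[i])
            # Exact gradient of E_pi[r] + tau * H(pi) for a one-step problem:
            # a "soft advantage" times the softmax score function.
            adv = rewards[i] - tau * np.log(pi)
            adv -= pi @ adv          # subtract the expected value as a baseline
            grads[i] = pi * adv
        # Local gradient ascent step, then an averaging (consensus) round with
        # neighbors, mirroring the decentralized structure in the abstract.
        theta = W @ (theta + lr * grads)

    print("consensus policy at agent 0:", np.round(softmax(theta[0]), 3))

In the full MTRL setting the per-task gradients would be estimated from trajectories sampled in each environment rather than computed in closed form, but the alternation between a local policy gradient step and an averaging round is the structural idea the abstract describes.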
