
Ph.D. Defense by Christopher Simpkins


Title: Integrating Reinforcement Learning into a Programming Language


Christopher Simpkins

School of Interactive Computing

College of Computing

Georgia Institute of Technology


Date: Wednesday, 1 February 2017

Time: 12:00 PM - 2:00 PM

Location: CCB 345


Committee:

---

Dr. Charles Isbell (Advisor, School of Interactive Computing, Georgia Tech)

Dr. Douglas Bodner (Tennenbaum Institute, Georgia Tech)

Dr. Mark Riedl (School of Interactive Computing, Georgia Tech)

Dr. Spencer Rugaber (School of Computer Science, Georgia Tech)

Dr. Andrea Thomaz (Electrical and Computer Engineering, University of Texas at Austin)


Abstract:

---


Reinforcement learning is a promising solution to the intelligent agent problem, namely: given the state of the world, which action should an agent take to maximize goal attainment? However, reinforcement learning algorithms converge slowly in large state spaces, and using reinforcement learning in agent programs requires detailed knowledge of reinforcement learning algorithms.
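To make the problem statement concrete, the following Scala trait is a minimal sketch of a reinforcement learning agent's interface. The names (LearningAgent, act, learn) are hypothetical, chosen for this announcement rather than taken from the dissertation.

    // A minimal sketch of the intelligent agent problem, assuming generic
    // State and Action types. The agent maps world states to actions and
    // learns from the reward it observes after each action.
    trait LearningAgent[State, Action] {
      // Given the current state of the world, choose an action.
      def act(state: State): Action

      // Incorporate feedback: the reward received for taking `action`
      // in `state`, and the state that resulted.
      def learn(state: State, action: Action, reward: Double, nextState: State): Unit
    }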


One approach to solving the curse of dimensionality in reinforcement learning is decomposition. Modular reinforcement learning, as it is called in the literature, decomposes an agent into concurrently running reinforcement learning modules that each learn a "selfish" solution to a subset of the original problem. For example, a bunny agent might be decomposed into a module that avoids predators and a module that finds food. Current approaches to modular reinforcement learning support decomposition but, because the reward scales of the modules must be comparable, they are not composable -- a module written for one agent cannot be reused in another agent without modifying its reward function.
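The sketch below illustrates the decomposition and the comparability problem using the bunny example. All names and reward values are hypothetical, and the summing arbitrator shown is the naive "greatest mass" approach from the modular reinforcement learning literature, not the dissertation's algorithm.

    object ModularBunny {
      case class BunnyState(bunny: (Int, Int), food: (Int, Int), predator: (Int, Int))

      sealed trait Move
      case object Up extends Move
      case object Down extends Move
      case object Left extends Move
      case object Right extends Move
      val moves = Seq(Up, Down, Left, Right)

      // Each module is a selfish learner: it observes the shared world state
      // but optimizes only its own reward. In a real system q(s, a) would be
      // learned online; here it is left abstract.
      trait Module {
        def reward(s: BunnyState): Double
        def q(s: BunnyState, a: Move): Double
      }

      // Naive "greatest mass" arbitration sums the modules' learned action
      // values and picks the argmax. Because raw values from different modules
      // are compared directly, a module with a larger reward scale (say, -100
      // for touching the predator vs. +1 for finding food) dominates every
      // decision -- the reward comparability problem described above.
      def arbitrate(modules: Seq[Module], s: BunnyState): Move =
        moves.maxBy(a => modules.map(_.q(s, a)).sum)
    }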


This dissertation makes two contributions: (1) a command arbitration algorithm for modular reinforcement learning that enables composability by decoupling the reward scales of reinforcement learning modules, and (2) a Scala-embedded domain-specific language -- AFABL (A Friendly Adaptive Behavior Language) -- that integrates modular reinforcement learning so that programmers can use reinforcement learning without detailed knowledge of reinforcement learning algorithms. We empirically demonstrate the reward comparability problem and show that our command arbitration algorithm solves it. We also present a study in which programmers used both AFABL and traditional programming to write a simple agent and then adapt it to a new domain; the results demonstrate the promise of language-integrated reinforcement learning for practical agent software engineering.
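As a flavor of what language-integrated reinforcement learning can look like, here is a hypothetical sketch of an AFABL-style module definition in Scala. The names (Module, Agent, view, moduleReward) and the reward values are assumptions made for illustration, not AFABL's actual API; the point is the shape of the code: the programmer supplies only a state abstraction and a reward function per module, while the learning and arbitration machinery stays inside the library.

    object AfablStyleSketch {
      case class BunnyState(bunny: (Int, Int), food: (Int, Int), predator: (Int, Int))
      case class FoodView(bunny: (Int, Int), food: (Int, Int))
      case class PredatorView(bunny: (Int, Int), predator: (Int, Int))

      // Stand-ins for the library surface: a module pairs a state abstraction
      // with a reward function; an agent composes modules. In a real DSL these
      // would wrap the learning and command arbitration machinery.
      trait AnyModule[W] { def reward(world: W): Double }
      case class Module[W, S](view: W => S, moduleReward: S => Double) extends AnyModule[W] {
        def reward(world: W): Double = moduleReward(view(world))
      }
      case class Agent[W](modules: Seq[AnyModule[W]])

      // The programmer writes only domain knowledge -- no learning code.
      val findFood = Module[BunnyState, FoodView](
        view = w => FoodView(w.bunny, w.food),
        moduleReward = s => if (s.bunny == s.food) 1.0 else -0.1
      )

      val avoidPredator = Module[BunnyState, PredatorView](
        view = w => PredatorView(w.bunny, w.predator),
        moduleReward = s => if (s.bunny == s.predator) -1.0 else 0.1
      )

      // Because arbitration decouples the modules' reward scales, findFood
      // could be dropped unchanged into a different agent.
      val bunny = Agent(Seq(findFood, avoidPredator))
    }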
