PhD Defense by Christopher Simpkins

Event Details
  • Date/Time: Wednesday, February 1, 2017, 12:00 pm - 2:00 pm
  • Location: CCB 345

Summary Sentence: Integrating Reinforcement Learning into a Programming Language


Title: Integrating Reinforcement Learning into a Programming Language


Christopher Simpkins

School of Interactive Computing

College of Computing

Georgia Institute of Technology


Date: Wednesday, 1 February 2017

Time: 12:00 PM - 2:00 PM

Location: CCB 345


Committee:
Dr. Charles Isbell (Advisor, School of Interactive Computing, Georgia Tech)

Dr. Douglas Bodner (Tennenbaum Institute, Georgia Tech)

Dr. Mark Riedl (School of Interactive Computing, Georgia Tech)

Dr. Spencer Rugaber (School of Computer Science, Georgia Tech)

Dr. Andrea Thomaz (Electrical and Computer Engineering, University of Texas at Austin)


Abstract:
Reinforcement learning is a promising solution to the intelligent agent problem: given the state of the world, which action should an agent take to maximize goal attainment? However, reinforcement learning algorithms converge slowly for large state spaces, and using reinforcement learning in agent programs requires detailed knowledge of reinforcement learning algorithms.


One approach to solving the curse of dimensionality in reinforcement learning is decomposition. Modular reinforcement learning, as it is called in the literature, decomposes an agent into concurrently running reinforcement learning modules that each learn a "selfish" solution to a subset of the original problem. For example, a bunny agent might be decomposed into a module that avoids predators and a module that finds food. Current approaches to modular reinforcement learning support decomposition but, because the reward scales of the modules must be comparable, they are not composable -- a module written for one agent cannot be reused in another agent without modifying its reward function.
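To make the decomposition and the comparability problem concrete, here is a minimal sketch in Python (the announcement shows no AFABL code, so the `Module` class, the state, the actions, and the Q-values below are all invented for illustration). It combines two selfish Q-learning modules with the classic "greatest mass" arbitration, which sums each module's Q-values per action, and shows how rescaling one module's rewards flips the arbitrated action even though that module's preferences are unchanged:

```python
from collections import defaultdict

class Module:
    """One selfish Q-learner, e.g. avoid-predator or find-food."""
    def __init__(self, actions, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)   # (state, action) -> learned value
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, reward, s_next):
        # Standard Q-learning update against this module's own reward signal.
        best_next = max(self.q[(s_next, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (reward + self.gamma * best_next - self.q[(s, a)])

def greatest_mass(modules, state, actions):
    """Arbitrate by summing module Q-values per action."""
    return max(actions, key=lambda a: sum(m.q[(state, a)] for m in modules))

# The comparability problem: scaling one module's rewards (and hence its
# Q-values) changes the arbitrated action, even though its preference
# ordering ("right is better than left") is unchanged.
avoid = Module(["left", "right"])
food = Module(["left", "right"])
avoid.q[("s", "left")], avoid.q[("s", "right")] = 1.0, 0.0
food.q[("s", "left")], food.q[("s", "right")] = 0.0, 0.6
choice_before = greatest_mass([avoid, food], "s", ["left", "right"])  # "left"
for a in ["left", "right"]:
    food.q[("s", a)] *= 10    # same preferences, larger reward scale
choice_after = greatest_mass([avoid, food], "s", ["left", "right"])   # "right"
```

Because the arbitrated action depends on raw Q-value magnitudes, a module only composes correctly with modules whose rewards happen to be on a comparable scale, which is exactly the reuse barrier described above.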


This dissertation makes two contributions: (1) a command arbitration algorithm for modular reinforcement learning that enables composability by decoupling the reward scales of reinforcement learning modules, and (2) a Scala-embedded domain-specific language -- AFABL (A Friendly Adaptive Behavior Language) -- that integrates modular reinforcement learning in a way that allows programmers to use reinforcement learning without knowing much about reinforcement learning algorithms. We empirically demonstrate the reward comparability problem and show that our command arbitration algorithm solves it, and we present the results of a study in which programmers used AFABL and traditional programming to write a simple agent and adapt it to a new domain, demonstrating the promise of language-integrated reinforcement learning for practical agent software engineering.
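The announcement does not describe the dissertation's arbitration algorithm itself, but one way to see how decoupling reward scales could work is to compare actions by each module's preference ranking rather than by raw Q-values. The sketch below is a hypothetical Borda-style voting scheme, not the dissertation's algorithm; the modules, actions, and Q-values are invented for illustration:

```python
def rank_vote(q_tables, actions):
    """Hypothetical scale-free arbitration: each module votes with its
    ranking of the actions, so multiplying any module's Q-values by a
    positive constant cannot change the outcome."""
    scores = {a: 0 for a in actions}
    for q in q_tables:                         # q: action -> learned value
        for rank, a in enumerate(sorted(actions, key=q.get)):
            scores[a] += rank                  # a module's worst action scores 0
    return max(actions, key=scores.get)

# Two modules with incomparable reward scales agree on a compromise action,
# and rescaling one module's Q-values leaves the decision unchanged.
actions = ["left", "stay", "right"]
avoid = {"left": 1.0, "stay": 0.4, "right": 0.0}
food = {"left": 0.0, "stay": 0.6, "right": 0.3}
choice = rank_vote([avoid, food], actions)               # "stay"
rescaled = {a: 100.0 * v for a, v in food.items()}
choice_rescaled = rank_vote([avoid, rescaled], actions)  # still "stay"
```

Note the trade-off this toy scheme makes: rank voting discards how much each module cares about its preferences, so a practical arbitration algorithm has to recover that information in some scale-independent way.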
