PhD Defense by Wenhao Yu

Title: Learning to Walk using Deep Reinforcement Learning and Transfer Learning

  

Wenhao Yu

School of Interactive Computing

College of Computing

Georgia Institute of Technology

Date: Tuesday, April 28, 2020

Time: 2:00 PM-4:00 PM (ET)

BlueJeans: https://bluejeans.com/206491397

**Note: this defense is remote-only due to the Institute's guidelines on COVID-19.**

Committee:

Dr. Greg Turk (Advisor, School of Interactive Computing, Georgia Tech)

Dr. C. Karen Liu (Advisor, School of Engineering, Stanford University / School of Interactive Computing, Georgia Tech)

Dr. Charlie Kemp (Department of Biomedical Engineering / School of Interactive Computing, Georgia Tech)

Dr. Sergey Levine (Department of Electrical Engineering and Computer Sciences, University of California, Berkeley)

Dr. Michiel van de Panne (Department of Computer Science, University of British Columbia)

Abstract:

In this dissertation, we seek to develop computational tools that reproduce the locomotion of humans and animals in complex and unpredictable environments. Such tools can have a significant impact in computer graphics, robotics, machine learning, and biomechanics. However, there are two main hurdles to achieving this goal. First, synthesizing a successful locomotion policy requires precise control of a high-dimensional, under-actuated system and trading off conflicting goals such as walking forward, conserving energy, and maintaining balance. Second, the synthesized locomotion policy needs to generalize to new environments that were not present during optimization and training, in order to cope with unexpected situations during execution.

 

To achieve this goal, we first introduce a Deep Reinforcement Learning (DRL) approach for learning locomotion controllers for simulated legged creatures without using motion data. We propose a loss term in the DRL objective that encourages the agent to exhibit symmetric behavior, together with a curriculum learning approach that provides modulated physical assistance, in order to train energy-efficient controllers successfully. We demonstrate this approach on a variety of simulated characters which, when the two proposed ideas are combined, achieve low-energy, symmetric locomotion gaits without relying on external motion data.
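
As a rough illustration of the symmetry term described above (the abstract does not give the exact formulation, so everything here is an assumption rather than the dissertation's actual code), a mirror-symmetry penalty can be written as a loss that asks the policy's action in a mirrored state to be the mirror of its action in the original state:

```python
import torch
import torch.nn as nn

def mirror_symmetry_loss(policy: nn.Module,
                         states: torch.Tensor,
                         M_s: torch.Tensor,
                         M_a: torch.Tensor) -> torch.Tensor:
    """Penalize asymmetric behavior: pi(M_s s) should equal M_a pi(s),
    where M_s mirrors a state about the sagittal plane and M_a swaps
    left/right actuators. Both matrices are problem-specific."""
    actions = policy(states)            # pi(s), shape (batch, act_dim)
    mirrored = policy(states @ M_s.T)   # pi(M_s s)
    return ((mirrored - actions @ M_a.T) ** 2).mean()

if __name__ == "__main__":
    # Toy policy and placeholder (identity) mirroring matrices; a real
    # character would use permutation/sign-flip matrices instead.
    policy = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 4))
    states = torch.randn(32, 8)
    sym = mirror_symmetry_loss(policy, states, torch.eye(8), torch.eye(4))
    # This term would be added to the usual RL loss with some weight,
    # e.g. total_loss = rl_loss + w_sym * sym.
    print(sym.item())
```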

 

Next, we introduce a set of Transfer Learning (TL) algorithms that generalize the learned locomotion controllers to novel environments. Specifically, we focus on transferring a simulation-trained locomotion controller to a real legged robot, also known as the Sim-to-Real transfer problem. We first introduce a transfer learning algorithm that operates successfully under unknown and changing dynamics, provided they lie within the range seen during training. To allow successful transfer beyond the training environments, we further propose an algorithm that uses a limited number of samples from the testing environment to adapt the simulation-trained policy. We demonstrate Sim-to-Real transfer on a biped robot, the Robotis Darwin OP2, and a quadruped robot, the Ghost Robotics Minitaur.
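
The abstract does not spell out the adaptation procedure; one plausible sketch of this kind of few-sample adaptation is to train a policy conditioned on a low-dimensional context vector under randomized simulation dynamics, then search over that vector using a handful of rollouts on the target robot. The function names, dimensions, and search strategy below are illustrative assumptions:

```python
import numpy as np

def adapt_context(rollout_return, dim=4, iters=10, pop=8,
                  sigma=0.3, seed=0):
    """Search for a context vector mu that maximizes the measured
    episode return of a frozen, context-conditioned policy pi(a | s, mu)
    on the target robot. rollout_return(mu) runs one episode and returns
    its total reward, so the cost on the real system is at most
    pop * iters rollouts."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)                      # start at the nominal dynamics
    for _ in range(iters):
        candidates = mu + sigma * rng.standard_normal((pop, dim))
        returns = [rollout_return(c) for c in candidates]
        mu = candidates[int(np.argmax(returns))]  # keep the best candidate
        sigma *= 0.9                              # shrink the search radius
    return mu

if __name__ == "__main__":
    # Toy check: the "best" context is at [1, -1, 0, 0].
    target = np.array([1.0, -1.0, 0.0, 0.0])
    print(adapt_context(lambda c: -np.sum((c - target) ** 2)))
```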

 

Finally, we consider the problem of safety during policy execution and transfer. We propose training a universal safe policy (USP) that steers the robot away from unsafe states starting from a diverse set of initial states, and an algorithm that combines a USP with a task policy to complete the task while acting safely. We demonstrate that the resulting algorithm allows policies to adapt to notably different simulated dynamics with at most two failed trials, suggesting a promising path toward learning robust and safe control policies for sim-to-real transfer.
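
As an illustrative sketch of combining the two policies (the abstract does not describe the actual mechanism; the risk estimator, names, and threshold below are assumptions), a shield-style rule executes the task policy unless a learned risk model predicts imminent failure, in which case it falls back to the USP:

```python
import numpy as np

def shielded_action(state, task_policy, safe_policy, failure_risk,
                    threshold=0.5):
    """Execute the task policy, but hand control to the universal safe
    policy whenever failure_risk(state) -- a learned estimate in [0, 1]
    of reaching an unsafe state -- exceeds the threshold."""
    if failure_risk(state) > threshold:
        return safe_policy(state)
    return task_policy(state)

if __name__ == "__main__":
    # Toy usage: fall back to a conservative action when the torso
    # pitch (state[0]) is extreme.
    act = shielded_action(
        state=np.array([0.8, 0.0]),
        task_policy=lambda s: np.array([1.0]),  # nominal task action
        safe_policy=lambda s: np.array([0.0]),  # conservative recovery
        failure_risk=lambda s: float(abs(s[0]) > 0.5),
    )
    print(act)
```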
