PhD Defense by Vivian Chu

Title: Teaching Robots about Human Environments

Vivian Chu

Robotics Ph.D. Candidate

School of Interactive Computing

Georgia Institute of Technology

Date: December 6th, 2017 (Wednesday)

Time: 1:00pm to 3:00pm (EST)

Location: CCB 340

Committee:

-------------------

Dr. Andrea L. Thomaz (Co-Advisor), Department of Electrical and Computer Engineering, The University of Texas at Austin 

Dr. Sonia Chernova (Co-Advisor), School of Interactive Computing, Georgia Institute of Technology 

Dr. Henrik I. Christensen, Department of Computer Science and Engineering, University of California, San Diego

Dr. Charles C. Kemp, Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology

Dr. Siddhartha Srinivasa, Paul G. Allen School of Computer Science & Engineering, University of Washington

Abstract:

-------------------

The real world is complex, unstructured, and contains high levels of uncertainty. To operate in such environments, robots need to learn and adapt. One framework that enables this learning and adaptation is modeling the world using affordances. With affordance models, robots can reason about which actions to take to achieve a goal. This thesis provides a framework that allows robots to learn these models through interaction and human guidance.

Within robotic affordance learning, there has been a large focus on learning visual skill representations, due to the difficulty of getting robots to physically interact with the environment. Furthermore, utilizing different modalities (e.g., touch and sound) introduces challenges such as differing sampling rates and data resolutions. This thesis addresses these challenges by providing several methods for interactively gathering multisensory data through human-guided robot self-exploration, and an approach to integrating visual, haptic, and auditory data for adaptive object manipulation.
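The thesis presents its own integration approach; purely as an illustration of the sampling-rate mismatch mentioned above, the sketch below aligns two hypothetical sensor streams (a ~30 Hz visual feature and a ~1 kHz haptic signal, with synthetic values) onto a shared time base before fusing them into one feature matrix. The rates, signals, and function names here are assumptions for illustration, not the method used in the thesis.

```python
import numpy as np

def resample_stream(timestamps, values, target_times):
    """Linearly interpolate one sensor stream onto a shared time base."""
    return np.interp(target_times, timestamps, values)

# Hypothetical streams at different rates over a 1-second interaction:
t_visual = np.linspace(0.0, 1.0, 30)    # ~30 Hz camera-derived feature
t_haptic = np.linspace(0.0, 1.0, 1000)  # ~1 kHz force/torque reading
visual = np.sin(2 * np.pi * t_visual)   # synthetic placeholder signals
haptic = np.cos(2 * np.pi * t_haptic)

# Align both streams onto a common 100 Hz time base, then stack them
# column-wise into one feature matrix for a downstream model.
t_common = np.linspace(0.0, 1.0, 100)
fused = np.column_stack([
    resample_stream(t_visual, visual, t_common),
    resample_stream(t_haptic, haptic, t_common),
])
print(fused.shape)  # (100, 2): 100 shared time steps, 2 modalities
```

Interpolation onto a common clock is only one simple alignment choice; windowed feature summaries or learned temporal models are common alternatives when resolutions differ greatly.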

We take a human-centered approach to tackling the challenge of robots operating in unstructured environments. This thesis makes the following contributions to the field of robot learning: (1) a human-centered framework for robot affordance learning that demonstrates how human teachers can guide the robot throughout the entire affordance-learning pipeline; (2) a human-guided robot self-exploration framework contributing several algorithms that use human guidance to let robots efficiently explore the environment and learn affordance models for a diverse range of manipulation tasks; (3) a multisensory affordance model that integrates visual, haptic, and audio input; and (4) a novel control framework, built on multisensory data and human-guided exploration, that allows affordances to be adapted during object manipulation.
