
PhD Defense by Rohan R Paleja


Title: Interpretable Artificial Intelligence for Personalized Human-Robot Collaboration


Date: Wednesday, August 16th, 2023

Time: 3:00-5:00 PM (ET)

Location: Kendeda 210 (In-Person) and https://gatech.zoom.us/j/92773462347


Committee:

Dr. Matthew Gombolay (Advisor) – School of Interactive Computing, Georgia Institute of Technology

Dr. Harish Ravichandar – School of Interactive Computing, Georgia Institute of Technology

Dr. Dorsa Sadigh – Department of Computer Science, Stanford University

Dr. Peter Stone – Department of Computer Science, University of Texas at Austin

Dr. Seth Hutchinson – School of Interactive Computing, Georgia Institute of Technology


Abstract:

Collaborative robots (i.e., "cobots") and machine learning-based virtual agents are increasingly entering the human workspace with the aim of increasing productivity, enhancing safety, and improving the quality of our lives. These agents will interact with a wide variety of people in dynamic and novel contexts, increasing the prevalence of human-machine teams in healthcare, manufacturing, and search-and-rescue. Within these domains, it is critical that collaborators have aligned objectives and maintain awareness of other agents' behaviors to avoid potential accidents.


In my thesis, I present several contributions that push the frontier of real-world robotics systems toward those that understand human behavior, maintain interpretability, communicate efficiently, and coordinate with high performance. Specifically, I first study the nature of collaboration in simulated, large-scale multi-agent systems, exploring techniques that utilize context-based communication among decentralized robots, and find that targeted communication and accounting for teammate heterogeneity are beneficial in generating effective coordination. Next, I transition to human-machine systems and develop a data-efficient, person-specific, and interpretable tree-based apprenticeship learning framework to enable cobots to infer and understand decision-making behavior across heterogeneous human end-users. Building on this, I extend neural tree-based architectures to support learning interpretable control policies for robots via gradient-based techniques. This not only allows end-users to inspect and understand learned behavior models but also provides developers with the means to verify control policies for safety guarantees. Lastly, I present two works that deploy Explainable AI (xAI) techniques in human-machine collaboration, aiming to 1) characterize the utility of xAI and its benefits for shared mental model development, and 2) allow end-users to interactively modify learned policies via a graphical user interface to support team development.
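
For readers unfamiliar with the idea of learning tree-structured control policies by gradient descent, the sketch below shows a generic differentiable ("soft") decision tree policy in PyTorch. It is a minimal illustrative toy, not the architecture presented in the thesis: the class name, depth-one structure, and dimensions are all assumptions made for illustration.

# Illustrative sketch only; the depth-1 structure and all names are
# assumptions, not the thesis's actual architecture.
import torch
import torch.nn as nn

class SoftDecisionTreePolicy(nn.Module):
    """A depth-1 'soft' decision tree: one learned split, two action leaves.

    The internal node routes each observation left/right with a sigmoid
    gate, so the whole tree is differentiable end to end and trainable
    with gradient-based methods.
    """

    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.split = nn.Linear(obs_dim, 1)                           # learned decision boundary
        self.leaf_logits = nn.Parameter(torch.zeros(2, n_actions))  # left/right action leaves

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        p_left = torch.sigmoid(self.split(obs))                 # soft routing probability
        leaf_probs = torch.softmax(self.leaf_logits, dim=-1)    # per-leaf action distributions
        # Output is the routing-weighted mixture of the two leaf distributions.
        return p_left * leaf_probs[0] + (1 - p_left) * leaf_probs[1]

policy = SoftDecisionTreePolicy(obs_dim=4, n_actions=2)
action_probs = policy(torch.randn(1, 4))   # differentiable end to end

Because every operation is differentiable, such a policy can be trained with standard imitation or policy-gradient losses, and after training the sigmoid gate can be thresholded into a crisp if/else rule that an end-user can read and a developer can verify.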

