PhD Defense by Devleena Das

Title: Human-Centered Explanations for Sequential Decision-Making Systems

Date: Tuesday, January 30th, 2024

Time: 3:00 PM – 5:00 PM EST

Location: Online - https://gatech.zoom.us/my/ddevleena (Passcode: 634258)

In Person - Klaus 1212

Committee:

Dr. Sonia Chernova (Advisor) – School of Interactive Computing, Georgia Institute of Technology

Dr. Mark Riedl – School of Interactive Computing, Georgia Institute of Technology

Dr. Thomas Ploetz – School of Interactive Computing, Georgia Institute of Technology

Dr. Henny Admoni – Robotics Institute, Carnegie Mellon University

Dr. Been Kim – Google DeepMind

Abstract:

Everyday users (non-AI experts) interact with a growing number of AI systems that provide intelligent assistance, such as educational tutors and home robots. However, the increased computational and inferential capabilities of such AI systems often lead to differences between a user’s and an AI system’s mental models, making it difficult for users to understand AI solutions. Furthermore, AI systems are not perfect, and in real-world settings suboptimal AI decision-making and failures are inevitable. We posit that for AI systems to provide meaningful support, they must be understandable to everyday users. This presents a need for human-centered explainability techniques that increase everyday users’ understanding of complex AI systems.

This thesis develops computational methods for explaining sequential decision-making AI systems such that both end users and AI systems benefit from improved understanding and performance. Findings from psychology show that humans naturally reason about complex problems by attributing abstractions to a given task. Inspired by these insights, we show that explanations of AI systems grounded in abstractions of utility factors, environmental context, subgoal information, and task concepts can improve everyday users’ task performance as well as AI performance. Specifically, this thesis makes the following contributions:

(1) an algorithm that leverages task utility factors to explain optimal AI systems for improved user performance;

(2) a framework that grounds robot decision-making in environmental context to explain robot failures to non-roboticists;

(3) an algorithm that extracts environmental context via semantic scene graphs to provide scalable explanations of robot failures;

(4) an algorithm that explains suboptimal, hierarchical AI systems by extracting appropriate subgoal information to help users avoid suboptimal decisions;

(5) a framework that leverages concept-based explanations in a learned joint embedding space to improve both AI learning and user performance.
