PhD Defense by Jin Xu

Title: Human-Robot Trust in Time-Sensitive Scenarios

 

Date: Friday, April 15, 2022

Time: 12:30 PM - 2:30 PM ET

Location: https://gatech.zoom.us/j/97023434967

 

Jin Xu

Robotics Ph.D. Candidate

School of Electrical and Computer Engineering

Georgia Institute of Technology

 

Committee:

Dr. Ayanna Howard (Advisor) — College of Engineering, The Ohio State University

Dr. Karen M. Feigh — School of Aerospace Engineering, Georgia Institute of Technology

Dr. Matthew Gombolay — School of Interactive Computing, Georgia Institute of Technology

Dr. Alan R. Wagner — Department of Aerospace Engineering, Pennsylvania State University

Dr. Paul Robinette — Department of Electrical and Computer Engineering, University of Massachusetts Lowell

 

Abstract:

Trust is a key element of successful human-robot interaction. When robots are placed in human environments, it is essential that a trusting relationship be established between the robots and the individuals they interact with. This dissertation therefore focuses on understanding human-robot trust in time-sensitive interaction scenarios in which a robot acts as an assistant to the human by providing advice. The objective of this work is to develop algorithms that can not only infer human trust from prior human-robot interactions but also mitigate negative outcomes when trust violations occur. To achieve this, we begin by examining aspects of trust between humans and interactive robots in a cognitive problem-solving scenario and a simulated driving scenario. We then identify the key factors that contribute to humans' decisions to trust robots in these time-sensitive scenarios. Next, we develop a computational framework for modeling human-robot trust in these scenarios. Experimental results show that this approach can model behavior-based trust and predict humans' decisions to take advice from the robot in time-sensitive scenarios; however, the model's performance degrades when the robot makes mistakes. To address this, we develop a set of trust repair strategies to restore broken trust so that potential negative outcomes can be mitigated. We validate the effectiveness of these strategies using the simulated driving scenario as a testbed and further evaluate the impact of trust repair on the trust model.

 
