PhD Proposal by Qiaosi (Chelsea) Wang

Title: Mutual Theory of Mind for Human-AI Communication in AI-Mediated Social Interaction

 

Date: Friday, April 8th, 2022

Time: 3:00-5:00 PM ET

Location (remote via Zoom): click here to join

 

Qiaosi (Chelsea) Wang

Ph.D. student in Human-Centered Computing

School of Interactive Computing

Georgia Institute of Technology

 

Committee:

Dr. Ashok K. Goel (Advisor) – School of Interactive Computing, Georgia Institute of Technology

Dr. Munmun De Choudhury – School of Interactive Computing, Georgia Institute of Technology

Dr. Elizabeth N. DiSalvo – School of Interactive Computing, Georgia Institute of Technology

Dr. Q. Vera Liao – FATE Group, Microsoft Research Montreal

Dr. Lauren G. Wilcox – People+AI Research, Google Research

 

Abstract:

Our social interactions are increasingly mediated through Artificial Intelligence (AI) that can deliver personalized social recommendations based on information embedded in our digital footprints. For example, in online educational programs where learners frequently feel socially isolated, an AI agent called SAMI can connect learners by extracting and analyzing their hobbies, locations, and other information from their self-introduction posts on the class discussion forums. At the core of this AI-mediated social interaction is the communication between the user and the AI, where the AI agent conveys its understanding of the user through social recommendations and the user conveys their understanding of the AI agent through feedback. However, this human-AI communication process is prone to failure due to the lack of mutual understanding between the human and the AI: the AI agent might have an incorrect understanding of the user's social preferences or goals, and the user might have an incorrect understanding of the AI agent's capabilities.

 

Inspired by the basic human capability of surmising what is happening in others' minds, also known as Theory of Mind (ToM), I posit Mutual Theory of Mind (MToM) as a framework to enhance mutual understanding in human-AI communication. ToM is a human characteristic that enables us to make conjectures about each other's goals, beliefs, and mental states through observable or latent verbal and behavioral cues. Having an MToM during communication, meaning that both parties in the interaction possess a ToM, enables us to continuously refine our understanding of each other's minds through behavioral and verbal feedback, helping us maintain constructive and coherent communication.

 

Using MToM as a framework to enhance mutual understanding during human-AI communication in AI-mediated social interaction, my research examines how three key elements of MToM (perception, feedback, and mutuality) can together shape the mutual understanding between humans and AI across three stages of human-AI communication: the construction, recognition, and revision of the AI's ToM. In my completed work, I conducted interviews and co-design studies to understand the human-centered design of AI-mediated social interaction in online education and to pinpoint human-AI communication as the core process of AI-mediated social interaction. Using MToM as a framework, I then explored the construction of the AI's ToM through analysis of AI agents' interactions with learners in online class discussion forums. My proposed work will continue the exploration of MToM in human-AI communication by first examining how the user's recognition of the AI's incorrect ToM affects the dynamics of human-AI communication, and then investigating how the AI revises its ToM and communicates that revision to the user to re-establish mutual understanding after communication breakdowns. My work makes design and theoretical contributions to Human-AI Interaction, Computer Supported Cooperative Work (CSCW), and Cognitive Science.

 
