PhD Proposal by Kantwon Rogers



What Happens When a Robot Lies to You?  

Investigating Aspects of Prosocial Intelligent Agent Deception Towards Humans 


Date: December 6th, 2023 

Time: 3:00PM-5:00PM EST 

Location: Virtual Zoom Meeting


Meeting ID: 954 9001 6600 

Passcode: 416049 


Kantwon Rogers 

Ph.D. student in Computer Science 

School of Interactive Computing 

Georgia Institute of Technology 


Dr. Sonia Chernova – (co-advisor) College of Computing, Georgia Institute of Technology 

Dr. Ayanna Howard – (co-advisor) College of Computing, Georgia Institute of Technology / College of Engineering, Ohio State University

Dr. Ashok Goel – College of Computing, Georgia Institute of Technology 

Dr. Harish Ravichandar – College of Computing, Georgia Institute of Technology 

Dr. Selma Šabanović – School of Informatics and Computing, Indiana University Bloomington 

Dr. Marynel Vázquez – Department of Computer Science, Yale University 



People across many societies are explicitly taught some form of the adage “honesty is the best policy”, but is that a lie? Telling the truth is not always helpful, and lying is not always harmful. In truth, everyone lies. We lie to help ourselves, and we lie to help others. We lie in both serious and inconsequential situations. Lying is a foundational part of how people interact with each other, and accepted members of society are successfully able to navigate the highly nuanced norms of social deception. 

Robots and artificially intelligent systems are increasingly being placed within our societies, and in some contexts, they are expected to interact with humans socially. People must trust that robots are functionally competent to complete tasks while also being socially competent in understanding social conventions that may favor particular strategies over others. If people often successfully choose lying as the best policy in certain situations, it follows that an intelligent agent designed to learn from humans and exhibit social competency may replicate this expected lying behavior as it becomes fully integrated into social settings. 

In this thesis, I explore intelligent agents that lie to benefit others and how deception influences people’s interactions with, and perceptions of, those agents. Additionally, my work studies the effect of the timing at which users realize an intelligent agent can, or has, lied to them: before an interaction, during an interaction, or after an interaction. 

The proposed work details my plan for creating and evaluating an autonomous deceptive agent that uses large language models within a longitudinal educational context. 




