
PhD Defense by Sidney Scott-Sharoni


Name: Sidney Scott-Sharoni
Ph.D. Dissertation Defense Meeting
Date: Monday, November 10th, 2025
Time: 2:00-4:00 PM (Eastern Time)
Mode: Hybrid (via Teams)
Location: Dissertation Defense Room, Price Gilbert Library (Room 4222)
Teams Link: click here

Dissertation Chair/Advisor:
Bruce Walker, Ph.D. (Georgia Tech)

Dissertation Committee Members:
Richard Catrambone, Ph.D. (Georgia Tech)
Mengyao Li, Ph.D. (Georgia Tech)
Shadan Sadeghian, Ph.D. (University of Siegen)
Brittany Holthausen, Ph.D. (John Deere)

Title: Challenging the Benefit of Anthropomorphism on Human-AI Collaboration with AI Voice Agents

Abstract: Contrary to conventional wisdom, theoretical frameworks, and current trends in AI design, simple robotic-sounding agents may be better for human collaboration than more complex anthropomorphic, or human-like, agents. This dissertation tested two extremes of anthropomorphism and social intelligence in an AI voice agent across four studies that examined different types of social influence. The results uncovered a consistent discrepancy between participants' subjective ratings of the agent and their social behavior toward it. In the trivia task in Study 1, participants conformed less when they perceived the AI agent as more anthropomorphic, despite rating the more anthropomorphic agent as more likable. In the moral judgment task in Study 2, participants conformed less to the anthropomorphic agent than to the robotic agent, regardless of the agent's morality, which again contrasted with the subjective ratings. In the prisoner's dilemma task in Study 3, participants cooperated less with the anthropomorphic agent, applying human social behaviors to the AI (e.g., retaliating even at the cost of lowering their own game score) that did not appear in interactions with the robotic agent. In the automated vehicle task in Study 4, compliance varied by agent type, agent driving style, and driving scenario, despite the anthropomorphic agent being consistently preferred. Evidently, implementing human qualities in an AI agent does not guarantee more conformity, cooperation, or compliance toward the agent. A possible theoretical explanation, drawn from these four studies, is that automation bias amplifies the effects predicted by the Computers Are Social Actors theory, leading people to hold higher subconscious expectations of an anthropomorphic AI agent's social performance in interactive tasks than of a nonanthropomorphic agent or of other humans.
Developers should consider the desired human behavior, contextual factors, the performance of the technology, and the type of social influence before applying human-like features to AI technology.
 

Status

  • Workflow Status: Published
  • Created By: Tatianna Richardson
  • Created: 11/04/2025
  • Modified By: Tatianna Richardson
  • Modified: 11/04/2025
