PhD Defense by De’Aira Bryant

Title: Bias in the Eyes of the Beholder: Development of a Bias-Aware Facial Expression Recognition Algorithm for Autonomous Agents

 

Date: Friday, April 19, 2024

Time: 3:30pm – 5:30pm

Location: Virtual (Zoom)

  • Meeting ID: 937 9595 0367
  • Passcode: 317851

 

De’Aira Bryant

Computer Science Ph.D. Candidate

School of Interactive Computing

Georgia Institute of Technology

www.deairabryant.com

 

Committee:

Ayanna Howard (Advisor) – School of Interactive Computing, Georgia Institute of Technology / College of Engineering, Ohio State University

Charles Isbell – School of Interactive Computing, Georgia Institute of Technology / University of Wisconsin–Madison

Sonia Chernova – School of Interactive Computing, Georgia Institute of Technology

Jason Borenstein – School of Public Policy, Georgia Institute of Technology

Tom Williams – Department of Computer Science, Colorado School of Mines

 

Abstract: 

The field of human-robot interaction (HRI) is advancing rapidly, with autonomous agents becoming more pervasive across sectors including education, healthcare, and hospitality. However, the susceptibility of data-driven perception algorithms to bias raises ethical concerns around fairness, inclusivity, and responsibility. Measuring and mitigating bias in AI systems has emerged as a vital challenge facing the computing community, yet little attention has been given to its impact on autonomous agents and HRI.

 

This thesis investigates the effects of human, data, and algorithmic bias on HRI through the lens of facial expression recognition (FER). Beginning with an analysis of human bias, we explore normative expectations for expressive autonomous agents when considering factors such as robot race, gender, and embodiment. These expectations help inform the design processes needed to develop effective agents. We then examine data and algorithmic bias in FER algorithms, addressing the challenges posed by scarce data for some populations. We contribute improved techniques for modeling facial expression perception and benchmarking FER algorithms. Central to our contributions is the introduction of a bias-aware approach to FER algorithm development that leverages self-training semi-supervised learning coupled with random class rebalancing. This novel approach enhances both the performance and the equity of FER algorithms. Validation experiments, conducted online and in person, assess the impact of our bias-aware FER algorithm on human perceptions of fairness during interactions with autonomous agents. Our results demonstrate that users interacting with bias-free or bias-aware agents exhibit higher perceptions of fairness and enhanced responsiveness. These findings collectively highlight the importance of addressing bias in AI systems to promote positive interactions between humans and embodied agents, facilitated by bias-aware technology, algorithmic transparency, and AI literacy.
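For illustration only, the sketch below shows one way self-training semi-supervised learning can be combined with random class rebalancing for a classifier trained on pre-extracted facial-expression features. The function names, confidence threshold, use of scikit-learn, and oversampling strategy are assumptions made for this sketch and do not reflect the actual implementation developed in the thesis.

# Illustrative sketch: self-training with random class rebalancing on
# pre-extracted feature vectors (not the thesis's actual implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression

def rebalance(X, y, rng):
    # Randomly oversample each class up to the size of the largest class.
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c in classes:
        members = np.flatnonzero(y == c)
        idx.append(rng.choice(members, size=target, replace=True))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

def self_train(X_lab, y_lab, X_unlab, rounds=3, threshold=0.9, seed=0):
    # Iteratively pseudo-label confident unlabeled samples and retrain
    # on a class-rebalanced pool of labeled plus pseudo-labeled data.
    rng = np.random.default_rng(seed)
    clf = LogisticRegression(max_iter=1000)
    X_pool, y_pool = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        Xb, yb = rebalance(X_pool, y_pool, rng)
        clf.fit(Xb, yb)
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        pseudo = clf.classes_[proba[confident].argmax(axis=1)]
        X_pool = np.vstack([X_pool, X_unlab[confident]])
        y_pool = np.concatenate([y_pool, pseudo])
        X_unlab = X_unlab[~confident]
    return clf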

 
