PhD Proposal by Kaely C. Hall

Title: Designing for Representational Alignment in Human-AI Interaction

 

Kaely Hall

Ph.D. Student in Human-Centered Computing

School of Interactive Computing 

Georgia Institute of Technology 

 

Date: Friday, May 8th, 2026

Time: 12:00–3:00 PM

Location: Coda 1215 Midtown  

Teams Info: https://teams.microsoft.com/meet/2255456448249?p=rIzXwFXWGRftAe5rTE 

Meeting ID: 225 545 644 824 9

Passcode: X7zT7AR3

 

Committee 

Dr. Jennifer Kim (advisor) - School of Interactive Computing, Georgia Institute of Technology

Dr. Munmun De Choudhury - School of Interactive Computing, Georgia Institute of Technology

Dr. Nasibeh Farahani - Mayo Clinic Platform, Mayo Clinic

Dr. Andrea Parker - School of Interactive Computing, Georgia Institute of Technology

Dr. Vedant Das Swain - Tandon School of Engineering, New York University

  

 

Abstract 

Artificial intelligence (AI) systems are increasingly used to support how individuals construct self-representations in everyday and decision-critical contexts. For example, people use AI to draft professional materials such as cover letters and bios, or to articulate preferences in medical settings, such as birth plans that communicate values and priorities for care. In these contexts, AI use extends beyond task assistance and becomes part of the process through which individuals construct and communicate representations of themselves. However, these representations often reflect the assumptions, conventions, and statistical patterns embedded in AI systems, resulting in outputs that may not faithfully capture users’ situated experiences, self-understandings, or how they intend to be understood. This dissertation introduces representational alignment as a framework for understanding this challenge: the extent to which AI-mediated representations preserve and convey a person’s intended meaning across contexts. Misalignment arises when a person’s intended self-representation diverges from the representation constructed by an AI system, often due to generalized or decontextualized interpretations of user input.

In my completed work, I examine representational alignment in two stages: breakdown and repair. The first study analyzes how misalignment emerges in practice through neurodivergent job-seekers’ use of an LLM-powered career support chatbot, showing that systems frequently misinterpret implicit cues and impose normative language and assumptions. As a result, the chatbot’s “support” is often grounded in misaligned representations of users’ experiences and intentions. The second study examines how users work to restore alignment through interaction. I introduce bi-directional alignment as an interaction paradigm in which users and systems iteratively shape representations together, and investigate this concept through LL.me, a research probe that supports iterative refinement of AI-generated professional self-representations. Findings show that alignment develops through reflection, reinterpretation, and refinement, as users incorporate tacit contextual knowledge, personal values, and anticipated audience expectations.

My proposed work extends representational alignment to clinical settings, where individuals must construct self-representations through the articulation of expectations and preferences without full visibility into how they will be interpreted or executed within care institutions. Birth planning is a particularly challenging case, as patients must express delivery preferences in advance of uncertain and evolving conditions, and those preferences may later be reinterpreted, constrained, or overridden during care. I propose AIDoula, an interactive system designed to support patients in articulating and refining birth preferences by making clinical constraints, such as standard delivery intervention protocols, more visible and by enabling iterative revision. In this context, AI-generated representations must function as boundary objects, carrying meaning between patients and clinicians while remaining actionable in care settings. Through a two-phase evaluation with patients and clinicians, I will examine how individuals construct conditional and flexible representations under partial constraint visibility, and how these representations are interpreted and reshaped within clinical workflows. Collectively, this dissertation advances a framework for representational alignment in human–AI systems, contributing empirical evidence of misalignment, interactional approaches for supporting alignment in practice, and a system that demonstrates how representations can remain meaningful under uncertain and evolving constraints.

 

