PhD Defense by Gennie Mansi
Title: Understanding the Intersections of AI, User Needs, and Law
Date: Wednesday, April 29, 2026
Time: 10:30 AM - 12:30 PM, Eastern time (U.S.)
Location: Coda C1115 Druid Hills
Virtual Meeting (hybrid): https://gatech.zoom.us/j/91333564325
Gennie Mansi
HCC PhD Candidate
School of Interactive Computing, College of Computing
Georgia Institute of Technology
Committee:
Dr. Mark Riedl (Advisor) - School of Interactive Computing, Georgia Institute of Technology
Mr. Benjamin Sundholm, JD - Tulane Law School, Tulane University
Dr. Naveena Karusala - School of Interactive Computing, Georgia Institute of Technology
Dr. Andrea Parker - School of Interactive Computing, Georgia Institute of Technology
Dr. Agata Rozga - College of Computing, Georgia Institute of Technology
Summary:
As AI tools are incorporated into high-stakes decision-making environments such as healthcare and education, people need to act meaningfully in response to AI outputs. This thesis advances actionability: how AI tools and their explanations enable pragmatic action by people in complex sociotechnical environments shaped by power dynamics. I make two central arguments: first, that we can improve people's ability to act with AI tools by examining how their actions and information needs connect to the ways they care for themselves and others; and second, that by understanding how laws and regulations affect people's ability to care with AI tools, we can inform the creation and use of AI tools that support people navigating uneven or unknown risks.
This work deepens our understanding of actionability in five parts. First, I discuss how I created a user-centered catalog of information and actions to re-orient the design and evaluation of AI tools around users' needs. Building on this foundation, I draw out the complexities of enabling actionability through an in-depth investigation of physicians' needs, uncovering how care (including laws, regulations, and collective responsibility for patient well-being) shapes actionability in ways that current AI tool designs often overlook. Because care highlights laws and regulations as a contextual factor, I then investigate how doctors' perceptions of potential errors connect to their legal concerns about AI tools. I show that doctors do not connect legal risks to AI tools' capabilities, and I discuss how this gap may result in unintentional harm in the form of defensive medical practices. To address these gaps, I describe an assets-based co-design process with lawyers that used visualizations to surface tacit legal knowledge and generate strategies for stakeholders to predict and manage legal risks, revealing how power dynamics shape actionability. Finally, I ground these findings in practice through an analysis of 31 U.S. legal cases, identifying how a complex web of stakeholders and widely deployed AI tools negatively impacts patient care, and proposing paths forward through revised liability structures and tools that support legal recourse for patients.
Status
- Workflow status: Published
- Created by: Tatianna Richardson
- Created: 04/18/2026
- Modified By: Tatianna Richardson
- Modified: 04/18/2026