PhD Proposal by Jiawei Zhou
Title: Generative Artificial Intelligence in Public Health Context: Assessing and Communicating Risks in Information Ecosystems
Date: Friday, July 18, 2025
Time: 10 AM to 12 PM ET
Location: Virtual - Zoom link
Jiawei Zhou
Ph.D. student in Human-Centered Computing
School of Interactive Computing
Georgia Institute of Technology
Committee:
Dr. Munmun De Choudhury (Advisor), School of Interactive Computing, Georgia Institute of Technology
Dr. Andrea G. Parker, School of Interactive Computing, Georgia Institute of Technology
Dr. Srijan Kumar, School of Computational Science and Engineering, Georgia Institute of Technology
Dr. Nick Diakopoulos, School of Communication, Northwestern University
Dr. Q. Vera Liao, FATE Group, Microsoft Research Montreal & Computer Science and Engineering, University of Michigan
Abstract:
Technology increasingly shapes the way we interact with information: both the quality of that information and the affordances of information technologies influence people's attitudes and decision-making. Generative Artificial Intelligence (AI), such as large language models, differs fundamentally from prior, primarily task-centric information and communication technologies in that it can produce new content probabilistically and at scale. This generative nature is both its power and its pitfall. On one hand, documented evidence shows that it produces low-quality information, from hallucinated outputs to oversimplified answers. On the other hand, it is being rapidly adopted, whether as standalone applications or as features embedded in existing systems, often without users' full awareness of AI's role in content generation. This imbalance between adoption speed and public understanding raises pressing questions about generative AI's influence on information ecosystems.
I posit that understanding and mitigating the risks of AI-generated low-quality information requires contextualized assessment alongside traditional information sources, as well as adequate communication about its risks and distinctions. Situated in a variety of public health crises and challenges, this dissertation adopts computational and qualitative methods alongside close domain collaborations with experts in public health, informatics, mass communication, and natural language processing.
In my completed work, I have shown that low-quality information is not only a harmful endpoint but also a source of further harm, and demonstrated how AI-generated outputs amplify these issues through high persuasiveness and detection difficulty. I have also synthesized stakeholder views on generative AI's ecological risks, which extend beyond factual accuracy to broader impacts, and identified patterns and gaps in how these distinctions and risks of generative AI are communicated in public discourse. Together, these insights underscore the need to help the public navigate emerging AI technologies responsibly. To this end, my proposed work will explore ways to support mindful and meaningful use of AI for informational needs. In an experimental study, I will examine the effects of risk communication on people's trust in using generative AI for a set of health informational needs, ranging from healthy lifestyle guidance to therapeutic mental health support. Collectively, this dissertation aims to advance a deeper understanding of generative AI in information environments and to inform more responsible adoption, communication, and governance.