Researchers Look to Make AI Safer Through Google Awards
People seeking mental health support are increasingly turning to large language models (LLMs) for advice.
However, most popular AI-powered chatbots are not trained to recognize when someone is in crisis. LLMs also cannot determine when to refer someone to a human specialist.
New Georgia Tech research projects that address these issues may soon provide people seeking mental health support with safer experiences.
Google has awarded research grants to three faculty members from the School of Interactive Computing to study artificial intelligence (AI), trust, safety, and security. The grants were among dozens awarded by the company to researchers across the country.
Professor Munmun De Choudhury, Associate Professor Rosa Arriaga, and Associate Professor Alan Ritter are among the recipients of the 2025 Google Academic Research Awards.
Their projects will explore questions like:
- What harms could occur if people consult LLMs for mental health advice?
- Which groups are most at risk of receiving harmful guidance?
- When should an LLM stop responding and refer someone to a human professional?
De Choudhury and Arriaga will examine how LLMs might harm people seeking mental health care.
De Choudhury’s work focuses on spotting when chatbot conversations go wrong and lead users toward self-harm. She is also studying design changes that could prevent these situations.
Her project, Exiting Harmful Reliance: Identifying Crises & Care Escalation Needs, is in partnership with Angel Hsing-Chi Hwang from the University of Southern California. Together, they will review real and synthetic chat transcripts with clinicians to find language patterns that signal risk.
“A chatbot will always give a response and keep talking to you for however long you want,” De Choudhury said. “That may not be a good thing for someone in crisis. We need to know when the right response is to stop and suggest talking to a human.”
Understanding Risks for Low-Income Users
Arriaga’s project, Dull, Dirty, Dangerous: Investigating Trust of Digital Resources Among Low-SES Mental Health Care Seekers, looks at how LLMs affect people with low socioeconomic status (SES).
"Dull, dirty, and dangerous" is a phrase used to describe work that is well-suited for robotic automation because it is repetitive, physically taxing, or hazardous for humans. Arriaga said she adapted these terms for her research to create a taxonomy of the harms AI can cause to people seeking mental health care.
Arriaga also wants to identify the trust factors that draw low-SES users to seek advice from chatbots, and how those factors may differ for adults and adolescents across contexts.
“We know one of the reasons some users go to LLMs is because they aren’t insured and can’t afford a therapist,” she said. “LLMs are available 24-7. Maybe it doesn’t start as a trust issue. Maybe it starts with availability.
“Some of these human-AI conversations that result in harmful mental health advice didn’t begin on the topic of mental health. In one case, the person started going to the machine for help with homework.
“Then this relationship evolved into personal matters. Should we constrain the system to limit itself to helping someone with their homework and not wander off that subject into mental health matters?”
Managing Privacy Risks on Social Media
Ritter will use the Google award to advance research on social media privacy tools, including interactive AI agents that help people make more informed decisions about what they share online.
His project, AI Tools to Help Users Make Informed Decisions About Online Information Sharing, focuses on reducing privacy risks in both text and images by identifying when posts reveal more than users intend.
“We’ve been developing methods to assess risks in text, and now we’re extending that work to images,” Ritter said. “People post photos without realizing how easily they can be geolocated by advanced AI systems. A casual selfie near home might contain subtle cues about where you live, like a street sign, that reveal private details.”
The project aims to create AI agents that review content within user posts, flag elements that pose risk, and suggest safer alternatives. Ritter said he wants people to maintain control over their privacy without limiting freedom of expression.
Ritter will deploy advanced reasoning models capable of probabilistic privacy estimation. These systems can infer how identifiable a piece of text might be or how likely an image is to reveal a user’s location.
For images, Ritter and his collaborators will use models that identify geolocatable features, allowing users to edit or hide them before posting.
For more on Ritter’s research, read how an LLM he co-developed protects the privacy of users on social media.