{"686615":{"#nid":"686615","#data":{"type":"news","title":"Researchers Look to Maker Safer AI Through Google Awards","body":[{"value":"\u003Cp\u003EPeople seeking mental health support are increasingly turning to large language models (LLMs) for advice.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EHowever, most popular AI-powered chatbots are not trained to recognize when someone is in crisis. LLMs also cannot determine when to refer someone to a human specialist.\u003C\/p\u003E\u003Cp\u003ENew Georgia Tech research projects that address these issues may soon provide people seeking mental health support with safer experiences.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EGoogle has awarded research grants to three faculty members from the School of Interactive Computing to study artificial intelligence (AI), trust, safety, and security. The grants were among dozens awarded by the company to researchers across the country.\u003C\/p\u003E\u003Cp\u003EProfessor \u003Ca href=\u0022http:\/\/www.munmund.net\/\u0022\u003E\u003Cstrong\u003EMunmun De Choudhury\u003C\/strong\u003E\u003C\/a\u003E, Associate Professor \u003Ca href=\u0022https:\/\/sites.google.com\/view\/riarriaga\/home\u0022\u003E\u003Cstrong\u003ERosa Arriaga\u003C\/strong\u003E\u003C\/a\u003E, and Associate Professor \u003Ca href=\u0022https:\/\/aritter.github.io\/\u0022\u003E\u003Cstrong\u003EAlan Ritter\u003C\/strong\u003E\u003C\/a\u003E are among the recipients of the \u003Ca href=\u0022https:\/\/research.google\/programs-and-events\/google-academic-research-awards\/google-academic-research-award-program-recipients\/\u0022\u003E\u003Cstrong\u003E2025 Google Academic Research Awards\u003C\/strong\u003E\u003C\/a\u003E.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ETheir projects will explore questions like:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EWhat harms could occur if people consult LLMs for mental health advice?\u003C\/li\u003E\u003Cli\u003EWhich groups are most at risk of receiving harmful guidance?\u003C\/li\u003E\u003Cli\u003EWhen should an LLM stop responding and refer someone to a human professional?\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003EDe Choudhury and Arriaga will examine how LLMs might harm people seeking mental health care.\u003C\/p\u003E\u003Cp\u003EDe Choudhury\u2019s work focuses on spotting when chatbot conversations go wrong and lead users toward self-harm. She is also studying design changes that could prevent these situations.\u003C\/p\u003E\u003Cp\u003EHer project,\u0026nbsp;\u003Cem\u003EExiting Harmful Reliance: Identifying Crises \u0026amp; Care Escalation Needs\u003C\/em\u003E, is in partnership with Angel Hsing-Chi Hwang from the University of Southern California. Together, they will review real and synthetic chat transcripts with clinicians to find language patterns that signal risk.\u003C\/p\u003E\u003Cp\u003E\u201cA chatbot will always give a response and keep talking to you for however long you want,\u201d De Choudhury said. \u201cThat may not be a good thing for someone in crisis. 
We need to know when the right response is to stop and suggest talking to a human.\u201d\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EUnderstanding Risks for Low-Income Users\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EArriaga\u2019s project,\u0026nbsp;\u003Cem\u003EDull, Dirty, Dangerous: Investigating Trust of Digital Resources Among Low-SES Mental Health Care Seekers\u003C\/em\u003E, looks at how LLMs affect people with low socioeconomic status (SES).\u003C\/p\u003E\u003Cp\u003E\u201cDull, dirty, and dangerous\u201d is a phrase used to describe work that is well-suited for robot automation because it is repetitive, physically taxing, or hazardous for humans. Arriaga said she adapted these terms for her research to create a taxonomy of the harms AI can cause to people seeking mental health care.\u003C\/p\u003E\u003Cp\u003EArriaga also wants to identify the trust factors that draw low-SES users to chatbots for advice, and how those factors may differ for adults and adolescents across contexts.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cWe know one of the reasons some users go to LLMs is because they aren\u2019t insured and can\u2019t afford a therapist,\u201d she said. \u201cLLMs are available 24-7. Maybe it doesn\u2019t start as a trust issue. Maybe it starts with availability.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cSome of these human-AI conversations that result in harmful mental health advice didn\u2019t begin on the topic of mental health. In one case, the person started going to the machine for help with homework.\u003C\/p\u003E\u003Cp\u003E\u201cThen this relationship evolved into personal matters. Should we constrain the system to limit itself to helping someone with their homework and not wander off that subject into mental health matters?\u201d\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EManaging Privacy Risks on Social Media\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003ERitter will use the Google award to advance research on social media privacy tools, including interactive AI agents that help people make more informed decisions about what they share online.\u003C\/p\u003E\u003Cp\u003EHis project, \u003Cem\u003EAI Tools to Help Users Make Informed Decisions About Online Information Sharing\u003C\/em\u003E, focuses on reducing privacy risks in both text and images by identifying when posts reveal more than users intend.\u003C\/p\u003E\u003Cp\u003E\u201cWe\u2019ve been developing methods to assess risks in text, and now we\u2019re extending that work to images,\u201d Ritter said. \u201cPeople post photos without realizing how easily they can be geolocated by advanced AI systems. A casual selfie near home might contain subtle cues about where you live, like a street sign, that reveal private details.\u201d\u003C\/p\u003E\u003Cp\u003EThe project aims to create AI agents that review content within user posts, flag elements that pose risk, and suggest safer alternatives. Ritter said he wants people to maintain control over their privacy without limiting freedom of expression.\u003C\/p\u003E\u003Cp\u003ERitter will deploy advanced reasoning models capable of probabilistic privacy estimation. 
These systems can infer how identifiable a piece of text might be or how likely an image is to reveal a user\u2019s location.\u003C\/p\u003E\u003Cp\u003EFor images, Ritter and his collaborators will use models that identify geolocatable features, allowing users to edit or hide them before posting.\u003C\/p\u003E\u003Cp\u003EFor more on Ritter\u2019s research,\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/new-large-language-model-can-protect-social-media-users-privacy\u0022\u003E\u003Cstrong\u003Eread how an LLM he co-developed protects the privacy of users on social media.\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EThree Georgia Tech faculty members from the School of Interactive Computing received Google Academic Research Awards to study how to make AI safer, focusing on minimizing harm to users seeking \u003Cstrong\u003Emental health support\u003C\/strong\u003E from large language models (LLMs) and improving \u003Cstrong\u003Esocial media privacy\u003C\/strong\u003E tools.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Three Georgia Tech faculty members received Google Academic Research Awards to study how to make AI safer."}],"uid":"36530","created_gmt":"2025-11-24 20:28:32","changed_gmt":"2026-01-09 13:38:21","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-11-24T00:00:00-05:00","iso_date":"2025-11-24T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678716":{"id":"678716","type":"image","title":"437249_Google-Research-Award-Graphic.jpg","body":null,"created":"1764016128","gmt_created":"2025-11-24 20:28:48","changed":"1764016128","gmt_changed":"2025-11-24 20:28:48","alt":"Google Research Awards","file":{"fid":"262784","name":"437249_Google-Research-Award-Graphic.jpg","image_path":"\/sites\/default\/files\/2025\/11\/24\/437249_Google-Research-Award-Graphic.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/11\/24\/437249_Google-Research-Award-Graphic.jpg","mime":"image\/jpeg","size":120957,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/11\/24\/437249_Google-Research-Award-Graphic.jpg?itok=QmSwvwkp"}}},"media_ids":["678716"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"}],"keywords":[{"id":"194701","name":"go-resarchnews"},{"id":"9153","name":"Research Horizons"},{"id":"192863","name":"go-ai"},{"id":"193860","name":"Artificial Intelligence"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"192524","name":"ChatGPT"},{"id":"184554","name":"Google Research Award"},{"id":"167007","name":"health \u0026 well-being"},{"id":"10343","name":"mental health"},{"id":"169137","name":"chatbot"},{"id":"167543","name":"social media"},{"id":"114791","name":"Data Privacy"}],"core_research_areas":[],"news_room_topics":[{"id":"71901","name":"Society and Culture"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}}}