{"686935":{"#nid":"686935","#data":{"type":"news","title":"AI Shouldn\u2019t Try to Be Your Friend, According to New Georgia Tech Research","body":[{"value":"\u003Cp\u003EWould you follow a chatbot\u2019s advice more if it sounded friendly?\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThat question matters as artificial intelligence (AI) spreads into everything from customer service to self-driving cars. These autonomous agents often have human names \u2014 Alexa or Claude, for example \u2014 and speak conversationally, but too much familiarity can backfire.\u0026nbsp;Earlier this year, OpenAI scaled down its \u201c\u003Ca href=\u0022https:\/\/openai.com\/index\/sycophancy-in-gpt-4o\/\u0022 title=\u0022https:\/\/openai.com\/index\/sycophancy-in-gpt-4o\/\u0022\u003Esycophantic\u003C\/a\u003E\u201d ChatGPT model, which could cause problems for users with mental health issues.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ENew research from Georgia Tech suggests that users may like more personable AI, but they are more likely to obey AI that sounds robotic. While following orders from Siri may not be critical, many AI systems, such as robotic guide dogs, require human compliance for safety reasons.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThese surprising findings are from research by Sidney Scott-Sharoni, who recently received her Ph.D. from the\u0026nbsp;\u003Ca href=\u0022https:\/\/psychology.gatech.edu\/\u0022\u003ESchool of Psychology\u003C\/a\u003E. Despite years of previous research suggesting people would be socially influenced by AI they liked, Scott-Sharoni\u2019s research showed the opposite.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cEven though people rated humanistic agents better, that didn\u0027t line up with their behavior,\u201d she said.\u0026nbsp;\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003ELikability vs. Reliability\u0026nbsp;\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EScott-Sharoni ran four experiments. In the first, participants answered trivia questions, saw the AI\u2019s response, and decided whether to change their answer. She expected people to listen to agents they liked.\u003C\/p\u003E\u003Cp\u003E\u201cWhat I found was that the more humanlike people rated the agent, the less they would change their answer, so, effectively, the less they would conform to what the agent said,\u201d she noted.\u003C\/p\u003E\u003Cp\u003ESurprised, Scott-Sharoni studied moral judgments with an AI voice agent next. For example, participants decided how to handle being undercharged on a restaurant bill.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EOnce again, participants liked the humanlike agent better but listened to the robotic agent more.\u0026nbsp;The unexpected pattern led Scott-Sharoni to explore why people behave this way.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EBias Breakthrough\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EWhy the gap? Scott-Sharoni\u2019s findings point to automation bias \u2014 the tendency to see machines as more objective than humans.\u003C\/p\u003E\u003Cp\u003EScott-Sharoni continued to test this with a third experiment focused on the prisoner\u2019s dilemma, where participants cooperate with or retaliate against authority. In her task, participants played a game against an AI agent.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cI hypothesized that people would retaliate against the humanlike agent if it didn\u2019t cooperate,\u201d she said. 
\u201cThat\u2019s what I found: Participants interacting with the humanlike agent became less likely to cooperate over time, while those with the robotic agent stayed steady.\u201d\u003C\/p\u003E\u003Cp\u003EThe final study, a self-driving car simulation, was the most realistic and troubling for safety concerns. Participants didn\u2019t consistently obey either agent type, but across all experiments, humanlike AI proved less effective at influencing behavior.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EDesigning the Right AI\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EThe implications are pivotal for AI engineers. As AI grows, designers may cater to user preferences \u2014 but what people want isn\u2019t always best.\u003C\/p\u003E\u003Cp\u003E\u201cMany people develop a trusting relationship with an AI agent,\u201d said\u0026nbsp;\u003Ca href=\u0022https:\/\/psychology.gatech.edu\/people\/bruce-n-walker\u0022\u003EBruce Walker\u003C\/a\u003E, a professor of psychology and interactive computing and Scott-Sharoni\u2019s Ph.D. advisor. \u201cSo, it\u2019s important that developers understand what role AI plays in the social fabric and design technical systems that ultimately make humans better. Sidney\u0027s work makes a critical contribution to that ultimate goal.\u201d\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EWhen safety and compliance are the point, robotic beats relatable.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003E\u003Cstrong\u003EA Ph.D. graduate\u2019s research shows that the more humanlike an AI agent is, the less likely a user is to follow it.\u003C\/strong\u003E\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A Ph.D. graduate\u2019s research shows that the more humanlike an AI agent is, the less likely a user is to follow it."}],"uid":"34541","created_gmt":"2025-12-17 18:40:12","changed_gmt":"2026-01-09 13:34:32","author":"Tess Malone","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-12-17T00:00:00-05:00","iso_date":"2025-12-17T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678917":{"id":"678917","type":"image","title":"Sidney Scott-Sharoni","body":null,"created":"1767628889","gmt_created":"2026-01-05 16:01:29","changed":"1767628889","gmt_changed":"2026-01-05 16:01:29","alt":"Sidney Scott-Sharoni","file":{"fid":"263014","name":"Sidney-Scott-Sharoni.jpg","image_path":"\/sites\/default\/files\/2026\/01\/05\/Sidney-Scott-Sharoni.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/01\/05\/Sidney-Scott-Sharoni.jpg","mime":"image\/jpeg","size":947371,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/01\/05\/Sidney-Scott-Sharoni.jpg?itok=dYOo9RWi"}},"678870":{"id":"678870","type":"image","title":"50414610_00201_0273_Large.jpg","body":"\u003Cp\u003ESidney Scott-Sharoni at Ph.D. 