<node id="686935">
  <nid>686935</nid>
  <type>news</type>
  <uid>
    <user id="34541"><![CDATA[34541]]></user>
  </uid>
  <created>1765996812</created>
  <changed>1767965672</changed>
  <title><![CDATA[AI Shouldn’t Try to Be Your Friend, According to New Georgia Tech Research]]></title>
  <body><![CDATA[<p>Would you follow a chatbot’s advice more if it sounded friendly?&nbsp;</p><p>That question matters as artificial intelligence (AI) spreads into everything from customer service to self-driving cars. These autonomous agents often have human names — Alexa or Claude, for example — and speak conversationally, but too much familiarity can backfire.&nbsp;Earlier this year, OpenAI rolled back a “<a href="https://openai.com/index/sycophancy-in-gpt-4o/" title="https://openai.com/index/sycophancy-in-gpt-4o/">sycophantic</a>” ChatGPT model that could cause problems for users with mental health issues.&nbsp;</p><p>New research from Georgia Tech suggests that users may like more personable AI, but they are more likely to obey AI that sounds robotic. While following orders from Siri may not be critical, many AI systems, such as robotic guide dogs, require human compliance for safety reasons.&nbsp;</p><p>These surprising findings are from research by Sidney Scott-Sharoni, who recently received her Ph.D. from the&nbsp;<a href="https://psychology.gatech.edu/">School of Psychology</a>. Despite years of previous research suggesting people would be socially influenced by AI they liked, Scott-Sharoni’s research showed the opposite.&nbsp;</p><p>“Even though people rated humanistic agents better, that didn’t line up with their behavior,” she said.&nbsp;</p><h4><strong>Likability vs. Reliability&nbsp;</strong></h4><p>Scott-Sharoni ran four experiments. In the first, participants answered trivia questions, saw the AI’s response, and decided whether to change their answer. She expected people to listen to agents they liked.</p><p>“What I found was that the more humanlike people rated the agent, the less they would change their answer, so, effectively, the less they would conform to what the agent said,” she noted.</p><p>Surprised, Scott-Sharoni next studied moral judgments using an AI voice agent. 
For example, participants decided how to handle being undercharged on a restaurant bill.&nbsp;</p><p>Once again, participants liked the humanlike agent better but listened to the robotic agent more.&nbsp;The unexpected pattern led Scott-Sharoni to explore why people behave this way.</p><h4><strong>Bias Breakthrough</strong></h4><p>Why the gap? Scott-Sharoni’s findings point to automation bias — the tendency to see machines as more objective than humans.</p><p>Scott-Sharoni continued to test this with a third experiment focused on the prisoner’s dilemma, a game in which players choose whether to cooperate with or retaliate against each other. In her task, participants played the game against an AI agent.&nbsp;</p><p>“I hypothesized that people would retaliate against the humanlike agent if it didn’t cooperate,” she said. “That’s what I found: Participants interacting with the humanlike agent became less likely to cooperate over time, while those with the robotic agent stayed steady.”</p><p>The final study, a self-driving car simulation, was the most realistic and the most troubling from a safety standpoint. Participants didn’t consistently obey either agent type, but across all experiments, humanlike AI proved less effective at influencing behavior.</p><h4><strong>Designing the Right AI</strong></h4><p>The implications are significant for AI engineers. As AI grows, designers may cater to user preferences — but what people want isn’t always best.</p><p>“Many people develop a trusting relationship with an AI agent,” said&nbsp;<a href="https://psychology.gatech.edu/people/bruce-n-walker">Bruce Walker</a>, a professor of psychology and interactive computing and Scott-Sharoni’s Ph.D. advisor. “So, it’s important that developers understand what role AI plays in the social fabric and design technical systems that ultimately make humans better. Sidney’s work makes a critical contribution to that ultimate goal.”&nbsp;</p><p>When safety and compliance are the point, robotic beats relatable.</p>]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2025-12-17T00:00:00-05:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[A Ph.D. graduate’s research shows that the more humanlike an AI agent is, the less likely a user is to follow it.]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p><strong>A Ph.D. graduate’s research shows that the more humanlike an AI agent is, the less likely a user is to follow it.</strong></p>]]></value>
    </item>
  </field_summary>
  <field_media>
          <item>
        <nid>
          <node id="678917">
            <nid>678917</nid>
            <type>image</type>
            <title><![CDATA[Sidney Scott-Sharoni]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>263014</fid>
                  <filename><![CDATA[Sidney-Scott-Sharoni.jpg]]></filename>
                  <filepath><![CDATA[/sites/default/files/2026/01/05/Sidney-Scott-Sharoni.jpg]]></filepath>
                  <file_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2026/01/05/Sidney-Scott-Sharoni.jpg]]></file_full_path>
                  <filemime>image/jpeg</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[Sidney Scott-Sharoni]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
          <item>
        <nid>
          <node id="678870">
            <nid>678870</nid>
            <type>image</type>
            <title><![CDATA[50414610_00201_0273_Large.jpg]]></title>
            <body><![CDATA[<p>Sidney Scott-Sharoni at Ph.D. commencement December 2025</p>]]></body>
                          <field_image>
                <item>
                  <fid>262960</fid>
                  <filename><![CDATA[50414610_00201_0273_Large.jpg]]></filename>
                  <filepath><![CDATA[/sites/default/files/2025/12/17/50414610_00201_0273_Large.jpg]]></filepath>
                  <file_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/12/17/50414610_00201_0273_Large.jpg]]></file_full_path>
                  <filemime>image/jpeg</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[Sidney Scott-Sharoni]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
      </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[<p>Tess Malone, Senior Research Writer/Editor</p><p>tess.malone@gatech.edu</p>]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <!--  TO DO: correct to not conflate categories and news room topics  -->
  <field_categories>
      </field_categories>
  <core_research_areas>
          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>
      </core_research_areas>
  <field_news_room_topics>
          <item>
        <tid>71881</tid>
        <value><![CDATA[Science and Technology]]></value>
      </item>
      </field_news_room_topics>
  <links_related>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>1278</item>
          <item>66220</item>
          <item>1188</item>
          <item>443951</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[College of Sciences]]></item>
          <item><![CDATA[Neuro]]></item>
          <item><![CDATA[Research Horizons]]></item>
          <item><![CDATA[School of Psychology]]></item>
      </og_groups_both>
  <field_keywords>
          <item>
        <tid>187915</tid>
        <value><![CDATA[go-researchnews]]></value>
      </item>
          <item>
        <tid>172970</tid>
        <value><![CDATA[go-neuro]]></value>
      </item>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
