<node id="682301">
  <nid>682301</nid>
  <type>news</type>
  <uid>
    <user id="27513"><![CDATA[27513]]></user>
  </uid>
  <created>1746799498</created>
  <changed>1746799637</changed>
  <title><![CDATA[AI Chatbots Aren’t Experts on Psych Medication Reactions — Yet]]></title>
  <body><![CDATA[<p>Asking artificial intelligence (AI) for advice can be tempting. Powered by large language models (LLMs), AI chatbots are available 24/7, are often free to use, and draw on troves of data to answer questions. Now, people with mental health conditions are asking AI for advice when experiencing potential side effects of psychiatric medicines — a decidedly higher-risk situation than asking it to summarize a report.</p><p>One question puzzling the AI research community is how AI performs when asked about mental health emergencies. Globally, including in the U.S., there is a significant gap in mental health treatment, and many individuals have limited or no access to mental healthcare. It’s no surprise that people have started turning to AI chatbots with urgent health-related questions.</p><p>Now, researchers at the Georgia Institute of Technology have developed a new framework to evaluate how well AI chatbots can detect potential adverse drug reactions in chat conversations, and how closely their advice aligns with human experts. The study was led by Institute for People and Technology (IPaT) faculty member Munmun De Choudhury, J.Z. Liang Associate Professor in the School of Interactive Computing, and Mohit Chandra, a third-year computer science Ph.D. student.<br><br>“People use AI chatbots for anything and everything,” said Chandra, the study’s first author. “When people have limited access to healthcare providers, they are increasingly likely to turn to AI agents to make sense of what’s happening to them and what they can do to address their problem. We were curious how these tools would fare, given that mental health scenarios can be very subjective and nuanced.”</p><p>De Choudhury, Chandra, and their colleagues will introduce their new framework at the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics, April 29–May 4.</p><p><a href="https://www.cc.gatech.edu/news/ai-chatbots-arent-experts-psych-medication-reactions-yet">Read more about this research here &gt;&gt;</a></p>]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2025-05-09T00:00:00-04:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Asking artificial intelligence (AI) for advice can be tempting. ]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p>Asking artificial intelligence (AI) for advice can be tempting. Powered by large language models (LLMs), AI chatbots are available 24/7, are often free to use, and draw on troves of data to answer questions. Now, people with mental health conditions are asking AI for advice when experiencing potential side effects of psychiatric medicines — a decidedly higher-risk situation than asking it to summarize a report.</p>]]></value>
    </item>
  </field_summary>
  <field_media>
    <item>
      <nid>
        <node id="677054">
          <nid>677054</nid>
          <type>image</type>
          <title><![CDATA[Mohit Chandra, a third-year computer science Ph.D. student.]]></title>
          <body><![CDATA[]]></body>
          <field_image>
            <item>
              <fid>260916</fid>
              <filename><![CDATA[pic_Mohit-Chandra2.jpg]]></filename>
              <filepath><![CDATA[/sites/default/files/2025/05/09/pic_Mohit-Chandra2.jpg]]></filepath>
              <file_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/05/09/pic_Mohit-Chandra2.jpg]]></file_full_path>
              <filemime>image/jpeg</filemime>
              <image_740><![CDATA[]]></image_740>
              <image_alt><![CDATA[Mohit Chandra, a third-year computer science Ph.D. student.]]></image_alt>
            </item>
          </field_image>
        </node>
      </nid>
    </item>
  </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <!--  TO DO: correct to not conflate categories and news room topics  -->
  <links_related> </links_related>
  <files> </files>
  <og_groups>
          <item>69599</item>
      </og_groups>
  <field_categories>
  </field_categories>
  <core_research_areas>
    <term tid="39501"><![CDATA[People and Technology]]></term>
  </core_research_areas>
  <field_news_room_topics>
  </field_news_room_topics>
  <og_groups_both>
    <item><![CDATA[IPaT]]></item>
  </og_groups_both>
  <field_keywords>
    <item>
      <tid>188084</tid>
      <value><![CDATA[go-ipat]]></value>
    </item>
  </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
