<node id="671933">
  <nid>671933</nid>
  <type>external_news</type>
  <uid>
    <user id="34434"><![CDATA[34434]]></user>
  </uid>
  <created>1704733869</created>
  <changed>1704733869</changed>
  <title><![CDATA[In Defense of AI Hallucinations]]></title>
  <body><![CDATA[<p>No one knows whether artificial intelligence will be a boon or curse in the far future. But right now, there’s almost universal discomfort and contempt for one habit of these chatbots and agents: hallucinations, those made-up facts that appear in the outputs of large language models like ChatGPT. It’s a big problem when chatbots spew untruths. But <em>Wired</em> writer Steven Levy says we should also celebrate these hallucinations as prompts for human creativity and a barrier to machines taking over. <a href="https://math.gatech.edu/people/santosh-vempala">Santosh Vempala</a>, professor in the <a href="https://scs.gatech.edu">School of Computer Science</a> and the <a href="https://www.isye.gatech.edu">H. Milton Stewart School of Industrial and Systems Engineering</a>, and adjunct professor in the <a href="https://math.gatech.edu">School of Mathematics</a>, has studied AI hallucinations and is quoted in the article.</p>
]]></body>
  <field_article_url>
    <item>
      <url><![CDATA[https://www.wired.com/story/plaintext-in-defense-of-ai-hallucinations-chatgpt/]]></url>
      <title><![CDATA[]]></title>
    </item>
  </field_article_url>
  <field_publication>
    <item>
      <value><![CDATA[Wired]]></value>
    </item>
  </field_publication>
  <field_dateline>
    <item>
      <value>2024-01-05</value>
      <timezone></timezone>
    </item>
  </field_dateline>
  <field_media></field_media>
  <og_groups>
    <item>1278</item>
    <item>1279</item>
  </og_groups>
  <og_groups_both>
    <item><![CDATA[College of Sciences]]></item>
    <item><![CDATA[School of Mathematics]]></item>
  </og_groups_both>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
