<node id="685420">
  <nid>685420</nid>
  <type>external_news</type>
  <uid>
    <user id="36583"><![CDATA[36583]]></user>
  </uid>
  <created>1759268433</created>
  <changed>1759348670</changed>
  <title><![CDATA[Will we know artificial general intelligence when we see it?]]></title>
  <body><![CDATA[<p dir="ltr">We may never agree on what AGI or “humanlike” AI means, or what suffices to prove it. As AI advances, machines will still make mistakes, and people will point to these and say the AIs aren’t really intelligent.&nbsp;<a href="https://psychology.gatech.edu/people/anna-ivanova"><strong>Anna Ivanova</strong></a>, an assistant professor in the&nbsp;<a href="https://psychology.gatech.edu/">School of Psychology</a> at Georgia Tech, was on a panel recently, and the moderator asked about AGI timelines. “We had one person saying that it might never happen,” Ivanova told me, “and one person saying that it already happened.” So the term “AGI” may be convenient shorthand to express an aim—or a fear—but its practical use may be limited. In most cases, it should come with an asterisk, and a benchmark.</p>]]></body>
  <field_article_url>
    <item>
      <url><![CDATA[https://spectrum.ieee.org/agi-benchmark]]></url>
      <title><![CDATA[]]></title>
    </item>
  </field_article_url>
  <field_publication>
    <item>
      <value><![CDATA[IEEE Spectrum]]></value>
    </item>
  </field_publication>
  <field_dateline>
    <item>
      <value>2025-09-22</value>
      <timezone></timezone>
    </item>
  </field_dateline>
  <field_media></field_media>
  <og_groups>
    <item>1278</item>
    <item>443951</item>
  </og_groups>
  <og_groups_both>
    <item><![CDATA[College of Sciences]]></item>
    <item><![CDATA[School of Psychology]]></item>
  </og_groups_both>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
