<node id="683255">
  <nid>683255</nid>
  <type>news</type>
  <uid>
    <user id="27592"><![CDATA[27592]]></user>
  </uid>
  <created>1753367470</created>
  <changed>1753368507</changed>
  <title><![CDATA[Georgia Tech Research in Computer Vision Signals Next Innovations in AI]]></title>
  <body><![CDATA[<p>Computer vision enables AI to see the world. It’s already being used for self-driving vehicles, medical imaging, face recognition, and more.&nbsp;</p><p>Georgia Tech faculty and student experts advancing this field were in action in June at the globally renowned <a href="https://cvpr.thecvf.com/">CVPR conference</a> from IEEE and the Computer Vision Foundation. Georgia Tech was in the top 10% of all organizations for lead authors and the top 4% for number of papers. More than 2,000 organizations had research accepted into CVPR's main program.</p><p><a href="https://youtu.be/chIP-Qg_D-w">Watch the video</a> and hear from Tech experts about what’s new and what’s coming next. Featured students include College of Computing experts Fiona Ryan, Chengyue Huang, Brisa Maneechotesuwan, and Lex Whalen.</p><p>These computer vision researchers are showing how they are extending AI capabilities with image and video data.</p><p>HIGHLIGHTS:</p><p>- College of Computing faculty, from the Schools of Interactive Computing (IC) and Computer Science (CS), represented the majority of Tech's faculty in the CVPR papers program (8 of 10 faculty).</p><p>- IC faculty Zsolt Kira and Bo Zhu each coauthored an oral paper, a distinction given to the top 3% of accepted papers. IC faculty member Judy Hoffman coauthored two highlight papers, the top 20% of acceptances.</p><p>- Tech experts were on 30 research paper teams across 16 research areas. Topics with more than one Tech expert included:</p><p>• Image/video synthesis &amp; generation<br>• Efficient and scalable vision<br>• Multi-modal learning<br>• Datasets and evaluation<br>• Humans: Face, body, gesture, etc.<br>• Vision, language, and reasoning&nbsp;<br>• Autonomous driving<br>• Computational imaging</p>]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2025-06-24T00:00:00-04:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Computer vision enables AI to see the world. It’s already being used for self-driving vehicles, medical imaging, face recognition, and more. Watch the video and hear from Tech experts about what’s new and what’s coming next. ]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p>Computer vision enables AI to see the world. It’s already being used for self-driving vehicles, medical imaging, face recognition, and more. <a href="https://youtu.be/chIP-Qg_D-w">Watch the video</a> and hear from Tech experts about what’s new and what’s coming next.&nbsp;</p><p>Georgia Tech faculty and student experts advancing this field were in action in June at the globally renowned <a href="https://cvpr.thecvf.com/">CVPR conference</a> from IEEE and the Computer Vision Foundation. Georgia Tech was in the top 10% of all organizations for lead authors and the top 4% for number of papers. More than 2,000 organizations had research accepted into CVPR's main program.</p>]]></value>
    </item>
  </field_summary>
  <field_media>
          <item>
        <nid>
          <node id="677478">
            <nid>677478</nid>
            <type>image</type>
            <title><![CDATA[CVPR 2025]]></title>
            <body><![CDATA[<p>CVPR 2025</p>]]></body>
                          <field_image>
                <item>
                  <fid>261380</fid>
                  <filename><![CDATA[_MG_1920.JPG]]></filename>
                  <filepath><![CDATA[/sites/default/files/2025/07/24/_MG_1920.JPG]]></filepath>
                  <file_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/07/24/_MG_1920.JPG]]></file_full_path>
                  <filemime>image/jpeg</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[CVPR 2025]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
      </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[jpreston7@gatech.edu]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[<p><a href="mailto:jpreston7@gatech.edu">Joshua Preston</a><br>Communications Manager, Marketing and Research<br>College of Computing<br>jpreston7@gatech.edu</p>]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <!--  TO DO: correct to not conflate categories and news room topics  -->
  <links_related> </links_related>
  <files> </files>
  <og_groups>
          <item>47223</item>
      </og_groups>
  <og_groups_both>
          <item>
        <![CDATA[Artificial Intelligence]]>
      </item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>194606</tid>
        <value><![CDATA[Artificial Intelligence]]></value>
      </item>
      </field_categories>
  <core_research_areas>
          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>
      </core_research_areas>
  <field_news_room_topics>
      </field_news_room_topics>
  <og_groups_both>
          <item><![CDATA[College of Computing]]></item>
      </og_groups_both>
  <field_keywords>
          <item>
        <tid>192863</tid>
        <value><![CDATA[go-ai]]></value>
      </item>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
