<node id="658913">
  <nid>658913</nid>
  <type>news</type>
  <uid>
    <user id="27592"><![CDATA[27592]]></user>
  </uid>
  <created>1655307657</created>
  <changed>1655307931</changed>
  <title><![CDATA[Georgia Tech Presents Latest in Machine Learning Research at Computer Vision and Pattern Recognition Conference June 19-24]]></title>
  <body><![CDATA[<p>Georgia Institute of Technology researchers will present new technical findings in artificial intelligence, machine learning, and computer vision research and applications at the Computer Vision and Pattern Recognition (CVPR) conference taking place from June 19-24, 2022, in New Orleans, Louisiana, and virtually.</p>

<p>The institute is a leading contributor to the technical program, and its researchers will present 11 papers in the following tracks:</p>

<ul>
	<li>3D from multi-view and sensors</li>
	<li>Datasets and evaluation</li>
	<li>Navigation and autonomous driving</li>
	<li>Recognition: detection, categorization, retrieval</li>
	<li>Self- &amp; semi- &amp; meta- &amp; unsupervised learning</li>
	<li>Vision + language</li>
	<li>Vision applications and systems</li>
</ul>

<p>&ldquo;Researchers in the Machine Learning Center at Georgia Tech aim to research and develop innovative and sustainable technologies using machine learning and artificial intelligence that serve broader communities in socially and ethically responsible ways,&rdquo; said Irfan Essa, director of the center and senior associate dean in the College of Computing. &ldquo;The GT research at CVPR reflects this broader goal, and we are actively building pathways to connect our experts to explore the implications of this technology in the world.&rdquo;</p>

<p>In their current work, Georgia Tech researchers at CVPR are collaborating with more than 100 peer authors from dozens of organizations spanning industry, government, and academia.</p>

<p>The conference will draw leading authors, academics, and experts in key areas of artificial intelligence with an expected crowd of more than 7,500 attendees this year. Hosted by the IEEE Computer Society (IEEE CS) and the Computer Vision Foundation (CVF), CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses.</p>

<p>ML@GT has created an <a href="https://public.tableau.com/views/CVPR2022/Dashboard1?:showVizHome=no">interactive visual analysis</a> of the CVPR 2022 papers program to show current trends in the field. The analysis breaks down the number of papers and authors by research area and allows users to explore areas of interest, including oral and poster papers on a particular topic. Research can also be narrowed down to particular institutions.</p>

<p>Details and paper links for Georgia Tech&rsquo;s work at CVPR 2022 are listed below.</p>

<p>&nbsp;</p>

<h2><strong>Georgia Tech Research at CVPR 2022</strong></h2>

<p>&nbsp;</p>

<p><strong>3D FROM MULTI-VIEW AND SENSORS</strong></p>

<p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Hruby_Learning_To_Solve_Hard_Minimal_Problems_CVPR_2022_paper.html"><strong>Learning To Solve Hard Minimal Problems</strong></a><br />
<em>Petr Hruby, Timothy Duff, Anton Leykin, Tomas Pajdla</em></p>

<p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Kundu_Panoptic_Neural_Fields_A_Semantic_Object-Aware_Neural_Scene_Representation_CVPR_2022_paper.html"><strong>Panoptic Neural Fields: A Semantic Object-Aware Neural Scene Representation</strong></a><br />
<em>Abhijit Kundu, Kyle Genova, Xiaoqi Yin, Alireza Fathi, Caroline Pantofaru, Leonidas J. Guibas, Andrea Tagliasacchi, Frank Dellaert, Thomas Funkhouser</em></p>

<p>&nbsp;</p>

<p><strong>DATASETS AND EVALUATION</strong></p>

<p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Grauman_Ego4D_Around_the_World_in_3000_Hours_of_Egocentric_Video_CVPR_2022_paper.html"><strong>Ego4D: Around the World in 3,000 Hours of Egocentric Video</strong></a><br />
<em>Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina Gonz&aacute;lez, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Weslie Khoo, J&aacute;chym Kol&aacute;ř, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbel&aacute;ez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik</em></p>

<p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Bryant_Multi-Dimensional_Nuanced_and_Subjective_-_Measuring_the_Perception_of_Facial_CVPR_2022_paper.html"><strong>Multi-Dimensional, Nuanced and Subjective &ndash; Measuring the Perception of Facial Expressions</strong></a><br />
<em>De&#39;Aira Bryant, Siqi Deng, Nashlie Sephus, Wei Xia, Pietro Perona</em></p>

<p>&nbsp;</p>

<p><strong>NAVIGATION AND AUTONOMOUS DRIVING</strong></p>

<p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Partsey_Is_Mapping_Necessary_for_Realistic_PointGoal_Navigation_CVPR_2022_paper.html"><strong>Is Mapping Necessary for Realistic PointGoal Navigation?</strong></a><br />
<em>Ruslan Partsey, Erik Wijmans, Naoki Yokoyama, Oles Dobosevych, Dhruv Batra, Oleksandr Maksymets</em></p>

<p>&nbsp;</p>

<p><strong>RECOGNITION: DETECTION, CATEGORIZATION, RETRIEVAL</strong></p>

<p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Li_Cross-Domain_Adaptive_Teacher_for_Object_Detection_CVPR_2022_paper.html"><strong>Cross-Domain Adaptive Teacher for Object Detection</strong></a><br />
<em>Yu-Jhe Li, Xiaoliang Dai, Chih-Yao Ma, Yen-Cheng Liu, Kan Chen, Bichen Wu, Zijian He, Kris Kitani, Peter Vajda</em></p>

<p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Zhang_Group_R-CNN_for_Weakly_Semi-Supervised_Object_Detection_With_Points_CVPR_2022_paper.html"><strong>Group R-CNN for Weakly Semi-Supervised Object Detection With Points</strong></a><br />
<em>Shilong Zhang, Zhuoran Yu, Liyang Liu, Xinjiang Wang, Aojun Zhou, Kai Chen</em></p>

<p>&nbsp;</p>

<p><strong>SELF- &amp; SEMI- &amp; META- &amp; UNSUPERVISED LEARNING</strong></p>

<p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Liu_Unbiased_Teacher_v2_Semi-Supervised_Object_Detection_for_Anchor-Free_and_Anchor-Based_CVPR_2022_paper.html"><strong>Unbiased Teacher v2: Semi-Supervised Object Detection for Anchor-Free and Anchor-Based Detectors</strong></a><br />
<em>Yen-Cheng Liu, Chih-Yao Ma, Zsolt Kira</em></p>

<p>&nbsp;</p>

<p><strong>VISION + LANGUAGE</strong></p>

<p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Kuo_Beyond_a_Pre-Trained_Object_Detector_Cross-Modal_Textual_and_Visual_Context_CVPR_2022_paper.html"><strong>Beyond a Pre-Trained Object Detector: Cross-Modal Textual and Visual Context for Image Captioning</strong></a><br />
<em>Chia-Wen Kuo, Zsolt Kira</em></p>

<p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Ramrakhya_Habitat-Web_Learning_Embodied_Object-Search_Strategies_From_Human_Demonstrations_at_Scale_CVPR_2022_paper.html"><strong>Habitat-Web: Learning Embodied Object-Search Strategies From Human Demonstrations at Scale</strong></a><br />
<em>Ram Ramrakhya, Eric Undersander, Dhruv Batra, Abhishek Das</em></p>

<p>&nbsp;</p>

<p><strong>VISION APPLICATIONS AND SYSTEMS</strong></p>

<p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Datta_Episodic_Memory_Question_Answering_CVPR_2022_paper.html"><strong>Episodic Memory Question Answering</strong></a><br />
<em>Samyak Datta, Sameer Dharur, Vincent Cartillier, Ruta Desai, Mukul Khanna, Dhruv Batra, Devi Parikh</em></p>

<p>&nbsp;</p>

<p><strong>DEMO</strong></p>

<p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Vellaichamy_DetectorDetective_Investigating_the_Effects_of_Adversarial_Examples_on_Object_Detectors_CVPR_2022_paper.html"><strong>DetectorDetective: Investigating the Effects of Adversarial Examples on Object Detectors</strong></a><br />
<em>Sivapriya Vellaichamy, Matthew Hull, Zijie J. Wang, Nilaksh Das, ShengYun Peng, Haekyu Park, Duen Horng (Polo) Chau</em></p>

<p><a href="https://openaccess.thecvf.com/content/CVPR2022/html/Lee_VisCUIT_Visual_Auditor_for_Bias_in_CNN_Image_Classifier_CVPR_2022_paper.html"><strong>VisCUIT: Visual Auditor for Bias in CNN Image Classifier</strong></a><br />
<em>Seongmin Lee, Zijie J. Wang, Judy Hoffman, Duen Horng (Polo) Chau</em></p>

<p>&nbsp;</p>

<p><strong>WORKSHOP</strong></p>

<p><a href="https://sites.google.com/view/mabe22/">Multi-Agent Behavior: Representation, Modeling, Measurement, and Applications</a></p>

<p><strong>Learning Behavior Representations Through Multi-Timescale Bootstrapping</strong><br />
<em>Mehdi Azabou, Michael Mendelson, Maks Sorokin, Shantanu Thakoor, Nauman Ahad, Carolina Urzay, Mohammad Gheshlaghi Azar, Eva L. Dyer</em></p>

<p>&nbsp;</p>

<p>&nbsp;</p>

<p>&nbsp;</p>
]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2022-06-15T00:00:00-04:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Georgia Institute of Technology researchers will present new technical findings in artificial intelligence, machine learning, and computer vision research and applications at the Computer Vision and Pattern Recognition (CVPR) conference June 19-24.]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_summary>
  <field_media>
          <item>
        <nid>
          <node id="658915">
            <nid>658915</nid>
            <type>image</type>
            <title><![CDATA[CVPR 2022 visual analysis]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>249767</fid>
                  <filename><![CDATA[Data Viz_CVPR.jpg]]></filename>
                  <filepath><![CDATA[/sites/default/files/images/Data%20Viz_CVPR.jpg]]></filepath>
                  <file_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Data%20Viz_CVPR.jpg]]></file_full_path>
                  <filemime>image/jpeg</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
      </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[<p><a href="mailto:jpreston7@gatech.edu?subject=CVPR%20news">Josh Preston</a><br />
Research Communications Manager<br />
College of Computing</p>
]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <!--  TO DO: correct to not conflate categories and news room topics  -->
  <field_categories>
      </field_categories>
  <core_research_areas>
          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>
          <term tid="39521"><![CDATA[Robotics]]></term>
      </core_research_areas>
  <field_news_room_topics>
      </field_news_room_topics>
  <links_related>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>576481</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[ML@GT]]></item>
      </og_groups_both>
  <field_keywords>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
