{"658913":{"#nid":"658913","#data":{"type":"news","title":"Georgia Tech Presents Latest in Machine Learning Research at Computer Vision and Pattern Recognition Conference June 19-24","body":[{"value":"\u003Cp\u003EGeorgia Institute of Technology researchers will present new technical findings in artificial intelligence, machine learning, and computer vision research and applications at the Computer Vision and Pattern Recognition (CVPR) conference taking place from June 19-24, 2022, in New Orleans, Louisiana, and virtually.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe institute is a leading contributor in the technical program and researchers will present 11 papers in the following tracks:\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E3D from multi-view and sensors\u003C\/li\u003E\r\n\t\u003Cli\u003EDatasets and evaluation\u003C\/li\u003E\r\n\t\u003Cli\u003ENavigation and autonomous driving\u003C\/li\u003E\r\n\t\u003Cli\u003ERecognition: detection, categorization, retrieval\u003C\/li\u003E\r\n\t\u003Cli\u003ESelf-\u0026amp; semi-\u0026amp; meta- \u0026amp; unsupervised learning\u003C\/li\u003E\r\n\t\u003Cli\u003EVision + language\u003C\/li\u003E\r\n\t\u003Cli\u003EVision applications and systems\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Researchers in the Machine Learning Center at Georgia Tech aim to research and develop innovative and sustainable technologies using machine learning and artificial intelligence that serve broader communities in socially and ethically responsible ways,\u0026rdquo; said Irfan Essa, director of the center and senior associate dean in the College of Computing. \u0026ldquo;The GT research at CVPR reflects this broader goal, and we are actively building pathways to connect our experts to explore the implications of this technology in the world.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech researchers at CVPR are collaborating in their current work with more than 100 peer authors from dozens of organizations that span industry, government, and academia.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe conference will draw leading authors, academics, and experts in key areas of artificial intelligence with an expected crowd of more than 7,500 attendees this year. Hosted by the IEEE Computer Society (IEEE CS) and the Computer Vision Foundation (CVF), CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EML@GT has created an \u003Ca href=\u0022https:\/\/public.tableau.com\/views\/CVPR2022\/Dashboard1?:showVizHome=no\u0022\u003Einteractive visual analysis\u003C\/a\u003E of the CVPR 2022 papers program to show current trends in the field. The analysis breaks down the number of papers and authors by research area and allows users to explore areas of interest, including oral and poster papers on a particular topic. 
Results can also be narrowed to particular institutions.

Details and paper links for Georgia Tech's work at CVPR are below.

## Georgia Tech Research at CVPR 2022

**3D FROM MULTI-VIEW AND SENSORS**

**[Learning To Solve Hard Minimal Problems](https://openaccess.thecvf.com/content/CVPR2022/html/Hruby_Learning_To_Solve_Hard_Minimal_Problems_CVPR_2022_paper.html)**
*Petr Hruby, Timothy Duff, Anton Leykin, Tomas Pajdla*

**[Panoptic Neural Fields: A Semantic Object-Aware Neural Scene Representation](https://openaccess.thecvf.com/content/CVPR2022/html/Kundu_Panoptic_Neural_Fields_A_Semantic_Object-Aware_Neural_Scene_Representation_CVPR_2022_paper.html)**
*Abhijit Kundu, Kyle Genova, Xiaoqi Yin, Alireza Fathi, Caroline Pantofaru, Leonidas J. Guibas, Andrea Tagliasacchi, Frank Dellaert, Thomas Funkhouser*

**DATASETS AND EVALUATION**

**[Ego4D: Around the World in 3,000 Hours of Egocentric Video](https://openaccess.thecvf.com/content/CVPR2022/html/Grauman_Ego4D_Around_the_World_in_3000_Hours_of_Egocentric_Video_CVPR_2022_paper.html)**
*Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina González, James Hillis, Xuhua Huang, Yifei Huang, Wenqi Jia, Weslie Khoo, Jáchym Kolář, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbeláez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik*
**[Multi-Dimensional, Nuanced and Subjective – Measuring the Perception of Facial Expressions](https://openaccess.thecvf.com/content/CVPR2022/html/Bryant_Multi-Dimensional_Nuanced_and_Subjective_-_Measuring_the_Perception_of_Facial_CVPR_2022_paper.html)**
*De'Aira Bryant, Siqi Deng, Nashlie Sephus, Wei Xia, Pietro Perona*

**NAVIGATION AND AUTONOMOUS DRIVING**

**[Is Mapping Necessary for Realistic PointGoal Navigation?](https://openaccess.thecvf.com/content/CVPR2022/html/Partsey_Is_Mapping_Necessary_for_Realistic_PointGoal_Navigation_CVPR_2022_paper.html)**
*Ruslan Partsey, Erik Wijmans, Naoki Yokoyama, Oles Dobosevych, Dhruv Batra, Oleksandr Maksymets*

**RECOGNITION: DETECTION, CATEGORIZATION, RETRIEVAL**

**[Cross-Domain Adaptive Teacher for Object Detection](https://openaccess.thecvf.com/content/CVPR2022/html/Li_Cross-Domain_Adaptive_Teacher_for_Object_Detection_CVPR_2022_paper.html)**
*Yu-Jhe Li, Xiaoliang Dai, Chih-Yao Ma, Yen-Cheng Liu, Kan Chen, Bichen Wu, Zijian He, Kris Kitani, Peter Vajda*

**[Group R-CNN for Weakly Semi-Supervised Object Detection With Points](https://openaccess.thecvf.com/content/CVPR2022/html/Zhang_Group_R-CNN_for_Weakly_Semi-Supervised_Object_Detection_With_Points_CVPR_2022_paper.html)**
*Shilong Zhang, Zhuoran Yu, Liyang Liu, Xinjiang Wang, Aojun Zhou, Kai Chen*

**SELF- & SEMI- & META- & UNSUPERVISED LEARNING**

**[Unbiased Teacher v2: Semi-Supervised Object Detection for Anchor-Free and Anchor-Based Detectors](https://openaccess.thecvf.com/content/CVPR2022/html/Liu_Unbiased_Teacher_v2_Semi-Supervised_Object_Detection_for_Anchor-Free_and_Anchor-Based_CVPR_2022_paper.html)**
*Yen-Cheng Liu, Chih-Yao Ma, Zsolt Kira*

**VISION + LANGUAGE**

**[Beyond a Pre-Trained Object Detector: Cross-Modal Textual and Visual Context for Image Captioning](https://openaccess.thecvf.com/content/CVPR2022/html/Kuo_Beyond_a_Pre-Trained_Object_Detector_Cross-Modal_Textual_and_Visual_Context_CVPR_2022_paper.html)**
*Chia-Wen Kuo, Zsolt Kira*

**[Habitat-Web: Learning Embodied Object-Search Strategies From Human Demonstrations at Scale](https://openaccess.thecvf.com/content/CVPR2022/html/Ramrakhya_Habitat-Web_Learning_Embodied_Object-Search_Strategies_From_Human_Demonstrations_at_Scale_CVPR_2022_paper.html)**
*Ram Ramrakhya, Eric Undersander, Dhruv Batra, Abhishek Das*

**VISION APPLICATIONS AND SYSTEMS**

**[Episodic Memory Question Answering](https://openaccess.thecvf.com/content/CVPR2022/html/Datta_Episodic_Memory_Question_Answering_CVPR_2022_paper.html)**
*Samyak Datta, Sameer Dharur, Vincent Cartillier, Ruta Desai, Mukul Khanna, Dhruv Batra, Devi Parikh*

**DEMO**

**[DetectorDetective: Investigating the Effects of Adversarial Examples on Object Detectors](https://openaccess.thecvf.com/content/CVPR2022/html/Vellaichamy_DetectorDetective_Investigating_the_Effects_of_Adversarial_Examples_on_Object_Detectors_CVPR_2022_paper.html)**
*Sivapriya Vellaichamy, Matthew Hull, Zijie J. Wang, Nilaksh Das, ShengYun Peng, Haekyu Park, Duen Horng (Polo) Chau*

**[VisCUIT: Visual Auditor for Bias in CNN Image Classifier](https://openaccess.thecvf.com/content/CVPR2022/html/Lee_VisCUIT_Visual_Auditor_for_Bias_in_CNN_Image_Classifier_CVPR_2022_paper.html)**
*Seongmin Lee, Zijie J. Wang, Judy Hoffman, Duen Horng (Polo) Chau*

**WORKSHOP**

[Multi-Agent Behavior: Representation, Modeling, Measurement, and Applications](https://sites.google.com/view/mabe22/)

**Learning Behavior Representations Through Multi-Timescale Bootstrapping**
*Mehdi Azabou, Michael Mendelson, Maks Sorokin, Shantanu Thakoor, Nauman Ahad, Carolina Urzay, Mohammad Gheshlaghi Azar, Eva L. Dyer*
**Media Contact**

[Josh Preston](mailto:jpreston7@gatech.edu?subject=CVPR%20news)
Research Communications Manager
College of Computing