{"636432":{"#nid":"636432","#data":{"type":"news","title":"New AI Method Lets Robots Get By With a Little Help From Their Friends","body":[{"value":"\u003Cp\u003ENew artificial intelligence (AI) research is using deep learning to improve the efficiency of communications between AI-enabled agents \u0026ndash; like robots, drones, and self-driving cars \u0026ndash; that are working together to solve computer vision and perception tasks.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWithin the new multi-stage communications deep learning framework developed by researchers from the\u0026nbsp;\u003Ca href=\u0022https:\/\/ml.gatech.edu\/\u0022 rel=\u0022noopener noreferrer\u0022 target=\u0022_blank\u0022\u003EMachine Learning Center at Georgia Tech\u003C\/a\u003E, a robot drone with a blocked view of a collapsed building, for example, can query its teammates until it finds one with relevant details to fill in the blanks.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;The current model is for all agents to be talking to all agents at the same time, a fully connected network, via high bandwidth connection, even when it\u0026rsquo;s not necessary. 
Our goal is to maximize accuracy of perception tasks, while minimizing bandwidth usage,\u0026rdquo; said\u0026nbsp;\u003Cstrong\u003EZsolt Kira\u003C\/strong\u003E, associate director of the Machine Learning Center and an assistant professor in the College of Computing.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor the specific perception task of classifying each pixel of an image, known as semantic segmentation, Kira says that the new framework has achieved this goal by improving accuracy by as much as 20 percent while using just a fourth of the bandwidth required by current state-of-the-art models.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Rather than communicating all at once, in our model, each agent learns when, with whom, and what to communicate in order to complete an assigned perception task as efficiently as possible,\u0026rdquo; said Kira, who advises machine learning (ML) Ph.D. student\u0026nbsp;\u003Cstrong\u003EYen-Cheng Liu\u003C\/strong\u003E, lead researcher on the project.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe new approach limits the amount of communication needed by breaking the process down into three stages within the team\u0026rsquo;s deep learning framework.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIn the request stage, the robot with the blocked view or degraded sensor sends an extremely small query to each of its teammates. In the matching stage, the other agents evaluate the query to see if they have relevant information to share with the initiator.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We refer to this as a handshake mechanism to determine if communication is even needed in the first place and if it is, what information to transmit, and who to send it to,\u0026rdquo; said Kira.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDuring the connect stage, the initiating robot integrates details provided by its teammates to fill the gaps in its own observations. 
The agent then uses this update to improve its estimates for the overall task.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFor the study, the research team \u0026ndash; which includes ML Ph.D. students\u0026nbsp;\u003Cstrong\u003EJunjiao Tian\u003C\/strong\u003E and\u0026nbsp;\u003Cstrong\u003ENathaniel Glaser\u003C\/strong\u003E\u0026nbsp;\u0026ndash; developed a training dataset within AirSim, an open-source simulator for drones, cars, and other autonomous or semiautonomous vehicles. The multi-view dataset, known as AirSim-MAP, is based on data gathered from a team of five virtual drones flying through a dynamic landscape. Data captured includes RGB images, depth maps, pose, and semantic segmentation masks for each agent.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;We\u0026rsquo;ll be releasing this dataset soon to allow other researchers to explore similar problems,\u0026rdquo; said Kira, director of the Robotics Perception and Learning lab, where the study was completed.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe results of the team\u0026rsquo;s work are being presented virtually this week at the 2020 IEEE International Conference on Robotics and Automation in a paper titled\u0026nbsp;\u003Cem\u003EWho2com: Collaborative Perception via Learnable Handshake Communications\u003C\/em\u003E. 
The research is supported by a grant from the Office of Naval Research (N00014-18-1-2829).\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"A new low-bandwidth approach lets robots working as a team share information to complete a common task."}],"uid":"32045","created_gmt":"2020-06-23 17:03:16","changed_gmt":"2020-06-23 17:07:25","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2020-06-23T00:00:00-04:00","iso_date":"2020-06-23T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"636433":{"id":"636433","type":"image","title":"Robots sharing info approach","body":null,"created":"1592931945","gmt_created":"2020-06-23 17:05:45","changed":"1592931945","gmt_changed":"2020-06-23 17:05:45","alt":"screenshot of drones working as a team in a virtual environment","file":{"fid":"242161","name":"Screen Shot 2020-06-23 at 1.01.02 PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202020-06-23%20at%201.01.02%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202020-06-23%20at%201.01.02%20PM.png","mime":"image\/png","size":1849045,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202020-06-23%20at%201.01.02%20PM.png?itok=59QnGD09"}}},"media_ids":["636433"],"groups":[{"id":"47223","name":"College of Computing"}],"categories":[{"id":"152","name":"Robotics"}],"keywords":[{"id":"667","name":"robotics"},{"id":"46361","name":"GT computing"},{"id":"105641","name":"zsolt kira"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAlbert Snedeker, Sr. 
Communications Manager\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Ca href=\u0022mailto:albert.snedeker@cc.gatech.edu?subject=Robotics%20Research\u0022\u003Ealbert.snedeker@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["albert.snedeker@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}}}