{"620601":{"#nid":"620601","#data":{"type":"news","title":"Could a Robot Save People from a Burning Building? Georgia Tech is Pushing New Robotics Research in that Direction","body":[{"value":"\u003Cp\u003EFor several years, scientists have been training intelligent agents on images and other data so that machines can learn to recognize what they see. Researchers are now starting to work toward training robots equipped with these stores of data to be able to make better autonomous decisions.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EZsolt Kira\u003C\/strong\u003E, associate director of the \u003Ca href=\u0022http:\/\/ml.gatech.edu\/\u0022\u003EMachine Learning Center at Georgia Tech\u003C\/a\u003E, and Georgia Tech Ph.D. student \u003Cstrong\u003EChih-Yao Ma\u003C\/strong\u003E have published new research that improves how autonomous robots move through their surroundings. This work is in collaboration with \u003Cstrong\u003ECaiming Xiong,\u003C\/strong\u003E director of \u003Ca href=\u0022https:\/\/www.salesforce.com\/research\/\u0022\u003ESalesforce Research\u003C\/a\u003E, and researchers from the \u003Ca href=\u0022https:\/\/www.umd.edu\/\u0022\u003EUniversity of Maryland, College Park\u003C\/a\u003E.\u0026nbsp;Georgia Tech professor \u003Cstrong\u003EGhassan AlRegib\u0026nbsp;\u003C\/strong\u003Eand Ph.D. student\u0026nbsp;\u003Cstrong\u003EJiasen Lu \u003C\/strong\u003Eare also paper authors.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EExisting methods have allowed robots to navigate unknown environments by combining a 360-degree panoramic view of their surroundings with programmed instructions that describe how to accomplish a goal. 
The goal could be to locate a doctor\u0026rsquo;s office in an office complex or find the fastest exit route in a building.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe new Georgia Tech research by Kira and his team improves the accuracy with which a robot completes an assigned navigation task by 8 percent, a significant increase for autonomous navigation systems. The method introduces new mechanisms that give autonomous systems reasoning skills, as well as the ability to essentially correct their own mistakes.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;By teaching robots to more effectively navigate unknown environments, robots could be used in the household or for autonomous vehicles,\u0026rdquo; said Kira.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe researchers say their method\u0026rsquo;s improved navigation accuracy could be particularly helpful in the future for scenarios too dangerous for humans, such as a robot performing search and rescue or entering a burning building. Robots could also take on more mundane (but essential) tasks like making and serving morning coffee.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKira\u0026rsquo;s team began with an existing robotics technique, the attention mechanism, which teaches robots to move autonomously through their environment using written instructions. The mechanism also helps the robot identify which step should be completed next.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EKira and Ma added a reasoning component that allows the robot to estimate how well it is doing in completing the task and how close it is to finishing. They also added a new \u0026ldquo;rollback\u0026rdquo; function, which uses a neural network trained to help the agent determine whether it has made a mistake while following instructions. If it has, the agent reverts to its most recent successfully completed step in an effort to correct the error. 
This improvement significantly reduces the number of steps needed to reach the goal.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;These added components helped increase the accuracy of the attention mechanism and led to higher success rates in completing the set of instructions,\u0026rdquo; said Kira.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe work is published in two papers, \u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1901.03035.pdf\u0022\u003E\u0026ldquo;Self-Monitoring Navigation Agent via Auxiliary Progress Estimation\u0026rdquo;\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/arxiv.org\/pdf\/1903.01602.pdf\u0022\u003E\u0026ldquo;The Regretful Agent: Heuristic-Aided Navigation through Progress Estimation.\u0026rdquo;\u003C\/a\u003E The papers will be presented, respectively, at the \u003Ca href=\u0022https:\/\/iclr.cc\/\u0022\u003EInternational Conference on Learning Representations (ICLR)\u003C\/a\u003E May 6-9 and at the \u003Ca href=\u0022http:\/\/cvpr2019.thecvf.com\/\u0022\u003EComputer Vision and Pattern Recognition (CVPR)\u003C\/a\u003E conference June 16-20.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"New research on autonomous robots from the Machine Learning Center at Georgia Tech, Salesforce Research, and the University of Maryland will be presented at two major AI conferences this summer. 
"}],"uid":"34773","created_gmt":"2019-04-17 20:21:55","changed_gmt":"2019-05-07 16:48:03","author":"ablinder6","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-04-17T00:00:00-04:00","iso_date":"2019-04-17T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"620599":{"id":"620599","type":"image","title":"New research on autonomous robots from the Machine Learning Center at Georgia Tech in collaboration with Salesforce Research and the University of Maryland will be presented at two major AI conferences this summer, ICLR and CVPR. ","body":null,"created":"1555532208","gmt_created":"2019-04-17 20:16:48","changed":"1555532227","gmt_changed":"2019-04-17 20:17:07","alt":"","file":{"fid":"236316","name":"Screen Shot 2019-02-20 at 3.43.27 PM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-02-20%20at%203.43.27%20PM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-02-20%20at%203.43.27%20PM.png","mime":"image\/png","size":1026779,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-02-20%20at%203.43.27%20PM.png?itok=MnkVIG9U"}},"620600":{"id":"620600","type":"image","title":"The graphs on the left represent a baseline of how well an agent is moving through a set of tasks. The two graphs on the right represent the work discussed in \u201cSelf-Monitoring Navigation Agent via Auxiliary Progress Estimation\u201d. 
The darker the green is an","body":null,"created":"1555532275","gmt_created":"2019-04-17 20:17:55","changed":"1555532275","gmt_changed":"2019-04-17 20:17:55","alt":"","file":{"fid":"236317","name":"Screen Shot 2019-02-20 at 9.43.05 AM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-02-20%20at%209.43.05%20AM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-02-20%20at%209.43.05%20AM.png","mime":"image\/png","size":38510,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-02-20%20at%209.43.05%20AM.png?itok=6QURy2PL"}}},"media_ids":["620599","620600"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"576481","name":"ML@GT"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"152","name":"Robotics"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EAllie McFadden\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n\r\n\u003Cp\u003Eallie.mcfadden@cc.gatech.edu\u003C\/p\u003E\r\n","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}