{"599125":{"#nid":"599125","#data":{"type":"event","title":"PhD Defense by Vivian Chu","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003ETitle\u003C\/strong\u003E:\u0026nbsp;Teaching Robots about Human Environments\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EVivian Chu\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERobotics Ph.D. Candidate\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Institute of Technology\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EDate\u003C\/strong\u003E: December 6th, 2017 (Wednesday)\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ETime\u003C\/strong\u003E: 1:00pm to 3:00pm (EST)\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELocation\u003C\/strong\u003E: CCB 340\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ECommittee\u003C\/strong\u003E:\u003C\/p\u003E\r\n\r\n\u003Cp\u003E-------------------\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDr. Andrea L. Thomaz (Co-Advisor), Department of Electrical and Computer Engineering, The University of Texas at Austin\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDr. Sonia Chernova (Co-Advisor), School of Interactive Computing, Georgia Institute of Technology\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDr. Henrik I. Christensen, Department of Computer Science and Engineering, University of California, San Diego\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDr. Charles C. Kemp, School of Biomedical Engineering, Georgia Institute of Technology\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDr. 
Siddhartha Srinivasa, School of Computer Science and Engineering, University of Washington\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAbstract:\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E-------------------\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe real world is complex, unstructured, and contains high levels of uncertainty. To operate in such environments, robots need to learn and adapt. One framework that enables such learning and adaptation is modeling the world using affordances. By modeling the world with affordances, robots can reason about what actions they need to take to achieve a goal. This thesis provides a framework that allows robots to learn these models through interaction and human guidance.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWithin robotic affordance learning, there has been a large focus on learning visual skill representations due to the difficulty of getting robots to interact with the environment. Furthermore, utilizing different modalities (e.g., touch and sound) introduces challenges such as different sampling rates and data resolution. This thesis addresses these challenges by providing several methods to interactively gather multisensory data using \u003Cem\u003Ehuman-guided robot self-exploration\u003C\/em\u003E\u0026nbsp;and an approach to integrate visual, haptic, and auditory data for \u003Cem\u003Eadaptive object manipulation\u003C\/em\u003E.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EWe take a human-centered approach to tackling the challenge of robots operating in unstructured environments. 
The following are the contributions this thesis makes to the field of robot learning: (1) a \u003Cem\u003Ehuman-centered framework for robot affordance learning\u003C\/em\u003E\u0026nbsp;that demonstrates how human teachers can guide the robot in the modeling process throughout the entire pipeline of affordance learning; (2) a \u003Cem\u003Ehuman-guided robot self-exploration framework\u003C\/em\u003E\u0026nbsp;that contributes several algorithms that use human guidance to enable robots to efficiently explore the environment and learn affordance models for a diverse range of manipulation tasks; (3) a \u003Cem\u003Emultisensory affordance model\u003C\/em\u003E\u0026nbsp;that integrates visual, haptic, and audio input; and (4) a novel control framework that allows \u003Cem\u003Eadaptation of affordances\u003C\/em\u003E\u0026nbsp;for object manipulation that utilizes multisensory data and human-guided exploration.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"Teaching Robots about Human Environments"}],"uid":"27707","created_gmt":"2017-11-27 13:28:19","changed_gmt":"2017-11-27 13:28:19","author":"Tatianna Richardson","boilerplate_text":"","field_publication":"","field_article_url":"","field_event_time":{"event_time_start":"2017-12-06T13:00:00-05:00","event_time_end":"2017-12-06T15:00:00-05:00","event_time_end_last":"2017-12-06T15:00:00-05:00","gmt_time_start":"2017-12-06 18:00:00","gmt_time_end":"2017-12-06 20:00:00","gmt_time_end_last":"2017-12-06 20:00:00","rrule":null,"timezone":"America\/New_York"},"extras":[],"groups":[{"id":"221981","name":"Graduate Studies"}],"categories":[],"keywords":[{"id":"100811","name":"Phd Defense"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[{"id":"1788","name":"Other\/Miscellaneous"}],"invited_audience":[{"id":"78761","name":"Faculty\/Staff"},{"id":"78771","name":"Public"},{"id":"174045","name":"Graduate 
students"},{"id":"78751","name":"Undergraduate students"}],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}}}