<node id="599125">
  <nid>599125</nid>
  <type>event</type>
  <uid>
    <user id="27707"><![CDATA[27707]]></user>
  </uid>
  <created>1511789299</created>
  <changed>1511789299</changed>
  <title><![CDATA[PhD Defense by Vivian Chu]]></title>
  <body><![CDATA[<p><strong>Title</strong>: Teaching Robots about Human Environments</p>

<p>&nbsp;</p>

<p>Vivian Chu</p>

<p>Robotics Ph.D. Candidate</p>

<p>School of Interactive Computing</p>

<p>Georgia Institute of Technology</p>

<p>&nbsp;</p>

<p><strong>Date</strong>: December 6th, 2017 (Wednesday)</p>

<p><strong>Time</strong>: 1:00pm to 3:00pm (EST)</p>

<p><strong>Location</strong>: CCB 340</p>

<p>&nbsp;</p>

<p><strong>Committee</strong>:</p>

<p>-------------------</p>

<p>Dr. Andrea L. Thomaz (Co-Advisor), Department of Electrical and Computer Engineering, The University of Texas at Austin</p>

<p>Dr. Sonia Chernova (Co-Advisor), School of Interactive Computing, Georgia Institute of Technology</p>

<p>Dr. Henrik I. Christensen, Department of Computer Science and Engineering, University of California, San Diego</p>

<p>Dr. Charles C. Kemp, School of Biomedical Engineering, Georgia Institute of Technology</p>

<p>Dr. Siddhartha Srinivasa, School of Computer Science and Engineering, University of Washington</p>

<p>&nbsp;</p>

<p><strong>Abstract:</strong></p>

<p>-------------------</p>

<p>&nbsp;</p>

<p>The real world is complex and unstructured, and it contains high levels of uncertainty. To operate in such environments, robots need to learn and adapt. One framework that supports this learning and adaptation is modeling the world using affordances. With affordance models, robots can reason about which actions they need to take to achieve a goal. This thesis provides a framework that allows robots to learn these models through interaction and human guidance.</p>

<p>&nbsp;</p>

<p>Within robotic affordance learning, there has been a strong focus on learning visual skill representations, due to the difficulty of getting robots to interact with the environment. Furthermore, utilizing different modalities (e.g., touch and sound) introduces challenges such as mismatched sampling rates and data resolutions. This thesis addresses these challenges by providing several methods for interactively gathering multisensory data using <em>human-guided robot self-exploration</em> and an approach to integrating visual, haptic, and auditory data for <em>adaptive object manipulation</em>.</p>

<p>&nbsp;</p>

<p>We take a human-centered approach to tackling the challenge of robots operating in unstructured environments. This thesis makes the following contributions to the field of robot learning: (1) a <em>human-centered framework for robot affordance learning</em> that demonstrates how human teachers can guide the robot in the modeling process throughout the entire affordance-learning pipeline; (2) a <em>human-guided robot self-exploration framework</em> that contributes several algorithms that use human guidance to enable robots to efficiently explore the environment and learn affordance models for a diverse range of manipulation tasks; (3) a <em>multisensory affordance model</em> that integrates visual, haptic, and audio input; and (4) a novel control framework, built on multisensory data and human-guided exploration, that allows <em>adaptation of affordances</em> for object manipulation.</p>
]]></body>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Teaching Robots about Human Environments]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_summary>
  <field_time>
    <item>
      <value><![CDATA[2017-12-06T13:00:00-05:00]]></value>
      <value2><![CDATA[2017-12-06T15:00:00-05:00]]></value2>
      <rrule><![CDATA[]]></rrule>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_time>
  <field_fee>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_fee>
  <field_extras>
      </field_extras>
  <field_audience>
          <item>
        <value><![CDATA[Faculty/Staff]]></value>
      </item>
          <item>
        <value><![CDATA[Public]]></value>
      </item>
          <item>
        <value><![CDATA[Graduate students]]></value>
      </item>
          <item>
        <value><![CDATA[Undergraduate students]]></value>
      </item>
      </field_audience>
  <field_media>
      </field_media>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_phone>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_phone>
  <field_url>
    <item>
      <url><![CDATA[]]></url>
      <title><![CDATA[]]></title>
            <attributes><![CDATA[]]></attributes>
    </item>
  </field_url>
  <field_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_email>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <links_related>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>221981</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[Graduate Studies]]></item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>1788</tid>
        <value><![CDATA[Other/Miscellaneous]]></value>
      </item>
      </field_categories>
  <field_keywords>
          <item>
        <tid>100811</tid>
        <value><![CDATA[Phd Defense]]></value>
      </item>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
