<node id="674171">
  <nid>674171</nid>
  <type>event</type>
  <uid>
    <user id="27707"><![CDATA[27707]]></user>
  </uid>
  <created>1712952628</created>
  <changed>1712952650</changed>
  <title><![CDATA[PhD Defense by Erin Botti]]></title>
  <body><![CDATA[<p><span><span><strong>Title:</strong> Improving Learning from Demonstration in Real-World Scenarios</span></span></p>

<p>&nbsp;</p>

<p><span><span><strong>Date: </strong>Wednesday, April 17th </span></span></p>

<p><span><span><strong>Time: </strong>12:00pm-2:00pm EDT</span></span></p>

<p><span><span><strong>Location: </strong>Technology Square Research Building (TSRB) - Room 118 Auditorium</span></span></p>

<p><span><span><strong>Zoom:</strong> <a href="https://gatech.zoom.us/j/95722694024">https://gatech.zoom.us/j/95722694024</a></span></span></p>

<p>&nbsp;</p>

<p><span><span><strong>Erin Botti</strong></span></span></p>

<p><span><span>Robotics PhD Student</span></span></p>

<p><span><span>School of Interactive Computing</span></span></p>

<p><span><span>Georgia Institute of Technology</span></span></p>

<p>&nbsp;</p>

<p><span><span><strong>Committee</strong></span></span></p>

<p><span><span>Dr. Matthew Gombolay (Advisor) – School of Interactive Computing, Georgia Institute of Technology</span></span></p>

<p><span><span>Dr. Sonia Chernova – School of Interactive Computing, Georgia Institute of Technology</span></span></p>

<p><span><span>Dr. Charlie Kemp – CTO, Hello Robot Inc.</span></span></p>

<p><span><span>Dr. Agata Rozga – School of Interactive Computing, Georgia Institute of Technology</span></span></p>

<p><span><span>Dr. Maya Cakmak – Computer Science &amp; Engineering Department, University of Washington</span></span></p>

<p>&nbsp;</p>

<p><span><span><strong>Abstract</strong></span></span></p>

<p><span><span>To realize a vision of assistive robots in the home (e.g., care robots for older adults), robots will need the ability to learn new skills and adapt to their users and environment. Every home has a different layout, everyone has differing preferences and needs, and people's preferences and needs will change over time as they age. One solution to this challenge is Learning from Demonstration (LfD), a paradigm that enables novice end-users to teach robots new skills based on human demonstrations of a task. However, LfD is not foolproof. The real world presents challenges from suboptimal teachers, communication errors, and unstructured environments. In this thesis, I aim to develop improvements to LfD that account for robot failure, teacher suboptimality, and target end-users.</span></span></p>

<p>&nbsp;</p>

<p><span><span>First, I conduct multiple human-subjects experiments to evaluate human perceptions of LfD across the general population and target end-users for assistive robots. I assess how people perceive different robot learning algorithms with both the general population and a target population of caregivers. Then I evaluate a person's perceptions of robot failure when that person is teaching the robot via LfD. Lastly, I compare perceptions of teaching a robot via LfD across populations of older and younger adults. Based on these findings, I develop a set of design guidelines and highlight current barriers to real-world deployment of LfD with non-expert users.</span></span></p>

<p>&nbsp;</p>

<p><span><span>One barrier is that robot failure can occur due to demonstrator suboptimality. I develop a novel architecture, Mutual Information-Driven Meta-Learning from Demonstration (MIND MELD), to learn from suboptimal demonstrators' feedback. I then contribute a method, Reciprocal MIND MELD, that enables a robot to provide feedback to a human in order to reduce the demonstrator's suboptimality. Another barrier to LfD is that end-users need more feedback from the robot during failure. I create a framework that Learns Interpretable Features from Interventions (LIFI). Users can intervene when the robot is making a mistake, and the robot communicates a human-intelligible feature conveying its understanding of its own failure. The robot then updates its policy to correctly perform the task.</span></span></p>

<p>&nbsp;</p>
]]></body>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Improving Learning from Demonstration in Real-World Scenarios]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p><span><span>Improving Learning from Demonstration in Real-World Scenarios</span></span></p>
]]></value>
    </item>
  </field_summary>
  <field_time>
    <item>
      <value><![CDATA[2024-04-17T12:00:00-04:00]]></value>
      <value2><![CDATA[2024-04-17T14:00:00-04:00]]></value2>
      <rrule><![CDATA[]]></rrule>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_time>
  <field_fee>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_fee>
  <field_extras>
      </field_extras>
  <field_audience>
          <item>
        <value><![CDATA[Public]]></value>
      </item>
      </field_audience>
  <field_media>
      </field_media>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_location>
    <item>
      <value><![CDATA[Technology Square Research Building (TSRB) - Room 118 Auditorium]]></value>
    </item>
  </field_location>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_phone>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_phone>
  <field_url>
    <item>
      <url><![CDATA[]]></url>
      <title><![CDATA[]]></title>
            <attributes><![CDATA[]]></attributes>
    </item>
  </field_url>
  <field_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_email>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <links_related>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>221981</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[Graduate Studies]]></item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>1788</tid>
        <value><![CDATA[Other/Miscellaneous]]></value>
      </item>
      </field_categories>
  <field_keywords>
          <item>
        <tid>100811</tid>
        <value><![CDATA[Phd Defense]]></value>
      </item>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
