<node id="641948">
  <nid>641948</nid>
  <type>event</type>
  <uid>
    <user id="28475"><![CDATA[28475]]></user>
  </uid>
  <created>1607446624</created>
  <changed>1607446624</changed>
  <title><![CDATA[Ph.D. Proposal Oral Exam - Niranjan Kannabiran]]></title>
  <body><![CDATA[<p><strong>Title:&nbsp; </strong><em>Teaching Robots to Learn about Objects by Interaction</em></p>

<p><strong>Committee:&nbsp; </strong></p>

<p>Dr. Issa, Advisor</p>

<p>Dr. Ha, Co-Advisor</p>

<p>Dr. Davenport, Co-Advisor</p>

<p>Dr. Vela, Chair</p>

<p>Dr. Chernova</p>

<p><strong>Abstract: </strong>The objective of the proposed research is to develop action policies that can aid the perception of a robotic agent. Traditionally, perception in robotics has been treated as a one-way flow of information from the outside world into a robot. But this framing overlooks the fact that a robot has the capacity to interact with its environment and perceive additional information that would otherwise not be available to it. For example, a robot can push an object, observe how it moves, and estimate its mass. In addition to such latent object properties, interaction can also be used to better understand a visual scene. Real-world scenes are often cluttered with many objects, several of which may be partially or completely occluded. To tackle such problems, we design neural network architectures that exploit synergies between supervised training and reinforcement learning. We train policies that maintain an internal belief about the environment and take actions that gradually improve it. We propose a new dataset containing challenging cluttered arrangements of ambiguous objects with similar shapes or textures, and use it to validate the performance of our method.</p>
]]></body>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Teaching Robots to Learn about Objects by Interaction]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_summary>
  <field_time>
    <item>
      <value><![CDATA[2020-12-14T10:00:00-05:00]]></value>
      <value2><![CDATA[2020-12-14T12:00:00-05:00]]></value2>
      <rrule><![CDATA[]]></rrule>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_time>
  <field_fee>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_fee>
  <field_extras>
      </field_extras>
  <field_audience>
          <item>
        <value><![CDATA[Public]]></value>
      </item>
      </field_audience>
  <field_media>
      </field_media>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_phone>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_phone>
  <field_url>
    <item>
      <url><![CDATA[]]></url>
      <title><![CDATA[]]></title>
            <attributes><![CDATA[]]></attributes>
    </item>
  </field_url>
  <field_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_email>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <links_related>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>434371</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[ECE Ph.D. Proposal Oral Exams]]></item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>1788</tid>
        <value><![CDATA[Other/Miscellaneous]]></value>
      </item>
      </field_categories>
  <field_keywords>
          <item>
        <tid>102851</tid>
        <value><![CDATA[Phd proposal]]></value>
      </item>
          <item>
        <tid>1808</tid>
        <value><![CDATA[graduate students]]></value>
      </item>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
