<node id="689147">
  <nid>689147</nid>
  <type>event</type>
  <uid>
    <user id="27707"><![CDATA[27707]]></user>
  </uid>
  <created>1774295561</created>
  <changed>1774295596</changed>
  <title><![CDATA[PhD Proposal by Yunhai Han]]></title>
  <body><![CDATA[<p><strong>Title</strong>: Towards Efficient and Self-Sufficient Learning of In-Domain Dexterous Manipulation Skills from Limited Supervision</p><p>&nbsp;</p><p><strong>Date</strong>: Thursday, April 2nd, 2026</p><p><strong>Time</strong>: 3:30PM to 5PM ET</p><p><strong>Location</strong>: 1212 Conference Room Klaus&nbsp;or <a href="https://gatech.zoom.us/j/3299213715?omn=92328882802" target="_blank" title="https://gatech.zoom.us/j/3299213715?omn=92328882802">Zoom Link</a></p><p>&nbsp;</p><p><strong>Yunhai Han</strong></p><p>Robotics Ph.D. Student</p><p>Woodruff School of Mechanical Engineering</p><p>Georgia Institute of Technology</p><p>&nbsp;</p><p><strong>Committee</strong>:</p><p>Dr. Harish Ravichandar (advisor) – School of Interactive Computing, Georgia Institute of Technology</p><p>Dr. Zsolt Kira&nbsp;– School of Interactive Computing, Georgia Institute of Technology</p><p>Dr. Danfei Xu&nbsp;– School of Interactive Computing, Georgia Institute of Technology</p><p>Dr. Yunzhu Li&nbsp;– Department of Computer Science, Columbia University</p><p>Dr. Dinesh Jayaraman&nbsp;– Department of Computer and Information Science, University of Pennsylvania</p><p>&nbsp;</p><p><strong>Abstract</strong>:</p><p>Recent advances in large-scale robot learning, which build upon successes from adjacent domains (e.g., Natural Language Processing&nbsp;and Computer Vision), have reinforced the view that scaling is the primary approach to achieving effective dexterous robotic manipulation. 
However, despite extensive efforts in massive data collection and in building pretrained foundation models&nbsp;for robotics, robot manipulation systems remain largely confined to settings that rely on bespoke demonstrations and well-resourced laboratory infrastructure.&nbsp;</p><p>A key reason is the critical need for <em><strong>high-quality, in-domain robot data&nbsp;</strong></em>and <em><strong>expert-designed training recipes&nbsp;</strong></em>to fine-tune or distill large pretrained models into effective manipulation policies. Consequently, this reliance on <em><strong>curated datasets</strong></em>, <em><strong>rich supervision,&nbsp;</strong></em>and <em><strong>expert intervention</strong></em>&nbsp;during training makes it challenging to adapt robot policies across the diverse settings encountered in domains such as manufacturing and consumer robotics. Moreover, the <em><strong>lack of interpretability&nbsp;</strong></em>in these distilled policies offers little insight into their brittleness and idiosyncratic failure modes.</p><p>&nbsp;</p><p>In this work, we propose a learning framework for robust robot dexterity by <em><strong>efficiently</strong></em>&nbsp;learning <em><strong>transparent</strong></em>&nbsp;robot policies from <em><strong>limited supervision</strong></em>&nbsp;while <em><strong>minimizing task-specific expert intervention</strong></em>. To achieve this, the learning system should: i) employ a transparent, task-agnostic policy architecture, ii) enable efficient, self-sufficient training and fine-tuning strategies, iii) learn from accessible in-the-wild data, and iv) integrate general-purpose reasoning modules with mechanisms that support reliable in-domain execution. 
In this way, our framework is complementary to general-purpose reasoning models but offers a principled alternative to monolithic generalist robot policies by learning <em><strong>lightweight specialists</strong></em>&nbsp;for specific dexterous skills in the target environment without significant reliance on expert intervention and data curation. Our primary design objective is to better support real-world diversity in tasks, objects, and environments by learning specialist policies tailored to the requirements of a particular setting.</p>]]></body>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Towards Efficient and Self-Sufficient Learning of In-Domain Dexterous Manipulation Skills from Limited Supervision]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p>Towards Efficient and Self-Sufficient Learning of In-Domain Dexterous Manipulation Skills from Limited Supervision</p>]]></value>
    </item>
  </field_summary>
  <field_time>
    <item>
      <value><![CDATA[2026-04-02T15:30:13-04:00]]></value>
      <value2><![CDATA[2026-04-02T17:00:13-04:00]]></value2>
      <rrule><![CDATA[]]></rrule>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_time>
  <field_fee>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_fee>
  <field_extras>
      </field_extras>
  <field_audience>
          <item>
        <value><![CDATA[Public]]></value>
      </item>
      </field_audience>
  <field_media>
      </field_media>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_location>
    <item>
      <value><![CDATA[1212 Conference Room Klaus or Zoom Link]]></value>
    </item>
  </field_location>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_phone>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_phone>
  <field_url>
    <item>
      <url><![CDATA[]]></url>
      <title><![CDATA[]]></title>
            <attributes><![CDATA[]]></attributes>
    </item>
  </field_url>
  <field_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_email>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <links_related>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>221981</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[Graduate Studies]]></item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>1788</tid>
        <value><![CDATA[Other/Miscellaneous]]></value>
      </item>
      </field_categories>
  <field_keywords>
          <item>
        <tid>102851</tid>
        <value><![CDATA[PhD proposal]]></value>
      </item>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
