<node id="681479">
  <nid>681479</nid>
  <type>event</type>
  <uid>
    <user id="27707"><![CDATA[27707]]></user>
  </uid>
  <created>1743447524</created>
  <changed>1743447550</changed>
  <title><![CDATA[PhD Proposal by Tianyu Li]]></title>
  <body><![CDATA[<p>Title: Cross-Embodiment Imitation for Robot Whole-body Skill Learning</p><p>Date: Tuesday, April 15th, 2025<br>Time: 12:00 PM - 2:00 PM ET<br>Location: Klaus 3126, Zoom Link</p><p><br>Tianyu Li<br>Ph.D. Student in Computer Science<br>School of Interactive Computing<br>Georgia Institute of Technology<br>https://easypapersniper.github.io/</p><p>Committee:<br>Dr. Sehoon Ha (Advisor) – School of Interactive Computing, Georgia Institute of Technology<br>Dr. Danfei Xu – School of Interactive Computing, Georgia Institute of Technology<br>Dr. Greg Turk – School of Interactive Computing, Georgia Institute of Technology<br>Dr. Karen Liu – Computer Science Department, Stanford University<br>Dr. Marco Hutter – Robotic Systems Lab, ETH Zürich</p><p>Abstract:<br>Robots need to acquire natural, human-like skills to assist humans with a wide range of tasks. A direct and intuitive approach to learning these skills is imitation of human demonstrations. However, robots often differ drastically from humans in both morphology and dynamics, with forms ranging from quadrupedal and wheeled robots to multi-arm manipulators. This embodiment gap poses significant challenges for transferring complex, whole-body human motions to robots.</p><p>In this thesis, we introduce four frameworks that enable robots to learn sophisticated whole-body skills from human demonstrations. 1) We present Adversarial Correspondence Embedding (ACE), an unsupervised learning framework that maps human motion datasets to the motion spaces of morphologically different robots. ACE enables a variety of human motions, such as playing volleyball or punching, to be retargeted to robots with significantly different embodiments. 2) We present CrossLoco, an unsupervised reinforcement learning framework that jointly learns robot control and human-to-robot motion correspondence. Without requiring prior knowledge of robot skills, CrossLoco translates human motions into executable robot behaviors. 3) We introduce RobotMover, an imitation learning framework that allows robots to manipulate large objects, such as furniture, by imitating human-object interactions. We demonstrate that the learned policy enables robust and versatile manipulation of diverse objects along various target velocity trajectories. 4) We propose Cross-embodiment Interaction Imitation, a framework for learning complex two-body interactive behaviors, such as dancing, sparring, and handshaking, from human demonstrations and transferring them to robots with different body structures. In ongoing work, we explore improved techniques for generalizing imitation across arbitrary embodiments ("any-to-any" motion imitation), with a focus on universal motion representations.</p>]]></body>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Cross-Embodiment Imitation for Robot Whole-body Skill Learning]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p>Cross-Embodiment Imitation for Robot Whole-body Skill Learning</p>]]></value>
    </item>
  </field_summary>
  <field_time>
    <item>
      <value><![CDATA[2025-04-15T12:00:00-04:00]]></value>
      <value2><![CDATA[2025-04-15T14:00:00-04:00]]></value2>
      <rrule><![CDATA[]]></rrule>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_time>
  <field_fee>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_fee>
  <field_extras>
  </field_extras>
  <field_audience>
    <item>
      <value><![CDATA[Public]]></value>
    </item>
  </field_audience>
  <field_media>
  </field_media>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_location>
    <item>
      <value><![CDATA[Klaus 3126, Zoom Link]]></value>
    </item>
  </field_location>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_phone>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_phone>
  <field_url>
    <item>
      <url><![CDATA[]]></url>
      <title><![CDATA[]]></title>
      <attributes><![CDATA[]]></attributes>
    </item>
  </field_url>
  <field_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_email>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <links_related>
  </links_related>
  <files>
  </files>
  <og_groups>
    <item>221981</item>
  </og_groups>
  <og_groups_both>
    <item><![CDATA[Graduate Studies]]></item>
  </og_groups_both>
  <field_categories>
    <item>
      <tid>1788</tid>
      <value><![CDATA[Other/Miscellaneous]]></value>
    </item>
  </field_categories>
  <field_keywords>
    <item>
      <tid>100811</tid>
      <value><![CDATA[PhD Defense]]></value>
    </item>
  </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
