<node id="682761">
  <nid>682761</nid>
  <type>news</type>
  <uid>
    <user id="36530"><![CDATA[36530]]></user>
  </uid>
  <created>1749655482</created>
  <changed>1749729176</changed>
  <title><![CDATA[Georgia Tech Team Takes Second Place at ICRA Robot Teleoperation Contest]]></title>
  <body><![CDATA[<p>An algorithmic breakthrough from School of Interactive Computing researchers that&nbsp;<a href="https://www.cc.gatech.edu/news/new-algorithm-teaches-robots-through-human-perspective"><strong>earned a Meta partnership</strong></a> drew more attention at the IEEE International Conference on Robotics and Automation (ICRA).</p><p>Meta announced in February its partnership with the labs of professors&nbsp;<a href="https://faculty.cc.gatech.edu/~danfei/"><strong>Danfei Xu</strong></a> and&nbsp;<a href="https://faculty.cc.gatech.edu/~judy/"><strong>Judy Hoffman</strong></a> on a novel computer vision-based algorithm called EgoMimic. It enables robots to learn new skills by imitating human tasks from first-person video footage captured by Meta’s Aria smart glasses.&nbsp;</p><p>Xu’s&nbsp;<a href="https://rl2.cc.gatech.edu/"><strong>Robot Learning and Reasoning Lab (RL2)</strong></a> displayed EgoMimic in action at ICRA May 19-23 at the World Congress Center in Atlanta.</p><p>Lawrence Zhu, Pranav Kuppili, and Patcharapong “Elmo” Aphiwetsa — students from Xu’s lab — competed in a robot teleoperation contest at ICRA. The team finished second in the event titled What Bimanual Teleoperation and Learning from Demonstration Can Do Today, earning a $10,000 cash prize.</p><p>Teams were challenged to perform tasks by remotely controlling a robot gripper. The robot had to fold a tablecloth, open a vacuum-sealed container, place an object into the container, and then reseal it in succession without any errors.</p><p>Teams completed the tasks as many times as possible in 30 minutes, earning points for each successful attempt.</p><p>The competition also offered different challenge levels that increased the points awarded. Teams could directly operate the robot with a full workstation view and receive one point for each task completion. 
Or, as the RL2 team chose, teams could opt for the second challenge level.</p><p>The second level required an operator to control the robot with no view of the workstation except for what was provided through a video feed. The RL2 team completed the task seven times and received double points for the challenge level.</p><p>The third challenge level required teams to operate remotely from another location. At this level, teams could earn four times the number of points for each successful task completed. The fourth level challenged teams to deploy an algorithm for task performance and awarded eight points for each completion.</p><p>Using two of Meta’s Quest wireless controllers, Zhu controlled the robot under the direction of Aphiwetsa, while Kuppili monitored the coding from his laptop.</p><p>“It’s physically difficult to teleoperate for half an hour,” Zhu said. “My hands were shaking from holding the controllers in the air for that long.”</p><p>Being in constant communication with Aphiwetsa helped him stay focused throughout the contest.</p><p>“I helped him strategize the teleoperation and noticed he could skip some of the steps in the folding,” Aphiwetsa said. “There were many ways to do it, so I just told him what he could fix and how to do it faster.”</p><p>Zhu said he and his team had intended to tackle the fourth challenge level with the EgoMimic algorithm. However, due to unexpected time constraints, they decided to switch to the second level the day before the competition.&nbsp;</p><p>“I think we realized the day before the competition training the robot on our model would take a huge amount of time,” Zhu said. 
“We decided to go for the teleoperation and started practicing.”</p><p>He said the team wants to tackle the highest challenge level and use a training model for next year’s ICRA competition in Vienna, Austria.</p><p>ICRA is the world’s largest robotics conference, and&nbsp;<a href="https://www.cc.gatech.edu/news/georgia-tech-leads-robotics-world-converges-atlanta-icra-2025"><strong>Atlanta hosted the event</strong></a> for the third time in its history, drawing a record-breaking attendance of over 7,000.</p>]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2025-06-11T00:00:00-04:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[A Georgia Tech team earned second place in a robot teleoperation contest at ICRA. The team's lab developed EgoMimic, an algorithm that allows robots to learn skills by mimicking human tasks from first-person video.]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p>Students from Georgia Tech's Robot Learning and Reasoning Lab earned second place and a $10,000 cash prize in a robot teleoperation contest at the 2025 International Conference on Robotics and Automation in Atlanta. The RL2 lab announced a partnership with Meta in February on a novel computer vision-based algorithm called EgoMimic. It enables robots to learn new skills by imitating human tasks from first-person video footage captured by Meta’s Aria smart glasses.</p>]]></value>
    </item>
  </field_summary>
  <field_media>
          <item>
        <nid>
          <node id="677223">
            <nid>677223</nid>
            <type>image</type>
            <title><![CDATA[IMG_4291-2-copy.jpg]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>261102</fid>
                  <filename><![CDATA[IMG_4291-2-copy.jpg]]></filename>
                  <filepath><![CDATA[/sites/default/files/2025/06/12/IMG_4291-2-copy.jpg]]></filepath>
                  <file_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/06/12/IMG_4291-2-copy.jpg]]></file_full_path>
                  <filemime>image/jpeg</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[ICRA]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
      </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <!--  TO DO: correct to not conflate categories and news room topics  -->
  <links_related> </links_related>
  <files> </files>
  <og_groups>
          <item>47223</item>
          <item>1188</item>
          <item>50876</item>
      </og_groups>
  <og_groups_both>
          <item>
        <![CDATA[Computer Science/Information Technology and Security]]>
      </item>
          <item>
        <![CDATA[Robotics]]>
      </item>
          <item>
        <![CDATA[Student Competition Winners (academic, innovation, and research)]]>
      </item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>153</tid>
        <value><![CDATA[Computer Science/Information Technology and Security]]></value>
      </item>
          <item>
        <tid>152</tid>
        <value><![CDATA[Robotics]]></value>
      </item>
          <item>
        <tid>193158</tid>
        <value><![CDATA[Student Competition Winners (academic, innovation, and research)]]></value>
      </item>
      </field_categories>
  <core_research_areas>
          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>
          <term tid="39521"><![CDATA[Robotics]]></term>
      </core_research_areas>
  <field_news_room_topics>
      </field_news_room_topics>
  <og_groups_both>
          <item><![CDATA[College of Computing]]></item>
          <item><![CDATA[Research Horizons]]></item>
          <item><![CDATA[School of Interactive Computing]]></item>
      </og_groups_both>
  <field_keywords>
          <item>
        <tid>181920</tid>
        <value><![CDATA[cc-research; ic-ai-ml; ic-robotics]]></value>
      </item>
          <item>
        <tid>187812</tid>
        <value><![CDATA[artificial intelligence (AI)]]></value>
      </item>
          <item>
        <tid>192863</tid>
        <value><![CDATA[go-ai]]></value>
      </item>
          <item>
        <tid>187915</tid>
        <value><![CDATA[go-researchnews]]></value>
      </item>
          <item>
        <tid>9153</tid>
        <value><![CDATA[Research Horizons]]></value>
      </item>
          <item>
        <tid>167585</tid>
        <value><![CDATA[student competition]]></value>
      </item>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
