<nodes> <node id="688391">  <title><![CDATA[Robot Pollinator Could Produce More, Better Crops for Indoor Farms]]></title>  <uid>36530</uid>  <body><![CDATA[<p>A new robot could solve one of the biggest challenges facing indoor farmers: manual pollination.</p><p>Indoor farms, also known as vertical farms, are popular among agricultural researchers and are expanding across the agricultural industry. Some benefits they have over outdoor farms include:</p><ul><li>Year-round production of food crops</li><li>Lower water and land requirements</li><li>No need for pesticides</li><li>Reduced carbon emissions from shipping</li><li>Reduced food waste</li></ul><p>Additionally,&nbsp;<a href="https://www.agritecture.com/blog/2021/7/20/5-ways-vertical-farming-is-improving-nutrition"><strong>some studies</strong></a> indicate that indoor farms produce more nutritious food for urban communities.&nbsp;</p><p>However, these farms are often inaccessible to birds, bees, and other natural pollinators, leaving the pollination process to humans. The tedious process must be completed by hand for each flower to ensure the indoor crop flourishes.</p><p><a href="https://research.gatech.edu/people/ai-ping-hu"><strong>Ai-Ping Hu</strong></a>, a principal research engineer at the Georgia Tech Research Institute (GTRI), has spent years exploring methods to efficiently pollinate flowering plants and food crops in indoor farms.</p><p>Hu,&nbsp;<a href="https://research.gatech.edu/people/shreyas-kousik"><strong>Assistant Professor Shreyas Kousik of the George W. Woodruff School of Mechanical Engineering</strong></a>, and a rotating group of student interns have developed a robot prototype that may be up to the task.</p><p>The robot can efficiently pollinate plants that have both male and female reproductive parts. These plants only require pollen to be transferred from one part to the other rather than externally from another flower.</p><p>Natural pollinators perform this task outdoors, but Hu said indoor farmers often use a paintbrush or electric toothbrush to ensure these flowers are pollinated.&nbsp;</p><h4><strong>Knowing the Pose</strong></h4><p>An early challenge the research team addressed was teaching the robot to identify the “pose” of each flower. Pose refers to a flower’s orientation, shape, and symmetry. Knowing these details ensures precise delivery of the pollen to maximize reproductive success.&nbsp;</p><p>“It’s crucial to know exactly which way the flowers are facing,” Hu said.</p><p>“You want to approach the flower from the front because that’s where all the biological structures are. Knowing the pose tells you where the stem is. Our device grasps the stem and shakes it to dislodge the pollen.</p><p>“Every flower is going to have its own pose, and you need to know what that is within at least 10 degrees.”</p><h4><strong>Computer Vision Breakthrough</strong></h4><p><strong>Harsh Muriki</strong>, a robotics master’s student in Georgia Tech’s School of Interactive Computing, used computer vision to solve the pose problem while interning for Hu and GTRI.</p><p>Muriki attached a camera to a FarmBot to capture images of strawberry plants from dozens of angles in a small garden in front of Georgia Tech’s Food Processing Technology Building. 
The&nbsp;<a href="https://farm.bot/?srsltid=AfmBOoqh1Z8vSs3WflZisgw5DsOUSo8shD4VtY0Y8_VmVpVyt0Iwalxo"><strong>FarmBot</strong></a> is an XYZ-axis robot that waters and sprays pesticides on outdoor gardens, though it is not capable of pollination.</p><p>“We reconstruct the images of the flower into a 3D model and use a technique that converts the 3D model into multiple 2D images with depth information,” Muriki said. “This enables us to send them to object detectors.”</p><p>Muriki said he used a real-time object detection system called YOLO (You Only Look Once) to classify objects. YOLO is known for identifying and classifying objects in a single pass.</p><p><strong>Ved Sengupta</strong>, a computer engineering major who interned with Muriki, fine-tuned the algorithms that converted 3D images into 2D.</p><p>“This was a crucial part of making robot pollination possible,” Sengupta said. “There is a big gap between 3D and 2D image processing.</p><p>“There’s not a lot of data on the internet for 3D object detection, but there’s a ton for 2D. We were able to get great results from the converted images, and I think any sector of technology can take advantage of that.”</p><p>Sengupta, Muriki, and Hu co-authored a paper about their work that was accepted to the 2025 International Conference on Robotics and Automation (ICRA) in Atlanta.</p><h4><strong>Measuring Success</strong></h4><p>The pollination robot, built in Kousik’s Safe Robotics Lab, is now in the prototype phase.&nbsp;</p><p>Hu said the robot can do more than pollinate. It can also analyze each flower to determine how well it was pollinated and whether the chances for reproduction are high.</p><p>“It has an additional capability of microscopic inspection,” Hu said. “It’s the first device we know of that provides visual feedback on how well a flower was pollinated.”</p><p>For more information about the robot, visit the&nbsp;<a href="https://saferoboticslab.me.gatech.edu/research/towards-robotic-pollination/"><strong>Safe Robotics Lab project page</strong></a>.</p>]]></body>  <author>Nathan Deen</author>  <status>1</status>  <created>1771527492</created>  <gmt_created>2026-02-19 18:58:12</gmt_created>  <changed>1774011241</changed>  <gmt_changed>2026-03-20 12:54:01</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A research team that spans GTRI, the College of Engineering, and the College of Computing has developed a robot capable of pollinating flowers in indoor farms.]]></teaser>  <type>news</type>  <sentence><![CDATA[A research team that spans GTRI, the College of Engineering, and the College of Computing has developed a robot capable of pollinating flowers in indoor farms.]]></sentence>  <summary><![CDATA[<p>Manual pollination is one of the biggest challenges for indoor farmers. These farms are often inaccessible to birds, bees, and other natural pollinators, leaving the pollination process to humans. The tedious process must be completed by hand for each flower to ensure the indoor crop flourishes.</p><p>A Georgia Tech research team led by Ai-Ping Hu and Shreyas Kousik is working to solve that. A robot they’ve developed can efficiently pollinate plants that have both male and female reproductive parts. 
These plants only require pollen to be transferred from one part to the other rather than externally from another flower.</p>]]></summary>  <dateline>2026-02-19T00:00:00-05:00</dateline>  <iso_dateline>2026-02-19T00:00:00-05:00</iso_dateline>  <gmt_dateline>2026-02-19 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:ndeen6@gatech.edu">Nathan Deen</a><br>College of Computing<br>Georgia Tech</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>679370</item>      </media>  <hg_media>          <item>          <nid>679370</nid>          <type>image</type>          <title><![CDATA[Harsh-Muriki_86A0006.jpg]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Harsh-Muriki_86A0006.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2026/02/19/Harsh-Muriki_86A0006.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2026/02/19/Harsh-Muriki_86A0006.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2026/02/19/Harsh-Muriki_86A0006.jpg?itok=WJg8YQi9]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Harsh Muriki]]></image_alt>                    <created>1771527500</created>          <gmt_created>2026-02-19 18:58:20</gmt_created>          <changed>1771527500</changed>          <gmt_changed>2026-02-19 18:58:20</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="194606"><![CDATA[Artificial Intelligence]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="145"><![CDATA[Engineering]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="194606"><![CDATA[Artificial Intelligence]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="145"><![CDATA[Engineering]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="9153"><![CDATA[Research Horizons]]></keyword>          <keyword tid="187991"><![CDATA[go-robotics]]></keyword>          <keyword tid="192863"><![CDATA[go-ai]]></keyword>          <keyword tid="11506"><![CDATA[computer vision]]></keyword>          <keyword tid="180840"><![CDATA[computer vision systems]]></keyword>          <keyword tid="669"><![CDATA[agriculture]]></keyword>          <keyword tid="194392"><![CDATA[AI in Agriculture]]></keyword>          <keyword tid="170254"><![CDATA[urban gardening]]></keyword>          <keyword tid="94111"><![CDATA[farming]]></keyword>          <keyword tid="14913"><![CDATA[urban farming]]></keyword>          <keyword tid="23911"><![CDATA[bees]]></keyword>          <keyword tid="6660"><![CDATA[flowers]]></keyword>       
   <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>      </keywords>  <core_research_areas>          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>          <term tid="193653"><![CDATA[Georgia Tech Research Institute]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71911"><![CDATA[Earth and Environment]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="688893">  <title><![CDATA[Sheepdogs Reveal a Better Way to Guide Robot Swarms]]></title>  <uid>27271</uid>  <body><![CDATA[<p>Sheepdogs, bred to control large groups of sheep in open fields, have demonstrated their skills in competitions dating back to the 1870s.</p><p>In these contests, a handler directs a trained dog with whistle signals to guide a small group of sheep across a field and sometimes split the flock cleanly into two groups. But sheep do not always cooperate.</p><p>Researchers at the Georgia Institute of Technology studied how handler–dog teams manage these unpredictable flocks in sheepdog trials and found principles that extend beyond livestock herding.</p><p>In a <a href="https://www.science.org/doi/10.1126/sciadv.adx6791"><strong>study</strong></a> published in <em>Science Advances&nbsp;</em>as the cover feature, the researchers applied those insights to computer simulations showing how similar strategies could improve the control of robot swarms, autonomous vehicles, AI agents, and other networked systems where many machines must coordinate their actions despite uncertain conditions.</p><p><strong>Group Movement Dynamics</strong></p><p>“Birds, bugs, fish, sheep, and many other organisms move in groups because it benefits individuals, including protection from predators,” said <a href="https://bhamla.gatech.edu/"><strong>Saad Bhamla</strong></a>, an associate professor in Georgia Tech’s School of Chemical and Biomolecular Engineering. “The puzzle is that the ‘group’ is not a single organism. It is built from many individuals, each making local, imperfect decisions.”</p><p>When a predator threatens a herd of sheep, individuals near the edge often move toward the center to reduce their own risk, Bhamla explained. “This is ‘selfish herd’ behavior,” he said. “Shepherds exploit that instinct using trained dogs.”</p><p>From examining hours of contest footage, the researchers found that controlling small groups of sheep can be harder than managing large ones. A larger group, with more sheep protected in the center, may behave more coherently than a small group as the animals constantly shift between two instincts: “follow the group” and “flee the dog.”</p><p>“That switching behavior makes the group unpredictable,” said Tuhin Chakrabortty, a former postdoctoral researcher in the Bhamla Lab who co-led the study.</p><p>Looking closely at how dogs and their handlers guide small groups, the researchers found that unpredictability in the flock’s behavior does not always make control harder. “Under the right conditions, that ‘noisy’ behavior might actually be a benefit,” Bhamla said.</p><p><strong>Successful Sheep Herding</strong></p><p>Sheepdog handlers categorize sheep by how strongly they respond to a dog’s threatening pressure. 
Some very responsive sheep might panic under too much pressure, while others might ignore mild pressure and require stronger positioning by the dog.</p><p>The researchers observed that successful control often followed a two-step pattern. First, the dog subtly influenced the sheep’s orientation while the animals were mostly standing still. Once the flock was aligned in the desired direction, the dog increased pressure to trigger movement. The timing of those actions was critical, because alignment within a small group could disappear quickly as individuals switched between instincts.</p><p>“In our simulations, increasing pressure makes the flock reach the desired orientation faster, but how long the flock stays aligned is set mainly by noise,” Chakrabortty said. “In essence, dogs can steer the direction, but they can’t hold that decision indefinitely, so timing matters.”</p><div><div><div><div><div><p><strong>Developing Computer Models</strong></p><p>To understand the broader implications of that behavior, the team developed computer models that captured how sheep respond both to the dog and to one another. The models allowed the researchers to test different strategies for guiding groups whose members make independent decisions under uncertainty.</p><p>They then applied those ideas to simulations of robotic swarms. Engineers often design such systems so that each robot blends signals from all nearby robots before deciding how to move. While that approach works well when signals are clear, it can break down when information is noisy or conflicting, Bhamla explained.</p></div></div></div></div></div><div><div><div><div><div><p>To explain why that switching strategy can work under noisy conditions, the researchers used an analogy of a smoke-filled room where only one person can see the exit, and no one knows who that person is. If everyone polls everyone else and averages the guesses, the one correct signal can get diluted by many noisy ones.</p><p>“That’s the counterintuitive part. When only one person has the right information, averaging can wash out the signal. But if you follow one person at a time, and keep switching who that is, the right information can spread through the crowd,” Bhamla said.</p><p>Building on that idea, the researchers tested a strategy inspired by the switching behavior they observed in sheep. In the simulations, each robot paid attention to just one source at a time (either a guiding signal or a neighboring robot) and switched that source from one step to the next.</p><p>Under noisy conditions, this switching strategy required less effort to keep the group moving along a desired path than either averaging-based strategies or fixed leader-follower strategies.</p><p>The researchers call their approach the Indecisive Swarm Algorithm. 
The name reflects a counterintuitive insight: allowing influence to shift among individuals over time can make groups easier to guide when conditions are uncertain.</p><p>“Our findings suggest that the same dynamics that make small animal groups unpredictable may also offer new ways to control complex engineered systems,” Bhamla said.</p><p>CITATION: Tuhin Chakrabortty and Saad Bhamla, “<a href="https://www.science.org/doi/10.1126/sciadv.adx6791"><strong>Controlling noisy herds: Temporal network restructuring improves control of indecisive collectives</strong></a>,” <em>Science Advances</em>, 2026</p><p><em>This research was funded in part by Schmidt Sciences as part of a </em><a href="https://news.gatech.edu/news/2025/09/16/saad-bhamla-named-2025-schmidt-polymath"><em>Schmidt Polymath</em></a><em> grant to Saad Bhamla.</em></p></div></div></div></div></div>]]></body>  <author>Brad Dixon</author>  <status>1</status>  <created>1773259186</created>  <gmt_created>2026-03-11 19:59:46</gmt_created>  <changed>1773330805</changed>  <gmt_changed>2026-03-12 15:53:25</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech researchers studying sheepdog trials found new principles for guiding unpredictable groups and used them to develop computer models that could improve coordination in robot swarms, autonomous vehicles, and other networked systems.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech researchers studying sheepdog trials found new principles for guiding unpredictable groups and used them to develop computer models that could improve coordination in robot swarms, autonomous vehicles, and other networked systems.]]></sentence>  <summary><![CDATA[<p>Georgia Tech researchers studying sheepdog trials found new principles for guiding unpredictable groups and used them to develop computer models that could improve coordination in robot swarms, autonomous vehicles, and other networked systems.</p>]]></summary>  <dateline>2026-03-11T00:00:00-04:00</dateline>  <iso_dateline>2026-03-11T00:00:00-04:00</iso_dateline>  <gmt_dateline>2026-03-11 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[braddixon@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Brad Dixon, <a href="mailto: braddixon@gatech.edu">braddixon@gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>679589</item>          <item>679590</item>          <item>679591</item>          <item>679584</item>          <item>679588</item>      </media>  <hg_media>          <item>          <nid>679589</nid>          <type>video</type>          <title><![CDATA[SMART Dogs herding sheep on a farm, looks like flock of bird pattern]]></title>          <body><![CDATA[<p>SMART Dogs herding sheep on a farm, looks like flock of bird pattern</p>]]></body>                      <youtube_id><![CDATA[_CjwqIX6C2I]]></youtube_id>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>            <vimeo_id><![CDATA[]]></vimeo_id>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>            <video_url><![CDATA[https://youtu.be/_CjwqIX6C2I?si=bfsxIT77-iAJCm-2]]></video_url>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>                    <created>1773260200</created>          <gmt_created>2026-03-11 
20:16:40</gmt_created>          <changed>1773260200</changed>          <gmt_changed>2026-03-11 20:16:40</gmt_changed>      </item>          <item>          <nid>679590</nid>          <type>video</type>          <title><![CDATA[A dog herding sheep in a sheepdog trial]]></title>          <body><![CDATA[<p><em>A dog herding sheep in a sheepdog trial</em></p>]]></body>                      <youtube_id><![CDATA[cnPOXfUC8rc]]></youtube_id>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>            <vimeo_id><![CDATA[]]></vimeo_id>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>            <video_url><![CDATA[https://youtu.be/cnPOXfUC8rc?si=41jH8u3UQ_qjgqWn]]></video_url>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>                    <created>1773260676</created>          <gmt_created>2026-03-11 20:24:36</gmt_created>          <changed>1773260676</changed>          <gmt_changed>2026-03-11 20:24:36</gmt_changed>      </item>          <item>          <nid>679591</nid>          <type>video</type>          <title><![CDATA[ Controlling 'Noisy' Sheep Herds]]></title>          <body><![CDATA[<p>Controlling 'noisy' sheep herds</p>]]></body>                      <youtube_id><![CDATA[EMHmDPpe8HE]]></youtube_id>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>            <vimeo_id><![CDATA[]]></vimeo_id>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>            <video_url><![CDATA[https://youtu.be/EMHmDPpe8HE?si=_5DFsk_BafsIK78R]]></video_url>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>                    <created>1773260974</created>          <gmt_created>2026-03-11 20:29:34</gmt_created>          <changed>1773260974</changed>          <gmt_changed>2026-03-11 20:29:34</gmt_changed>      </item>          <item>          <nid>679584</nid>          <type>image</type>          <title><![CDATA[Sheepdog herding sheep]]></title>          <body><![CDATA[<p>Sheepdog herding in a sheepdog trial competition</p>]]></body>                      <image_name><![CDATA[sheepdog1.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2026/03/11/sheepdog1.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2026/03/11/sheepdog1.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2026/03/11/sheepdog1.jpg?itok=kTQiLGXI]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Sheepdog herding sheep]]></image_alt>                    <created>1773259589</created>          <gmt_created>2026-03-11 20:06:29</gmt_created>          <changed>1773261394</changed>          <gmt_changed>2026-03-11 20:36:34</gmt_changed>      </item>          <item>          <nid>679588</nid>          <type>image</type>          <title><![CDATA[Sheeping herding resistant sheep]]></title>          <body><![CDATA[<p>Sheepdogs first align the flock’s direction, then apply pressure to trigger movement before the sheep lose alignment.</p>]]></body>                      <image_name><![CDATA[sheepdog2-copy.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2026/03/11/sheepdog2-copy.jpg]]></image_path>            
<image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2026/03/11/sheepdog2-copy.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2026/03/11/sheepdog2-copy.jpg?itok=5CXyEB8U]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Sheepdog herding sheep]]></image_alt>                    <created>1773259967</created>          <gmt_created>2026-03-11 20:12:47</gmt_created>          <changed>1773261607</changed>          <gmt_changed>2026-03-11 20:40:07</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1188"><![CDATA[Research Horizons]]></group>          <group id="1240"><![CDATA[School of Chemical and Biomolecular Engineering]]></group>      </groups>  <categories>          <category tid="145"><![CDATA[Engineering]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="145"><![CDATA[Engineering]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="667"><![CDATA[robotics]]></keyword>          <keyword tid="194958"><![CDATA[Sheepdogs]]></keyword>          <keyword tid="194959"><![CDATA[Herding]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="686540">  <title><![CDATA[Real-World Helper Exoskeletons Just Got Closer to Reality]]></title>  <uid>27446</uid>  <body><![CDATA[<p>To make useful wearable robotic devices that can help stroke patients or people with amputated limbs, the computer brains driving the systems must be trained. That takes time and money — lots of time and money. And researchers&nbsp;need specially equipped labs to collect mountains of human data for training.</p><p>Even when engineers have a working device and brain, called a controller, changes and improvements to the exoskeleton system typically mean data collection and training start all over again. The process is expensive and makes bringing fully functional exoskeletons or robotic limbs into the real world largely impractical.</p><p>Not anymore, thanks to Georgia Tech engineers and computer scientists.</p><p>They’ve created an artificial intelligence tool that can turn huge amounts of existing data on how people move into functional exoskeleton controllers. No data collection, retraining, or hours upon hours of additional lab time required for each specific device.</p><p>Their approach has produced an exoskeleton brain capable of offering meaningful assistance across a huge range of hip and knee movements that works as well as the best controllers currently available. <a href="https://doi.org/10.1126/scirobotics.ads8652">Their work was published Nov. 
19 in <em>Science Robotics.</em></a></p><p><a href="https://coe.gatech.edu/news/2025/11/real-world-helper-exoskeletons-just-got-closer-reality"><strong>Full details on the College of Engineering website.</strong></a></p>]]></body>  <author>Joshua Stewart</author>  <status>1</status>  <created>1763577513</created>  <gmt_created>2025-11-19 18:38:33</gmt_created>  <changed>1763579536</changed>  <gmt_changed>2025-11-19 19:12:16</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech researchers are using AI to quickly train exoskeleton devices, making it much more practical to develop, improve, and ultimately deploy wearable robots for people with impaired mobility.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech researchers are using AI to quickly train exoskeleton devices, making it much more practical to develop, improve, and ultimately deploy wearable robots for people with impaired mobility.]]></sentence>  <summary><![CDATA[<p>Georgia Tech researchers are using AI to quickly train exoskeleton devices, making it much more practical to develop, improve, and ultimately deploy wearable robots for people with impaired mobility.</p>]]></summary>  <dateline>2025-11-19T00:00:00-05:00</dateline>  <iso_dateline>2025-11-19T00:00:00-05:00</iso_dateline>  <gmt_dateline>2025-11-19 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jstewart@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jstewart@gatech.edu">Joshua Stewart</a><br>College of Engineering</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>678673</item>      </media>  <hg_media>          <item>          <nid>678673</nid>          <type>image</type>          <title><![CDATA[Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg]]></title>          <body><![CDATA[<p>Researchers Matthew Gombolay, left, and Aaron Young used the lower-limb exoskeleton demonstrated in the background to test their new approach to creating exoskeleton controllers. They use huge amounts of existing data on how people move to create functional controllers able to provide meaningful assistance. And unlike earlier controllers, they do not require hours and hours of additional training and data collection with each specific exoskeleton device.</p>]]></body>                      <image_name><![CDATA[Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/11/19/Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/11/19/Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/11/19/Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg?itok=sxJlmrAp]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Matthew Gombolay and Aaron Young pose in the lab while Ph.D. 
researchers work on a leg exoskeleton device.]]></image_alt>                    <created>1763577576</created>          <gmt_created>2025-11-19 18:39:36</gmt_created>          <changed>1763577576</changed>          <gmt_changed>2025-11-19 18:39:36</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1237"><![CDATA[College of Engineering]]></group>      </groups>  <categories>          <category tid="194606"><![CDATA[Artificial Intelligence]]></category>          <category tid="145"><![CDATA[Engineering]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="194606"><![CDATA[Artificial Intelligence]]></term>          <term tid="145"><![CDATA[Engineering]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="168835"><![CDATA[Aaron Young]]></keyword>          <keyword tid="175375"><![CDATA[matthew gombolay]]></keyword>          <keyword tid="182630"><![CDATA[exoskeletons]]></keyword>          <keyword tid="187991"><![CDATA[go-robotics]]></keyword>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="686422">  <title><![CDATA[Ph.D. Student’s Framework Used to Bolster Nvidia’s Cosmos Predict-2 Model]]></title>  <uid>36530</uid>  <body><![CDATA[<p>A new deep learning architectural framework could boost the development and deployment efficiency of autonomous vehicles and humanoid robots. The framework will lower training costs and reduce the amount of real-world data needed for training.</p><p>World foundation models (WFMs) enable physical AI systems to learn and operate within&nbsp;synthetic worlds created by generative artificial intelligence (genAI). For example, these models use predictive capabilities to generate up to 30 seconds of video that accurately reflects the real world.</p><p>The new framework, developed by a Georgia Tech researcher, enhances the processing speed of the neural networks that simulate these real-world environments from text, images, or video inputs.</p><p>The neural networks that make up the architectures of large language models like ChatGPT and visual models like Sora process contextual information using the “attention mechanism.”</p><p>Attention refers to a model’s ability to focus on the most relevant parts of input.</p><p>The Neighborhood Attention Extension (NATTEN) allows models that require GPUs or high-performance computing systems to process information and generate outputs more efficiently.</p><p>Processing speeds can increase by up to 2.6 times, said <a href="https://alihassanijr.com/"><strong>Ali Hassani</strong></a>, a Ph.D. student in the School of Interactive Computing and the creator of NATTEN. 
Hassani is advised by Associate Professor <a href="https://www.humphreyshi.com/"><strong>Humphrey Shi</strong></a>.</p><p>Hassani is also a research scientist at Nvidia, where he introduced NATTEN to <a href="https://www.nvidia.com/en-us/ai/cosmos/"><strong>Cosmos</strong></a> — a family of WFMs the company uses to train robots, autonomous vehicles, and other physical AI applications.</p><p>“You can map just about anything from a prompt or an image or any combination of frames from an existing video to predict future videos,” Hassani said. “Instead of generating words with an LLM, you’re generating a world.</p><p>“Unlike LLMs that generate a single token at a time, these models are compute-heavy. They generate many images — often hundreds of frames at a time — so the models put a lot of work on the GPU. NATTEN lets us decrease some of that work and proportionately accelerate the model.”</p>]]></body>  <author>Nathan Deen</author>  <status>1</status>  <created>1763068438</created>  <gmt_created>2025-11-13 21:13:58</gmt_created>  <changed>1763068498</changed>  <gmt_changed>2025-11-13 21:14:58</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A new deep learning architectural framework, Neighborhood Attention Extension (NATTEN), is being used by Nvidia to  increase the processing speed of their Cosmos Predict-2 Model for training autonomous vehicles and humanoid robots.]]></teaser>  <type>news</type>  <sentence><![CDATA[A new deep learning architectural framework, Neighborhood Attention Extension (NATTEN), is being used by Nvidia to  increase the processing speed of their Cosmos Predict-2 Model for training autonomous vehicles and humanoid robots.]]></sentence>  <summary><![CDATA[<p>Georgia Tech Ph.D. student Ali Hassani developed the Neighborhood Attention Extension (NATTEN), a deep learning architectural framework that is being integrated into Nvidia's Cosmos Predict-2 world foundation model. 
NATTEN enhances the processing speed of neural networks that simulate real-world environments for physical AI systems, which are used to train autonomous vehicles and humanoid robots.&nbsp;</p>]]></summary>  <dateline>2025-11-03T00:00:00-05:00</dateline>  <iso_dateline>2025-11-03T00:00:00-05:00</iso_dateline>  <gmt_dateline>2025-11-03 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>678621</item>      </media>  <hg_media>          <item>          <nid>678621</nid>          <type>image</type>          <title><![CDATA[2X6A3487.jpg]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[2X6A3487.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/11/13/2X6A3487.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/11/13/2X6A3487.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/11/13/2X6A3487.jpg?itok=TTWF4N4h]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Humprhey Shi and Ali Hassani]]></image_alt>                    <created>1763068473</created>          <gmt_created>2025-11-13 21:14:33</gmt_created>          <changed>1763068473</changed>          <gmt_changed>2025-11-13 21:14:33</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="194609"><![CDATA[Industry]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="194609"><![CDATA[Industry]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="192863"><![CDATA[go-ai]]></keyword>          <keyword tid="193860"><![CDATA[Artifical Intelligence]]></keyword>          <keyword tid="194701"><![CDATA[go-resarchnews]]></keyword>          <keyword tid="9153"><![CDATA[Research Horizons]]></keyword>          <keyword tid="14549"><![CDATA[nvidia]]></keyword>          <keyword tid="191138"><![CDATA[artificial neural networks]]></keyword>          <keyword tid="97281"><![CDATA[autonomous vehicles]]></keyword>      </keywords>  <core_research_areas>          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="684058">  <title><![CDATA[Tiny Fans on the Feet of Water Bugs Could Lead to Energy Efficient, Mini Robots]]></title>  <uid>27560</uid>  <body><![CDATA[<div><div><div><div><div><p>A new study explains how tiny water bugs use fan-like propellers to zip across streams at speeds up to 120 body lengths per second. 
The researchers then created a similar fan structure and used it to propel and maneuver an insect-sized robot.</p><p>The discovery offers new possibilities for designing small machines that could operate during floods or other challenging situations.</p></div></div></div></div></div><div><div><div><div><div><p>Instead of relying on their muscles, the insects, which are about the size of a grain of rice, use the water’s surface tension and elastic forces to morph the ribbon-shaped fans on the end of their legs to slice the water surface and change directions.&nbsp;<br><br>Once they understood the mechanism, the team built a self-deployable, one-milligram fan and installed it into an insect-sized robot capable of accelerating, braking, and maneuvering right and left.</p><p>The study is featured on the cover of the journal <em>Science.&nbsp;</em><br><br><a href="https://coe.gatech.edu/news/2025/08/tiny-fans-feet-water-bugs-could-lead-energy-efficient-mini-robots">Read the entire story and see the robot in action on the College of Engineering website.&nbsp;</a></p></div></div></div></div></div>]]></body>  <author>Jason Maderer</author>  <status>1</status>  <created>1755807115</created>  <gmt_created>2025-08-21 20:11:55</gmt_created>  <changed>1761333189</changed>  <gmt_changed>2025-10-24 19:13:09</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A new study explains how tiny water bugs use fan-like propellers to zip across streams at speeds up to 120 body lengths per second.]]></teaser>  <type>news</type>  <sentence><![CDATA[A new study explains how tiny water bugs use fan-like propellers to zip across streams at speeds up to 120 body lengths per second.]]></sentence>  <summary><![CDATA[<p>A new study explains how tiny water bugs use fan-like propellers to zip across streams at speeds up to 120 body lengths per second. The researchers then created a similar fan structure and used it to propel and maneuver an insect-sized robot.</p><p>The discovery offers new possibilities for designing small machines that could operate during floods or other challenging situations.</p>]]></summary>  <dateline>2025-08-21T00:00:00-04:00</dateline>  <iso_dateline>2025-08-21T00:00:00-04:00</iso_dateline>  <gmt_dateline>2025-08-21 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[Researchers built an insect-sized robot that uses the water surface and collapsible propellers, a step toward fast-moving machines that can operate in rivers or flooded areas. 
]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[maderer@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Jason Maderer<br>College of Engineering<br>maderer@gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>677766</item>      </media>  <hg_media>          <item>          <nid>677766</nid>          <type>image</type>          <title><![CDATA[water-bug-hero.jpg]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[water-bug-hero.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/08/21/water-bug-hero.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/08/21/water-bug-hero.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/08/21/water-bug-hero.jpg?itok=ngJx7mnm]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[a water bug standing on water]]></image_alt>                    <created>1755807401</created>          <gmt_created>2025-08-21 20:16:41</gmt_created>          <changed>1755807401</changed>          <gmt_changed>2025-08-21 20:16:41</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="142761"><![CDATA[IRIM]]></group>          <group id="1292"><![CDATA[Parker H. Petit Institute for Bioengineering and Bioscience (IBB)]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>      </groups>  <categories>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="194701"><![CDATA[go-resarchnews]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>          <keyword tid="187423"><![CDATA[go-bio]]></keyword>      </keywords>  <core_research_areas>          <term tid="39441"><![CDATA[Bioengineering and Bioscience]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="685798">  <title><![CDATA[This Eighth Grader Is Shaping the Future of Wearable Robotics]]></title>  <uid>27255</uid>  <body><![CDATA[<p>Case Neel, 13, is a busy kid who loves coding and robotics, captains his school’s quiz bowl team, and lives with his family on a farm northwest of Atlanta.</p><p>He also has cerebral palsy — and for the past four years, he has played a key role in improving one of the most exciting medical devices at Georgia Tech.</p><p>“My role here is as a participant in exoskeleton research studies,” Case explained. 
“When I come in, researchers hook me up to sensors that monitor my gait when I’m walking in the device, and then they get a whole lot of data based off that.”</p><p><a href="https://research.gatech.edu/node/44098"><strong>Read more »</strong></a></p>]]></body>  <author>Josie Giles</author>  <status>1</status>  <created>1760728621</created>  <gmt_created>2025-10-17 19:17:01</gmt_created>  <changed>1760728796</changed>  <gmt_changed>2025-10-17 19:19:56</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Like many people with cerebral palsy, Case walks with impaired knee movement. Georgia Tech’s pediatric knee exoskeleton is designed to help children and adolescents walk with increased stability and mobility. Case’s data enables the researchers to analyze]]></teaser>  <type>news</type>  <sentence><![CDATA[Like many people with cerebral palsy, Case walks with impaired knee movement. Georgia Tech’s pediatric knee exoskeleton is designed to help children and adolescents walk with increased stability and mobility. Case’s data enables the researchers to analyze]]></sentence>  <summary><![CDATA[<p>How a middle schooler with cerebral palsy became a vital contributor to Georgia Tech’s cutting-edge robotic exoskeleton research — offering data, feedback, and a passion for innovation.</p>]]></summary>  <dateline>2025-10-13T00:00:00-04:00</dateline>  <iso_dateline>2025-10-13T00:00:00-04:00</iso_dateline>  <gmt_dateline>2025-10-13 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[How a middle schooler with cerebral palsy became a vital contributor to Georgia Tech’s cutting-edge robotic exoskeleton research — offering data, feedback, and a passion for innovation.]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>678385</item>      </media>  <hg_media>          <item>          <nid>678385</nid>          <type>image</type>          <title><![CDATA[26-R10410-P29-006_EDITED.jpg]]></title>          <body><![CDATA[<p>Kinsey Herrin, principal research scientist in the George W. 
Woodruff School of Mechanical Engineering, leads exoskeleton and prosthetic studies and fosters meaningful connections with the participant community.</p>]]></body>                      <image_name><![CDATA[26-R10410-P29-006_EDITED.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/10/17/26-R10410-P29-006_EDITED.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/10/17/26-R10410-P29-006_EDITED.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/10/17/26-R10410-P29-006_EDITED.jpg?itok=oG4qdRcH]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Person wearing a floral-patterned shirt interacting with a group of people indoors; one individual is dressed in a bright yellow button-up shirt.]]></image_alt>                    <created>1760728650</created>          <gmt_created>2025-10-17 19:17:30</gmt_created>          <changed>1760728650</changed>          <gmt_changed>2025-10-17 19:17:30</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="66220"><![CDATA[Neuro]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>      </groups>  <categories>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="193656"><![CDATA[Neuro Next Initiative]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="685070">  <title><![CDATA[The Robotic Breakthrough That Could Help Stroke Survivors Reclaim Their Stride]]></title>  <uid>36410</uid>  <body><![CDATA[<p>Crossing a room shouldn’t feel like a marathon. But for many stroke survivors, even the smallest number of steps carries enormous weight. Each movement becomes a reminder of lost coordination, muscle weakness, and physical vulnerability.</p><p>A team of Georgia Tech researchers wanted to ease that struggle, and robotic exoskeletons offered a promising path. Their findings point to a simple but powerful shift: exoskeletons that adapt to people, rather than forcing people to adapt to the machine. Using artificial intelligence (AI) to learn the rhythm of patients’ strides in real time, the team showed how these devices can reduce strain and increase efficiency. They also demonstrated how the technology can help restore confidence for stroke survivors.&nbsp;<br><br><strong>The Robot Finds the Rhythm</strong></p><p>A robotic exoskeleton is a wearable device that helps people move with mechanical support. Traditional exoskeletons require endless manual adjustments — turning knobs, calibrating settings, and tweaking controls.&nbsp;</p><p>“It can be frustrating, even nearly impossible, to get it right for each person,” said <a href="https://www.me.gatech.edu/faculty/young">Aaron Young</a>, associate professor in the <a href="https://www.me.gatech.edu/">George W. Woodruff School of Mechanical Engineering.</a> “With AI, the exoskeleton figures out the mapping itself. 
It learns the timing of someone’s gait through a neural network, without an engineer needing to hand-tune everything.”</p><p>The software monitors each step, instantly updates, and fine-tunes the support it provides. Over time, the exoskeleton aligns its movements with the unique gait of the person wearing it. In this study, the research team used a hip exoskeleton, which provides torque at the hip joint — in other words, adding power to help stroke survivors walk or move their legs more easily.<br>&nbsp;</p><p><strong>Taking Smarter Steps</strong></p><p>Walking after a stroke can be tough and unpredictable. A patient’s stride can change from one day to the next, and even from one step to the next. Most exoskeletons aren’t built for that kind of variation. They are designed around the steady, even gait of healthy young adults, which can leave stroke survivors feeling more unsteady than supported.</p><p>Young’s breakthrough, detailed in <a href="https://ieeexplore.ieee.org/abstract/document/11112638"><em>IEEE Transactions on Robotics</em>,</a> is a neural network — a type of AI that learns patterns much like the human brain does. Sensors at the hip pick up how someone is moving, and the network translates those signals into just the right boost of power to support each step. It quickly figures out a person’s unique walking pattern. But lead clinician Kinsey Herrin said the AI’s learning doesn’t stop there. It keeps adjusting as the patient walks, so the exoskeleton can stay in sync even during stride shifts.</p><p>“The speed really surprised us,” Young said. “In just one to two minutes of walking, the system had already learned a person’s gait pattern with high accuracy. That’s a big deal, to adapt that quickly and then keep adapting as they move.”</p><p>Tests showed the system was far more accurate than the standard exoskeleton. It reduced errors in tracking stroke patients’ walking patterns by 70%.</p><p>Young emphasized that this research is about more than metrics. “When you see someone able to walk farther without becoming exhausted, that’s when you realize this isn’t just about robotics — it’s about giving people back a measure of independence,” he said.<br>&nbsp;</p><p><strong>Adapting Anywhere</strong></p><p>Every exoskeleton comes with its own set of sensors, so the data they collect can look completely different from one device to the next. A neural network trained on one machine often stumbles when it’s moved to another. To get around that, Young’s team designed software that works like a universal adapter plug — no matter what device it’s connected to, it converts the signals into a form the AI can use. After just 10 strides of calibration, the system cut error rates by more than 75%.</p><p>“The goal is that someone could strap on a device, and, within a minute, it feels like it was built just for them,” Young said.<br><br><br><strong>A Step Toward the Future</strong></p><p>While the study centered on stroke survivors, the implications are far broader. The same adaptive approach could support older adults coping with age-related muscle weakness, people with conditions like Parkinson’s or osteoarthritis, or even children with neurological disabilities.&nbsp;<br>Young and his team are now running clinical trials to measure how well the AI-powered exoskeleton supports people in a wide range of everyday activities.</p><p>“There’s no such thing as an ‘average’ user,” Young said. 
“The real challenge is designing technology that can adapt to the full spectrum of human mobility.”</p><p>If Georgia Tech’s exoskeleton can rise to that challenge, the promise goes well beyond the lab. It could mean a world where technology doesn’t just help people walk — it learns to walk with them.</p><p>Inseung Kang, who holds a B.S., M.S., and Ph.D. from Georgia Tech, is the paper’s lead author and now an assistant professor of mechanical engineering at Carnegie Mellon University. He explained that the real promise is in what comes next.&nbsp;<br><br>“We’ve developed a system that can adjust to a person’s walking style in just minutes. But the potential is even greater. Imagine an exoskeleton that keeps learning with you over your lifetime, adjusting as your body and mobility change. Think of it as a robot companion that understands how you walk and gives you the right assistance every step of the way.”</p><p><em>Aaron Young is affiliated with Georgia Tech’s&nbsp;</em><a href="https://research.gatech.edu/robotics"><em>Institute for Robotics and Intelligent Machines</em></a>.</p><p><em>This research was primarily funded by a grant (DP2HD111709-01)&nbsp;from the National Institutes of Health New Innovator Award Program.</em></p>]]></body>  <author>mazriel3</author>  <status>1</status>  <created>1758209214</created>  <gmt_created>2025-09-18 15:26:54</gmt_created>  <changed>1758726539</changed>  <gmt_changed>2025-09-24 15:08:59</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech's AI-fueled exoskeleton adapts to every step, helping patients relearn to walk with less effort and more confidence.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech's AI-fueled exoskeleton adapts to every step, helping patients relearn to walk with less effort and more confidence.]]></sentence>  <summary><![CDATA[<p>Georgia Tech researchers have developed an AI-powered hip exoskeleton that adapts in real time to a stroke survivor’s changing gait, reducing errors by 70% and helping patients walk with greater ease and confidence. Unlike traditional devices that require constant manual tuning, the system learns each person’s unique stride within minutes and continues adjusting as they move. The breakthrough could extend beyond stroke recovery, offering personalized mobility support for people of all ages and conditions.</p>]]></summary>  <dateline>2025-09-18T00:00:00-04:00</dateline>  <iso_dateline>2025-09-18T00:00:00-04:00</iso_dateline>  <gmt_dateline>2025-09-18 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[mazriel3@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Michelle Azriel Sr. 
Writer - Editor</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>678071</item>      </media>  <hg_media>          <item>          <nid>678071</nid>          <type>video</type>          <title><![CDATA[The Robotic Breakthrough That Could Help Stroke Survivors Reclaim Their Stride]]></title>          <body><![CDATA[<p>Georgia Tech's AI-fueled exoskeleton adapts to every step, helping patients relearn to walk with less effort and more confidence.Traditional robotic exoskeleton models require extensive manual calibration, but Aaron Young, associate professor in the George W. Woodruff School of Mechanical Engineering, and his team developed AI-driven software that automatically adapts to each user’s gait. By using a neural network, the system continuously monitors and adjusts support with each step, gradually syncing with the wearer’s unique movement. In this study, the team used a hip exoskeleton that delivers torque at the hip joint to help stroke survivors walk more easily.</p>]]></body>                      <youtube_id><![CDATA[RPHz2mU9sBA]]></youtube_id>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>            <vimeo_id><![CDATA[]]></vimeo_id>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>            <video_url><![CDATA[https://youtu.be/RPHz2mU9sBA]]></video_url>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>                    <created>1758208325</created>          <gmt_created>2025-09-18 15:12:05</gmt_created>          <changed>1758208325</changed>          <gmt_changed>2025-09-18 15:12:05</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="66220"><![CDATA[Neuro]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>      </groups>  <categories>          <category tid="138"><![CDATA[Biotechnology, Health, Bioengineering, Genetics]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="138"><![CDATA[Biotechnology, Health, Bioengineering, Genetics]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="194701"><![CDATA[go-resarchnews]]></keyword>          <keyword tid="13169"><![CDATA[autonomous robots]]></keyword>          <keyword tid="98751"><![CDATA[College of Engineering; George W. 
Woodruff School of Mechanical Engineering]]></keyword>          <keyword tid="172970"><![CDATA[go-neuro]]></keyword>      </keywords>  <core_research_areas>          <term tid="39441"><![CDATA[Bioengineering and Bioscience]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="685002">  <title><![CDATA[Two IC Faculty Receive NSF CAREER for Robotics and AR/VR Initiatives]]></title>  <uid>36530</uid>  <body><![CDATA[<p>Practice may not make perfect for robots, but new machine learning models from Georgia Tech are allowing them to improve their skillsets to more effectively assist humans in the real world.&nbsp;</p><p><a href="https://faculty.cc.gatech.edu/~danfei/"><strong>Danfei Xu</strong></a>, an assistant professor in <a href="https://ic.gatech.edu/"><strong>Georgia Tech’s School of Interactive Computing</strong></a>, is introducing new models that provide robots with “on-the-job” training.</p><p>The National Science Foundation (NSF) awarded Xu its CAREER award given to early career faculty. The award will enable Xu to expand his research and refine his models, which could accelerate the process of robot deployment and alleviate manufacturers from the burden of achieving perfection.</p><p>“The main problem we’re trying to tackle is how to allow robots to learn on the job,” Xu said. “How should it self-improve based on the performance or the new requirements or new user preferences in each home or working environment? You cannot expect a robot manufacturer to program all of that.</p><p>“The challenging thing about robotics is that the robot must get feedback from the physical environment. It must try to solve a problem to understand the limits of its abilities so it can decide how to improve its own performance.”</p><p>As with humans, Xu views practice as the most effective way for a robot to improve a skill. His models train the robot to identify the point at which it failed in its task performance.</p><p>“It identifies that skill and sets up an environment where it can practice,” he said. “If it needs to improve opening a drawer, it will navigate itself to the drawer and practice opening it.”</p><p>The models allow the robot to split tasks into smaller parts and evaluate its own skill level using reward functions. Cooking dinner, for example, can be divided into steps like turning on the stove and opening the fridge, which are necessary to achieve the overall goal.</p><p>“Planning is a complex problem because you must predict what’s going to happen in the physical world,” Xu said. “We use machine learning techniques that our group has developed over the past two years, using generated models to generate positive futures. They’re very good at modeling long-horizon phenomena.</p><p>“The robot knows when it’s failed because there’s a value that tells it how well it performed the task and whether it received its reward. While we don’t know how to tell the robot why it failed, we have ways for it to improve its skills based on that measurement.”&nbsp;</p><p>One of the biggest barriers that keeps many robots from being made available for public use is the pressure on manufacturers to make the robot as close to perfect as possible at deployment. 
Xu said it’s more practical to accept that robots will have learning gaps that need to be filled and to implement more efficient real-world learning models.</p><p>“We work under the pressure of getting everything correct before deployment,” he said. “We need to meet the basic safety requirements, but in terms of competence, it is difficult to get that perfect at deployment. This takes some of the pressure off because it will be able to self-adapt.”</p><h4><strong>Virtual Workspace for Data Workers</strong></h4><p><a href="https://ivi.cc.gatech.edu/people.html"><strong>Yalong Yang</strong></a>, another assistant professor in the School of IC, also received the NSF CAREER Award for a research proposal that will design augmented and virtual reality (AR/VR) workspaces for data workers.&nbsp;</p><p>“In 10 years, I envision everyone will use AR/VR in their office, and it will replace their laptop or their monitor,” Yang said.</p><p>Yang said he is also working with Google on the project and using Google Gemini to bring conventional applications to immersive space, with data tools being the most complicated systems to re-design for immersive environments.</p><p>The immersive workspace and interface will also enable teams of data workers to collaborate and share their data in real-time.</p><p>“I want to support the end-to-end process,” Yang said. “We have visualization tools for data, but it’s not enough. Data science is a pipeline — from collecting data to processing, visualizing, modeling and then communicating. If you only support one, people will need to switch to other platforms for the other steps.”</p><p>Yang also noted that prior research has shown that VR can enhance cognitive abilities, such as memory and attention and support multitasking. The results of his project could lead to maximizing worker efficiency without them feeling strained.</p><p>“We all have a cognitive limit in our working memory. Using AR/VR can increase those limits and process more information. We can expand people’s spatial ability to help them build a better mental model of the data presented to them.”</p><p>Yang was also recently named a <a href="https://www.cc.gatech.edu/news/tiktok-photoshop-generative-ai-could-bring-millions-apps-3d-reality"><strong>2025 Google Research Scholar</strong></a> as he seeks to build a new artificial intelligence (AI) tool that converts mobile apps into 3D immersive environments.</p>]]></body>  <author>Nathan Deen</author>  <status>1</status>  <created>1758133463</created>  <gmt_created>2025-09-17 18:24:23</gmt_created>  <changed>1758133731</changed>  <gmt_changed>2025-09-17 18:28:51</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Two Georgia Tech professors, Danfei Xu and Yalong Yang, have received the prestigious NSF CAREER award for their research in robotics, which focuses on teaching robots to self-improve, and in augmented and virtual reality (AR/VR), which aims to create imm]]></teaser>  <type>news</type>  <sentence><![CDATA[Two Georgia Tech professors, Danfei Xu and Yalong Yang, have received the prestigious NSF CAREER award for their research in robotics, which focuses on teaching robots to self-improve, and in augmented and virtual reality (AR/VR), which aims to create imm]]></sentence>  <summary><![CDATA[<p>Two assistant professors in Georgia Tech’s School of Interactive Computing — Danfei Xu and Yalong Yang — have each won NSF CAREER Awards for their respective research in robotics and AR/VR initiatives. 
Xu’s work will develop machine learning models that let robots learn “on the job,” adapting from feedback and failure in real-world environments rather than being perfectly preprogrammed. Yang’s project aims to build immersive AR/VR workspaces to support data workers across the full data pipeline, including a collaboration with Google to bring conventional apps into immersive environments.</p>]]></summary>  <dateline>2025-09-17T00:00:00-04:00</dateline>  <iso_dateline>2025-09-17T00:00:00-04:00</iso_dateline>  <gmt_dateline>2025-09-17 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>678055</item>      </media>  <hg_media>          <item>          <nid>678055</nid>          <type>image</type>          <title><![CDATA[ICRA-2025_86A9079-Enhanced-NR.jpg]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[ICRA-2025_86A9079-Enhanced-NR.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/09/17/ICRA-2025_86A9079-Enhanced-NR.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/09/17/ICRA-2025_86A9079-Enhanced-NR.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/09/17/ICRA-2025_86A9079-Enhanced-NR.jpg?itok=Wz_zxhQx]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Danfei Xu]]></image_alt>                    <created>1758133475</created>          <gmt_created>2025-09-17 18:24:35</gmt_created>          <changed>1758133475</changed>          <gmt_changed>2025-09-17 18:24:35</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="191934"><![CDATA[National Science Foundation (NSF)]]></keyword>          <keyword tid="7842"><![CDATA[NSF CAREER Award]]></keyword>          <keyword tid="188776"><![CDATA[go-research]]></keyword>          <keyword tid="9153"><![CDATA[Research Horizons]]></keyword>          <keyword tid="145251"><![CDATA[virtual reality]]></keyword>          <keyword tid="1597"><![CDATA[Augmented Reality]]></keyword>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="684700">  <title><![CDATA[Georgia Tech Team Designing Robot Guide Dog to Assist the Visually Impaired]]></title>  <uid>32045</uid>  <body><![CDATA[<p>People who are visually impaired and cannot afford or 
care for service animals might have a practical alternative in a robotic guide dog being developed at Georgia Tech.</p><p>Before launching its prototype, a research team within Georgia Tech’s School of Interactive Computing, led by Professor <strong>Bruce Walker</strong> and Assistant Professor <strong>Sehoon Ha</strong>, is working to improve its methods and designs based on research within blind and visually impaired (BVI) communities.</p><p>“There’s been research on the technical aspects and functionality of robotic guide dogs, but not a lot of emphasis on the aesthetics or form factors,” said <strong>Avery</strong> <strong>Gong</strong>, a recent master’s graduate who worked in Walker’s lab. “We wanted to fill this gap.”</p><p>Training a guide dog can cost up to $50,000, and while there are nonprofit organizations that can cover these costs for potential owners, there is still a gap between the amount of available guide dogs and BVI individuals who need them. Not all BVI individuals are able to care for a dog and feed it. The dog also has fewer than 10 working years before it needs replacement.</p><p>Gong co-authored a paper on the design implications of the robotic guide dog that was presented at the 2025 International Conference on Robotics and Automation (ICRA) in Atlanta in May.</p><p>The consensus among the study’s participants indicates they prefer a robotic guide dog that:</p><ul><li>resembles a real dog and appears approachable</li><li>has a clear identifier of being a guide dog, such as a vest</li><li>has built-in GPS and Bluetooth connectivity</li><li>has control options such as voice command</li><li>has soft textures without feeling furry</li><li>has long battery life and self-charging capability</li></ul><p>“A lot of people said they didn’t want the dog to look too cute or appealing because it would draw too much attention,” said <strong>Aviv Cohav</strong>, another lead author of the paper and recent master’s graduate.</p><p>“Many people have issues with taking their guide dog to places, whether it’s little kids wanting to play with the dog or people not liking dogs or people being scared of them, and that reflects on the owners themselves. We wanted to look at what would be a good balance between having a functional robot that wouldn’t scare people away or be a distraction.”</p><p>The researchers also had to consider the perspectives of sighted individuals and how society at large might view a robotic guide dog.</p><p>An example of this is the amount of noise the dog makes while walking. The owner needs to hear the dog is active, but the clanky sound many off-the-shelf robots make could create disturbances in indoor spaces that amplify sounds. To offset the noise, the team developed algorithms that allow the robot to move more quietly.</p><p>Walker and his lab have examined similar scenarios that must take public perception into account.</p><p>“We like to think of Georgia Tech as going the extra mile,” Walker said. “Let’s not just make a robot, but a robot that’s going to fit into society.</p><p>“To have impact, the technologies we produce must be produced with society in mind. This is a holistic design that considers the users and all the people with whom the users interact.”</p><p><strong>Taery Kim</strong>, a computer science Ph.D. student, began working on the concept of a robotic guide dog when she came to Georgia Tech in 2022. 
She and Ha, her advisor, have authored papers on building the robot’s navigation and safety components.&nbsp;</p><p>“When I started, I thought it would be as simple as giving the guide dog a command to take me to Starbucks or the grocery store, and it would just take me,” Kim said. “But the user must give waypoint directions — ‘go left here,’ ‘turn right,’ ‘go forward,’ ‘stop.’ Detailed commands must be delivered to the dog.”</p><p>While a real dog has naturally enhanced senses of hearing and smell that can’t be replicated, technology can provide interconnected safety features during an emergency. The researchers envision a camera system equipped with a 360-degree field of view, computer vision algorithms that detect obstacles or hazards, and voice recognition that recognizes calls for help. An SOS function could automatically call 911 at the owner’s request or if the owner is unresponsive.</p><p>Kim said the robot should also have explainability features to enhance communication with the owner. For example, if the robot suddenly stops or ignores an owner’s commands, it should tell the owner that it’s detecting a hazard in their path.</p><p>Manufacturing a robot at scale would initially be expensive, but the researchers believe the cost would eventually be offset because of its longevity. BVI individuals may only need to purchase one during their lifetime.</p><p>To introduce a prototype, the multidisciplinary research team recognizes that it needs to enlist experts from other fields to adequately address the various implications and research gaps inherent in the project.</p><p>Walker said the teams welcome additional partners who are keen to tackle challenges ranging from design and engineering to battery life to human-robot interaction.</p><p>Team member <strong>J. Taery Kim</strong> was supported by the National Science Foundation's Graduate Research Fellowship Program (NSF GRFP) under Grant No. DGE-2039655.</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1757509079</created>  <gmt_created>2025-09-10 12:57:59</gmt_created>  <changed>1758127447</changed>  <gmt_changed>2025-09-17 16:44:07</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Researchers rely on feedback from blind and visually impaired (BVI) communities to create service animal prototype.]]></teaser>  <type>news</type>  <sentence><![CDATA[Researchers rely on feedback from blind and visually impaired (BVI) communities to create service animal prototype.]]></sentence>  <summary><![CDATA[<p>Georgia Tech researchers from the School of Interactive Computing are using survey information from individuals who are blind or visually impaired (BVI) to develop a robotic service dog.</p>]]></summary>  <dateline>2025-09-10T00:00:00-04:00</dateline>  <iso_dateline>2025-09-10T00:00:00-04:00</iso_dateline>  <gmt_dateline>2025-09-10 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Nathan Deen, Communications Officer<br>School of Interactive Computing</p><p>nathan.deen@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>677956</item>          <item>677957</item>      </media>  <hg_media>          <item>          <nid>677956</nid>          <type>image</type>          <title><![CDATA[Georgia Tech researchers test their prototype of a robotic guide dog. 
Photo by Terence Rushin/College of Computing.]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Robotic-Seeing-Eye-Dog_86A0019-Enhanced-NR.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/09/10/Robotic-Seeing-Eye-Dog_86A0019-Enhanced-NR.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/09/10/Robotic-Seeing-Eye-Dog_86A0019-Enhanced-NR.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/09/10/Robotic-Seeing-Eye-Dog_86A0019-Enhanced-NR.jpg?itok=ULOJYgOx]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Georgia Tech researchers test their prototype of a robotic guide dog. Photo by Terence Rushin/College of Computing.]]></image_alt>                    <created>1757509562</created>          <gmt_created>2025-09-10 13:06:02</gmt_created>          <changed>1757509562</changed>          <gmt_changed>2025-09-10 13:06:02</gmt_changed>      </item>          <item>          <nid>677957</nid>          <type>image</type>          <title><![CDATA[A graphic depicts design considerations for the prototype.]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Robotic-Dog-Story-01-20-.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/09/10/Robotic-Dog-Story-01-20-.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/09/10/Robotic-Dog-Story-01-20-.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/09/10/Robotic-Dog-Story-01-20-.jpg?itok=Y-Ee-LqE]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[A graphic depicts design considerations for the prototype.]]></image_alt>                    <created>1757509677</created>          <gmt_created>2025-09-10 13:07:57</gmt_created>          <changed>1757509677</changed>          <gmt_changed>2025-09-10 13:07:57</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://youtu.be/4CzDPxaVWkI?feature=shared]]></url>        <title><![CDATA[VIDEO: Robotic guide dogs could reshape the future for the blind and visually impaired]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="1278"><![CDATA[College of Sciences]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>          <group id="443951"><![CDATA[School of Psychology]]></group>      </groups>  <categories>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="10199"><![CDATA[Daily Digest]]></keyword>          <keyword tid="181991"><![CDATA[Georgia Tech News Center]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>          <keyword tid="188087"><![CDATA[go-irim]]></keyword>          <keyword tid="667"><![CDATA[robotics]]></keyword>          <keyword tid="172970"><![CDATA[go-neuro]]></keyword>      </keywords>  <core_research_areas>          <term tid="193656"><![CDATA[Neuro Next Initiative]]></term>          <term 
tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="683686">  <title><![CDATA[Research Combining Humans, Robots, and Unicycles Receives NSF Award]]></title>  <uid>27863</uid>  <body><![CDATA[<p>Research into tailored assistive and rehabilitative devices has seen recent advancements but the goal remains out of reach due to the sparsity of data on how humans learn complex balance tasks. To address this gap, a collaborating team of interdisciplinary faculty from Florida State University and Georgia Tech have been awarded ~$798,000 by the NSF to launch a study to better understand human motor learning as well as gain greater understanding into human robot interaction dynamics during the learning process.</p><p>&nbsp;Led by PI:&nbsp;<a href="https://rthmlab.wixsite.com/taylorgambon">Taylor Higgins</a>, Assistant Professor, FAMU-FSU Department of Mechanical Engineering, partnering with Co-PIs&nbsp;<a href="https://www.shreyaskousik.com/">Shreyas Kousik</a>, Assistant Professor, Georgia Tech, George W. Woodruff School of Mechanical Engineering, and&nbsp;<a href="https://annescollege.fsu.edu/faculty-staff/dr-brady-decouto">Brady DeCouto,</a> Assistant Professor, FSU&nbsp;Anne Spencer Daves College of Education, Health, and Human Sciences, the research will use the acquisition of unicycle riding skill by participants to gain a better grasp on human motor learning in tasks requiring balance and complex movement in space. Although it might sound a bit odd, the fact that most people don’t know how to ride a unicycle, and the fact that it requires balance, mean that the data will cover the learning process from novice to skilled across the participant pool.</p><p>Using data acquired from human participants, the team will develop a “robotics assistive unicycle” that will be used in the training of the next pool of novice unicycle riders. &nbsp;This is to gauge if, and how rapidly, human motor learning outcomes improve with the assistive unicycle. The participants that engage with the robotic unicycle will also give valuable insight into developing effective human-robot collaboration strategies.</p><p>The fact that deciding to get on a unicycle requires a bit of bravery might not be great for the participants, but it’s great for the research team. The project will also allow exploration into the interconnection between anxiety and human motor learning to discover possible alleviation strategies, thus increasing the likelihood of positive outcomes for future patients and consumers of these devices.</p><p>&nbsp;</p><p>Author<br>-Christa M. 
Ernst</p><p>This Article Refers to NSF Award # 2449160</p>]]></body>  <author>Christa Ernst</author>  <status>1</status>  <created>1754681755</created>  <gmt_created>2025-08-08 19:35:55</gmt_created>  <changed>1755008137</changed>  <gmt_changed>2025-08-12 14:15:37</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Novel research to improve tailored assistive and rehabilitative devices wins NSF Grant]]></teaser>  <type>news</type>  <sentence><![CDATA[Novel research to improve tailored assistive and rehabilitative devices wins NSF Grant]]></sentence>  <summary><![CDATA[<p>A collaborating team of interdisciplinary faculty from Florida State University and Georgia Tech have been awarded ~$798,000 by the NSF to launch a study to better understand human motor learning as well as gain greater understanding into human robot interaction dynamics during the learning process.</p>]]></summary>  <dateline>2025-08-08T00:00:00-04:00</dateline>  <iso_dateline>2025-08-08T00:00:00-04:00</iso_dateline>  <gmt_dateline>2025-08-08 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[Trio from Florida State University and Georgia Tech aim to develop better assistive and rehabilitative technologies and strategies using novel approach.]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[christa.ernst@research.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<div><strong>Christa M. Ernst</strong></div><div>Research Communications Program Manager</div><div>Klaus Advance Computing Building 1120E | 266 Ferst Drive | Atlanta GA | 30332</div><div><strong>Topic Expertise: Robotics | Data Sciences | Semiconductor Design &amp; Fab</strong></div><div>christa.ernst@research.gatech.edu</div>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>677632</item>      </media>  <hg_media>          <item>          <nid>677632</nid>          <type>image</type>          <title><![CDATA[Kousik-NSF-Award-News-Graphic.png]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Kousik-NSF-Award-News-Graphic.png]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/08/08/Kousik-NSF-Award-News-Graphic.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/08/08/Kousik-NSF-Award-News-Graphic.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/08/08/Kousik-NSF-Award-News-Graphic.png?itok=5xmuJ9X7]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Graphic of person using an assistive device thinking about how a robot could hep learn riding a unicycle]]></image_alt>                    <created>1754681767</created>          <gmt_created>2025-08-08 19:36:07</gmt_created>          <changed>1754681767</changed>          <gmt_changed>2025-08-08 19:36:07</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="545781"><![CDATA[Institute for Data Engineering and Science]]></group>          <group id="142761"><![CDATA[IRIM]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>      </groups>  <categories>          <category tid="194606"><![CDATA[Artificial Intelligence]]></category>          <category tid="138"><![CDATA[Biotechnology, Health, Bioengineering, Genetics]]></category>          <category 
tid="145"><![CDATA[Engineering]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="194606"><![CDATA[Artificial Intelligence]]></term>          <term tid="138"><![CDATA[Biotechnology, Health, Bioengineering, Genetics]]></term>          <term tid="145"><![CDATA[Engineering]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="78841"><![CDATA[human-robot interaction]]></keyword>          <keyword tid="5525"><![CDATA[assistive technologies]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>          <keyword tid="187582"><![CDATA[go-ibb]]></keyword>      </keywords>  <core_research_areas>          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>          <term tid="39441"><![CDATA[Bioengineering and Bioscience]]></term>          <term tid="193656"><![CDATA[Neuro Next Initiative]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="682404">  <title><![CDATA[Researchers Say Stress “Sweet Spot” Can Improve Remote Operators' Performance]]></title>  <uid>36530</uid>  <body><![CDATA[<p>Military drone pilots, disaster search and rescue teams, and astronauts stationed on the International Space Station are often required to remotely control robots while maintaining their concentration for hours at a time.</p><p>Georgia Tech roboticists are attempting to identify the most stressful periods that human teleoperators experience while performing tasks remotely. A novel study provides new insights into determining when a teleoperator needs to operate at a high level of focus and which parts of the task can be delegated to robot automation.</p><p>School of Interactive Computing Associate Professor <strong>Matthew</strong> <strong>Gombolay</strong> calls it the “sweet spot” of human ingenuity and robotic precision. Gombolay and students from his <a href="https://core-robotics.gatech.edu/"><strong>CORE Robotics Lab</strong></a>conducted a novel study that measures stress and workload on human teleoperators.</p><p>Gombolay said it can inform military officials on how to strategically implement task automation and maximize human teleoperator performance.</p><p>Humans continue to hand over more tasks to robots to perform, but Gombolay said that some functions will still require human input and oversight for the foreseeable future.</p><p>Specific applications, such as space exploration, commercial and military aviation, disaster relief, and search and rescue, pose substantial safety concerns. Astronauts stationed on the International Space Station, for example, manually control robots that bring in supplies, move cargo, and make structural repairs.</p><p>“It’s brutal from a psychological perspective,” Gombolay said.</p><p>The question often asked about automating a task in these fields is, at what point can a robot be trusted more than a human?</p><p>A recent paper by Gombolay and his current and former students — <strong>Sam</strong> <strong>Yi</strong> <strong>Ting</strong>, <strong>Erin</strong> <strong>Hedlund</strong>-<strong>Botti</strong>, and <strong>Manisha</strong> <strong>Natarajan</strong> — sheds new light on the debate. 
The paper was published in the IEEE Robotics and Automation Letters and will be presented at the International Conference on Robotics and Automation in Atlanta.</p><p>The NASA-funded study can identify which aspects of tedious, time-consuming tasks can be automated and which require human supervision. If roboticists can pinpoint the elements of a task that cause the least stress, they can automate these components and enable humans to oversee the more challenging aspects.</p><p>“If we’re talking about repetitive tasks, robots do better with that, so if you can automate it, you should,” said Ting, a former grad student and lead author of the paper. “I don’t think humans enjoy doing repetitive tasks. We can move toward a better future with automation.”</p><p>Military officials, for example, could measure the stress of remote drone pilots and know which times during a pilot’s shift require the highest level of attention.</p><p>“We can get a sense of how stressed you are and create models of how divided your attention is and the performance rate of the tasks you’re doing,” Gombolay said.</p><p>“It can be a low-stress or high-stress situation depending on the stakes and what’s going on with you personally. Are you well-caffeinated? Well-rested? Is there stress from home you’re bringing with you to the workplace? The goal is to predict how good your task performance will be. If it indicates it might be poor, we may need to outsource work to other people or create a safe space for the operator to destress.”</p><h4><strong>The Stress Test</strong></h4><p>For their study, the researchers cut a small river-shaped path into a medium-density fiberboard. The exercise required the 24 participants to use a remote robotic arm to navigate through the path from one end to the other without touching the edges.</p><p>The experiment grew more challenging as new stress conditions and workload requirements were introduced. The changing conditions required the test participants to multitask to complete the assignment.</p><p>Gombolay said the study supports the Yerkes-Dodson Law, which states that moderate levels of stress increase human performance.</p><p>The experiment showed that operators felt overwhelmed and performed poorly when multitasking was introduced. Too much stress led to poor performance, but a moderate amount of stress induced more engagement and enhanced teleoperator focus.&nbsp;</p><p>Ting said finding that ideal stress zone can lead to a higher performance rating.&nbsp;</p><p>“You would think the more stressed you are, the more your performance decreases,” Ting said. “Most people didn’t react that way. As stress increased, performance increased, but when you increased workload and gave them more to do, that’s when you started seeing deteriorating performance.”</p><p>Gombolay said no stress can be just as detrimental as too much stress. Performing a task without stress tends to cause teleoperators to become disinterested, especially if it is repetitive and time-consuming.</p><p>“No stress led to complacency,” Gombolay said. “They weren’t as engaged in completing the task.</p><p>“If your excitement is too low, you get so bored you can’t muster the cognitive energy to reason about robot operation problems.”</p><h4><strong>The Human Factor</strong></h4><p>Roboticists have made significant leaps in recent years to remove teleoperators from the equation. 
Still, Gombolay said it’s too early to tell whether robots can be trusted with any task that a human can perform.</p><p>“We’re a long way from full autonomy,” he said. “There’s a lot that robots still can’t do without a human operator. Search and rescue operations, if a building collapses, we don’t have much training data for robots to go through rubble by themselves to rescue people. There are ethical needs for humans to be able to supervise or take direct control of robots.”</p>]]></body>  <author>Nathan Deen</author>  <status>1</status>  <created>1747314528</created>  <gmt_created>2025-05-15 13:08:48</gmt_created>  <changed>1752591939</changed>  <gmt_changed>2025-07-15 15:05:39</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech researchers say there's a "sweet spot" of stress that can enhance performance of remote robot operators such as drone pilots and astronauts.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech researchers say there's a "sweet spot" of stress that can enhance performance of remote robot operators such as drone pilots and astronauts.]]></sentence>  <summary><![CDATA[<p>Researchers at Georgia Tech are exploring the relationship between stress levels and the performance of remote robot operators. They found a moderate level of of stress can enhance performance and keep operators engaged and focused.&nbsp;</p>]]></summary>  <dateline>2025-05-13T00:00:00-04:00</dateline>  <iso_dateline>2025-05-13T00:00:00-04:00</iso_dateline>  <gmt_dateline>2025-05-13 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>      </media>  <hg_media>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="147"><![CDATA[Military Technology]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>          <category tid="8862"><![CDATA[Student Research]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="147"><![CDATA[Military Technology]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>          <term tid="8862"><![CDATA[Student Research]]></term>      </news_terms>  <keywords>      </keywords>  <core_research_areas>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="682890">  <title><![CDATA[Tech Researchers Tabbed to Build AI Systems for Medical Robots in South Korea]]></title>  <uid>36530</uid>  <body><![CDATA[<p>Overwhelmed doctors and nurses struggling to provide adequate patient care in South Korea are getting support from Georgia Tech and Korean-based researchers through an AI-powered robotic medical 
assistant.</p><p>Top South Korean research institutes have enlisted Georgia Tech researchers <strong>Sehoon</strong> <strong>Ha</strong> and <strong>Jennifer G.</strong> <strong>Kim</strong> to develop artificial intelligence (AI) to help the humanoid assistant navigate hospitals and interact with doctors, nurses, and patients.</p><p>Ha and Kim will partner with Neuromeka, a South Korean robotics company, on a five-year, 10 billion won (about $7.2 million US) grant from the South Korean government. Georgia Tech will receive about $1.8 million of the grant.</p><p>Ha and Kim, assistant professors in the School of Interactive Computing, will lead Tech’s efforts and also work with researchers from the Korea Advanced Institute of Science and Technology and the Electronics and Telecommunications Research Institute.</p><p>Neuromeka has built industrial robots since its founding in 2013 and recently decided to expand into humanoid service robots.</p><p>Joonho Lee, the group leader of the humanoid medical assistant project, said he fielded partnership requests from many academic researchers. Ha and Kim stood out as an ideal match because of their robotics, AI, and human-computer interaction expertise.&nbsp;</p><p>For Ha, the project is an opportunity to test navigation and control algorithms he’s developed through research that earned him the National Science Foundation CAREER Award. Ha combines computer simulation and real-world training data to make robots more deployable in high-stress, chaotic environments.&nbsp;</p><p>“Dr. Ha has everything we want to put into our system, including his navigation policies,” Lee said. “He works with robots and AI, and there weren’t many candidates in that space. We needed a collaborator who can create the software and has experience running it on robots.”</p><p>Ha said he is already considering how his algorithms could scale beyond hospitals and become a universal means of robot navigation in unstructured real-world environments.</p><p>“For now, we’re focusing on a customized navigation model for Korean environments, but there are ways to transfer the data set to different environments, such as the U.S. or European healthcare systems,” Ha said.&nbsp;</p><p>“The final product can be deployed to other systems and industries. It can help industrial workers at factories, retail stores, any place where workers can get overwhelmed by a high volume of tasks.”</p><p>Kim will focus on making the robot’s design and interaction features more human. She’ll develop a large-language model (LLM) AI system to communicate with patients, nurses, and doctors. She’ll also develop an app that will allow users to input their commands and queries.&nbsp;</p><p>“This project is not just about controlling robots, which is why Dr. Kim’s expertise in human-computer interaction design through natural language was essential,” Lee said.&nbsp;</p><p>Kim is interviewing stakeholders from three South Korean hospitals to identify service and care pain points. The issues she’s identified so far relate to doctor-patient communication, a lack of emotional support for patients, and an excessive number of small tasks that consume nurses’ time.</p><p>“Our goal is to develop this robot in a very human-centered way,” she said. “One way is to give patients a way to communicate about the quality of their care and how the robot can support their emotional well-being.</p><p>“We found that patients often hesitate to ask busy nurses for small things like getting a cup of water. 
We believe this is an area a robot can support.”</p><p>The robot’s hardware will be built in Korea, while Ha and Kim will develop the software in the U.S.</p><p>Jong-hoon Park, CEO of Neuromeka, said in a press release the goal is to have a commercialized product as soon as possible.&nbsp;</p><p>“Through this project, we will solve problems that existing collaborative robots could not,” Park said. “We expect the medical AI humanoid robot technology being developed will contribute to reducing the daily work burden of medical and healthcare workers in the field.”</p>]]></body>  <author>Nathan Deen</author>  <status>1</status>  <created>1750880997</created>  <gmt_created>2025-06-25 19:49:57</gmt_created>  <changed>1750881315</changed>  <gmt_changed>2025-06-25 19:55:15</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech researchers are collaborating with South Korean research institutes on a five-year grant to develop an AI-powered humanoid medical assistant to help doctors and nurses in South Korea.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech researchers are collaborating with South Korean research institutes on a five-year grant to develop an AI-powered humanoid medical assistant to help doctors and nurses in South Korea.]]></sentence>  <summary><![CDATA[<p>Georgia Tech researchers Sehoon Ha and Jennifer Kim are working with South Korean institutions to create an AI-powered medical assistant robot. This five-year project, funded by a $7.2 million grant from the South Korean government, aims to alleviate the workload of healthcare professionals in South Korea by enabling the robot to navigate hospitals and interact with staff and patients.&nbsp;</p>]]></summary>  <dateline>2025-06-25T00:00:00-04:00</dateline>  <iso_dateline>2025-06-25T00:00:00-04:00</iso_dateline>  <gmt_dateline>2025-06-25 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>677282</item>      </media>  <hg_media>          <item>          <nid>677282</nid>          <type>image</type>          <title><![CDATA[IMG_4499-copy.jpg]]></title>          <body><![CDATA[<p><em>School of Interactive Computing Assistant Professor Sehoon Ha, Neuromeka researchers Joonho Lee and Yunho Kim, School of IC Assistant Professor Jennifer Kim, and Electronics and Telecommunications Research Institute researcher Dongyeop Kang, are collaborating to develop a medical assistant robot to support doctors and nurses in Korea. 
Photo by Nathan Deen/College of Computing.</em></p>]]></body>                      <image_name><![CDATA[IMG_4499-copy.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/06/25/IMG_4499-copy.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/06/25/IMG_4499-copy.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/06/25/IMG_4499-copy.jpg?itok=5VPD5dev]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Researchers]]></image_alt>                    <created>1750881009</created>          <gmt_created>2025-06-25 19:50:09</gmt_created>          <changed>1750881009</changed>          <gmt_changed>2025-06-25 19:50:09</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="194606"><![CDATA[Artificial Intelligence]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="194606"><![CDATA[Artificial Intelligence]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="192863"><![CDATA[go-ai]]></keyword>          <keyword tid="187812"><![CDATA[artificial intelligence (AI)]]></keyword>          <keyword tid="9153"><![CDATA[Research Horizons]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>          <keyword tid="78681"><![CDATA[medical robotics]]></keyword>          <keyword tid="194391"><![CDATA[AI in Healthcare]]></keyword>      </keywords>  <core_research_areas>          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="682761">  <title><![CDATA[Georgia Tech Team Takes Second Place at ICRA Robot Teleoperation Contest]]></title>  <uid>36530</uid>  <body><![CDATA[<p>An algorithmic breakthrough from School of Interactive Computing researchers that&nbsp;<a href="https://www.cc.gatech.edu/news/new-algorithm-teaches-robots-through-human-perspective"><strong>earned a Meta partnership</strong></a>drew more attention at the IEEE International Conference on Robotics and Automation (ICRA).</p><p>Meta announced in February its partnership with the labs of professors&nbsp;<a href="https://faculty.cc.gatech.edu/~danfei/"><strong>Danfei Xu</strong></a> and&nbsp;<a href="https://faculty.cc.gatech.edu/~judy/"><strong>Judy Hoffman</strong></a> on a novel computer vision-based algorithm called EgoMimic. 
It enables robots to learn new skills by imitating human tasks from first-person video footage captured by Meta’s Aria smart glasses.&nbsp;</p><p>Xu’s&nbsp;<a href="https://rl2.cc.gatech.edu/"><strong>Robot Learning and Reasoning Lab (RL2)</strong></a> displayed EgoMimic in action at ICRA May 19-23 at the World Congress Center in Atlanta.</p><p>Lawrence Zhu, Pranav Kuppili, and Patcharapong “Elmo” Aphiwetsa — students from Xu’s lab — used EgoMimic to compete in a robot teleoperation contest at ICRA. The team finished second in the event titled What Bimanual Teleoperation and Learning from Demonstration Can Do Today, earning a $10,000 cash prize.</p><p>Teams were challenged to perform tasks by remotely controlling a robot gripper. The robot had to fold a tablecloth, open a vacuum-sealed container, place an object into the container, and then reseal it in succession without any errors.</p><p>Teams completed the tasks as many times as possible in 30 minutes, earning points for each successful attempt.</p><p>The competition also offered different challenge levels that increased the points awarded. Teams could directly operate the robot with a full workstation view and receive one point for each task completion. Or, as the RL2 team chose, teams could opt for the second challenge level.</p><p>The second level required an operator to control the task with no view of the workstation except for what was provided through a video feed. The RL2 team completed the task seven times and received double points for the challenge level.</p><p>The third challenge level required teams to operate remotely from another location. At this level, teams could earn four times the number of points for each successful task completed. The fourth level challenged teams to deploy an algorithm for task performance and awarded eight points for each completion.</p><p>Using two of Meta’s Quest wireless controllers, Zhu controlled the robot under the direction of Aphiwetsa, while Kuppili monitored the coding from his laptop.</p><p>“It’s physically difficult to teleoperate for half an hour,” Zhu said. “My hands were shaking from holding the controllers in the air for that long.”</p><p>Being in constant communication with Aphiwetsa helped him stay focused throughout the contest.</p><p>“I helped him strategize the teleoperation and noticed he could skip some of the steps in the folding,” Aphiwetsa said. “There were many ways to do it, so I just told him what he could fix and how to do it faster.”</p><p>Zhu said he and his team had intended to tackle the fourth challenge level with the EgoMimic algorithm. However, due to unexpected time constraints, they decided to switch to the second level the day before the competition.&nbsp;</p><p>“I think we realized the day before the competition that training the robot on our model would take a huge amount of time,” Zhu said. 
“We decided to go for the teleoperation and started practicing.”</p><p>He said the team wants to tackle the highest challenge level and use a training model for next year’s ICRA competition in Vienna, Austria.</p><p>ICRA is the world’s largest robotics conference, and&nbsp;<a href="https://www.cc.gatech.edu/news/georgia-tech-leads-robotics-world-converges-atlanta-icra-2025"><strong>Atlanta hosted the event</strong></a> for the third time in its history, drawing a record-breaking attendance of over 7,000.</p>]]></body>  <author>Nathan Deen</author>  <status>1</status>  <created>1749655482</created>  <gmt_created>2025-06-11 15:24:42</gmt_created>  <changed>1749729176</changed>  <gmt_changed>2025-06-12 11:52:56</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A Georgia Tech team earned second place in the ICRA Robot Teleoperation Contest for their EgoMimic algorithm, which allows robots to learn skills by mimicking human tasks from first-person video.]]></teaser>  <type>news</type>  <sentence><![CDATA[A Georgia Tech team earned second place in the ICRA Robot Teleoperation Contest for their EgoMimic algorithm, which allows robots to learn skills by mimicking human tasks from first-person video.]]></sentence>  <summary><![CDATA[<p>Students from Georgia Tech's Robot Learning and Reasoning Lab earned second place and a $10,000 cash prize in a robot teleoperation contest at the 2025 International Conference on Robotics and Automation in Atlanta. The RL2 lab announced a partnership with Meta in February on a novel computer vision-based algorithm called EgoMimic. It enables robots to learn new skills by imitating human tasks from first-person video footage captured by Meta’s Aria smart glasses.</p>]]></summary>  <dateline>2025-06-11T00:00:00-04:00</dateline>  <iso_dateline>2025-06-11T00:00:00-04:00</iso_dateline>  <gmt_dateline>2025-06-11 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>677223</item>      </media>  <hg_media>          <item>          <nid>677223</nid>          <type>image</type>          <title><![CDATA[IMG_4291-2-copy.jpg]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[IMG_4291-2-copy.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/06/12/IMG_4291-2-copy.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/06/12/IMG_4291-2-copy.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/06/12/IMG_4291-2-copy.jpg?itok=f261J8gE]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[ICRA]]></image_alt>                    <created>1749729142</created>          <gmt_created>2025-06-12 11:52:22</gmt_created>          <changed>1749729142</changed>          <gmt_changed>2025-06-12 11:52:22</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer 
Science/Information Technology and Security]]></category>          <category tid="152"><![CDATA[Robotics]]></category>          <category tid="193158"><![CDATA[Student Competition Winners (academic, innovation, and research)]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="152"><![CDATA[Robotics]]></term>          <term tid="193158"><![CDATA[Student Competition Winners (academic, innovation, and research)]]></term>      </news_terms>  <keywords>          <keyword tid="181920"><![CDATA[cc-research; ic-ai-ml; ic-robotics]]></keyword>          <keyword tid="187812"><![CDATA[artificial intelligence (AI)]]></keyword>          <keyword tid="192863"><![CDATA[go-ai]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>          <keyword tid="9153"><![CDATA[Research Horizons]]></keyword>          <keyword tid="167585"><![CDATA[student competition]]></keyword>      </keywords>  <core_research_areas>          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="682424">  <title><![CDATA[Rule the Pool This Summer and Make the Biggest Splash]]></title>  <uid>27560</uid>  <body><![CDATA[<p>Want to create the biggest splash in the pool this summer? Forget the bellyflop and the cannonball.&nbsp;</p><p>“Popping the Manu” will make you a winner.&nbsp;</p><p>Georgia Tech researchers studied dives by the Māori, the indigenous people of New Zealand, who have made Manu jumping a cultural tradition. By hitting the water in a “V” shape, then quickly extending their bodies underwater, they’ve perfected the art of huge splashes.&nbsp;</p><p>See a video on how to make the splash and <a href="https://coe.gatech.edu/news/2025/05/rule-pool-summer-and-make-biggest-splash">read the entire story on the College of Engineering homepage</a>.&nbsp;</p>]]></body>  <author>Jason Maderer</author>  <status>1</status>  <created>1747410093</created>  <gmt_created>2025-05-16 15:41:33</gmt_created>  <changed>1747413461</changed>  <gmt_changed>2025-05-16 16:37:41</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[By hitting the water in a “V” shape, then quickly extending their bodies underwater, the Māori have perfected the art of huge splashes. ]]></teaser>  <type>news</type>  <sentence><![CDATA[By hitting the water in a “V” shape, then quickly extending their bodies underwater, the Māori have perfected the art of huge splashes. ]]></sentence>  <summary><![CDATA[<p>Georgia Tech researchers studied dives by the Māori, the indigenous people of New Zealand, who have made Manu jumping a cultural tradition. 
By hitting the water in a “V” shape, then quickly extending their bodies underwater, they’ve perfected the art of huge splashes.&nbsp;</p>]]></summary>  <dateline>2025-05-16T00:00:00-04:00</dateline>  <iso_dateline>2025-05-16T00:00:00-04:00</iso_dateline>  <gmt_dateline>2025-05-16 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[Georgia Tech roboticists explain the physics of epic pool jumps and the New Zealanders who have mastered them]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[maderer@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Jason Maderer<br>College of Engineering<br>maderer@gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>677084</item>      </media>  <hg_media>          <item>          <nid>677084</nid>          <type>video</type>          <title><![CDATA[Make a Big Splash in the Pool]]></title>          <body><![CDATA[<p>Georgia Tech researchers learned the physics of epic pool jumps and the New Zealanders who have mastered them.</p>]]></body>                      <youtube_id><![CDATA[POda_NwypSM]]></youtube_id>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>            <vimeo_id><![CDATA[]]></vimeo_id>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>            <video_url><![CDATA[https://www.youtube.com/watch?v=POda_NwypSM]]></video_url>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>                    <created>1747412201</created>          <gmt_created>2025-05-16 16:16:41</gmt_created>          <changed>1747412201</changed>          <gmt_changed>2025-05-16 16:16:41</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1237"><![CDATA[College of Engineering]]></group>      </groups>  <categories>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>          <keyword tid="188776"><![CDATA[go-research]]></keyword>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="681961">  <title><![CDATA[Thesis on Human-Centered AI Earns Honors from International Computing Organization]]></title>  <uid>36319</uid>  <body><![CDATA[<p>A Georgia Tech alum’s dissertation introduced ways to make artificial intelligence (AI) more accessible, interpretable, and accountable. Although it’s been a year since his doctoral defense,&nbsp;<a href="https://zijie.wang/"><strong>Zijie (Jay) Wang</strong></a>’s (Ph.D. ML-CSE 2024) work continues to resonate with researchers.</p><p>Wang is a recipient of the&nbsp;<a href="https://medium.com/sigchi/announcing-the-2025-acm-sigchi-awards-17c1feaf865f"><strong>2025 Outstanding Dissertation Award from the Association for Computing Machinery Special Interest Group on Computer-Human Interaction (ACM SIGCHI)</strong></a>. 
The award recognizes Wang for his work on democratizing human-centered AI.</p><p>“Throughout my Ph.D. and industry internships, I observed a gap in existing research: there is a strong need for practical tools for applying human-centered approaches when designing AI systems,” said Wang, now a safety researcher at OpenAI.</p><p>“My work not only helps people understand AI and guide its behavior but also provides user-friendly tools that fit into existing workflows.”</p><p>[Related: <a href="https://sites.gatech.edu/research/chi-2025/">Georgia Tech College of Computing Swarms to Yokohama, Japan, for CHI 2025</a>]</p><p>Wang’s dissertation presented techniques in visual explanation and interactive guidance to align AI models with user knowledge and values. The work was the culmination of years of research, fellowship support, and internships.</p><p>Wang’s most influential projects formed the core of his dissertation. These included:</p><ul><li><a href="https://poloclub.github.io/cnn-explainer/"><strong>CNN Explainer</strong></a>: an open-source tool developed for deep-learning beginners. Since its release in July 2020, more than 436,000 global visitors have used the tool.</li><li><a href="https://poloclub.github.io/diffusiondb/"><strong>DiffusionDB</strong></a>: a first-of-its-kind large-scale dataset that lays a foundation to help people better understand generative AI. This work could lead to new research in detecting deepfakes and designing human-AI interaction tools that help people more easily use these models.</li><li><a href="https://interpret.ml/gam-changer/"><strong>GAM Changer</strong></a>: an interface that empowers users in healthcare, finance, or other domains to edit ML models to include knowledge and values specific to their domain, which improves reliability.</li><li><a href="https://www.jennwv.com/papers/gamcoach.pdf"><strong>GAM Coach</strong></a>: an interactive ML tool that could help people who have been rejected for a loan by automatically letting them know what is needed to receive approval.</li><li><a href="https://www.cc.gatech.edu/news/new-tool-teaches-responsible-ai-practices-when-using-large-language-models"><strong>Farsight</strong></a>: a tool that alerts developers when the prompts they write for large language models could be harmful or misused.</li></ul><p>“I feel extremely honored and lucky to receive this award, and I am deeply grateful to many who have supported me along the way, including Polo, mentors, collaborators, and friends,” said Wang, who was advised by School of Computational Science and Engineering (CSE) Professor&nbsp;<a href="https://poloclub.github.io/polochau/"><strong>Polo Chau</strong></a>.</p><p>“This recognition also inspired me to continue striving to design and develop easy-to-use tools that help everyone to easily interact with AI systems.”</p><p>Wang is not the first of Chau’s advisees to earn the honor. Chau also advised Georgia Tech alumnus&nbsp;<a href="https://fredhohman.com/">Fred Hohman</a> (Ph.D. CSE 2020).&nbsp;<a href="https://www.cc.gatech.edu/news/alumnus-building-legacy-through-dissertation-and-mentorship">Hohman won the ACM SIGCHI Outstanding Dissertation Award in 2022</a>.</p><p><a href="https://poloclub.github.io/">Chau’s group</a> synthesizes machine learning (ML) and visualization techniques into scalable, interactive, and trustworthy tools. These tools increase understanding of and interaction with large-scale data and ML models.&nbsp;</p><p>Chau is the associate director of corporate relations for the Machine Learning Center at Georgia Tech.
Wang called the School of CSE his home unit while a student in the ML program under Chau.</p><p>Wang is one of five recipients of this year’s award to be presented at the 2025 Conference on Human Factors in Computing Systems (<a href="https://chi2025.acm.org/">CHI 2025</a>). The conference occurs April 25-May 1 in Yokohama, Japan.&nbsp;</p><p>SIGCHI is the world’s largest association of human-computer interaction professionals and practitioners. The group sponsors or co-sponsors 26 conferences, including CHI.</p><p>Wang’s outstanding dissertation award is the latest recognition of a career decorated with achievement.</p><p>Months after graduating from Georgia Tech,&nbsp;<a href="https://www.cc.gatech.edu/news/research-ai-safety-lands-recent-graduate-forbes-30-under-30">Forbes named Wang to its 30 Under 30 in Science for 2025</a> for his dissertation. Wang was one of 15 Yellow Jackets included in nine different 30 Under 30 lists and the only Georgia Tech-affiliated individual on the 30 Under 30 in Science list.</p><p>While a Georgia Tech student, Wang earned recognition from big names in business and technology. He received the&nbsp;<a href="https://www.cc.gatech.edu/news/student-named-apple-scholar-connecting-people-machine-learning">Apple Scholars in AI/ML Ph.D. Fellowship in 2023</a> and was in the&nbsp;<a href="https://www.cc.gatech.edu/news/georgia-tech-machine-learning-students-earn-jp-morgan-ai-phd-fellowships">2022 cohort of the J.P. Morgan AI Ph.D. Fellowships Program</a>.</p><p>Along with the CHI award, Wang’s dissertation earned him awards this year at banquets across campus. The&nbsp;<a href="https://bpb-us-e1.wpmucdn.com/sites.gatech.edu/dist/0/283/files/2025/03/2025-Sigma-Xi-Research-Award-Winners.pdf">Georgia Tech chapter of Sigma Xi presented Wang with the Best Ph.D. Thesis Award</a>. He also received the College of Computing’s Outstanding Dissertation Award.</p><p>“Georgia Tech attracts many great minds, and I’m glad that some, like Jay, chose to join our group,” Chau said. “It has been a joy to work alongside them and witness the many wonderful things they have accomplished, and with many more to come in their careers.”</p>]]></body>  <author>Bryant Wine</author>  <status>1</status>  <created>1745331886</created>  <gmt_created>2025-04-22 14:24:46</gmt_created>  <changed>1745332147</changed>  <gmt_changed>2025-04-22 14:29:07</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[ Zijie (Jay) Wang (Ph.D. ML-CSE 2024) is a recipient of the 2025 Outstanding Dissertation Award from the Association for Computing Machinery Special Interest Group on Computer-Human Interaction (ACM SIGCHI).]]></teaser>  <type>news</type>  <sentence><![CDATA[ Zijie (Jay) Wang (Ph.D. ML-CSE 2024) is a recipient of the 2025 Outstanding Dissertation Award from the Association for Computing Machinery Special Interest Group on Computer-Human Interaction (ACM SIGCHI).]]></sentence>  <summary><![CDATA[<p>A Georgia Tech alum’s dissertation introduced ways to make artificial intelligence (AI) more accessible, interpretable, and accountable. Although it’s been a year since his doctoral defense,&nbsp;<a href="https://zijie.wang/"><strong>Zijie (Jay) Wang</strong></a>’s (Ph.D. 
ML-CSE 2024) work continues to resonate with researchers.</p><p>Wang is a recipient of the&nbsp;<a href="https://medium.com/sigchi/announcing-the-2025-acm-sigchi-awards-17c1feaf865f"><strong>2025 Outstanding Dissertation Award from the Association for Computing Machinery Special Interest Group on Computer-Human Interaction (ACM SIGCHI)</strong></a>. The award recognizes Wang for his lifelong work on democratizing human-centered AI.</p>]]></summary>  <dateline>2025-04-17T00:00:00-04:00</dateline>  <iso_dateline>2025-04-17T00:00:00-04:00</iso_dateline>  <gmt_dateline>2025-04-17 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Bryant Wine, Communications Officer<br><a href="mailto:bryant.wine@cc.gatech.edu">bryant.wine@cc.gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>676903</item>          <item>673947</item>      </media>  <hg_media>          <item>          <nid>676903</nid>          <type>image</type>          <title><![CDATA[Jay-Wang-SIGCHI-Dissertation-Award.jpg]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Jay-Wang-SIGCHI-Dissertation-Award.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/04/22/Jay-Wang-SIGCHI-Dissertation-Award.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/04/22/Jay-Wang-SIGCHI-Dissertation-Award.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/04/22/Jay-Wang-SIGCHI-Dissertation-Award.jpg?itok=BwjW7CxH]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Zijie (Jay) Wang CHI 2025]]></image_alt>                    <created>1745331896</created>          <gmt_created>2025-04-22 14:24:56</gmt_created>          <changed>1745331896</changed>          <gmt_changed>2025-04-22 14:24:56</gmt_changed>      </item>          <item>          <nid>673947</nid>          <type>image</type>          <title><![CDATA[Farsight CHI.jpg]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Farsight CHI.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2024/05/05/Farsight%20CHI.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2024/05/05/Farsight%20CHI.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2024/05/05/Farsight%2520CHI.jpg?itok=hWo1VxQt]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[CHI 2024 Farsight]]></image_alt>                    <created>1714954253</created>          <gmt_created>2024-05-06 00:10:53</gmt_created>          <changed>1714954253</changed>          <gmt_changed>2024-05-06 00:10:53</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://www.cc.gatech.edu/news/thesis-human-centered-ai-earns-honors-international-computing-organization]]></url>        <title><![CDATA[Thesis on Human-Centered AI Earns Honors from International Computing Organization]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group 
id="1188"><![CDATA[Research Horizons]]></group>          <group id="50877"><![CDATA[School of Computational Science and Engineering]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="155"><![CDATA[Congressional Testimony]]></category>          <category tid="143"><![CDATA[Digital Media and Entertainment]]></category>          <category tid="131"><![CDATA[Economic Development and Policy]]></category>          <category tid="42911"><![CDATA[Education]]></category>          <category tid="144"><![CDATA[Energy]]></category>          <category tid="145"><![CDATA[Engineering]]></category>          <category tid="154"><![CDATA[Environment]]></category>          <category tid="42921"><![CDATA[Exhibitions]]></category>          <category tid="42891"><![CDATA[Georgia Tech Arts]]></category>          <category tid="179356"><![CDATA[Industrial Design]]></category>          <category tid="129"><![CDATA[Institute and Campus]]></category>          <category tid="132"><![CDATA[Institute Leadership]]></category>          <category tid="194248"><![CDATA[International Education]]></category>          <category tid="146"><![CDATA[Life Sciences and Biology]]></category>          <category tid="147"><![CDATA[Military Technology]]></category>          <category tid="148"><![CDATA[Music and Music Technology]]></category>          <category tid="149"><![CDATA[Nanotechnology and Nanoscience]]></category>          <category tid="42931"><![CDATA[Performances]]></category>          <category tid="150"><![CDATA[Physics and Physical Sciences]]></category>          <category tid="151"><![CDATA[Policy, Social Sciences, and Liberal Arts]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>          <category tid="133"><![CDATA[Special Events and Guest Speakers]]></category>          <category tid="193157"><![CDATA[Student Honors and Achievements]]></category>          <category tid="8862"><![CDATA[Student Research]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="155"><![CDATA[Congressional Testimony]]></term>          <term tid="143"><![CDATA[Digital Media and Entertainment]]></term>          <term tid="131"><![CDATA[Economic Development and Policy]]></term>          <term tid="42911"><![CDATA[Education]]></term>          <term tid="144"><![CDATA[Energy]]></term>          <term tid="145"><![CDATA[Engineering]]></term>          <term tid="154"><![CDATA[Environment]]></term>          <term tid="42921"><![CDATA[Exhibitions]]></term>          <term tid="42891"><![CDATA[Georgia Tech Arts]]></term>          <term tid="179356"><![CDATA[Industrial Design]]></term>          <term tid="129"><![CDATA[Institute and Campus]]></term>          <term tid="132"><![CDATA[Institute Leadership]]></term>          <term tid="194248"><![CDATA[International Education]]></term>          <term tid="146"><![CDATA[Life Sciences and Biology]]></term>          <term tid="147"><![CDATA[Military Technology]]></term>          <term tid="148"><![CDATA[Music and Music Technology]]></term>          <term tid="149"><![CDATA[Nanotechnology and Nanoscience]]></term>          <term tid="42931"><![CDATA[Performances]]></term>          <term tid="150"><![CDATA[Physics and Physical Sciences]]></term>          <term tid="151"><![CDATA[Policy, Social 
Sciences, and Liberal Arts]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>          <term tid="133"><![CDATA[Special Events and Guest Speakers]]></term>          <term tid="193157"><![CDATA[Student Honors and Achievements]]></term>          <term tid="8862"><![CDATA[Student Research]]></term>      </news_terms>  <keywords>          <keyword tid="654"><![CDATA[College of Computing]]></keyword>          <keyword tid="166983"><![CDATA[School of Computational Science and Engineering]]></keyword>          <keyword tid="187812"><![CDATA[artificial intelligence (AI)]]></keyword>          <keyword tid="181991"><![CDATA[Georgia Tech News Center]]></keyword>          <keyword tid="10199"><![CDATA[Daily Digest]]></keyword>          <keyword tid="9153"><![CDATA[Research Horizons]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>          <keyword tid="192863"><![CDATA[go-ai]]></keyword>      </keywords>  <core_research_areas>          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>          <term tid="39431"><![CDATA[Data Engineering and Science]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="680875">  <title><![CDATA[Securing Tomorrow’s Autonomous Robots Today]]></title>  <uid>32045</uid>  <body><![CDATA[<p>Every year, people in California risk their lives battling wildfires, but in the future, machines powered by artificial intelligence will be on the front lines, not firefighters.</p><p>However, this new generation of self-thinking robots will need security protocols to ensure they aren’t susceptible to hackers. To integrate such robots into society, they must come with assurances that they will behave safely around humans.</p><p>It begs the question: can you guarantee the safety of something that doesn’t exist yet? It’s something Assistant Professor <a href="https://glenchou.github.io/"><strong>Glen Chou</strong></a> hopes to accomplish by developing algorithms that will enable autonomous systems to learn and adapt while acting with safety and security assurances.</p><p>He plans to launch research initiatives, in collaboration with the <a href="https://scp.cc.gatech.edu/"><strong>School of Cybersecurity and Privacy</strong></a> and the <a href="https://ae.gatech.edu/"><strong>Daniel Guggenheim School of Aerospace Engineering</strong></a>, to secure this new technological frontier as it develops.</p><p>“To operate in uncertain real-world environments, robots and other autonomous systems need to leverage and adapt a complex network of perception and control algorithms to turn sensor data into actions,” he said. “To obtain realistic assurances, we must do a joint safety and security analysis on these sensors and algorithms simultaneously, rather than one at a time.”</p><p>This end-to-end method would proactively look for flaws in the robot’s systems rather than wait for them to be exploited. This would lead to intrinsically robust robotic systems that can recover from failures.</p><p><a href="https://www.cc.gatech.edu/news/new-algorithm-teaches-robots-through-human-perspective">[RELATED: New Algorithm Teaches Robots Through Human Perspective]</a></p><p>Chou said this research will be helpful in other domains, including advanced space exploration. 
If a space rover is sent to one of Saturn’s moons, for example, it needs to be able to act and think independently of scientists on Earth.&nbsp;</p><p>Aside from fighting fires and exploring space, this technology could perform maintenance in nuclear reactors, automatically maintain the power grid, and make autonomous surgery safer. It could also bring assistive robots into the home, enabling higher standards of care.&nbsp;</p><p>This is a challenging domain where safety, security, and privacy concerns are paramount due to frequent, close contact with humans.</p><p>This will start in the newly established <a href="https://trustworthyrobotics.github.io/"><strong>Trustworthy Robotics Lab</strong></a> at Georgia Tech, which Chou directs. He and his Ph.D. students will design principled algorithms that enable general-purpose robots and autonomous systems to operate capably, safely, and securely with humans while remaining resilient to real-world failures and uncertainty.</p><p>Chou earned dual bachelor’s degrees in electrical engineering and computer sciences as well as mechanical engineering from the University of California, Berkeley, in 2017, and a master’s and a Ph.D. in electrical and computer engineering from the University of Michigan in 2019 and 2022, respectively.&nbsp;</p><p>He was a postdoc at the Massachusetts Institute of Technology Computer Science &amp; Artificial Intelligence Laboratory before joining Georgia Tech in November 2024. He received a National Defense Science and Engineering Graduate Fellowship and an NSF Graduate Research Fellowship, and he was named a Robotics: Science and Systems Pioneer in 2022.</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1741107318</created>  <gmt_created>2025-03-04 16:55:18</gmt_created>  <changed>1742951908</changed>  <gmt_changed>2025-03-26 01:18:28</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[The Trustworthy Robotics Lab enables robots and autonomous systems to operate safely with humans while remaining resilient to real-world challenges.]]></teaser>  <type>news</type>  <sentence><![CDATA[The Trustworthy Robotics Lab enables robots and autonomous systems to operate safely with humans while remaining resilient to real-world challenges.]]></sentence>  <summary><![CDATA[<p>The Trustworthy Robotics Lab is a new interdisciplinary venture led by School of Cybersecurity &amp; Privacy Assistant Professor <strong>Glen</strong> <strong>Chou</strong>. The lab's mission is to enable robots and autonomous systems to operate safely with humans while remaining resilient to real-world challenges.</p>]]></summary>  <dateline>2025-03-04T00:00:00-05:00</dateline>  <iso_dateline>2025-03-04T00:00:00-05:00</iso_dateline>  <gmt_dateline>2025-03-04 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>J.P.
Popham, Communications Officer</p><p>Georgia Tech</p><p>School of Cybersecurity &amp; Privacy</p><p>john.popham@cc.gatech.edu</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>676448</item>      </media>  <hg_media>          <item>          <nid>676448</nid>          <type>image</type>          <title><![CDATA[Georgia Tech Assistant Professor Glen Chou with the School of Cybersecurity and Privacy works through an equation on a transparent writing board.]]></title>          <body><![CDATA[<p>Assistant Professor <a href="https://glenchou.github.io/"><strong>Glen Chou</strong></a> is launching research initiatives to develop algorithms enabling autonomous systems to learn and adapt while acting with safety and security assurances. Photo by Terence Rushin, College of Computing</p>]]></body>                      <image_name><![CDATA[Glen-Header-Image.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/03/04/Glen-Header-Image.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/03/04/Glen-Header-Image.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/03/04/Glen-Header-Image.jpeg?itok=D2iJwmEm]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Georgia Tech Assistant Professor Glen Chou with the School of Cybersecurity and Privacy works through an equation on a transparent writing board.]]></image_alt>                    <created>1741107406</created>          <gmt_created>2025-03-04 16:56:46</gmt_created>          <changed>1741107406</changed>          <gmt_changed>2025-03-04 16:56:46</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>          <keyword tid="181991"><![CDATA[Georgia Tech News Center]]></keyword>          <keyword tid="187812"><![CDATA[artificial intelligence (AI)]]></keyword>          <keyword tid="78271"><![CDATA[IRIM]]></keyword>      </keywords>  <core_research_areas>          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>          <term tid="145171"><![CDATA[Cybersecurity]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="680735">  <title><![CDATA[New Algorithms Developed at Georgia Tech are Lunar Bound]]></title>  <uid>34736</uid>  <body><![CDATA[<p>In the past five years, five lunar landers have launched into space, marking a series of first successful 
landings in decades. The future will see more of these types of missions, including <a href="https://www.nasa.gov/humans-in-space/artemis/"><strong>NASA’s Artemis program</strong></a> and various private ventures. These missions need reliable, quick navigation to succeed, especially if ground stations on Earth are overburdened or disconnected.&nbsp;</p><p>Georgia Tech’s <a href="https://seal.ae.gatech.edu/"><strong>Space Exploration and Analysis Laboratory</strong></a> (SEAL) has developed new algorithms that are headed to the Moon as part of <a href="https://www.intuitivemachines.com/im-2"><strong>Intuitive Machines’</strong></a> IM-2 mission. The mission is sending a Nova-C class lunar lander named Athena to the Moon’s south pole region to test technologies and collect data that aim to enable future exploration. The mission is part of <a href="https://www.nasa.gov/commercial-lunar-payload-services/"><strong>NASA’s Commercial Lunar Payload Services</strong></a> (CLPS) initiative.</p><div><div><h3><strong>SEAL’s Space Odyssey&nbsp;</strong></h3></div></div><div><div><p>SEAL, led by AE professor <a href="https://ae.gatech.edu/directory/person/john-christian"><strong>John Christian</strong></a>, collaborated with Intuitive Machines to develop algorithms to guide Athena to the Shackleton crater, a region known for its limited sunlight and cold temperatures. In coordination with <a href="https://www.spacex.com/"><strong>SpaceX</strong></a>, the launch of Intuitive Machines’ IM-2 mission is targeted for a multi-day launch window that opens no earlier than February 26 from Launch Complex 39A at NASA’s Kennedy Space Center in Florida.&nbsp;</p><p>Athena will transport NASA's&nbsp;<a href="https://www.nasa.gov/mission/polar-resources-ice-mining-experiment-1-prime-1/"><strong>PRIME-1</strong></a> (Polar Resources Ice Mining Experiment-1), which includes two instruments: a drill and a spectrometer. The Regolith and Ice Drill for Exploring New Terrain (TRIDENT) is designed to drill up to three feet into the lunar surface to extract soil, while the mass spectrometer (MSOLO) will measure the amount of ice in the soil samples.&nbsp;</p><p>After launch, Athena will separate from the rocket and begin a roughly four-to-five-day cruise to the Moon’s orbit. The lander will orbit the Moon for approximately 1.5 to three days before its descent to the south pole.&nbsp;</p><p>In Fall 2022, Research Engineer <strong>Ava Thrasher</strong> (AE 2022, M.S. AE 2024) began working on IM-2, developing new algorithms to guide Athena to the Shackleton crater using optical terrain relative navigation (TRN). Her approach centered on a crater detection algorithm (CDA) that uses image processing techniques to capture crater center locations on the Moon, which are then used to estimate Athena's position.&nbsp;</p><p>Then, she developed a crater identification algorithm (CIA) to match craters found in the image to a catalog of known lunar craters. By using the CDA and CIA in tandem, Athena is able to estimate its location and orientation with a single photo, autonomously, and in real-time.&nbsp;</p><p>“We wanted to strike a balance between creating something that would be done quickly on board, but also something that was reliable,” she explained.
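</p><p>For illustration only, the crater-detection step can be pictured as classic template matching: render what a crater should look like given simple crater geometry and the sun angle, score that template against the image at every pixel, and keep the strongest peaks as candidate crater centers. The sketch below is a hypothetical Python/OpenCV version of that idea; the function name, the 0.8 threshold, and the OpenCV usage are assumptions added for clarity, not the IM-2 flight software.</p><pre>
# Hypothetical sketch of a template-matching crater detector (CDA).
# Illustration only; not the IM-2 flight code.
import cv2
import numpy as np

def detect_crater_centers(image, crater_template, threshold=0.8):
    """Return (row, col) pixel locations whose similarity to the
    rendered crater template exceeds the threshold."""
    # Normalized cross-correlation gives a similarity score per pixel.
    scores = cv2.matchTemplate(image, crater_template, cv2.TM_CCOEFF_NORMED)
    peaks = np.argwhere(scores >= threshold)
    # Shift from template-corner coordinates to template-center coordinates.
    h, w = crater_template.shape[:2]
    return [(r + h // 2, c + w // 2) for r, c in peaks]
</pre><p>In this sketch, the returned centers would then be handed to a catalog-matching step, playing the role of the CIA, to recover position and orientation.</p><p>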
“We ended up using simple crater geometry and knowledge of the sun angle to render what we expect a crater to look like in the image,” she said.&nbsp;</p><p>The CDA finds craters by calculating a similarity score between the image and the rendered crater at each image pixel point. This process, also known as template matching, marks crater centers at points of very high similarity. The CIA then uses these crater center locations to match them with known craters in a catalog. By matching pixel locations in an image to known three-dimensional positions on the Moon, the spacecraft is able to produce an estimate of its position.&nbsp;</p><p>After two years of research and testing, Thrasher, Christian, and the Intuitive Machines team successfully demonstrated the CDA and CIA on synthetic imagery, and Thrasher handed off the algorithms to Intuitive Machines to convert them into flight software for Athena.&nbsp;</p><p>She first got involved with optical navigation (OPNAV) research after she took AE 4342: Senior Design with Prof. Christian as an undergraduate student. “I found optical navigation to be really interesting. I liked the idea of being able to figure out where you are and how you’re moving in real-time based on a picture,” she said. In Fall 2022, she started her first graduate semester at Tech and was a new member of SEAL, where she quickly began demonstrating the idea of detecting craters and prototyping the CDA and CIA now programmed into Athena.&nbsp;</p><p>After graduating with her master’s degree in aerospace engineering in May 2024, she loved the work so much that she decided to stay on as a full-time research engineer in SEAL. Now, she’s gearing up to see her work make its way to the Moon.</p><p>“It's been really exciting and humbling to contribute to the massive task of putting a lander on the Moon. I never really appreciated the scale of work and collaboration needed to make it happen until I was lucky enough to be a part of it. I'll certainly be watching the launch and tracking the mission with great anticipation of both the engineering and scientific results,” said Thrasher.&nbsp;</p><div><div><h3><strong>IM-1 Makes History</strong></h3></div></div><div><div><p>As part of a multi-year collaboration, Christian helped <a href="https://www.ae.gatech.edu/news/2024/02/georgia-tech-algorithm-headed-moon"><strong>develop a key navigation algorithm for Intuitive Machines’ first space mission (IM-1</strong></a>), which launched a Nova-C lunar lander named Odysseus to the Malapert A crater in the Moon’s south pole region, about 11 miles from IM-2’s targeted Shackleton crater.&nbsp;</p><p>The IM-1 mission launched from Kennedy Space Center on February 15, 2024, and soft-landed on the Moon on February 22, 2024, making Odysseus the first U.S. lunar landing since the Apollo program and the first-ever successful commercial lunar landing. Odysseus had a rougher-than-expected soft landing due to an anomaly with the altimeter that was supposed to measure the lander’s height above the lunar surface. In the absence of these altimeter measurements, Odysseus relied critically on the visual odometry technique that was jointly developed by Christian and Intuitive Machines.&nbsp;</p></div></div><div><div><p>Despite these challenges, Odysseus captured images of the Moon during landing and operated on the lunar surface for 144 hours before entering standby mode.&nbsp;</p><p>Prof.
Christian and SEAL have more projects on the horizon to develop new technologies for exploring our Moon, other planets, asteroids, and the solar system. These technologies will enable future scientific missions to safely explore challenging destinations and answer scientific questions that were impossible with yesterday’s technology.&nbsp;</p></div></div></div></div>]]></body>  <author>Kelsey Gulledge</author>  <status>1</status>  <created>1740586771</created>  <gmt_created>2025-02-26 16:19:31</gmt_created>  <changed>1740587259</changed>  <gmt_changed>2025-02-26 16:27:39</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[AE researchers have developed new algorithms to help Intuitive Machine’s lunar lander find water ice on the Moon.  ]]></teaser>  <type>news</type>  <sentence><![CDATA[AE researchers have developed new algorithms to help Intuitive Machine’s lunar lander find water ice on the Moon.  ]]></sentence>  <summary><![CDATA[<p>Georgia Tech’s <a href="https://seal.ae.gatech.edu/"><strong>Space Exploration and Analysis Laboratory</strong></a> (SEAL) has developed new algorithms that are headed to the Moon, as part of the <a href="https://www.intuitivemachines.com/im-2"><strong>Intuitive Machine’s</strong></a> IM-2 mission. The mission is sending a Nova-C class lunar lander named Athena to the Moon’s south pole region to test technologies and collect data that aim to enable future exploration. The mission is part of <a href="https://www.nasa.gov/commercial-lunar-payload-services/"><strong>NASA’s Commercial Lunar Payload Services</strong></a> (CLPS) initiative.</p><p>SEAL, led by Professor <strong>John Christian</strong>, collaborated with Intuitive Machines to develop algorithms to guide Athena to the Shackleton crater: a region known for its limited sunlight and cold temperatures. Research Engineer <strong>Ava Thrasher</strong> (AE 2022, M.S. AE 2024) led Georgia Tech's SEAL team on developing the algorithms used for Athena's flight software.&nbsp;</p>]]></summary>  <dateline>2025-02-25T00:00:00-05:00</dateline>  <iso_dateline>2025-02-25T00:00:00-05:00</iso_dateline>  <gmt_dateline>2025-02-25 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[<p><strong>LAUNCHING: February 26, 2025</strong></p><p><strong>6:30 p.m. EST </strong><a href="https://www.nasa.gov/news-release/nasa-sets-coverage-for-intuitive-machines-next-commercial-moon-launch/"><strong>launch coverage</strong></a><strong> begins&nbsp;</strong><br><strong>7:02-7:34 p.m. EST launch window</strong></p><p>Stream on <a href="https://plus.nasa.gov/scheduled-video/intuitive-machines-2-launch-to-the-moon/"><strong>NASA+</strong></a></p>]]></sidebar>  <email><![CDATA[kelsey.gulledge@aerospace.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Kelsey Gulledge</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>676397</item>          <item>676398</item>          <item>676399</item>          <item>676401</item>      </media>  <hg_media>          <item>          <nid>676397</nid>          <type>image</type>          <title><![CDATA[54284511327_9ca21c7337_o.jpg]]></title>          <body><![CDATA[<div><div><div><div><div><div><p>Intuitive Machines' IM-2 mission lunar lander, Athena, in the company's Lunar Production and Operations Center. 
Credit: Intuitive Machines</p></div></div></div></div></div></div><div><div><div><div><div><br> </div></div></div></div></div>]]></body>                      <image_name><![CDATA[54284511327_9ca21c7337_o.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/02/26/54284511327_9ca21c7337_o.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/02/26/54284511327_9ca21c7337_o.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/02/26/54284511327_9ca21c7337_o.jpg?itok=swWOgO_h]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Intuitive Machines' IM-2 mission lunar lander, Athena, in the company's Lunar Production and Operations Center. Credit: Intuitive Machines]]></image_alt>                    <created>1740586783</created>          <gmt_created>2025-02-26 16:19:43</gmt_created>          <changed>1740586783</changed>          <gmt_changed>2025-02-26 16:19:43</gmt_changed>      </item>          <item>          <nid>676398</nid>          <type>image</type>          <title><![CDATA[Christian-John.jpg]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Christian-John.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/02/26/Christian-John.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/02/26/Christian-John.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/02/26/Christian-John.jpg?itok=a2Mf1kZz]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Headshot of John Christian, AE School Professor]]></image_alt>                    <created>1740586840</created>          <gmt_created>2025-02-26 16:20:40</gmt_created>          <changed>1740586840</changed>          <gmt_changed>2025-02-26 16:20:40</gmt_changed>      </item>          <item>          <nid>676399</nid>          <type>image</type>          <title><![CDATA[HeadShotThrasher.JPG]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[HeadShotThrasher.JPG]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/02/26/HeadShotThrasher.JPG]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/02/26/HeadShotThrasher.JPG]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/02/26/HeadShotThrasher.JPG?itok=pmytxNcG]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Headshot of Ava Thrasher, AE School alumna and research engineer]]></image_alt>                    <created>1740586878</created>          <gmt_created>2025-02-26 16:21:18</gmt_created>          <changed>1740586878</changed>          <gmt_changed>2025-02-26 16:21:18</gmt_changed>      </item>          <item>          <nid>676401</nid>          <type>image</type>          <title><![CDATA[AAS_2024_CraterDetection_final-2.png]]></title>          <body><![CDATA[<div><div><div>Illustration of the steps used to detect and identify craters to ultimately determine the vehicles state estimation. 
Credit: Georgia Tech </div></div></div><div><br> </div>]]></body>                      <image_name><![CDATA[AAS_2024_CraterDetection_final-2.png]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/02/26/AAS_2024_CraterDetection_final-2.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/02/26/AAS_2024_CraterDetection_final-2.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/02/26/AAS_2024_CraterDetection_final-2.png?itok=NAZs3A2Z]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Illustration of the steps used to detect and identify craters to ultimately determine the vehicles state estimation. Credit: Georgia Tech ]]></image_alt>                    <created>1740587067</created>          <gmt_created>2025-02-26 16:24:27</gmt_created>          <changed>1740587067</changed>          <gmt_changed>2025-02-26 16:24:27</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="660364"><![CDATA[Aerospace Engineering]]></group>          <group id="1237"><![CDATA[College of Engineering]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>      </groups>  <categories>          <category tid="136"><![CDATA[Aerospace]]></category>          <category tid="130"><![CDATA[Alumni]]></category>          <category tid="42911"><![CDATA[Education]]></category>          <category tid="144"><![CDATA[Energy]]></category>          <category tid="145"><![CDATA[Engineering]]></category>          <category tid="154"><![CDATA[Environment]]></category>          <category tid="146"><![CDATA[Life Sciences and Biology]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="136"><![CDATA[Aerospace]]></term>          <term tid="130"><![CDATA[Alumni]]></term>          <term tid="42911"><![CDATA[Education]]></term>          <term tid="144"><![CDATA[Energy]]></term>          <term tid="145"><![CDATA[Engineering]]></term>          <term tid="154"><![CDATA[Environment]]></term>          <term tid="146"><![CDATA[Life Sciences and Biology]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>      </keywords>  <core_research_areas>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="680585">  <title><![CDATA[New Algorithm Teaches Robots Through Human Perspective]]></title>  <uid>32045</uid>  <body><![CDATA[<p>A new data creation paradigm and algorithmic breakthrough from Georgia Tech has laid the groundwork for humanoid assistive robots to help with laundry, dishwashing, and other household chores. The framework enables these robots to learn new skills by mimicking actions from first-person videos of everyday activities.</p><p>Current training methods limit robots from being produced at the necessary scale to put a robot in every home, said <strong>Simar</strong> <strong>Kareer</strong>, a Ph.D. student in the School of Interactive Computing.</p><p>“Traditionally, collecting data for robotics means creating demonstration data,” Kareer said. 
“You operate the robot’s joints with a controller to move it and achieve the task you want, and you do this hundreds of times while recording sensor data, then train your models. This is slow and difficult. The only way to break that cycle is to detach the data collection from the robot itself.”</p><p><a href="https://youtu.be/ckGUsdFX9pU?si=7qmGR1D5P_iPAVMt"><strong>[VIDEO: Meta Shares EgoMimic Case Study Video]</strong></a></p><p>Other fields, such as computer vision and natural language processing (NLP), already leverage training data passively culled from the internet to create powerful generative AI and large-language models (LLMs).</p><p>Many roboticists, however, have shifted toward interventions that allow individual users to teach their robots how to perform tasks. Kareer believes a similar source of passive data can be established to enable practical generalized training that scales the production of humanoid robots.</p><p>This is why Kareer collaborated with School of IC Assistant Professor <strong>Danfei</strong> <strong>Xu</strong> and his <a href="https://rl2.cc.gatech.edu/"><strong>Robot Learning and Reasoning Lab</strong></a> to develop EgoMimic, an algorithmic framework that leverages data from egocentric videos.</p><p>Meta’s Ego4D dataset inspired Kareer’s project. The benchmark dataset, released in 2023, consists of first-person videos of humans performing daily activities. This open-source data set trains AI models from a first-person human perspective.</p><p>“When I looked at Ego4D, I saw a dataset that’s the same as all the large robot datasets we’re trying to collect, except it’s with humans,” Kareer said. “You just wear a pair of glasses, and you go do things. It doesn’t need to come from the robot. It should come from something more scalable and passively generated, which is us.”</p><p>Kareer acquired a pair of Meta’s Project Aria research glasses, which contain a rich sensor suite and can record video from a first-person perspective through external RGB and SLAM cameras.</p><p>Kareer recorded himself folding a shirt while wearing the glasses and repeated the process. He did the same with other tasks such as placing a toy in a bowl and groceries into a bag. Then, he constructed a humanoid robot with pincers for hands and attached the glasses to the top to mimic a first-person viewpoint.</p><p>The robot performed each task repeatedly for two hours. Kareer said building a traditional training algorithm would take days of teleoperating and recording robot sensory data. For his project, he only needed to gather a baseline of sensory data to ensure performance improvement.&nbsp;</p><p>Kareer bridged the gap between the two training sets with the EgoMimic algorithm. The robot’s task performance rating increased by as much as 400% among various tasks with just 90 minutes of recorded footage. It also showed the ability to perform these tasks in unseen environments.</p><p>If enough people wear Aria glasses or other smart glasses while performing daily tasks, it can create the passive data bank needed to train robots on a massive scale.</p><p>This type of data collection can enable nearly endless possibilities for roboticists to help humans achieve more in their everyday lives. Humanoid robots can be produced and trained at an industrial level and be able to perform tasks the same way humans do.</p><p>“This work is most applicable to jobs that you can get a humanoid robot to do,” Kareer said. 
“In whatever industry we are allowed to collect egocentric data, we can develop humanoid robots.”</p><p>Kareer will present his paper on EgoMimic at the 2025 IEEE International Conference on Robotics and Automation (ICRA), which will take place from May 19 to 23 in Atlanta. The paper was co-authored by Xu and School of IC Assistant Professor <strong>Judy</strong> <strong>Hoffman</strong>, fellow Tech students <strong>Dhruv</strong> <strong>Patel</strong>, <strong>Ryan</strong> <strong>Punamiya</strong>, <strong>Pranay</strong> <strong>Mathur</strong>, and <strong>Shuo</strong> <strong>Cheng</strong>, and <strong>Chen</strong> <strong>Wang</strong>, a Ph.D. student at Stanford.</p>]]></body>  <author>Ben Snedeker</author>  <status>1</status>  <created>1739977213</created>  <gmt_created>2025-02-19 15:00:13</gmt_created>  <changed>1739996446</changed>  <gmt_changed>2025-02-19 20:20:46</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Inspired by a dataset created by Meta, a Georgia Tech Ph.D. student is bringing a new perspective to robotics training.]]></teaser>  <type>news</type>  <sentence><![CDATA[Inspired by a dataset created by Meta, a Georgia Tech Ph.D. student is bringing a new perspective to robotics training.]]></sentence>  <summary><![CDATA[<p>Inspired by a dataset created by Meta, a Georgia Tech Ph.D. student is bringing a new perspective to robotics training.</p>]]></summary>  <dateline>2025-02-19T00:00:00-05:00</dateline>  <iso_dateline>2025-02-19T00:00:00-05:00</iso_dateline>  <gmt_dateline>2025-02-19 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p>Ben Snedeker, Communication Manager</p><p>Georgia Tech College of Computing</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>676332</item>      </media>  <hg_media>          <item>          <nid>676332</nid>          <type>image</type>          <title><![CDATA[Georgia Tech Ph.D. student Simar Kareer is revolutionizing how robots are trained.]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Simar Kareer_86A7668 (1).jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/02/19/Simar%20Kareer_86A7668%20%281%29.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/02/19/Simar%20Kareer_86A7668%20%281%29.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/02/19/Simar%2520Kareer_86A7668%2520%25281%2529.jpg?itok=JwZua-cA]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Georgia Tech Ph.D.
student Simar Kareer is revolutionizing how robots are trained.]]></image_alt>                    <created>1739977597</created>          <gmt_created>2025-02-19 15:06:37</gmt_created>          <changed>1739977597</changed>          <gmt_changed>2025-02-19 15:06:37</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://youtu.be/ckGUsdFX9pU?si=b-J_aUjaDNpMpq2b]]></url>        <title><![CDATA[Project Aria Case Study: Introducing EgoMimic by the Georgia Institute of Technology]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="10199"><![CDATA[Daily Digest]]></keyword>          <keyword tid="181991"><![CDATA[Georgia Tech News Center]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="680526">  <title><![CDATA[Securing Tomorrow’s Autonomous Robots Today]]></title>  <uid>36253</uid>  <body><![CDATA[<p>Men and women in California put their lives on the line when battling wildfires every year, but there is a future where machines powered by artificial intelligence are on the front lines, not firefighters.</p><p>However, this new generation of self-thinking robots would need security protocols to ensure they aren’t susceptible to hackers. To integrate such robots into society, they must come with assurances that they will behave safely around humans.</p><p>It begs the question: can you guarantee the safety of something that doesn’t exist yet? It’s something Assistant Professor Glen Chou hopes to accomplish by developing algorithms that will enable autonomous systems to learn and adapt while acting with safety and security assurances.&nbsp;</p><p>He plans to launch research initiatives, in collaboration with the School of Cybersecurity and Privacy and the Daniel Guggenheim School of Aerospace Engineering, to secure this new technological frontier as it develops.&nbsp;</p><p>“To operate in uncertain real-world environments, robots and other autonomous systems need to leverage and adapt a complex network of perception and control algorithms to turn sensor data into actions,” he said. “To obtain realistic assurances, we must do a joint safety and security analysis on these sensors and algorithms simultaneously, rather than one at a time.”</p><p>This end-to-end method would proactively look for flaws in the robot’s systems rather than wait for them to be exploited. This would lead to intrinsically robust robotic systems that can recover from failures.</p><p>Chou said this research will be useful in other domains, including advanced space exploration. 
If a space rover is sent to one of Saturn’s moons, for example, it needs to be able to act and think independently of scientists on Earth.&nbsp;</p><p>Aside from fighting fires and exploring space, this technology could perform maintenance in nuclear reactors, automatically maintain the power grid, and make autonomous surgery safer. It could also bring assistive robots into the home, enabling higher standards of care.&nbsp;</p><p>This is a challenging domain where safety, security, and privacy concerns are paramount due to frequent, close contact with humans.</p><p>This will start in the newly established Trustworthy Robotics Lab at Georgia Tech, which Chou directs. He and his Ph.D. students will design principled algorithms that enable general-purpose robots and autonomous systems to operate capably, safely, and securely with humans while remaining resilient to real-world failures and uncertainty.</p><p>Chou earned dual bachelor’s degrees in electrical engineering and computer sciences as well as mechanical engineering from the University of California, Berkeley, in 2017, and a master’s and a Ph.D. in electrical and computer engineering from the University of Michigan in 2019 and 2022, respectively. He was a postdoc at the MIT Computer Science &amp; Artificial Intelligence Laboratory before joining Georgia Tech in November 2024. He received a National Defense Science and Engineering Graduate Fellowship and an NSF Graduate Research Fellowship, and he was named a Robotics: Science and Systems Pioneer in 2022.</p>]]></body>  <author>John Popham</author>  <status>1</status>  <created>1739799760</created>  <gmt_created>2025-02-17 13:42:40</gmt_created>  <changed>1739800381</changed>  <gmt_changed>2025-02-17 13:53:01</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Assistant Professor Glen Chou is leading research to ensure the security and safety of future autonomous robots, which could one day fight wildfires, explore space, and assist in critical environments like nuclear reactors and hospitals.]]></teaser>  <type>news</type>  <sentence><![CDATA[Assistant Professor Glen Chou is leading research to ensure the security and safety of future autonomous robots, which could one day fight wildfires, explore space, and assist in critical environments like nuclear reactors and hospitals.]]></sentence>  <summary><![CDATA[<p>Assistant Professor Glen Chou is leading research to ensure the security and safety of future autonomous robots, which could one day fight wildfires, explore space, and assist in critical environments like nuclear reactors and hospitals. His work at Georgia Tech’s Trustworthy Robotics Lab focuses on developing algorithms that allow robots to learn, adapt, and operate securely in uncertain real-world conditions. By integrating safety and security analyses, Chou aims to create resilient robotic systems that can proactively address vulnerabilities.
His research, conducted in collaboration with cybersecurity and aerospace engineering experts, could revolutionize autonomous technology across multiple domains.</p>]]></summary>  <dateline>2025-02-14T00:00:00-05:00</dateline>  <iso_dateline>2025-02-14T00:00:00-05:00</iso_dateline>  <gmt_dateline>2025-02-14 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jpopham3@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<div><p>John (JP) Popham&nbsp;<br>Communications Officer II&nbsp;<br>College of Computing | School of Cybersecurity and Privacy</p></div>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>676301</item>      </media>  <hg_media>          <item>          <nid>676301</nid>          <type>image</type>          <title><![CDATA[Glen Header Image.jpeg]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Glen Header Image.jpeg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/02/17/Glen%20Header%20Image.jpeg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/02/17/Glen%20Header%20Image.jpeg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/02/17/Glen%2520Header%2520Image.jpeg?itok=RpD7xXA_]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Man writing on glass with a marker ]]></image_alt>                    <created>1739799782</created>          <gmt_created>2025-02-17 13:43:02</gmt_created>          <changed>1739799782</changed>          <gmt_changed>2025-02-17 13:43:02</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>          <group id="660367"><![CDATA[School of Cybersecurity and Privacy]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="145"><![CDATA[Engineering]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="145"><![CDATA[Engineering]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="192863"><![CDATA[go-ai]]></keyword>          <keyword tid="187991"><![CDATA[go-robotics]]></keyword>          <keyword tid="10199"><![CDATA[Daily Digest]]></keyword>          <keyword tid="188776"><![CDATA[go-research]]></keyword>          <keyword tid="187812"><![CDATA[artificial intelligence (AI)]]></keyword>          <keyword tid="182941"><![CDATA[cc-research; ic-cybersecurity; ic-hcc]]></keyword>          <keyword tid="1404"><![CDATA[Cybersecurity]]></keyword>          <keyword tid="181920"><![CDATA[cc-research; ic-ai-ml; ic-robotics]]></keyword>          <keyword tid="182191"><![CDATA[areospace systems analysis]]></keyword>      </keywords>  <core_research_areas>          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>          <term 
tid="145171"><![CDATA[Cybersecurity]]></term>          <term tid="39501"><![CDATA[People and Technology]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>          <term tid="193657"><![CDATA[Space Research Initiative]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="675467">  <title><![CDATA[Using Deep Learning Techniques to Improve Liver Disease Diagnosis and Treatment]]></title>  <uid>27863</uid>  <body><![CDATA[<p>Hepatic, or liver, disease affects more than 100 million people in the U.S. About 4.5 million adults (1.8%) have been diagnosed with liver disease, but it is estimated that between 80 and 100 million adults in the U.S. have undiagnosed fatty liver disease in varying stages. Over time, undiagnosed and untreated hepatic diseases can lead to cirrhosis, a severe scarring of the liver that cannot be reversed.&nbsp;</p><p>Most hepatic diseases are chronic conditions that will be present over the life of the patient, but early detection improves overall health and the ability to manage specific conditions over time. Additionally, assessing patients over time allows for effective treatments to be adjusted as necessary. The standard protocol for diagnosis, as well as follow-up tissue assessment, is a biopsy after the return of an abnormal blood test, but biopsies are time-consuming and pose risks for the patient. Several non-invasive imaging techniques have been developed to assess the stiffness of liver tissue, an indication of scarring, including magnetic resonance elastography (MRE).</p><p>MRE combines elements of ultrasound and MRI imaging to create a visual map showing gradients of stiffness throughout the liver and is increasingly used to diagnose hepatic issues. MRE exams, however, can fail for many reasons, including patient motion, patient physiology, imaging issues, and mechanical issues such as improper wave generation or propagation in the liver. Determining the success of MRE exams depends on visual inspection of technologists and radiologists. With increasing work demands and workforce shortages, providing an accurate, automated way to classify image quality will create a streamlined approach and reduce the need for repeat scans.&nbsp;</p><p>Professor&nbsp;<a href="https://www.biorobotics.gatech.edu/wp/">Jun Ueda</a> in the George W. Woodruff School of Mechanical Engineering and robotics Ph.D. student Heriberto Nieves, working with a team from the Icahn School of Medicine at Mount Sinai, have successfully applied deep learning techniques for accurate, automated quality control image assessment. The research,&nbsp;<a href="https://onlinelibrary.wiley.com/doi/10.1002/jmri.29490">“Deep Learning-Enabled Automated Quality Control for Liver MR Elastography: Initial Results,”</a> was published in the<em> Journal of Magnetic Resonance Imaging</em>.</p><p>Using five deep learning training models, an accuracy of 92% was achieved by the best-performing ensemble on retrospective MRE images of patients with varied liver stiffnesses. The team also achieved a return of the analyzed data within seconds. 
<p>The rapidity of image quality feedback allows the technician to focus on adjusting hardware or patient orientation for a re-scan in a single session, rather than requiring patients to return for costly and time-consuming re-scans due to low-quality initial images.</p><p>This new research is a step toward streamlining the MRE review pipeline using deep learning techniques, which remain largely unexplored for MRE compared with other medical imaging modalities. The research also provides a helpful baseline for future avenues of inquiry, such as assessing the health of the spleen or kidneys. It may also be applied to automated image quality control for monitoring non-hepatic conditions, such as breast cancer or muscular dystrophy, in which tissue stiffness is an indicator of initial health and disease progression. Ueda, Nieves, and their team hope to test these models on Siemens Healthineers magnetic resonance scanners within the next year.</p><p><strong>Publication</strong><br>Nieves-Vazquez, H.A., Ozkaya, E., Meinhold, W., Geahchan, A., Bane, O., Ueda, J. and Taouli, B. (2024), Deep Learning-Enabled Automated Quality Control for Liver MR Elastography: Initial Results. J Magn Reson Imaging.&nbsp;<a href="https://doi.org/10.1002/jmri.29490">https://doi.org/10.1002/jmri.29490</a></p><p><strong>Prior Work</strong>&nbsp;<br><a href="https://research.gatech.edu/robotically-precise-diagnostics-and-therapeutics-degenerative-disc-disorder">Robotically Precise Diagnostics and Therapeutics for Degenerative Disc Disorder</a></p><p><strong>Related Material</strong><br><a href="https://onlinelibrary.wiley.com/doi/10.1002/jmri.29492">Editorial for “Deep Learning-Enabled Automated Quality Control for Liver MR Elastography: Initial Results”</a></p>]]></body>  <author>Christa Ernst</author>  <status>1</status>  <created>1721072004</created>  <gmt_created>2024-07-15 19:33:24</gmt_created>  <changed>1721229620</changed>  <gmt_changed>2024-07-17 15:20:20</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[With increasing work demands and workforce shortages, providing an accurate, automated way to classify image quality will create a streamlined approach and reduce the need for repeat scans. ]]></teaser>  <type>news</type>  <sentence><![CDATA[With increasing work demands and workforce shortages, providing an accurate, automated way to classify image quality will create a streamlined approach and reduce the need for repeat scans. ]]></sentence>  <summary><![CDATA[<p>Professor&nbsp;<a href="https://www.biorobotics.gatech.edu/wp/">Jun Ueda</a> in the George W. Woodruff School of Mechanical Engineering and robotics Ph.D. student Heriberto Nieves, working with a team from the Icahn School of Medicine at Mount Sinai, have successfully applied deep learning techniques for accurate, automated quality control image assessment.&nbsp;</p>]]></summary>  <dateline>2024-07-15T00:00:00-04:00</dateline>  <iso_dateline>2024-07-15T00:00:00-04:00</iso_dateline>  <gmt_dateline>2024-07-15 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[christa.ernst@research.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Christa M.
Ernst |&nbsp;</p><p><strong>Research Communications Program Manager |&nbsp;</strong></p><p><strong>Topic Expertise: Robotics, Data Sciences, Semiconductor Design &amp; Fab |&nbsp;</strong></p><p><a href="https://research.gatech.edu/" rel="noopener noreferrer" target="_blank"><strong>Research @ the Georgia Institute of Technology</strong></a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>674351</item>      </media>  <hg_media>          <item>          <nid>674351</nid>          <type>image</type>          <title><![CDATA[Ueda MRE News]]></title>          <body><![CDATA[<p>Professor <a href="https://www.biorobotics.gatech.edu/wp/">Jun Ueda</a> in the George W. Woodruff School of Mechanical Engineering and robotics Ph.D. student Heriberto Nieves.</p>]]></body>                      <image_name><![CDATA[Heriberto and Ueda DL-MRE 6 half sized.png]]></image_name>            <image_path><![CDATA[/sites/default/files/2024/07/15/Heriberto%20and%20Ueda%20DL-MRE%206%20half%20sized.png]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2024/07/15/Heriberto%20and%20Ueda%20DL-MRE%206%20half%20sized.png]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2024/07/15/Heriberto%2520and%2520Ueda%2520DL-MRE%25206%2520half%2520sized.png?itok=rAgP2eec]]></image_740>            <image_mime>image/png</image_mime>            <image_alt><![CDATA[Professor Jun Ueda in the George W. Woodruff School of Mechanical Engineering and robotics Ph.D. student Heriberto Nieves.]]></image_alt>                    <created>1721071536</created>          <gmt_created>2024-07-15 19:25:36</gmt_created>          <changed>1721071827</changed>          <gmt_changed>2024-07-15 19:30:27</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="142761"><![CDATA[IRIM]]></group>          <group id="1292"><![CDATA[Parker H. Petit Institute for Bioengineering and Bioscience (IBB)]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>      </groups>  <categories>          <category tid="138"><![CDATA[Biotechnology, Health, Bioengineering, Genetics]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="138"><![CDATA[Biotechnology, Health, Bioengineering, Genetics]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="81491"><![CDATA[Institute for Robotics and Intelligent Machines (IRIM)]]></keyword>          <keyword tid="11689"><![CDATA[Institute for Bioengineeirng and Bioscience]]></keyword>          <keyword tid="594"><![CDATA[college of engineering]]></keyword>          <keyword tid="98751"><![CDATA[College of Engineering; George W. 
Woodruff School of Mechanical Engineering]]></keyword>          <keyword tid="187812"><![CDATA[artificial intelligence (AI)]]></keyword>          <keyword tid="9540"><![CDATA[Bioengineering and Bioscience]]></keyword>          <keyword tid="97611"><![CDATA[research news]]></keyword>          <keyword tid="188087"><![CDATA[go-irim]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>          <keyword tid="192863"><![CDATA[go-ai]]></keyword>          <keyword tid="187423"><![CDATA[go-bio]]></keyword>      </keywords>  <core_research_areas>          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>          <term tid="39441"><![CDATA[Bioengineering and Bioscience]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="675021">  <title><![CDATA[Ph.D. Student Wins Best Paper at Robotics Conference]]></title>  <uid>36530</uid>  <body><![CDATA[<p>Ask a person to find a frying pan, and they will most likely go to the kitchen. Ask a robot to do the same, and you may get numerous responses, depending on how the robot is trained.</p><p>Since humans often associate objects in a home with the room they are in, Naoki Yokoyama thinks robots that navigate human environments to perform assistive tasks should mimic that reasoning.</p><p>Roboticists have employed natural language models to help robots mimic human reasoning over the past few years. However, Yokoyama, a Ph.D. student in robotics, said these models create a “bottleneck” that prevents agents from picking up on visual cues such as room type, size, décor, and lighting.&nbsp;</p><p>Yokoyama presented a new framework for semantic reasoning at the Institute of Electrical and Electronics Engineers (IEEE) <a href="https://www.ieee-ras.org/conferences-workshops/fully-sponsored/icra"><strong>International Conference on Robotics and Automation</strong></a> (ICRA) last month in Yokohama, Japan. ICRA is the world’s largest robotics conference.</p><p>Yokoyama earned a best paper award in the Cognitive Robotics category with his <a href="http://naoki.io/portfolio/vlfm"><strong>Vision-Language Frontier Maps (VLFM) proposal</strong></a>.</p><p>Assistant Professor Sehoon Ha and Associate Professor Dhruv Batra from the School of Interactive Computing advised Yokoyama on the paper. Yokoyama authored the paper while interning at Boston Dynamics’ <a href="https://theaiinstitute.com/"><strong>AI Institute</strong></a>.</p><p>“I think the cognitive robotic category represents a significant portion of submissions to ICRA nowadays,” said Yokoyama, whose family is from Japan. “I’m grateful that our work is being recognized among the best in this field.”</p><p>Instead of natural language models, Yokoyama used a renowned vision-language model called BLIP-2 and tested it on a Boston Dynamics “Spot” robot in home and office environments.</p><p>“We rely on models that have been trained on vast amounts of data collected from the web,” Yokoyama said. “That allows us to use models with common sense reasoning and world knowledge. It’s not limited to a typical robot learning environment.”</p><h6><strong>What Is BLIP-2?</strong></h6><p>BLIP-2 matches images to text by assigning a score that evaluates how well the user’s input text describes the content of an image.
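</p><p>As a rough illustration of how such an image-to-text matching score can drive object search, the sketch below scores candidate viewpoints against a text prompt that names the target object and heads toward the most promising one. It is a hypothetical example, not the VLFM code: the scoring function is a stand-in for a real BLIP-2 call, and the prompt wording and frontier data are assumptions.</p><pre>
# Illustrative value-map sketch: score each unexplored frontier with an
# image-text matching model and navigate toward the highest-scoring one.
# blip2_itm_score is a placeholder for a real BLIP-2 inference call.
import random

def blip2_itm_score(rgb_image, text):
    """Stand-in for BLIP-2: higher means the text better matches the image."""
    return random.random()  # a real system would run the vision-language model here

def build_value_map(frontier_images, target_object):
    """Score each frontier view by how likely it is to lead to the target."""
    prompt = f"There seems to be a {target_object} ahead."
    return {name: blip2_itm_score(img, prompt) for name, img in frontier_images.items()}

# Images captured at the edges of the explored map (placeholders).
frontier_images = {"kitchen_doorway": "rgb_0", "hallway": "rgb_1", "window_area": "rgb_2"}
value_map = build_value_map(frontier_images, "potted plant")
best_frontier = max(value_map, key=value_map.get)
print(f"navigate toward: {best_frontier}")
</pre><p>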
The model removes the need for the robot to use object detectors and language models.&nbsp;</p><p>Instead, the robot uses BLIP-2 to extract semantic values from RGB images with a text prompt that includes the target object.&nbsp;</p><p>BLIP-2 then teaches the robot to recognize the room type, distinguishing the living room from the bathroom and the kitchen. The robot learns to associate certain objects with specific rooms where it will likely find them.</p><p>From here, the robot creates a value map to determine the most likely locations for a target object, Yokoyama said.</p><p>Yokoyama said this is a step forward for intelligent home assistive robots, enabling users to find objects — like missing keys — in their homes without knowing an item’s location.&nbsp;</p><p>“If you’re looking for a pair of scissors, the robot can automatically figure out it should head to the kitchen or the office,” he said. “Even if the scissors are in an unusual place, it uses semantic reasoning to work through each room from most probable location to least likely.”</p><p>He added that the benefit of using a VLM instead of an object detector is that the robot will include visual cues in its reasoning.</p><p>“You can look at a room in an apartment, and there are so many things an object detector wouldn’t tell you about that room that would be informative,” he said. “You don’t want to limit yourself to a textual description or a list of object classes because you’re missing many semantic visual cues.”</p><p>While other VLMs exist, Yokoyama chose BLIP-2 because the model:</p><ul><li>Accepts any text length and isn’t limited to a small set of objects or categories.</li><li>Allows the robot to be pre-trained on vast amounts of data collected from the internet.</li><li>Has proven results that enable accurate image-to-text matching.</li></ul><h6><strong>Home, Office, and Beyond</strong></h6><p>Yokoyama also tested the Spot robot to navigate a more challenging office environment. Office spaces tend to be more homogenous and harder to distinguish from one another than rooms in a home.&nbsp;</p><p>“We showed a few cases in which the robot will still work,” Yokoyama said. “We tell it to find a microwave, and it searches for the kitchen. We tell it to find a potted plant, and it moves toward an area with windows because, based on what it knows from BLIP-2, that’s the most likely place to find the plant.”</p><p>Yokoyama said as VLM models continue to improve, so will robot navigation. The increase in the number of VLM models has caused robot navigation to steer away from traditional physical simulations.</p><p>“It shows how important it is to keep an eye on the work being done in computer vision and natural language processing for getting robots to perform tasks more efficiently,” he said. “The current research direction in robot learning is moving toward more intelligent and higher-level reasoning. 
These foundation models are going to play a key role in that.”</p><p><em>Top photo by Kevin Beasley/College of Computing.</em></p>]]></body>  <author>Nathan Deen</author>  <status>1</status>  <created>1717684006</created>  <gmt_created>2024-06-06 14:26:46</gmt_created>  <changed>1717684832</changed>  <gmt_changed>2024-06-06 14:40:32</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Yokoyama presented a new framework for semantic reasoning for robots at the IEEE International Conference on Robotics and Automation, where he won best paper in the Cognitive Robotics category.]]></teaser>  <type>news</type>  <sentence><![CDATA[Yokoyama presented a new framework for semantic reasoning for robots at the IEEE International Conference on Robotics and Automation, where he won best paper in the Cognitive Robotics category.]]></sentence>  <summary><![CDATA[<p>Roboticists have employed natural language models to help robots mimic human reasoning over the past few years. However, Yokoyama, a Ph.D. student in robotics, said these models create a “bottleneck” that prevents agents from picking up on visual cues such as room type, size, décor, and lighting.&nbsp;</p><p>Yokoyama presented a new framework for semantic reasoning at the Institute of Electrical and Electronic Engineers (IEEE) <a href="https://www.ieee-ras.org/conferences-workshops/fully-sponsored/icra"><strong>International Conference on Robotics and Automation</strong></a> (ICRA) last month in Yokohama, Japan. ICRA is the world’s largest robotics conference.</p><p>Yokoyama earned a best paper award in the Cognitive Robotics category with his <a href="http://naoki.io/portfolio/vlfm"><strong>Vision-Language Frontier Maps (VLFM) proposal</strong></a>.</p>]]></summary>  <dateline>2024-06-06T00:00:00-04:00</dateline>  <iso_dateline>2024-06-06T00:00:00-04:00</iso_dateline>  <gmt_dateline>2024-06-06 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[ndeen6@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Nathan Deen</p><p>Communications Officer</p><p>School of Interactive Computing</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>674146</item>      </media>  <hg_media>          <item>          <nid>674146</nid>          <type>image</type>          <title><![CDATA[208A9469.jpg]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[208A9469.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2024/06/06/208A9469.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2024/06/06/208A9469.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2024/06/06/208A9469.jpg?itok=xIiN0P1I]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Three students kneeling around a spot robot]]></image_alt>                    <created>1717684031</created>          <gmt_created>2024-06-06 14:27:11</gmt_created>          <changed>1717684031</changed>          <gmt_changed>2024-06-06 14:27:11</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>      </groups>  <categories>          <category 
tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>          <category tid="193157"><![CDATA[Student Honors and Achievements]]></category>          <category tid="8862"><![CDATA[Student Research]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>          <term tid="193157"><![CDATA[Student Honors and Achievements]]></term>          <term tid="8862"><![CDATA[Student Research]]></term>      </news_terms>  <keywords>          <keyword tid="192863"><![CDATA[go-ai]]></keyword>          <keyword tid="187812"><![CDATA[artificial intelligence (AI)]]></keyword>          <keyword tid="10199"><![CDATA[Daily Digest]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>      </keywords>  <core_research_areas>          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="674367">  <title><![CDATA[Why Can’t Robots Outrun Animals?]]></title>  <uid>35575</uid>  <body><![CDATA[<p>Robots that can run, jump, and even talk have shifted from the stuff of science fiction to reality in the past few decades. Yet even in robots specialized for specific movements like running, animals are still able to outmaneuver the most advanced robotic developments.&nbsp;</p><p>Georgia Tech’s <a href="https://physics.gatech.edu/user/simon-sponberg" rel="noreferrer noopener" target="_blank">Simon Sponberg</a> recently collaborated with researchers at the <a href="https://www.washington.edu/" rel="noreferrer noopener" target="_blank">University of Washington</a>, <a href="https://www.sfu.ca/" rel="noreferrer noopener" target="_blank">Simon Fraser University</a>, <a href="https://www.colorado.edu/" rel="noreferrer noopener" target="_blank">University of Colorado Boulder</a>, and <a href="https://www.sri.com/" rel="noreferrer noopener" target="_blank">Stanford Research Institute</a> to answer one deceptively complex question: Why can’t robots outrun animals?&nbsp;</p><p>“This work is about trying to understand how, despite have some really amazing robots, there still seems to be a gulf between the capabilities of animal movement and what we can engineer,” says Sponberg, who is Dunn Family Associate Professor in the <a href="https://physics.gatech.edu/" rel="noreferrer noopener" target="_blank">School of Physics</a> and <a href="https://biosciences.gatech.edu/" rel="noreferrer noopener" target="_blank">School of Biological Sciences</a>.&nbsp;</p><p>Recently published in <em><a href="https://www.science.org/doi/10.1126/scirobotics.adi9754" rel="noreferrer noopener" target="_blank">Science Robotics</a>,</em> their study systematically examines a suite of biological and robotic runners to figure out how to further advance our best robotic designs.&nbsp;</p><p>“In robotics design we are often very component focused — we are used to having to establish specifications for the parts that we need and then finding the best component solution,” said Sponberg, who also serves on the executive committee for Georgia Tech's <a 
href="neuro.gatech.edu">Neuro Next Initiative</a>. “This is of course not how evolution works. We wondered if we systematically analyzed the performance of animals in the same component way that we design robots, if we might see an obvious gap.”&nbsp;</p><p>The gap turns out not to be in the function of individual robotic components, but rather the ability of those components to work together in the seamless way biological components do, highlighting a field of opportunity for new research in robotic development.&nbsp;</p><p>“This means that the frontier is not necessarily figuring out how to design better motors or sensors or controllers,” says Sponberg, “but rather how to integrate them together — this is where biology really excels.”&nbsp;</p><h4><strong>Read more about man versus machine and the future of bioinspired robotics <a href="https://www.ece.uw.edu/spotlight/why-animals-can-outrun-robots/">here</a>.</strong></h4>]]></body>  <author>adavidson38</author>  <status>1</status>  <created>1713987118</created>  <gmt_created>2024-04-24 19:31:58</gmt_created>  <changed>1714681523</changed>  <gmt_changed>2024-05-02 20:25:23</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech Researcher Simon Sponberg collaborates to ask why robotic advancements have yet to outpace animals — and look at what we can learn from biology to engineer new robotic designs.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech Researcher Simon Sponberg collaborates to ask why robotic advancements have yet to outpace animals — and look at what we can learn from biology to engineer new robotic designs.]]></sentence>  <summary><![CDATA[<p>Georgia Tech Researcher Simon Sponberg collaborates to ask why robotic advancements have yet to outpace animals — and look at what we can learn from biology to engineer new robotic designs.</p>]]></summary>  <dateline>2024-05-02T00:00:00-04:00</dateline>  <iso_dateline>2024-05-02T00:00:00-04:00</iso_dateline>  <gmt_dateline>2024-05-02 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[Georgia Tech Researcher Collaborates to Advance Bioinspired Design]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[audra.davidson@research.gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><strong><a href="mailto:audra.davidson@research.gatech.edu">Audra Davidson</a></strong><br />Research Communications Program Manager<br />Neuro Next Initiative</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>673838</item>      </media>  <hg_media>          <item>          <nid>673838</nid>          <type>image</type>          <title><![CDATA[mCLARI_Spider.jpg]]></title>          <body><![CDATA[<p>Can this small robot outrun a spider? Photo Credit: Animal Inspired Movement and Robotics Lab, CU Boulder.</p>]]></body>                      <image_name><![CDATA[mCLARI_Spider.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2024/04/24/mCLARI_Spider.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2024/04/24/mCLARI_Spider.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2024/04/24/mCLARI_Spider.jpg?itok=oXeE2GqY]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Can this small robot outrun a spider? 
Photo Credit: Animal Inspired Movement and Robotics Lab, CU Boulder.]]></image_alt>                    <created>1713987354</created>          <gmt_created>2024-04-24 19:35:54</gmt_created>          <changed>1713987354</changed>          <gmt_changed>2024-04-24 19:35:54</gmt_changed>      </item>      </hg_media>  <related>          <link>        <url><![CDATA[https://research.gatech.edu/georgia-tech-partners-15m-nsf-grant-explore-muscle-dynamics]]></url>        <title><![CDATA[Georgia Tech Partners on $15M NSF Grant to Explore Muscle Dynamics]]></title>      </link>          <link>        <url><![CDATA[https://research.gatech.edu/edge-georgia-tech-professors-awarded-curci-grants-emerging-bio-research-0]]></url>        <title><![CDATA[On The Edge: Georgia Tech Professors Awarded Curci Grants for Emerging Bio Research]]></title>      </link>          <link>        <url><![CDATA[https://research.gatech.edu/feature/ultrafast-flight]]></url>        <title><![CDATA[How Insects Evolved to Ultrafast Flight (And Back)]]></title>      </link>      </related>  <files>      </files>  <groups>          <group id="66220"><![CDATA[Neuro]]></group>          <group id="1292"><![CDATA[Parker H. Petit Institute for Bioengineering and Bioscience (IBB)]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>          <group id="1278"><![CDATA[College of Sciences]]></group>          <group id="1275"><![CDATA[School of Biological Sciences]]></group>          <group id="126011"><![CDATA[School of Physics]]></group>      </groups>  <categories>          <category tid="138"><![CDATA[Biotechnology, Health, Bioengineering, Genetics]]></category>          <category tid="146"><![CDATA[Life Sciences and Biology]]></category>          <category tid="150"><![CDATA[Physics and Physical Sciences]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="138"><![CDATA[Biotechnology, Health, Bioengineering, Genetics]]></term>          <term tid="146"><![CDATA[Life Sciences and Biology]]></term>          <term tid="150"><![CDATA[Physics and Physical Sciences]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="188087"><![CDATA[go-irim]]></keyword>          <keyword tid="172970"><![CDATA[go-neuro]]></keyword>          <keyword tid="192253"><![CDATA[cos-neuro]]></keyword>          <keyword tid="187423"><![CDATA[go-bio]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>          <keyword tid="181469"><![CDATA[bioinspired design]]></keyword>          <keyword tid="193266"><![CDATA[cos-research]]></keyword>      </keywords>  <core_research_areas>          <term tid="193656"><![CDATA[Neuro Next Initiative]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node></nodes>