<nodes> <node id="688391">  <title><![CDATA[Robot Pollinator Could Produce More, Better Crops for Indoor Farms]]></title>  <uid>36530</uid>  <body><![CDATA[<p>A new robot could solve one of the biggest challenges facing indoor farmers: manual pollination.</p><p>Indoor farms, also known as vertical farms, are popular among researchers and are expanding across the agricultural industry. Some benefits they have over outdoor farms include:</p><ul><li>Year-round production of food crops</li><li>Lower water and land requirements</li><li>No need for pesticides</li><li>Reduced carbon emissions from shipping</li><li>Reduced food waste</li></ul><p>Additionally,&nbsp;<a href="https://www.agritecture.com/blog/2021/7/20/5-ways-vertical-farming-is-improving-nutrition"><strong>some studies</strong></a> indicate that indoor farms produce more nutritious food for urban communities.&nbsp;</p><p>However, these farms are often inaccessible to birds, bees, and other natural pollinators, leaving the pollination process to humans. The tedious process must be completed by hand for each flower to ensure the indoor crop flourishes.</p><p><a href="https://research.gatech.edu/people/ai-ping-hu"><strong>Ai-Ping Hu</strong></a>, a principal research engineer at the Georgia Tech Research Institute (GTRI), has spent years exploring methods to efficiently pollinate flowering plants and food crops in indoor farms.</p><p>Hu,&nbsp;<a href="https://research.gatech.edu/people/shreyas-kousik"><strong>Assistant Professor Shreyas Kousik of the George W. Woodruff School of Mechanical Engineering</strong></a>, and a rotating group of student interns have developed a robot prototype that may be up to the task.</p><p>The robot can efficiently pollinate plants that have both male and female reproductive parts. 
These plants only require pollen to be transferred from one part to the other rather than externally from another flower.</p><p>Natural pollinators perform this task outdoors, but Hu said indoor farmers often use a paintbrush or electric toothbrush to ensure these flowers are pollinated.&nbsp;</p><h4><strong>Knowing the Pose</strong></h4><p>An early challenge the research team addressed was teaching the robot to identify the “pose” of each flower. Pose refers to a flower’s orientation, shape, and symmetry. Knowing these details ensures precise delivery of the pollen to maximize reproductive success.&nbsp;</p><p>“It’s crucial to know exactly which way the flowers are facing,” Hu said.</p><p>“You want to approach the flower from the front because that’s where all the biological structures are. Knowing the pose tells you where the stem is. Our device grasps the stem and shakes it to dislodge the pollen.</p><p>“Every flower is going to have its own pose, and you need to know what that is within at least 10 degrees.”</p><h4><strong>Computer Vision Breakthrough</strong></h4><p><strong>Harsh Muriki</strong>, a robotics master’s student at Georgia Tech’s School of Interactive Computing, used computer vision to solve the pose problem while interning for Hu and GTRI.</p><p>Muriki attached a camera to a FarmBot to capture images of strawberry plants from dozens of angles in a small garden in front of Georgia Tech’s Food Processing Technology Building. The&nbsp;<a href="https://farm.bot/?srsltid=AfmBOoqh1Z8vSs3WflZisgw5DsOUSo8shD4VtY0Y8_VmVpVyt0Iwalxo"><strong>FarmBot</strong></a> is an XYZ-axis robot that waters and sprays pesticides on outdoor gardens, though it is not capable of pollination.</p><p>“We reconstruct the images of the flower into a 3D model and use a technique that converts the 3D model into multiple 2D images with depth information,” Muriki said. 
“This enables us to send them to object detectors.”</p><p>Muriki said he used a real-time object detection system called YOLO (You Only Look Once) to classify objects. YOLO is known for identifying and classifying objects in a single pass.</p><p><strong>Ved Sengupta</strong>, a computer engineering major who interned with Muriki, fine-tuned the algorithms that converted 3D images into 2D.</p><p>“This was a crucial part of making robot pollination possible,” Sengupta said. “There is a big gap between 3D and 2D image processing.</p><p>“There’s not a lot of data on the internet for 3D object detection, but there’s a ton for 2D. We were able to get great results from the converted images, and I think any sector of technology can take advantage of that.”</p><p>Sengupta, Muriki, and Hu co-authored a paper about their work that was accepted to the 2025 International Conference on Robotics and Automation (ICRA) in Atlanta.</p><h4><strong>Measuring Success</strong></h4><p>The pollination robot, built in Kousik’s Safe Robotics Lab, is now in the prototype phase.&nbsp;</p><p>Hu said the robot can do more than pollinate. It can also analyze each flower to determine how well it was pollinated and whether the chances for reproduction are high.</p><p>“It has an additional capability of microscopic inspection,” Hu said. 
“It’s the first device we know of that provides visual feedback on how well a flower was pollinated.”</p><p>For more information about the robot, visit the&nbsp;<a href="https://saferoboticslab.me.gatech.edu/research/towards-robotic-pollination/"><strong>Safe Robotics Lab project page</strong></a>.</p>]]></body>  <author>Nathan Deen</author>  <status>1</status>  <created>1771527492</created>  <gmt_created>2026-02-19 18:58:12</gmt_created>  <changed>1774011241</changed>  <gmt_changed>2026-03-20 12:54:01</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A research team that spans GTRI, the College of Engineering, and the College of Computing has developed a robot capable of pollinating flowers in indoor farms.]]></teaser>  <type>news</type>  <sentence><![CDATA[A research team that spans GTRI, the College of Engineering, and the College of Computing has developed a robot capable of pollinating flowers in indoor farms.]]></sentence>  <summary><![CDATA[<p>Manual pollination is one of the biggest challenges for indoor farmers. These farms are often inaccessible to birds, bees, and other natural pollinators, leaving the pollination process to humans. The tedious process must be completed by hand for each flower to ensure the indoor crop flourishes.</p><p>A Georgia Tech research team led by Ai-Ping Hu and Shreyas Kousik is working to solve that. A robot they've developed can efficiently pollinate plants that have both male and female reproductive parts. 
These plants only require pollen to be transferred from one part to the other rather than externally from another flower.</p>]]></summary>  <dateline>2026-02-19T00:00:00-05:00</dateline>  <iso_dateline>2026-02-19T00:00:00-05:00</iso_dateline>  <gmt_dateline>2026-02-19 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:ndeen6@gatech.edu">Nathan Deen</a><br>College of Computing<br>Georgia Tech</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>679370</item>      </media>  <hg_media>          <item>          <nid>679370</nid>          <type>image</type>          <title><![CDATA[Harsh-Muriki_86A0006.jpg]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[Harsh-Muriki_86A0006.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2026/02/19/Harsh-Muriki_86A0006.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2026/02/19/Harsh-Muriki_86A0006.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2026/02/19/Harsh-Muriki_86A0006.jpg?itok=WJg8YQi9]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Harsh Muriki]]></image_alt>                    <created>1771527500</created>          <gmt_created>2026-02-19 18:58:20</gmt_created>          <changed>1771527500</changed>          <gmt_changed>2026-02-19 18:58:20</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  
<categories>          <category tid="194606"><![CDATA[Artificial Intelligence]]></category>          <category tid="153"><![CDATA[Computer Science/Information Technology and Security]]></category>          <category tid="145"><![CDATA[Engineering]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="194606"><![CDATA[Artificial Intelligence]]></term>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="145"><![CDATA[Engineering]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="9153"><![CDATA[Research Horizons]]></keyword>          <keyword tid="187991"><![CDATA[go-robotics]]></keyword>          <keyword tid="192863"><![CDATA[go-ai]]></keyword>          <keyword tid="11506"><![CDATA[computer vision]]></keyword>          <keyword tid="180840"><![CDATA[computer vision systems]]></keyword>          <keyword tid="669"><![CDATA[agriculture]]></keyword>          <keyword tid="194392"><![CDATA[AI in Agriculture]]></keyword>          <keyword tid="170254"><![CDATA[urban gardening]]></keyword>          <keyword tid="94111"><![CDATA[farming]]></keyword>          <keyword tid="14913"><![CDATA[urban farming]]></keyword>          <keyword tid="23911"><![CDATA[bees]]></keyword>          <keyword tid="6660"><![CDATA[flowers]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>      </keywords>  <core_research_areas>          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>          <term tid="193653"><![CDATA[Georgia Tech Research Institute]]></term>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71911"><![CDATA[Earth and 
Environment]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="688893">  <title><![CDATA[Sheepdogs Reveal a Better Way to Guide Robot Swarms]]></title>  <uid>27271</uid>  <body><![CDATA[<p>Sheepdogs, bred to control large groups of sheep in open fields, have demonstrated their skills in competitions dating back to the 1870s.</p><p>In these contests, a handler directs a trained dog with whistle signals to guide a small group of sheep across a field and sometimes split the flock cleanly into two groups. But sheep do not always cooperate.</p><p>Researchers at the Georgia Institute of Technology studied how handler–dog teams manage these unpredictable flocks in sheepdog trials and found principles that extend beyond livestock herding.</p><p>In a <a href="https://www.science.org/doi/10.1126/sciadv.adx6791"><strong>study</strong></a> published in <em>Science Advances&nbsp;</em>as the cover feature, the researchers applied those insights to computer simulations showing how similar strategies could improve the control of robot swarms, autonomous vehicles, AI agents, and other networked systems where many machines must coordinate their actions despite uncertain conditions.</p><p><strong>Group Movement Dynamics</strong></p><p>“Birds, bugs, fish, sheep, and many other organisms move in groups because it benefits individuals, including protection from predators,” said <a href="https://bhamla.gatech.edu/"><strong>Saad Bhamla</strong></a>, an associate professor in Georgia Tech’s School of Chemical and Biomolecular Engineering. “The puzzle is that the ‘group’ is not a single organism. It is built from many individuals, each making local, imperfect decisions.”</p><p>When a predator threatens a herd of sheep, individuals near the edge often move toward the center to reduce their own risk, Bhamla explained. “This is ‘selfish herd’ behavior,” he said. 
“Shepherds exploit that instinct using trained dogs.”</p><p>By examining hours of contest footage, the researchers found that controlling small groups of sheep can be harder than managing large ones. A larger group, with more sheep protected in the center, may behave more coherently than a small group, whose members constantly shift between two instincts: “follow the group” and “flee the dog.”</p><p>“That switching behavior makes the group unpredictable,” said Tuhin Chakrabortty, a former postdoctoral researcher in the Bhamla Lab who co-led the study.</p><p>Looking closely at how dogs and their handlers guide small groups, the researchers found that unpredictability in the flock’s behavior does not always make control harder. “Under the right conditions, that ‘noisy’ behavior might actually be a benefit,” Bhamla said.</p><p><strong>Successful Sheep Herding</strong></p><p>Sheepdog handlers categorize sheep by how strongly they respond to a dog’s threatening pressure. Some very responsive sheep might panic under too much pressure, while others might ignore mild pressure and require stronger positioning by the dog.</p><p>The researchers observed that successful control often followed a two-step pattern. First, the dog subtly influenced the sheep’s orientation while the animals were mostly standing still. Once the flock was aligned in the desired direction, the dog increased pressure to trigger movement. The timing of those actions was critical, because alignment within a small group could disappear quickly as individuals switched between instincts.</p><p>“In our simulations, increasing pressure makes the flock reach the desired orientation faster, but how long the flock stays aligned is set mainly by noise,” Chakrabortty said. 
“In essence, dogs can steer the direction, but they can’t hold that decision indefinitely, so timing matters.”</p><div><div><div><div><div><p><strong>Developing Computer Models</strong></p><p>To understand the broader implications of that behavior, the team developed computer models that captured how sheep respond both to the dog and to one another. The models allowed the researchers to test different strategies for guiding groups whose members make independent decisions under uncertainty.</p><p>They then applied those ideas to simulations of robotic swarms. Engineers often design such systems so that each robot blends signals from all nearby robots before deciding how to move. While that approach works well when signals are clear, it can break down when information is noisy or conflicting, Bhamla explained.</p></div></div></div></div></div><div><div><div><div><div><p>To explain why that switching strategy can work under noisy conditions, the researchers used an analogy of a smoke-filled room where only one person can see the exit, and no one knows who that person is. If everyone polls everyone else and averages the guesses, the one correct signal can get diluted by many noisy ones.</p><p>“That’s the counterintuitive part. When only one person has the right information, averaging can wash out the signal. But if you follow one person at a time, and keep switching who that is, the right information can spread through the crowd,” Bhamla said.</p><p>Building on that idea, the researchers tested a strategy inspired by the switching behavior they observed in sheep. 
In the simulations, each robot paid attention to just one source at a time (either a guiding signal or a neighboring robot) and switched that source from one step to the next.</p><p>Under noisy conditions, this switching strategy required less effort to keep the group moving along a desired path than either averaging-based strategies or fixed leader-follower strategies.</p><p>The researchers call their approach the Indecisive Swarm Algorithm. The name reflects a counterintuitive insight: allowing influence to shift among individuals over time can make groups easier to guide when conditions are uncertain.</p><p>“Our findings suggest that the same dynamics that make small animal groups unpredictable may also offer new ways to control complex engineered systems,” Bhamla said.</p><p>CITATION: Tuhin Chakrabortty and Saad Bhamla, “<a href="https://www.science.org/doi/10.1126/sciadv.adx6791"><strong>Controlling noisy herds: Temporal network restructuring improves control of indecisive collectives</strong></a>,” <em>Science Advances</em>, 2026</p><p><em>This research was funded in part by Schmidt Sciences as part of a </em><a href="https://news.gatech.edu/news/2025/09/16/saad-bhamla-named-2025-schmidt-polymath"><em>Schmidt Polymath</em></a><em> grant to Saad Bhamla.</em></p></div></div></div></div></div>]]></body>  <author>Brad Dixon</author>  <status>1</status>  <created>1773259186</created>  <gmt_created>2026-03-11 19:59:46</gmt_created>  <changed>1773330805</changed>  <gmt_changed>2026-03-12 15:53:25</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech researchers studying sheepdog trials found new principles for guiding unpredictable groups and used them to develop computer models that could improve coordination in robot swarms, autonomous vehicles, and other networked systems.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech researchers studying sheepdog trials found new principles for guiding unpredictable groups and 
used them to develop computer models that could improve coordination in robot swarms, autonomous vehicles, and other networked systems.]]></sentence>  <summary><![CDATA[<p>Georgia Tech researchers studying sheepdog trials found new principles for guiding unpredictable groups and used them to develop computer models that could improve coordination in robot swarms, autonomous vehicles, and other networked systems.</p>]]></summary>  <dateline>2026-03-11T00:00:00-04:00</dateline>  <iso_dateline>2026-03-11T00:00:00-04:00</iso_dateline>  <gmt_dateline>2026-03-11 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[braddixon@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p>Brad Dixon, <a href="mailto: braddixon@gatech.edu">braddixon@gatech.edu</a></p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>679589</item>          <item>679590</item>          <item>679591</item>          <item>679584</item>          <item>679588</item>      </media>  <hg_media>          <item>          <nid>679589</nid>          <type>video</type>          <title><![CDATA[SMART Dogs herding sheep on a farm, looks like flock of bird pattern]]></title>          <body><![CDATA[<p>SMART Dogs herding sheep on a farm, looks like flock of bird pattern</p>]]></body>                      <youtube_id><![CDATA[_CjwqIX6C2I]]></youtube_id>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>            <vimeo_id><![CDATA[]]></vimeo_id>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>            <video_url><![CDATA[https://youtu.be/_CjwqIX6C2I?si=bfsxIT77-iAJCm-2]]></video_url>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>                    <created>1773260200</created>          <gmt_created>2026-03-11 
20:16:40</gmt_created>          <changed>1773260200</changed>          <gmt_changed>2026-03-11 20:16:40</gmt_changed>      </item>          <item>          <nid>679590</nid>          <type>video</type>          <title><![CDATA[A dog herding sheep in a sheepdog trial]]></title>          <body><![CDATA[<p><em>A dog herding sheep in a sheepdog trial</em></p>]]></body>                      <youtube_id><![CDATA[cnPOXfUC8rc]]></youtube_id>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>            <vimeo_id><![CDATA[]]></vimeo_id>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>            <video_url><![CDATA[https://youtu.be/cnPOXfUC8rc?si=41jH8u3UQ_qjgqWn]]></video_url>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>                    <created>1773260676</created>          <gmt_created>2026-03-11 20:24:36</gmt_created>          <changed>1773260676</changed>          <gmt_changed>2026-03-11 20:24:36</gmt_changed>      </item>          <item>          <nid>679591</nid>          <type>video</type>          <title><![CDATA[ Controlling 'Noisy' Sheep Herds]]></title>          <body><![CDATA[<p>Controlling 'noisy' sheep herds</p>]]></body>                      <youtube_id><![CDATA[EMHmDPpe8HE]]></youtube_id>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>            <vimeo_id><![CDATA[]]></vimeo_id>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>            <video_url><![CDATA[https://youtu.be/EMHmDPpe8HE?si=_5DFsk_BafsIK78R]]></video_url>            <video_width><![CDATA[]]></video_width>            <video_height><![CDATA[]]></video_height>                    <created>1773260974</created>          <gmt_created>2026-03-11 20:29:34</gmt_created>          <changed>1773260974</changed>          
<gmt_changed>2026-03-11 20:29:34</gmt_changed>      </item>          <item>          <nid>679584</nid>          <type>image</type>          <title><![CDATA[Sheepdog herding sheep]]></title>          <body><![CDATA[<p>Sheepdog herding in a sheepdog trial competition</p>]]></body>                      <image_name><![CDATA[sheepdog1.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2026/03/11/sheepdog1.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2026/03/11/sheepdog1.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2026/03/11/sheepdog1.jpg?itok=kTQiLGXI]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Sheepdog herding sheep]]></image_alt>                    <created>1773259589</created>          <gmt_created>2026-03-11 20:06:29</gmt_created>          <changed>1773261394</changed>          <gmt_changed>2026-03-11 20:36:34</gmt_changed>      </item>          <item>          <nid>679588</nid>          <type>image</type>          <title><![CDATA[Sheepdog herding resistant sheep]]></title>          <body><![CDATA[<p>Sheepdogs first align the flock’s direction, then apply pressure to trigger movement before the sheep lose alignment.</p>]]></body>                      <image_name><![CDATA[sheepdog2-copy.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2026/03/11/sheepdog2-copy.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2026/03/11/sheepdog2-copy.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2026/03/11/sheepdog2-copy.jpg?itok=5CXyEB8U]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Sheepdog herding sheep]]></image_alt>                    
<created>1773259967</created>          <gmt_created>2026-03-11 20:12:47</gmt_created>          <changed>1773261607</changed>          <gmt_changed>2026-03-11 20:40:07</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1188"><![CDATA[Research Horizons]]></group>          <group id="1240"><![CDATA[School of Chemical and Biomolecular Engineering]]></group>      </groups>  <categories>          <category tid="145"><![CDATA[Engineering]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="145"><![CDATA[Engineering]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="667"><![CDATA[robotics]]></keyword>          <keyword tid="194958"><![CDATA[Sheepdogs]]></keyword>          <keyword tid="194959"><![CDATA[Herding]]></keyword>          <keyword tid="187915"><![CDATA[go-researchnews]]></keyword>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="686540">  <title><![CDATA[Real-World Helper Exoskeletons Just Got Closer to Reality]]></title>  <uid>27446</uid>  <body><![CDATA[<p>To make useful wearable robotic devices that can help stroke patients or people with amputated limbs, the computer brains driving the systems must be trained. That takes time and money — lots of time and money. 
And researchers&nbsp;need specially equipped labs to collect mountains of human data for training.</p><p>Even when engineers have a working device and brain, called a controller, changes and improvements to the exoskeleton system typically mean data collection and training start all over again. The process is expensive and makes bringing fully functional exoskeletons or robotic limbs into the real world largely impractical.</p><p>Not anymore, thanks to Georgia Tech engineers and computer scientists.</p><p>They’ve created an artificial intelligence tool that can turn huge amounts of existing data on how people move into functional exoskeleton controllers. No data collection, retraining, or hours upon hours of additional lab time required for each specific device.</p><p>Their approach has produced an exoskeleton brain capable of offering meaningful assistance across a huge range of hip and knee movements that works as well as the best controllers currently available. <a href="https://doi.org/10.1126/scirobotics.ads8652">Their work was published Nov. 
19 in <em>Science Robotics.</em></a></p><p><a href="https://coe.gatech.edu/news/2025/11/real-world-helper-exoskeletons-just-got-closer-reality"><strong>Full details on the College of Engineering website.</strong></a></p>]]></body>  <author>Joshua Stewart</author>  <status>1</status>  <created>1763577513</created>  <gmt_created>2025-11-19 18:38:33</gmt_created>  <changed>1763579536</changed>  <gmt_changed>2025-11-19 19:12:16</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[Georgia Tech researchers are using AI to quickly train exoskeleton devices, making it much more practical to develop, improve, and ultimately deploy wearable robots for people with impaired mobility.]]></teaser>  <type>news</type>  <sentence><![CDATA[Georgia Tech researchers are using AI to quickly train exoskeleton devices, making it much more practical to develop, improve, and ultimately deploy wearable robots for people with impaired mobility.]]></sentence>  <summary><![CDATA[<p>Georgia Tech researchers are using AI to quickly train exoskeleton devices, making it much more practical to develop, improve, and ultimately deploy wearable robots for people with impaired mobility.</p>]]></summary>  <dateline>2025-11-19T00:00:00-05:00</dateline>  <iso_dateline>2025-11-19T00:00:00-05:00</iso_dateline>  <gmt_dateline>2025-11-19 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[jstewart@gatech.edu]]></email>  <location></location>  <contact><![CDATA[<p><a href="mailto:jstewart@gatech.edu">Joshua Stewart</a><br>College of Engineering</p>]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>678673</item>      </media>  <hg_media>          <item>          <nid>678673</nid>          <type>image</type>          <title><![CDATA[Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg]]></title>          <body><![CDATA[<p>Researchers Matthew Gombolay, 
left, and Aaron Young used the lower-limb exoskeleton demonstrated in the background to test their new approach to creating exoskeleton controllers. They use huge amounts of existing data on how people move to create functional controllers able to provide meaningful assistance. And unlike earlier controllers, they do not require hours and hours of additional training and data collection with each specific exoskeleton device.</p>]]></body>                      <image_name><![CDATA[Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/11/19/Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/11/19/Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/11/19/Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg?itok=sxJlmrAp]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Matthew Gombolay and Aaron Young pose in the lab while Ph.D. 
researchers work on a leg exoskeleton device.]]></image_alt>                    <created>1763577576</created>          <gmt_created>2025-11-19 18:39:36</gmt_created>          <changed>1763577576</changed>          <gmt_changed>2025-11-19 18:39:36</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="1237"><![CDATA[College of Engineering]]></group>      </groups>  <categories>          <category tid="194606"><![CDATA[Artificial Intelligence]]></category>          <category tid="145"><![CDATA[Engineering]]></category>          <category tid="135"><![CDATA[Research]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="194606"><![CDATA[Artificial Intelligence]]></term>          <term tid="145"><![CDATA[Engineering]]></term>          <term tid="135"><![CDATA[Research]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="168835"><![CDATA[Aaron Young]]></keyword>          <keyword tid="175375"><![CDATA[matthew gombolay]]></keyword>          <keyword tid="182630"><![CDATA[exoskeletons]]></keyword>          <keyword tid="187991"><![CDATA[go-robotics]]></keyword>      </keywords>  <core_research_areas>          <term tid="39521"><![CDATA[Robotics]]></term>      </core_research_areas>  <news_room_topics>          <topic tid="71881"><![CDATA[Science and Technology]]></topic>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node><node id="686422">  <title><![CDATA[Ph.D. Student’s Framework Used to Bolster Nvidia’s Cosmos Predict-2 Model]]></title>  <uid>36530</uid>  <body><![CDATA[<p>A new deep learning architectural framework could boost the development and deployment efficiency of autonomous vehicles and humanoid robots. 
The framework will lower training costs and reduce the amount of real-world data needed for training.</p><p>World foundation models (WFMs) enable physical AI systems to learn and operate within&nbsp;synthetic worlds created by generative artificial intelligence (genAI). For example, these models use predictive capabilities to generate up to 30 seconds of video that accurately reflects the real world.</p><p>The new framework, developed by a Georgia Tech researcher, enhances the processing speed of the neural networks that simulate these real-world environments from text, images, or video inputs.</p><p>The neural networks that make up the architectures of large language models like ChatGPT and visual models like Sora process contextual information using the “attention mechanism.”</p><p>Attention refers to a model’s ability to focus on the most relevant parts of input.</p><p>The Neighborhood Attention Extension (NATTEN) allows models that require GPUs or high-performance computing systems to process information and generate outputs more efficiently.</p><p>Processing speeds can increase by up to 2.6 times, said <a href="https://alihassanijr.com/"><strong>Ali Hassani</strong></a>, a Ph.D. student in the School of Interactive Computing and the creator of NATTEN. Hassani is advised by Associate Professor <a href="https://www.humphreyshi.com/"><strong>Humphrey Shi</strong></a>.</p><p>Hassani is also a research scientist at Nvidia, where he introduced NATTEN to <a href="https://www.nvidia.com/en-us/ai/cosmos/"><strong>Cosmos</strong></a> — a family of WFMs the company uses to train robots, autonomous vehicles, and other physical AI applications.</p><p>“You can map just about anything from a prompt or an image or any combination of frames from an existing video to predict future videos,” Hassani said. “Instead of generating words with an LLM, you’re generating a world.</p><p>“Unlike LLMs that generate a single token at a time, these models are compute-heavy. 
They generate many images — often hundreds of frames at a time — so the models put a lot of work on the GPU. NATTEN lets us decrease some of that work and proportionately accelerate the model.”</p>]]></body>  <author>Nathan Deen</author>  <status>1</status>  <created>1763068438</created>  <gmt_created>2025-11-13 21:13:58</gmt_created>  <changed>1763068498</changed>  <gmt_changed>2025-11-13 21:14:58</gmt_changed>  <promote>0</promote>  <sticky>0</sticky>  <teaser><![CDATA[A new deep learning architectural framework, the Neighborhood Attention Extension (NATTEN), is being used by Nvidia to increase the processing speed of its Cosmos Predict-2 model for training autonomous vehicles and humanoid robots.]]></teaser>  <type>news</type>  <sentence><![CDATA[A new deep learning architectural framework, the Neighborhood Attention Extension (NATTEN), is being used by Nvidia to increase the processing speed of its Cosmos Predict-2 model for training autonomous vehicles and humanoid robots.]]></sentence>  <summary><![CDATA[<p>Georgia Tech Ph.D. student Ali Hassani developed the Neighborhood Attention Extension (NATTEN), a deep learning architectural framework that is being integrated into Nvidia's Cosmos Predict-2 world foundation model. 
NATTEN enhances the processing speed of the neural networks that simulate real-world environments for physical AI systems, which are used to train autonomous vehicles and humanoid robots.</p>]]></summary>  <dateline>2025-11-03T00:00:00-05:00</dateline>  <iso_dateline>2025-11-03T00:00:00-05:00</iso_dateline>  <gmt_dateline>2025-11-03 00:00:00</gmt_dateline>  <subtitle>    <![CDATA[]]>  </subtitle>  <sidebar><![CDATA[]]></sidebar>  <email><![CDATA[]]></email>  <location></location>  <contact><![CDATA[]]></contact>  <boilerplate></boilerplate>  <boilerplate_text><![CDATA[]]></boilerplate_text>  <media>          <item>678621</item>      </media>  <hg_media>          <item>          <nid>678621</nid>          <type>image</type>          <title><![CDATA[2X6A3487.jpg]]></title>          <body><![CDATA[]]></body>                      <image_name><![CDATA[2X6A3487.jpg]]></image_name>            <image_path><![CDATA[/sites/default/files/2025/11/13/2X6A3487.jpg]]></image_path>            <image_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/2025/11/13/2X6A3487.jpg]]></image_full_path>            <image_740><![CDATA[http://hg.gatech.edu/sites/default/files/styles/740xx_scale/public/sites/default/files/2025/11/13/2X6A3487.jpg?itok=TTWF4N4h]]></image_740>            <image_mime>image/jpeg</image_mime>            <image_alt><![CDATA[Humphrey Shi and Ali Hassani]]></image_alt>                    <created>1763068473</created>          <gmt_created>2025-11-13 21:14:33</gmt_created>          <changed>1763068473</changed>          <gmt_changed>2025-11-13 21:14:33</gmt_changed>      </item>      </hg_media>  <related>      </related>  <files>      </files>  <groups>          <group id="47223"><![CDATA[College of Computing]]></group>          <group id="1188"><![CDATA[Research Horizons]]></group>          <group id="50876"><![CDATA[School of Interactive Computing]]></group>      </groups>  <categories>          <category tid="153"><![CDATA[Computer Science/Information 
Technology and Security]]></category>          <category tid="194609"><![CDATA[Industry]]></category>          <category tid="152"><![CDATA[Robotics]]></category>      </categories>  <news_terms>          <term tid="153"><![CDATA[Computer Science/Information Technology and Security]]></term>          <term tid="194609"><![CDATA[Industry]]></term>          <term tid="152"><![CDATA[Robotics]]></term>      </news_terms>  <keywords>          <keyword tid="192863"><![CDATA[go-ai]]></keyword>          <keyword tid="193860"><![CDATA[Artificial Intelligence]]></keyword>          <keyword tid="194701"><![CDATA[go-resarchnews]]></keyword>          <keyword tid="9153"><![CDATA[Research Horizons]]></keyword>          <keyword tid="14549"><![CDATA[nvidia]]></keyword>          <keyword tid="191138"><![CDATA[artificial neural networks]]></keyword>          <keyword tid="97281"><![CDATA[autonomous vehicles]]></keyword>      </keywords>  <core_research_areas>          <term tid="193655"><![CDATA[Artificial Intelligence at Georgia Tech]]></term>      </core_research_areas>  <news_room_topics>      </news_room_topics>  <files></files>  <related></related>  <userdata><![CDATA[]]></userdata></node></nodes>