{"688391":{"#nid":"688391","#data":{"type":"news","title":"Robot Pollinator Could Produce More, Better Crops for Indoor Farms","body":[{"value":"\u003Cp\u003EA new robot could solve one of the biggest challenges facing indoor farmers: manual pollination.\u003C\/p\u003E\u003Cp\u003EIndoor farms, also known as vertical farms, are popular among agricultural researchers and are expanding across the agricultural industry. Some benefits they have over outdoor farms include:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EYear-round production of food crops\u003C\/li\u003E\u003Cli\u003ELess water and land requirements\u003C\/li\u003E\u003Cli\u003ENot needing pesticides\u003C\/li\u003E\u003Cli\u003EReducing carbon emissions from shipping\u003C\/li\u003E\u003Cli\u003EReducing food waste\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003EAdditionally,\u0026nbsp;\u003Ca href=\u0022https:\/\/www.agritecture.com\/blog\/2021\/7\/20\/5-ways-vertical-farming-is-improving-nutrition\u0022\u003E\u003Cstrong\u003Esome studies\u003C\/strong\u003E\u003C\/a\u003E indicate that indoor farms produce more nutritious food for urban communities.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EHowever, these farms are often inaccessible to birds, bees, and other natural pollinators, leaving the pollination process to humans. The tedious process must be completed by hand for each flower to ensure the indoor crop flourishes.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/research.gatech.edu\/people\/ai-ping-hu\u0022\u003E\u003Cstrong\u003EAi-Ping Hu\u003C\/strong\u003E\u003C\/a\u003E, a principal research engineer at the Georgia Tech Research Institute (GTRI), has spent years exploring methods to efficiently pollinate flowering plants and food crops in indoor farms to find a way to efficiently pollinate flower plants and food crops in indoor farms.\u003C\/p\u003E\u003Cp\u003EHu,\u0026nbsp;\u003Ca href=\u0022https:\/\/research.gatech.edu\/people\/shreyas-kousik\u0022\u003E\u003Cstrong\u003EAssistant Professor Shreyas Kousik of the George W. Woodruff School of Mechanical Engineering\u003C\/strong\u003E\u003C\/a\u003E, and a rotating group of student interns have developed a robot prototype that may be up to the task.\u003C\/p\u003E\u003Cp\u003EThe robot can efficiently pollinate plants that have both male and female reproductive parts. These plants only require pollen to be transferred from one part to the other rather than externally from another flower.\u003C\/p\u003E\u003Cp\u003ENatural pollinators perform this task outdoors, but Hu said indoor farmers often use a paintbrush or electric tootbrush to ensure these flowers are pollinated.\u0026nbsp;\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EKnowing the Pose\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EAn early challenge the research team addressed was teaching the robot to identify the \u201cpose\u201d of each flower. Pose refers to a flower\u2019s orientation, shape, and symmetry. Knowing these details ensures precise delivery of the pollen to maximize reproductive success.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIt\u2019s crucial to know exactly which way the flowers are facing,\u201d Hu said.\u003C\/p\u003E\u003Cp\u003E\u201cYou want to approach the flower from the front because that\u2019s where all the biological structures are. Knowing the pose tells you where the stem is. 
Our device grasps the stem and shakes it to dislodge the pollen.\u003C\/p\u003E\u003Cp\u003E\u201cEvery flower is going to have its own pose, and you need to know what that is within at least 10 degrees.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EComputer Vision Breakthrough\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003E\u003Cstrong\u003EHarsh Muriki\u003C\/strong\u003E is a robotics master\u2019s student at Georgia Tech\u2019s School of Interactive Computing, who used computer vision to solve the pose problem while interning for Hu and GTRI.\u003C\/p\u003E\u003Cp\u003EMuriki attached a camera to a FarmBot to capture images of strawberry plants from dozens of angles in a small garden in front of Georgia Tech\u2019s Food Processing Technology Building. The\u0026nbsp;\u003Ca href=\u0022https:\/\/farm.bot\/?srsltid=AfmBOoqh1Z8vSs3WflZisgw5DsOUSo8shD4VtY0Y8_VmVpVyt0Iwalxo\u0022\u003E\u003Cstrong\u003EFarmBot\u003C\/strong\u003E\u003C\/a\u003E is an XYZ-axis robot that waters and sprays pesticides on outdoor gardens, though it is not capable of pollination.\u003C\/p\u003E\u003Cp\u003E\u201cWe reconstruct the images of the flower into a 3D model and use a technique that converts the 3D model into multiple 2D images with depth information,\u201d Muriki said. \u201cThis enables us to send them to object detectors.\u201d\u003C\/p\u003E\u003Cp\u003EMuriki said he used a real-time object detection system called YOLO (You Only Look Once) to classify objects. YOLO is known for identifying and classifying objects in a single pass.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EVed Sengupta\u003C\/strong\u003E, a computer engineering major who interned with Muriki, fine-tuned the algorithms that converted 3D images into 2D.\u003C\/p\u003E\u003Cp\u003E\u201cThis was a crucial part of making robot pollination possible,\u201d Sengupta said. \u201cThere is a big gap between 3D and 2D image processing.\u003C\/p\u003E\u003Cp\u003E\u201cThere\u2019s not a lot of data on the internet for 3D object detection, but there\u2019s a ton for 2D. We were able to get great results from the converted images, and I think any sector of technology can take advantage of that.\u201d\u003C\/p\u003E\u003Cp\u003ESengupta, Muriki, and Hu co-authored a paper about their work that was accepted to the 2025 International Conference on Robotics and Automation (ICRA) in Atlanta.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EMeasuring Success\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EThe pollination robot, built in Kousik\u2019s Safe Robotics Lab, is now in the prototype phase.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EHu said the robot can do more than pollinate. It can also analyze each flower to determine how well it was pollinated and whether the chances for reproduction are high.\u003C\/p\u003E\u003Cp\u003E\u201cIt has an additional capability of microscopic inspection,\u201d Hu said. \u201cIt\u2019s the first device we know of that provides visual feedback on how well a flower was pollinated.\u201d\u003C\/p\u003E\u003Cp\u003EFor more information about the robot, visit the\u0026nbsp;\u003Ca href=\u0022https:\/\/saferoboticslab.me.gatech.edu\/research\/towards-robotic-pollination\/\u0022\u003E\u003Cstrong\u003ESafe Robotics Lab project page\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EManual pollination is one of the biggest challenges for indoor farmers. 
These farms are often inaccessible to birds, bees, and other natural pollinators, leaving the pollination process to humans. The tedious process must be completed by hand for each flower to ensure the indoor crop flourishes.\u003C\/p\u003E\u003Cp\u003EA Georgia Tech research team led by Ai-Ping Hu and Shreyas Kousik is working to solve that. A robot they\u0027ve developed can efficiently pollinate plants that have both male and female reproductive parts. These plants only require pollen to be transferred from one part to the other rather than externally from another flower.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A research team that spans GTRI, the College of Engineering, and the College of Computing has developed a robot capable of pollinating flowers in indoor farms."}],"uid":"36530","created_gmt":"2026-02-19 18:58:12","changed_gmt":"2026-03-20 12:54:01","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2026-02-19T00:00:00-05:00","iso_date":"2026-02-19T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"679370":{"id":"679370","type":"image","title":"Harsh-Muriki_86A0006.jpg","body":null,"created":"1771527500","gmt_created":"2026-02-19 18:58:20","changed":"1771527500","gmt_changed":"2026-02-19 18:58:20","alt":"Harsh Muriki","file":{"fid":"263520","name":"Harsh-Muriki_86A0006.jpg","image_path":"\/sites\/default\/files\/2026\/02\/19\/Harsh-Muriki_86A0006.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/02\/19\/Harsh-Muriki_86A0006.jpg","mime":"image\/jpeg","size":140654,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/02\/19\/Harsh-Muriki_86A0006.jpg?itok=rd0rv1Yt"}}},"media_ids":["679370"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"145","name":"Engineering"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"9153","name":"Research Horizons"},{"id":"187991","name":"go-robotics"},{"id":"192863","name":"go-ai"},{"id":"11506","name":"computer vision"},{"id":"180840","name":"computer vision systems"},{"id":"669","name":"agriculture"},{"id":"194392","name":"AI in Agriculture"},{"id":"170254","name":"urban gardening"},{"id":"94111","name":"farming"},{"id":"14913","name":"urban farming"},{"id":"23911","name":"bees"},{"id":"6660","name":"flowers"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"193653","name":"Georgia Tech Research Institute"},{"id":"39521","name":"Robotics"}],"news_room_topics":[{"id":"71911","name":"Earth and Environment"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:ndeen6@gatech.edu\u0022\u003ENathan Deen\u003C\/a\u003E\u003Cbr\u003ECollege of Computing\u003Cbr\u003EGeorgia Tech\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"688893":{"#nid":"688893","#data":{"type":"news","title":"Sheepdogs Reveal a Better Way to Guide Robot Swarms","body":[{"value":"\u003Cp\u003ESheepdogs, bred to 
control large groups of sheep in open fields, have demonstrated their skills in competitions dating back to the 1870s.\u003C\/p\u003E\u003Cp\u003EIn these contests, a handler directs a trained dog with whistle signals to guide a small group of sheep across a field and sometimes split the flock cleanly into two groups. But sheep do not always cooperate.\u003C\/p\u003E\u003Cp\u003EResearchers at the Georgia Institute of Technology studied how handler\u2013dog teams manage these unpredictable flocks in sheepdog trials and found principles that extend beyond livestock herding.\u003C\/p\u003E\u003Cp\u003EIn a \u003Ca href=\u0022https:\/\/www.science.org\/doi\/10.1126\/sciadv.adx6791\u0022\u003E\u003Cstrong\u003Estudy\u003C\/strong\u003E\u003C\/a\u003E published in \u003Cem\u003EScience Advances\u0026nbsp;\u003C\/em\u003Eas the cover feature, the researchers applied those insights to computer simulations showing how similar strategies could improve the control of robot swarms, autonomous vehicles, AI agents, and other networked systems where many machines must coordinate their actions despite uncertain conditions.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EGroup Movement Dynamics\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003E\u201cBirds, bugs, fish, sheep, and many other organisms move in groups because it benefits individuals, including protection from predators,\u201d said \u003Ca href=\u0022https:\/\/bhamla.gatech.edu\/\u0022\u003E\u003Cstrong\u003ESaad Bhamla\u003C\/strong\u003E\u003C\/a\u003E, an associate professor in Georgia Tech\u2019s School of Chemical and Biomolecular Engineering. \u201cThe puzzle is that the \u2018group\u2019 is not a single organism. It is built from many individuals, each making local, imperfect decisions.\u201d\u003C\/p\u003E\u003Cp\u003EWhen a predator threatens a herd of sheep, individuals near the edge often move toward the center to reduce their own risk, Bhamla explained. \u201cThis is \u2018selfish herd\u2019 behavior,\u201d he said. \u201cShepherds exploit that instinct using trained dogs.\u201d\u003C\/p\u003E\u003Cp\u003EFrom examining hours of contest footage, the researchers found that controlling small groups of sheep can be harder than managing large ones. A larger group, with more sheep protected in the center, may behave more coherently than a small group as the animals constantly shift between two instincts: \u201cfollow the group\u201d and \u201cflee the dog.\u201d\u003C\/p\u003E\u003Cp\u003E\u201cThat switching behavior makes the group unpredictable,\u201d said Tuhin Chakrabortty, a former postdoctoral researcher in the Bhamla Lab who co-led the study.\u003C\/p\u003E\u003Cp\u003ELooking closely at how dogs and their handlers guide small groups, the researchers found that unpredictability in the flock\u2019s behavior does not always make control harder. \u201cUnder the right conditions, that \u2018noisy\u2019 behavior might actually be a benefit,\u201d Bhamla said.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ESuccessful Sheep Herding\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003ESheepdog handlers categorize sheep by how strongly they respond to a dog\u2019s threatening pressure. Some very responsive sheep might panic under too much pressure, while others might ignore mild pressure and require stronger positioning by the dog.\u003C\/p\u003E\u003Cp\u003EThe researchers observed that successful control often followed a two-step pattern. First, the dog subtly influenced the sheep\u2019s orientation while the animals were mostly standing still. 
Once the flock was aligned in the desired direction, the dog increased pressure to trigger movement. The timing of those actions was critical, because alignment within a small group could disappear quickly as individuals switched between instincts.\u003C\/p\u003E\u003Cp\u003E\u201cIn our simulations, increasing pressure makes the flock reach the desired orientation faster, but how long the flock stays aligned is set mainly by noise,\u201d Chakrabortty said. \u201cIn essence, dogs can steer the direction, but they can\u2019t hold that decision indefinitely, so timing matters.\u201d\u003C\/p\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003E\u003Cstrong\u003EDeveloping Computer Models\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003ETo understand the broader implications of that behavior, the team developed computer models that captured how sheep respond both to the dog and to one another. The models allowed the researchers to test different strategies for guiding groups whose members make independent decisions under uncertainty.\u003C\/p\u003E\u003Cp\u003EThey then applied those ideas to simulations of robotic swarms. Engineers often design such systems so that each robot blends signals from all nearby robots before deciding how to move. While that approach works well when signals are clear, it can break down when information is noisy or conflicting, Bhamla explained.\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003ETo explain why that switching strategy can work under noisy conditions, the researchers used an analogy of a smoke-filled room where only one person can see the exit, and no one knows who that person is. If everyone polls everyone else and averages the guesses, the one correct signal can get diluted by many noisy ones.\u003C\/p\u003E\u003Cp\u003E\u201cThat\u2019s the counterintuitive part. When only one person has the right information, averaging can wash out the signal. But if you follow one person at a time, and keep switching who that is, the right information can spread through the crowd,\u201d Bhamla said.\u003C\/p\u003E\u003Cp\u003EBuilding on that idea, the researchers tested a strategy inspired by the switching behavior they observed in sheep. In the simulations, each robot paid attention to just one source at a time (either a guiding signal or a neighboring robot) and switched that source from one step to the next.\u003C\/p\u003E\u003Cp\u003EUnder noisy conditions, this switching strategy required less effort to keep the group moving along a desired path than either averaging-based strategies or fixed leader-follower strategies.\u003C\/p\u003E\u003Cp\u003EThe researchers call their approach the Indecisive Swarm Algorithm. 
The name reflects a counterintuitive insight: allowing influence to shift among individuals over time can make groups easier to guide when conditions are uncertain.\u003C\/p\u003E\u003Cp\u003E\u201cOur findings suggest that the same dynamics that make small animal groups unpredictable may also offer new ways to control complex engineered systems,\u201d Bhamla said.\u003C\/p\u003E\u003Cp\u003ECITATION: Tuhin Chakrabortty and Saad Bhamla, \u201c\u003Ca href=\u0022https:\/\/www.science.org\/doi\/10.1126\/sciadv.adx6791\u0022\u003E\u003Cstrong\u003EControlling noisy herds: Temporal network restructuring improves control of indecisive collectives\u003C\/strong\u003E\u003C\/a\u003E,\u201d \u003Cem\u003EScience Advances\u003C\/em\u003E, 2026\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003EThis research was funded in part by Schmidt Sciences as part of a \u003C\/em\u003E\u003Ca href=\u0022https:\/\/news.gatech.edu\/news\/2025\/09\/16\/saad-bhamla-named-2025-schmidt-polymath\u0022\u003E\u003Cem\u003ESchmidt Polymath\u003C\/em\u003E\u003C\/a\u003E\u003Cem\u003E grant to Saad Bhamla.\u003C\/em\u003E\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech researchers studying sheepdog trials found new principles for guiding unpredictable groups and used them to develop computer models that could improve coordination in robot swarms, autonomous vehicles, and other networked systems.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech researchers studying sheepdog trials found new principles for guiding unpredictable groups and used them to develop computer models that could improve coordination in robot swarms, autonomous vehicles, and other networked systems."}],"uid":"27271","created_gmt":"2026-03-11 19:59:46","changed_gmt":"2026-03-12 15:53:25","author":"Brad Dixon","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2026-03-11T00:00:00-04:00","iso_date":"2026-03-11T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"679589":{"id":"679589","type":"video","title":"SMART Dogs herding sheep on a farm, looks like flock of bird pattern","body":"\u003Cp\u003ESMART Dogs herding sheep on a farm, looks like flock of bird pattern\u003C\/p\u003E","created":"1773260200","gmt_created":"2026-03-11 20:16:40","changed":"1773260200","gmt_changed":"2026-03-11 20:16:40","video":{"youtube_id":"_CjwqIX6C2I","video_url":"https:\/\/youtu.be\/_CjwqIX6C2I?si=bfsxIT77-iAJCm-2"}},"679590":{"id":"679590","type":"video","title":"A dog herding sheep in a sheepdog trial","body":"\u003Cp\u003E\u003Cem\u003EA dog herding sheep in a sheepdog trial\u003C\/em\u003E\u003C\/p\u003E","created":"1773260676","gmt_created":"2026-03-11 20:24:36","changed":"1773260676","gmt_changed":"2026-03-11 20:24:36","video":{"youtube_id":"cnPOXfUC8rc","video_url":"https:\/\/youtu.be\/cnPOXfUC8rc?si=41jH8u3UQ_qjgqWn"}},"679591":{"id":"679591","type":"video","title":" Controlling \u0027Noisy\u0027 Sheep Herds","body":"\u003Cp\u003EControlling \u0027noisy\u0027 sheep herds\u003C\/p\u003E","created":"1773260974","gmt_created":"2026-03-11 20:29:34","changed":"1773260974","gmt_changed":"2026-03-11 20:29:34","video":{"youtube_id":"EMHmDPpe8HE","video_url":"https:\/\/youtu.be\/EMHmDPpe8HE?si=_5DFsk_BafsIK78R"}},"679584":{"id":"679584","type":"image","title":"Sheepdog herding 
sheep","body":"\u003Cp\u003ESheepdog herding in a sheepdog trial competition\u003C\/p\u003E","created":"1773259589","gmt_created":"2026-03-11 20:06:29","changed":"1773261394","gmt_changed":"2026-03-11 20:36:34","alt":"Sheepdog herding sheep","file":{"fid":"263762","name":"sheepdog1.jpg","image_path":"\/sites\/default\/files\/2026\/03\/11\/sheepdog1.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/03\/11\/sheepdog1.jpg","mime":"image\/jpeg","size":226432,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/03\/11\/sheepdog1.jpg?itok=sbHIPJIH"}},"679588":{"id":"679588","type":"image","title":"Sheeping herding resistant sheep","body":"\u003Cp\u003ESheepdogs first align the flock\u2019s direction, then apply pressure to trigger movement before the sheep lose alignment.\u003C\/p\u003E","created":"1773259967","gmt_created":"2026-03-11 20:12:47","changed":"1773261607","gmt_changed":"2026-03-11 20:40:07","alt":"Sheepdog herding seep","file":{"fid":"263766","name":"sheepdog2-copy.jpg","image_path":"\/sites\/default\/files\/2026\/03\/11\/sheepdog2-copy.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/03\/11\/sheepdog2-copy.jpg","mime":"image\/jpeg","size":196318,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/03\/11\/sheepdog2-copy.jpg?itok=F3wbneis"}}},"media_ids":["679589","679590","679591","679584","679588"],"groups":[{"id":"1188","name":"Research Horizons"},{"id":"1240","name":"School of Chemical and Biomolecular Engineering"}],"categories":[{"id":"145","name":"Engineering"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"667","name":"robotics"},{"id":"194958","name":"Sheepdogs"},{"id":"194959","name":"Herding"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EBrad Dixon, \u003Ca href=\u0022mailto: braddixon@gatech.edu\u0022\u003Ebraddixon@gatech.edu\u003C\/a\u003E\u003C\/p\u003E","format":"limited_html"}],"email":["braddixon@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"686540":{"#nid":"686540","#data":{"type":"news","title":"Real-World Helper Exoskeletons Just Got Closer to Reality","body":[{"value":"\u003Cp\u003ETo make useful wearable robotic devices that can help stroke patients or people with amputated limbs, the computer brains driving the systems must be trained. That takes time and money \u2014 lots of time and money. And researchers\u0026nbsp;need specially equipped labs to collect mountains of human data for training.\u003C\/p\u003E\u003Cp\u003EEven when engineers have a working device and brain, called a controller, changes and improvements to the exoskeleton system typically mean data collection and training start all over again. The process is expensive and makes bringing fully functional exoskeletons or robotic limbs into the real world largely impractical.\u003C\/p\u003E\u003Cp\u003ENot anymore, thanks to Georgia Tech engineers and computer scientists.\u003C\/p\u003E\u003Cp\u003EThey\u2019ve created an artificial intelligence tool that can turn huge amounts of existing data on how people move into functional exoskeleton controllers. 
No data collection, retraining, or hours upon hours of additional lab time required for each specific device.\u003C\/p\u003E\u003Cp\u003ETheir approach has produced an exoskeleton brain capable of offering meaningful assistance across a huge range of hip and knee movements that works as well as the best controllers currently available. \u003Ca href=\u0022https:\/\/doi.org\/10.1126\/scirobotics.ads8652\u0022\u003ETheir work was published Nov. 19 in \u003Cem\u003EScience Robotics.\u003C\/em\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/coe.gatech.edu\/news\/2025\/11\/real-world-helper-exoskeletons-just-got-closer-reality\u0022\u003E\u003Cstrong\u003EFull details on the College of Engineering website.\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech researchers are using AI to quickly train exoskeleton devices, making it much more practical to develop, improve, and ultimately deploy wearable robots for people with impaired mobility.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech researchers are using AI to quickly train exoskeleton devices, making it much more practical to develop, improve, and ultimately deploy wearable robots for people with impaired mobility."}],"uid":"27446","created_gmt":"2025-11-19 18:38:33","changed_gmt":"2025-11-19 19:12:16","author":"Joshua Stewart","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-11-19T00:00:00-05:00","iso_date":"2025-11-19T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678673":{"id":"678673","type":"image","title":"Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg","body":"\u003Cp\u003EResearchers Matthew Gombolay, left, and Aaron Young used the lower-limb exoskeleton demonstrated in the background to test their new approach to creating exoskeleton controllers. They use huge amounts of existing data on how people move to create functional controllers able to provide meaningful assistance. And unlike earlier controllers, they do not require hours and hours of additional training and data collection with each specific exoskeleton device.\u003C\/p\u003E","created":"1763577576","gmt_created":"2025-11-19 18:39:36","changed":"1763577576","gmt_changed":"2025-11-19 18:39:36","alt":"Matthew Gombolay and Aaron Young pose in the lab while Ph.D. 
researchers work on a leg exoskeleton device.","file":{"fid":"262731","name":"Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg","image_path":"\/sites\/default\/files\/2025\/11\/19\/Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/11\/19\/Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg","mime":"image\/jpeg","size":985612,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/11\/19\/Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg?itok=qFUHgDV1"}}},"media_ids":["678673"],"groups":[{"id":"1237","name":"College of Engineering"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"145","name":"Engineering"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"168835","name":"Aaron Young"},{"id":"175375","name":"matthew gombolay"},{"id":"182630","name":"exoskeletons"},{"id":"187991","name":"go-robotics"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jstewart@gatech.edu\u0022\u003EJoshua Stewart\u003C\/a\u003E\u003Cbr\u003ECollege of Engineering\u003C\/p\u003E","format":"limited_html"}],"email":["jstewart@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"686422":{"#nid":"686422","#data":{"type":"news","title":"Ph.D. Student\u2019s Framework Used to Bolster Nvidia\u2019s Cosmos Predict-2 Model","body":[{"value":"\u003Cp\u003EA new deep learning architectural framework could boost the development and deployment efficiency of autonomous vehicles and humanoid robots. The framework will lower training costs and reduce the amount of real-world data needed for training.\u003C\/p\u003E\u003Cp\u003EWorld foundation models (WFMs) enable physical AI systems to learn and operate within\u0026nbsp;synthetic worlds created by generative artificial intelligence (genAI). For example, these models use predictive capabilities to generate up to 30 seconds of video that accurately reflects the real world.\u003C\/p\u003E\u003Cp\u003EThe new framework, developed by a Georgia Tech researcher, enhances the processing speed of the neural networks that simulate these real-world environments from text, images, or video inputs.\u003C\/p\u003E\u003Cp\u003EThe neural networks that make up the architectures of large language models like ChatGPT and visual models like Sora process contextual information using the \u201cattention mechanism.\u201d\u003C\/p\u003E\u003Cp\u003EAttention refers to a model\u2019s ability to focus on the most relevant parts of input.\u003C\/p\u003E\u003Cp\u003EThe Neighborhood Attention Extension (NATTEN) allows models that require GPUs or high-performance computing systems to process information and generate outputs more efficiently.\u003C\/p\u003E\u003Cp\u003EProcessing speeds can increase by up to 2.6 times, said \u003Ca href=\u0022https:\/\/alihassanijr.com\/\u0022\u003E\u003Cstrong\u003EAli Hassani\u003C\/strong\u003E\u003C\/a\u003E, a Ph.D. student in the School of Interactive Computing and the creator of NATTEN. 
Hassani is advised by Associate Professor \u003Ca href=\u0022https:\/\/www.humphreyshi.com\/\u0022\u003E\u003Cstrong\u003EHumphrey Shi\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003EHassani is also a research scientist at Nvidia, where he introduced NATTEN to \u003Ca href=\u0022https:\/\/www.nvidia.com\/en-us\/ai\/cosmos\/\u0022\u003E\u003Cstrong\u003ECosmos\u003C\/strong\u003E\u003C\/a\u003E \u2014 a family of WFMs the company uses to train robots, autonomous vehicles, and other physical AI applications.\u003C\/p\u003E\u003Cp\u003E\u201cYou can map just about anything from a prompt or an image or any combination of frames from an existing video to predict future videos,\u201d Hassani said. \u201cInstead of generating words with an LLM, you\u2019re generating a world.\u003C\/p\u003E\u003Cp\u003E\u201cUnlike LLMs that generate a single token at a time, these models are compute-heavy. They generate many images \u2014 often hundreds of frames at a time \u2014 so the models put a lot of work on the GPU. NATTEN lets us decrease some of that work and proportionately accelerate the model.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech Ph.D. student Ali Hassani developed the Neighborhood Attention Extension (NATTEN), a deep learning architectural framework that is being integrated into Nvidia\u0027s Cosmos Predict-2 world foundation model. NATTEN enhances the processing speed of neural networks that simulate real-world environments for physical AI systems, which are used to train autonomous vehicles and humanoid robots.\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A new deep learning architectural framework, Neighborhood Attention Extension (NATTEN), is being used by Nvidia to increase the processing speed of its Cosmos Predict-2 Model for training autonomous vehicles and humanoid robots."}],"uid":"36530","created_gmt":"2025-11-13 21:13:58","changed_gmt":"2025-11-13 21:14:58","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-11-03T00:00:00-05:00","iso_date":"2025-11-03T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678621":{"id":"678621","type":"image","title":"2X6A3487.jpg","body":null,"created":"1763068473","gmt_created":"2025-11-13 21:14:33","changed":"1763068473","gmt_changed":"2025-11-13 21:14:33","alt":"Humphrey Shi and Ali Hassani","file":{"fid":"262676","name":"2X6A3487.jpg","image_path":"\/sites\/default\/files\/2025\/11\/13\/2X6A3487.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/11\/13\/2X6A3487.jpg","mime":"image\/jpeg","size":93105,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/11\/13\/2X6A3487.jpg?itok=axfoqv8i"}}},"media_ids":["678621"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"194609","name":"Industry"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"192863","name":"go-ai"},{"id":"193860","name":"Artifical Intelligence"},{"id":"194701","name":"go-resarchnews"},{"id":"9153","name":"Research Horizons"},{"id":"14549","name":"nvidia"},{"id":"191138","name":"artificial neural networks"},{"id":"97281","name":"autonomous 
vehicles"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}}}