{"688391":{"#nid":"688391","#data":{"type":"news","title":"Robot Pollinator Could Produce More, Better Crops for Indoor Farms","body":[{"value":"\u003Cp\u003EA new robot could solve one of the biggest challenges facing indoor farmers: manual pollination.\u003C\/p\u003E\u003Cp\u003EIndoor farms, also known as vertical farms, are popular among agricultural researchers and are expanding across the agricultural industry. Some benefits they have over outdoor farms include:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EYear-round production of food crops\u003C\/li\u003E\u003Cli\u003ELess water and land requirements\u003C\/li\u003E\u003Cli\u003ENot needing pesticides\u003C\/li\u003E\u003Cli\u003EReducing carbon emissions from shipping\u003C\/li\u003E\u003Cli\u003EReducing food waste\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003EAdditionally,\u0026nbsp;\u003Ca href=\u0022https:\/\/www.agritecture.com\/blog\/2021\/7\/20\/5-ways-vertical-farming-is-improving-nutrition\u0022\u003E\u003Cstrong\u003Esome studies\u003C\/strong\u003E\u003C\/a\u003E indicate that indoor farms produce more nutritious food for urban communities.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EHowever, these farms are often inaccessible to birds, bees, and other natural pollinators, leaving the pollination process to humans. The tedious process must be completed by hand for each flower to ensure the indoor crop flourishes.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/research.gatech.edu\/people\/ai-ping-hu\u0022\u003E\u003Cstrong\u003EAi-Ping Hu\u003C\/strong\u003E\u003C\/a\u003E, a principal research engineer at the Georgia Tech Research Institute (GTRI), has spent years exploring methods to efficiently pollinate flowering plants and food crops in indoor farms to find a way to efficiently pollinate flower plants and food crops in indoor farms.\u003C\/p\u003E\u003Cp\u003EHu,\u0026nbsp;\u003Ca href=\u0022https:\/\/research.gatech.edu\/people\/shreyas-kousik\u0022\u003E\u003Cstrong\u003EAssistant Professor Shreyas Kousik of the George W. Woodruff School of Mechanical Engineering\u003C\/strong\u003E\u003C\/a\u003E, and a rotating group of student interns have developed a robot prototype that may be up to the task.\u003C\/p\u003E\u003Cp\u003EThe robot can efficiently pollinate plants that have both male and female reproductive parts. These plants only require pollen to be transferred from one part to the other rather than externally from another flower.\u003C\/p\u003E\u003Cp\u003ENatural pollinators perform this task outdoors, but Hu said indoor farmers often use a paintbrush or electric tootbrush to ensure these flowers are pollinated.\u0026nbsp;\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EKnowing the Pose\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EAn early challenge the research team addressed was teaching the robot to identify the \u201cpose\u201d of each flower. Pose refers to a flower\u2019s orientation, shape, and symmetry. Knowing these details ensures precise delivery of the pollen to maximize reproductive success.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIt\u2019s crucial to know exactly which way the flowers are facing,\u201d Hu said.\u003C\/p\u003E\u003Cp\u003E\u201cYou want to approach the flower from the front because that\u2019s where all the biological structures are. Knowing the pose tells you where the stem is. Our device grasps the stem and shakes it to dislodge the pollen.\u003C\/p\u003E\u003Cp\u003E\u201cEvery flower is going to have its own pose, and you need to know what that is within at least 10 degrees.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EComputer Vision Breakthrough\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003E\u003Cstrong\u003EHarsh Muriki\u003C\/strong\u003E is a robotics master\u2019s student at Georgia Tech\u2019s School of Interactive Computing, who used computer vision to solve the pose problem while interning for Hu and GTRI.\u003C\/p\u003E\u003Cp\u003EMuriki attached a camera to a FarmBot to capture images of strawberry plants from dozens of angles in a small garden in front of Georgia Tech\u2019s Food Processing Technology Building. The\u0026nbsp;\u003Ca href=\u0022https:\/\/farm.bot\/?srsltid=AfmBOoqh1Z8vSs3WflZisgw5DsOUSo8shD4VtY0Y8_VmVpVyt0Iwalxo\u0022\u003E\u003Cstrong\u003EFarmBot\u003C\/strong\u003E\u003C\/a\u003E is an XYZ-axis robot that waters and sprays pesticides on outdoor gardens, though it is not capable of pollination.\u003C\/p\u003E\u003Cp\u003E\u201cWe reconstruct the images of the flower into a 3D model and use a technique that converts the 3D model into multiple 2D images with depth information,\u201d Muriki said. \u201cThis enables us to send them to object detectors.\u201d\u003C\/p\u003E\u003Cp\u003EMuriki said he used a real-time object detection system called YOLO (You Only Look Once) to classify objects. YOLO is known for identifying and classifying objects in a single pass.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EVed Sengupta\u003C\/strong\u003E, a computer engineering major who interned with Muriki, fine-tuned the algorithms that converted 3D images into 2D.\u003C\/p\u003E\u003Cp\u003E\u201cThis was a crucial part of making robot pollination possible,\u201d Sengupta said. \u201cThere is a big gap between 3D and 2D image processing.\u003C\/p\u003E\u003Cp\u003E\u201cThere\u2019s not a lot of data on the internet for 3D object detection, but there\u2019s a ton for 2D. We were able to get great results from the converted images, and I think any sector of technology can take advantage of that.\u201d\u003C\/p\u003E\u003Cp\u003ESengupta, Muriki, and Hu co-authored a paper about their work that was accepted to the 2025 International Conference on Robotics and Automation (ICRA) in Atlanta.\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EMeasuring Success\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EThe pollination robot, built in Kousik\u2019s Safe Robotics Lab, is now in the prototype phase.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EHu said the robot can do more than pollinate. It can also analyze each flower to determine how well it was pollinated and whether the chances for reproduction are high.\u003C\/p\u003E\u003Cp\u003E\u201cIt has an additional capability of microscopic inspection,\u201d Hu said. \u201cIt\u2019s the first device we know of that provides visual feedback on how well a flower was pollinated.\u201d\u003C\/p\u003E\u003Cp\u003EFor more information about the robot, visit the\u0026nbsp;\u003Ca href=\u0022https:\/\/saferoboticslab.me.gatech.edu\/research\/towards-robotic-pollination\/\u0022\u003E\u003Cstrong\u003ESafe Robotics Lab project page\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EManual pollination is one of the biggest challenges for indoor farmers. These farms are often inaccessible to birds, bees, and other natural pollinators, leaving the pollination process to humans. The tedious process must be completed by hand for each flower to ensure the indoor crop flourishes.\u003C\/p\u003E\u003Cp\u003EA Georgia Tech research led by Ai-Ping Hu and Shreyas Kousik team is working to solve that. A robot they\u0027ve developed can efficiently pollinate plants that have both male and female reproductive parts. These plants only require pollen to be transferred from one part to the other rather than externally from another flower.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A research team that expands GTRI, the College of Engineering, and the College of Computing have developed a robot capable of pollinating flowers in indoor farms."}],"uid":"36530","created_gmt":"2026-02-19 18:58:12","changed_gmt":"2026-03-20 12:54:01","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2026-02-19T00:00:00-05:00","iso_date":"2026-02-19T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"679370":{"id":"679370","type":"image","title":"Harsh-Muriki_86A0006.jpg","body":null,"created":"1771527500","gmt_created":"2026-02-19 18:58:20","changed":"1771527500","gmt_changed":"2026-02-19 18:58:20","alt":"Harsh Muriki","file":{"fid":"263520","name":"Harsh-Muriki_86A0006.jpg","image_path":"\/sites\/default\/files\/2026\/02\/19\/Harsh-Muriki_86A0006.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/02\/19\/Harsh-Muriki_86A0006.jpg","mime":"image\/jpeg","size":140654,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/02\/19\/Harsh-Muriki_86A0006.jpg?itok=rd0rv1Yt"}}},"media_ids":["679370"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"145","name":"Engineering"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"9153","name":"Research Horizons"},{"id":"187991","name":"go-robotics"},{"id":"192863","name":"go-ai"},{"id":"11506","name":"computer vision"},{"id":"180840","name":"computer vision systems"},{"id":"669","name":"agriculture"},{"id":"194392","name":"AI in Agriculture"},{"id":"170254","name":"urban gardening"},{"id":"94111","name":"farming"},{"id":"14913","name":"urban farming"},{"id":"23911","name":"bees"},{"id":"6660","name":"flowers"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"193653","name":"Georgia Tech Research Institute"},{"id":"39521","name":"Robotics"}],"news_room_topics":[{"id":"71911","name":"Earth and Environment"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:ndeen6@gatech.edu\u0022\u003ENathan Deen\u003C\/a\u003E\u003Cbr\u003ECollege of Computing\u003Cbr\u003EGeorgia Tech\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"688893":{"#nid":"688893","#data":{"type":"news","title":"Sheepdogs Reveal a Better Way to Guide Robot Swarms","body":[{"value":"\u003Cp\u003ESheepdogs, bred to control large groups of sheep in open fields, have demonstrated their skills in competitions dating back to the 1870s.\u003C\/p\u003E\u003Cp\u003EIn these contests, a handler directs a trained dog with whistle signals to guide a small group of sheep across a field and sometimes split the flock cleanly into two groups. But sheep do not always cooperate.\u003C\/p\u003E\u003Cp\u003EResearchers at the Georgia Institute of Technology studied how handler\u2013dog teams manage these unpredictable flocks in sheepdog trials and found principles that extend beyond livestock herding.\u003C\/p\u003E\u003Cp\u003EIn a \u003Ca href=\u0022https:\/\/www.science.org\/doi\/10.1126\/sciadv.adx6791\u0022\u003E\u003Cstrong\u003Estudy\u003C\/strong\u003E\u003C\/a\u003E published in \u003Cem\u003EScience Advances\u0026nbsp;\u003C\/em\u003Eas the cover feature, the researchers applied those insights to computer simulations showing how similar strategies could improve the control of robot swarms, autonomous vehicles, AI agents, and other networked systems where many machines must coordinate their actions despite uncertain conditions.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EGroup Movement Dynamics\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003E\u201cBirds, bugs, fish, sheep, and many other organisms move in groups because it benefits individuals, including protection from predators,\u201d said \u003Ca href=\u0022https:\/\/bhamla.gatech.edu\/\u0022\u003E\u003Cstrong\u003ESaad Bhamla\u003C\/strong\u003E\u003C\/a\u003E, an associate professor in Georgia Tech\u2019s School of Chemical and Biomolecular Engineering. \u201cThe puzzle is that the \u2018group\u2019 is not a single organism. It is built from many individuals, each making local, imperfect decisions.\u201d\u003C\/p\u003E\u003Cp\u003EWhen a predator threatens a herd of sheep, individuals near the edge often move toward the center to reduce their own risk, Bhamla explained. \u201cThis is \u2018selfish herd\u2019 behavior,\u201d he said. \u201cShepherds exploit that instinct using trained dogs.\u201d\u003C\/p\u003E\u003Cp\u003EFrom examining hours of contest footage, the researchers found that controlling small groups of sheep can be harder than managing large ones. A larger group, with more sheep protected in the center, may behave more coherently than a small group as the animals constantly shift between two instincts: \u201cfollow the group\u201d and \u201cflee the dog.\u201d\u003C\/p\u003E\u003Cp\u003E\u201cThat switching behavior makes the group unpredictable,\u201d said Tuhin Chakrabortty, a former postdoctoral researcher in the Bhamla Lab who co-led the study.\u003C\/p\u003E\u003Cp\u003ELooking closely at how dogs and their handlers guide small groups, the researchers found that unpredictability in the flock\u2019s behavior does not always make control harder. \u201cUnder the right conditions, that \u2018noisy\u2019 behavior might actually be a benefit,\u201d Bhamla said.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ESuccessful Sheep Herding\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003ESheepdog handlers categorize sheep by how strongly they respond to a dog\u2019s threatening pressure. Some very responsive sheep might panic under too much pressure, while others might ignore mild pressure and require stronger positioning by the dog.\u003C\/p\u003E\u003Cp\u003EThe researchers observed that successful control often followed a two-step pattern. First, the dog subtly influenced the sheep\u2019s orientation while the animals were mostly standing still. Once the flock was aligned in the desired direction, the dog increased pressure to trigger movement. The timing of those actions was critical, because alignment within a small group could disappear quickly as individuals switched between instincts.\u003C\/p\u003E\u003Cp\u003E\u201cIn our simulations, increasing pressure makes the flock reach the desired orientation faster, but how long the flock stays aligned is set mainly by noise,\u201d Chakrabortty said. \u201cIn essence, dogs can steer the direction, but they can\u2019t hold that decision indefinitely, so timing matters.\u201d\u003C\/p\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003E\u003Cstrong\u003EDeveloping Computer Models\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003ETo understand the broader implications of that behavior, the team developed computer models that captured how sheep respond both to the dog and to one another. The models allowed the researchers to test different strategies for guiding groups whose members make independent decisions under uncertainty.\u003C\/p\u003E\u003Cp\u003EThey then applied those ideas to simulations of robotic swarms. Engineers often design such systems so that each robot blends signals from all nearby robots before deciding how to move. While that approach works well when signals are clear, it can break down when information is noisy or conflicting, Bhamla explained.\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003ETo explain why that switching strategy can work under noisy conditions, the researchers used an analogy of a smoke-filled room where only one person can see the exit, and no one knows who that person is. If everyone polls everyone else and averages the guesses, the one correct signal can get diluted by many noisy ones.\u003C\/p\u003E\u003Cp\u003E\u201cThat\u2019s the counterintuitive part. When only one person has the right information, averaging can wash out the signal. But if you follow one person at a time, and keep switching who that is, the right information can spread through the crowd,\u201d Bhamla said.\u003C\/p\u003E\u003Cp\u003EBuilding on that idea, the researchers tested a strategy inspired by the switching behavior they observed in sheep. In the simulations, each robot paid attention to just one source at a time (either a guiding signal or a neighboring robot) and switched that source from one step to the next.\u003C\/p\u003E\u003Cp\u003EUnder noisy conditions, this switching strategy required less effort to keep the group moving along a desired path than either averaging-based strategies or fixed leader-follower strategies.\u003C\/p\u003E\u003Cp\u003EThe researchers call their approach the Indecisive Swarm Algorithm. The name reflects a counterintuitive insight: allowing influence to shift among individuals over time can make groups easier to guide when conditions are uncertain.\u003C\/p\u003E\u003Cp\u003E\u201cOur findings suggest that the same dynamics that make small animal groups unpredictable may also offer new ways to control complex engineered systems,\u201d Bhamla said.\u003C\/p\u003E\u003Cp\u003ECITATION: Tuhin Chakrabortty and Saad Bhamla, \u201c\u003Ca href=\u0022https:\/\/www.science.org\/doi\/10.1126\/sciadv.adx6791\u0022\u003E\u003Cstrong\u003EControlling noisy herds: Temporal network restructuring improves control of indecisive collectives\u003C\/strong\u003E\u003C\/a\u003E,\u201d \u003Cem\u003EScience Advances\u003C\/em\u003E, 2026\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003EThis research was funded in part by Schmidt Sciences as part of a \u003C\/em\u003E\u003Ca href=\u0022https:\/\/news.gatech.edu\/news\/2025\/09\/16\/saad-bhamla-named-2025-schmidt-polymath\u0022\u003E\u003Cem\u003ESchmidt Polymath\u003C\/em\u003E\u003C\/a\u003E\u003Cem\u003E grant to Saad Bhamla.\u003C\/em\u003E\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech researchers studying sheepdog trials found new principles for guiding unpredictable groups and used them to develop computer models that could improve coordination in robot swarms, autonomous vehicles, and other networked systems.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech researchers studying sheepdog trials found new principles for guiding unpredictable groups and used them to develop computer models that could improve coordination in robot swarms, autonomous vehicles, and other networked systems."}],"uid":"27271","created_gmt":"2026-03-11 19:59:46","changed_gmt":"2026-03-12 15:53:25","author":"Brad Dixon","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2026-03-11T00:00:00-04:00","iso_date":"2026-03-11T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"679589":{"id":"679589","type":"video","title":"SMART Dogs herding sheep on a farm, looks like flock of bird pattern","body":"\u003Cp\u003ESMART Dogs herding sheep on a farm, looks like flock of bird pattern\u003C\/p\u003E","created":"1773260200","gmt_created":"2026-03-11 20:16:40","changed":"1773260200","gmt_changed":"2026-03-11 20:16:40","video":{"youtube_id":"_CjwqIX6C2I","video_url":"https:\/\/youtu.be\/_CjwqIX6C2I?si=bfsxIT77-iAJCm-2"}},"679590":{"id":"679590","type":"video","title":"A dog herding sheep in a sheepdog trial","body":"\u003Cp\u003E\u003Cem\u003EA dog herding sheep in a sheepdog trial\u003C\/em\u003E\u003C\/p\u003E","created":"1773260676","gmt_created":"2026-03-11 20:24:36","changed":"1773260676","gmt_changed":"2026-03-11 20:24:36","video":{"youtube_id":"cnPOXfUC8rc","video_url":"https:\/\/youtu.be\/cnPOXfUC8rc?si=41jH8u3UQ_qjgqWn"}},"679591":{"id":"679591","type":"video","title":" Controlling \u0027Noisy\u0027 Sheep Herds","body":"\u003Cp\u003EControlling \u0027noisy\u0027 sheep herds\u003C\/p\u003E","created":"1773260974","gmt_created":"2026-03-11 20:29:34","changed":"1773260974","gmt_changed":"2026-03-11 20:29:34","video":{"youtube_id":"EMHmDPpe8HE","video_url":"https:\/\/youtu.be\/EMHmDPpe8HE?si=_5DFsk_BafsIK78R"}},"679584":{"id":"679584","type":"image","title":"Sheepdog herding sheep","body":"\u003Cp\u003ESheepdog herding in a sheepdog trial competition\u003C\/p\u003E","created":"1773259589","gmt_created":"2026-03-11 20:06:29","changed":"1773261394","gmt_changed":"2026-03-11 20:36:34","alt":"Sheepdog herding sheep","file":{"fid":"263762","name":"sheepdog1.jpg","image_path":"\/sites\/default\/files\/2026\/03\/11\/sheepdog1.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/03\/11\/sheepdog1.jpg","mime":"image\/jpeg","size":226432,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/03\/11\/sheepdog1.jpg?itok=sbHIPJIH"}},"679588":{"id":"679588","type":"image","title":"Sheeping herding resistant sheep","body":"\u003Cp\u003ESheepdogs first align the flock\u2019s direction, then apply pressure to trigger movement before the sheep lose alignment.\u003C\/p\u003E","created":"1773259967","gmt_created":"2026-03-11 20:12:47","changed":"1773261607","gmt_changed":"2026-03-11 20:40:07","alt":"Sheepdog herding seep","file":{"fid":"263766","name":"sheepdog2-copy.jpg","image_path":"\/sites\/default\/files\/2026\/03\/11\/sheepdog2-copy.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/03\/11\/sheepdog2-copy.jpg","mime":"image\/jpeg","size":196318,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/03\/11\/sheepdog2-copy.jpg?itok=F3wbneis"}}},"media_ids":["679589","679590","679591","679584","679588"],"groups":[{"id":"1188","name":"Research Horizons"},{"id":"1240","name":"School of Chemical and Biomolecular Engineering"}],"categories":[{"id":"145","name":"Engineering"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"667","name":"robotics"},{"id":"194958","name":"Sheepdogs"},{"id":"194959","name":"Herding"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EBrad Dixon, \u003Ca href=\u0022mailto: braddixon@gatech.edu\u0022\u003Ebraddixon@gatech.edu\u003C\/a\u003E\u003C\/p\u003E","format":"limited_html"}],"email":["braddixon@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"686540":{"#nid":"686540","#data":{"type":"news","title":"Real-World Helper Exoskeletons Just Got Closer to Reality","body":[{"value":"\u003Cp\u003ETo make useful wearable robotic devices that can help stroke patients or people with amputated limbs, the computer brains driving the systems must be trained. That takes time and money \u2014 lots of time and money. And researchers\u0026nbsp;need specially equipped labs to collect mountains of human data for training.\u003C\/p\u003E\u003Cp\u003EEven when engineers have a working device and brain, called a controller, changes and improvements to the exoskeleton system typically mean data collection and training start all over again. The process is expensive and makes bringing fully functional exoskeletons or robotic limbs into the real world largely impractical.\u003C\/p\u003E\u003Cp\u003ENot anymore, thanks to Georgia Tech engineers and computer scientists.\u003C\/p\u003E\u003Cp\u003EThey\u2019ve created an artificial intelligence tool that can turn huge amounts of existing data on how people move into functional exoskeleton controllers. No data collection, retraining, and hours upon hours of additional lab time required for each specific device.\u003C\/p\u003E\u003Cp\u003ETheir approach has produced an exoskeleton brain capable of offering meaningful assistance across a huge range of hip and knee movements that works as well as the best controllers currently available. \u003Ca href=\u0022https:\/\/doi.org\/10.1126\/scirobotics.ads8652\u0022\u003ETheir worked was published Nov. 19 in \u003Cem\u003EScience Robotics.\u003C\/em\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/coe.gatech.edu\/news\/2025\/11\/real-world-helper-exoskeletons-just-got-closer-reality\u0022\u003E\u003Cstrong\u003EFull details on the College of Engineering website.\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech researchers are using AI to quickly train exoskeleton devices, making it much more practical to develop, improve, and ultimately deploy wearable robots for people with impaired mobility.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech researchers are using AI to quickly train exoskeleton devices, making it much more practical to develop, improve, and ultimately deploy wearable robots for people with impaired mobility."}],"uid":"27446","created_gmt":"2025-11-19 18:38:33","changed_gmt":"2025-11-19 19:12:16","author":"Joshua Stewart","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-11-19T00:00:00-05:00","iso_date":"2025-11-19T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678673":{"id":"678673","type":"image","title":"Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg","body":"\u003Cp\u003EResearchers Matthew Gombolay, left, and Aaron Young used the lower-limb exoskeleton demonstrated in the background to test their new approach to creating exoskeleton controllers. They use huge amounts of existing data on how people move to create functional controllers able to provide meaningful assistance. And unlike earlier controllers, they do not require hours and hours of additional training and data collection with each specific exoskeleton device.\u003C\/p\u003E","created":"1763577576","gmt_created":"2025-11-19 18:39:36","changed":"1763577576","gmt_changed":"2025-11-19 18:39:36","alt":"Matthew Gombolay and Aaron Young pose in the lab while Ph.D. researchers work on a leg exoskeleton device.","file":{"fid":"262731","name":"Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg","image_path":"\/sites\/default\/files\/2025\/11\/19\/Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/11\/19\/Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg","mime":"image\/jpeg","size":985612,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/11\/19\/Matthew-Gombolay-Aaron-Young-AI-exoskeleton-control-0337-h.jpg?itok=qFUHgDV1"}}},"media_ids":["678673"],"groups":[{"id":"1237","name":"College of Engineering"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"145","name":"Engineering"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"168835","name":"Aaron Young"},{"id":"175375","name":"matthew gombolay"},{"id":"182630","name":"exoskeletons"},{"id":"187991","name":"go-robotics"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Ca href=\u0022mailto:jstewart@gatech.edu\u0022\u003EJoshua Stewart\u003C\/a\u003E\u003Cbr\u003ECollege of Engineering\u003C\/p\u003E","format":"limited_html"}],"email":["jstewart@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"686422":{"#nid":"686422","#data":{"type":"news","title":"Ph.D. Student\u2019s Framework Used to Bolster Nvidia\u2019s Cosmos Predict-2 Model","body":[{"value":"\u003Cp\u003EA new deep learning architectural framework could boost the development and deployment efficiency of autonomous vehicles and humanoid robots. The framework will lower training costs and reduce the amount of real-world data needed for training.\u003C\/p\u003E\u003Cp\u003EWorld foundation models (WFMs) enable physical AI systems to learn and operate within\u0026nbsp;synthetic worlds created by generative artificial intelligence (genAI). For example, these models use predictive capabilities to generate up to 30 seconds of video that accurately reflects the real world.\u003C\/p\u003E\u003Cp\u003EThe new framework, developed by a Georgia Tech researcher, enhances the processing speed of the neural networks that simulate these real-world environments from text, images, or video inputs.\u003C\/p\u003E\u003Cp\u003EThe neural networks that make up the architectures of large language models like ChatGPT and visual models like Sora process contextual information using the \u201cattention mechanism.\u201d\u003C\/p\u003E\u003Cp\u003EAttention refers to a model\u2019s ability to focus on the most relevant parts of input.\u003C\/p\u003E\u003Cp\u003EThe Neighborhood Attention Extension (NATTEN) allows models that require GPUs or high-performance computing systems to process information and generate outputs more efficiently.\u003C\/p\u003E\u003Cp\u003EProcessing speeds can increase by up to 2.6 times, said \u003Ca href=\u0022https:\/\/alihassanijr.com\/\u0022\u003E\u003Cstrong\u003EAli Hassani\u003C\/strong\u003E\u003C\/a\u003E, a Ph.D. student in the School of Interactive Computing and the creator of NATTEN. Hassani is advised by Associate Professor \u003Ca href=\u0022https:\/\/www.humphreyshi.com\/\u0022\u003E\u003Cstrong\u003EHumphrey Shi\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003EHassani is also a research scientist at Nvidia, where he introduced NATTEN to \u003Ca href=\u0022https:\/\/www.nvidia.com\/en-us\/ai\/cosmos\/\u0022\u003E\u003Cstrong\u003ECosmos\u003C\/strong\u003E\u003C\/a\u003E \u2014 a family of WFMs the company uses to train robots, autonomous vehicles, and other physical AI applications.\u003C\/p\u003E\u003Cp\u003E\u201cYou can map just about anything from a prompt or an image or any combination of frames from an existing video to predict future videos,\u201d Hassani said. \u201cInstead of generating words with an LLM, you\u2019re generating a world.\u003C\/p\u003E\u003Cp\u003E\u201cUnlike LLMs that generate a single token at a time, these models are compute-heavy. They generate many images \u2014 often hundreds of frames at a time \u2014 so the models put a lot of work on the GPU. NATTEN lets us decrease some of that work and proportionately accelerate the model.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech Ph.D. student Ali Hassani developed the Neighborhood Attention Extension (NATTEN), a deep learning architectural framework that is being integrated into Nvidia\u0027s Cosmos Predict-2 world foundation model. NATTEN enhances the processing speed of neural networks that simulate real-world environments for physical AI systems, which are used to train autonomous vehicles and humanoid robots.\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A new deep learning architectural framework, Neighborhood Attention Extension (NATTEN), is being used by Nvidia to  increase the processing speed of their Cosmos Predict-2 Model for training autonomous vehicles and humanoid robots."}],"uid":"36530","created_gmt":"2025-11-13 21:13:58","changed_gmt":"2025-11-13 21:14:58","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-11-03T00:00:00-05:00","iso_date":"2025-11-03T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678621":{"id":"678621","type":"image","title":"2X6A3487.jpg","body":null,"created":"1763068473","gmt_created":"2025-11-13 21:14:33","changed":"1763068473","gmt_changed":"2025-11-13 21:14:33","alt":"Humprhey Shi and Ali Hassani","file":{"fid":"262676","name":"2X6A3487.jpg","image_path":"\/sites\/default\/files\/2025\/11\/13\/2X6A3487.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/11\/13\/2X6A3487.jpg","mime":"image\/jpeg","size":93105,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/11\/13\/2X6A3487.jpg?itok=axfoqv8i"}}},"media_ids":["678621"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"194609","name":"Industry"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"192863","name":"go-ai"},{"id":"193860","name":"Artifical Intelligence"},{"id":"194701","name":"go-resarchnews"},{"id":"9153","name":"Research Horizons"},{"id":"14549","name":"nvidia"},{"id":"191138","name":"artificial neural networks"},{"id":"97281","name":"autonomous vehicles"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"684058":{"#nid":"684058","#data":{"type":"news","title":"Tiny Fans on the Feet of Water Bugs Could Lead to Energy Efficient, Mini Robots","body":[{"value":"\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003EA new study explains how tiny water bugs use fan-like propellers to zip across streams at speeds up to 120 body lengths per second. The researchers then created a similar fan structure and used it to propel and maneuver an insect-sized robot.\u003C\/p\u003E\u003Cp\u003EThe discovery offers new possibilities for designing small machines that could operate during floods or other challenging situations.\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003EInstead of relying on their muscles, the insects about the size of a grain of rice use the water\u2019s surface tension and elastic forces to morph the ribbon-shaped fans on the end of their legs to slice the water surface and change directions.\u0026nbsp;\u003Cbr\u003E\u003Cbr\u003EOnce they understood the mechanism, the team built a self-deployable, one-milligram fan and installed it into an insect-sized robot capable of accelerating, braking, and maneuvering right and left.\u003C\/p\u003E\u003Cp\u003EThe study is featured\u003Cstrong\u003E \u003C\/strong\u003Eon the cover of the journal \u003Cem\u003EScience.\u0026nbsp;\u003C\/em\u003E\u003Cbr\u003E\u003Cbr\u003E\u003Ca href=\u0022https:\/\/coe.gatech.edu\/news\/2025\/08\/tiny-fans-feet-water-bugs-could-lead-energy-efficient-mini-robots\u0022\u003ERead the entire story and see the robot in action on the College of Engineering website.\u0026nbsp;\u003C\/a\u003E\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E","summary":"","format":"limited_html"}],"field_subtitle":[{"value":"Researchers built an insect-sized robot that uses surface water and collapsable propellers as an idea to improve fast-moving machines that can operate in rivers or flooded areas. "}],"field_summary":[{"value":"\u003Cp\u003EA new study explains how tiny water bugs use fan-like propellers to zip across streams at speeds up to 120 body lengths per second. The researchers then created a similar fan structure and used it to propel and maneuver an insect-sized robot.\u003C\/p\u003E\u003Cp\u003EThe discovery offers new possibilities for designing small machines that could operate during floods or other challenging situations.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A new study explains how tiny water bugs use fan-like propellers to zip across streams at speeds up to 120 body lengths per second"}],"uid":"27560","created_gmt":"2025-08-21 20:11:55","changed_gmt":"2025-10-24 19:13:09","author":"Jason Maderer","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-08-21T00:00:00-04:00","iso_date":"2025-08-21T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"677766":{"id":"677766","type":"image","title":"water-bug-hero.jpg","body":null,"created":"1755807401","gmt_created":"2025-08-21 20:16:41","changed":"1755807401","gmt_changed":"2025-08-21 20:16:41","alt":"a water bug standing on water","file":{"fid":"261702","name":"water-bug-hero.jpg","image_path":"\/sites\/default\/files\/2025\/08\/21\/water-bug-hero.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/08\/21\/water-bug-hero.jpg","mime":"image\/jpeg","size":1405312,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/08\/21\/water-bug-hero.jpg?itok=uud43Bki"}}},"media_ids":["677766"],"groups":[{"id":"142761","name":"IRIM"},{"id":"1292","name":"Parker H. Petit Institute for Bioengineering and Bioscience (IBB)"},{"id":"1188","name":"Research Horizons"}],"categories":[{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"194701","name":"go-resarchnews"},{"id":"187915","name":"go-researchnews"},{"id":"187423","name":"go-bio"}],"core_research_areas":[{"id":"39441","name":"Bioengineering and Bioscience"},{"id":"39521","name":"Robotics"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJason Maderer\u003Cbr\u003ECollege of Engineering\u003Cbr\u003Emaderer@gatech.edu\u003C\/p\u003E","format":"limited_html"}],"email":["maderer@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"685798":{"#nid":"685798","#data":{"type":"news","title":"This Eighth Grader Is Shaping the Future of Wearable Robotics","body":[{"value":"\u003Cp\u003ECase Neel, 13, is a busy kid who loves coding and robotics, captains his school\u2019s quiz bowl team, and lives with his family on a farm northwest of Atlanta.\u003C\/p\u003E\u003Cp\u003EHe also has cerebral palsy \u2014 and for the past four years, he has played a key role in improving one of the most exciting medical devices at Georgia Tech.\u003C\/p\u003E\u003Cp\u003E\u201cMy role here is as a participant in exoskeleton research studies,\u201d Case explained. \u201cWhen I come in, researchers hook me up to sensors that monitor my gait when I\u2019m walking in the device, and then they get a whole lot of data based off that.\u201d\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/research.gatech.edu\/node\/44098\u0022\u003E\u003Cstrong\u003ERead more \u00bb\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":[{"value":"How a middle schooler with cerebral palsy became a vital contributor to Georgia Tech\u2019s cutting-edge robotic exoskeleton research \u2014 offering data, feedback, and a passion for innovation."}],"field_summary":[{"value":"\u003Cp\u003EHow a middle schooler with cerebral palsy became a vital contributor to Georgia Tech\u2019s cutting-edge robotic exoskeleton research \u2014 offering data, feedback, and a passion for innovation.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Like many people with cerebral palsy, Case walks with impaired knee movement. Georgia Tech\u2019s pediatric knee exoskeleton is designed to help children and adolescents walk with increased stability and mobility. Case\u2019s data enables the researchers to analyze"}],"uid":"27255","created_gmt":"2025-10-17 19:17:01","changed_gmt":"2025-10-17 19:19:56","author":"Josie Giles","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-10-13T00:00:00-04:00","iso_date":"2025-10-13T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678385":{"id":"678385","type":"image","title":"26-R10410-P29-006_EDITED.jpg","body":"\u003Cp\u003EKinsey Herrin, principal research scientist in the George W. Woodruff School of Mechanical Engineering, leads exoskeleton and prosthetic studies and fosters meaningful connections with the participant community.\u003C\/p\u003E","created":"1760728650","gmt_created":"2025-10-17 19:17:30","changed":"1760728650","gmt_changed":"2025-10-17 19:17:30","alt":"Person wearing a floral-patterned shirt interacting with a group of people indoors; one individual is dressed in a bright yellow button-up shirt.","file":{"fid":"262405","name":"26-R10410-P29-006_EDITED.jpg","image_path":"\/sites\/default\/files\/2025\/10\/17\/26-R10410-P29-006_EDITED.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/10\/17\/26-R10410-P29-006_EDITED.jpg","mime":"image\/jpeg","size":9249881,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/10\/17\/26-R10410-P29-006_EDITED.jpg?itok=xAcj4WUS"}}},"media_ids":["678385"],"groups":[{"id":"66220","name":"Neuro"},{"id":"1188","name":"Research Horizons"}],"categories":[{"id":"152","name":"Robotics"}],"keywords":[],"core_research_areas":[{"id":"193656","name":"Neuro Next Initiative"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"685070":{"#nid":"685070","#data":{"type":"news","title":"The Robotic Breakthrough That Could Help Stroke Survivors Reclaim Their Stride","body":[{"value":"\u003Cp\u003ECrossing a room shouldn\u2019t feel like a marathon. But for many stroke survivors, even the smallest number of steps carries enormous weight. Each movement becomes a reminder of lost coordination, muscle weakness, and physical vulnerability.\u003C\/p\u003E\u003Cp\u003EA team of Georgia Tech researchers wanted to ease that struggle, and robotic exoskeletons offered a promising path. Their findings point to a simple but powerful shift: exoskeletons that adapt to people, rather than forcing people to adapt to the machine. Using artificial intelligence (AI) to learn the rhythm of patients\u2019 strides in real time, the team showed how these devices can reduce strain and increase efficiency. They also demonstrated how the technology can help restore confidence for stroke survivors.\u0026nbsp;\u003Cbr\u003E\u003Cbr\u003E\u003Cstrong\u003EThe Robot Finds the Rhythm\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003EA robotic exoskeleton is a wearable device that helps people move with mechanical support. Traditional exoskeletons require endless manual adjustments \u2014 turning knobs, calibrating settings, and tweaking controls.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIt can be frustrating, even nearly impossible, to get it right for each person,\u201d said \u003Ca href=\u0022https:\/\/www.me.gatech.edu\/faculty\/young\u0022\u003EAaron Young\u003C\/a\u003E, associate professor in the \u003Ca href=\u0022https:\/\/www.me.gatech.edu\/\u0022\u003EGeorge W. Woodruff School of Mechanical Engineering.\u003C\/a\u003E \u201cWith AI, the exoskeleton figures out the mapping itself. It learns the timing of someone\u2019s gait through a neural network, without an engineer needing to hand-tune everything.\u201d\u003C\/p\u003E\u003Cp\u003EThe software monitors each step, instantly updates, and fine-tunes the support it provides. Over time, the exoskeleton aligns its movements with the unique gait of the person wearing it. In this study, the research team used a hip exoskeleton, which provides torque at the hip joint \u2014 in other words, adding power to help stroke survivors walk or move their legs more easily.\u003Cbr\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ETaking Smarter Steps\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003EWalking after a stroke can be tough and unpredictable. A patient\u2019s stride can change from one day to the next, and even from one step to the next. Most exoskeletons aren\u2019t built for that kind of variation. They are designed around the steady, even gait of healthy young adults, which can leave stroke survivors feeling more unsteady than supported.\u003C\/p\u003E\u003Cp\u003EYoung\u2019s breakthrough, detailed in \u003Ca href=\u0022https:\/\/ieeexplore.ieee.org\/abstract\/document\/11112638\u0022\u003E\u003Cem\u003EIEEE Transactions on Robotics\u003C\/em\u003E,\u003C\/a\u003E is a neural network \u2014 a type of AI that learns patterns much like the human brain does. Sensors at the hip pick up how someone is moving, and the network translates those signals into just the right boost of power to support each step. It quickly figures out a person\u2019s unique walking pattern. But lead clinician Kinsey Herrin said the AI\u2019s learning doesn\u2019t stop there. It keeps adjusting as the patient walks, so the exoskeleton can stay in sync even during stride shifts.\u003C\/p\u003E\u003Cp\u003E\u201cThe speed really surprised us,\u201d Young said. \u201cIn just one to two minutes of walking, the system had already learned a person\u2019s gait pattern with high accuracy. That\u2019s a big deal, to adapt that quickly and then keep adapting as they move.\u201d\u003C\/p\u003E\u003Cp\u003ETests showed the system was far more accurate than the standard exoskeleton. It reduced errors in tracking stroke patients\u2019 walking patterns by 70%.\u003C\/p\u003E\u003Cp\u003EYoung emphasized that this research is about more than metrics. \u201cWhen you see someone able to walk farther without becoming exhausted, that\u2019s when you realize this isn\u2019t just about robotics \u2014 it\u2019s about giving people back a measure of independence,\u201d he said.\u003Cbr\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EAdapting Anywhere\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003EEvery exoskeleton comes with its own set of sensors, so the data they collect can look completely different from one device to the next. A neural network trained on one machine often stumbles when it\u2019s moved to another. To get around that, Young\u2019s team designed software that works like a universal adapter plug \u2014 no matter what device it\u2019s connected to, it converts the signals into a form the AI can use. After just 10 strides of calibration, the system cut error rates by more than 75%.\u003C\/p\u003E\u003Cp\u003E\u201cThe goal is that someone could strap on a device, and, within a minute, it feels like it was built just for them,\u201d Young said.\u003Cbr\u003E\u003Cbr\u003E\u003Cbr\u003E\u003Cstrong\u003EA Step Toward the Future\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003EWhile the study centered on stroke survivors, the implications are far broader. The same adaptive approach could support older adults coping with age-related muscle weakness, people with conditions like Parkinson\u2019s or osteoarthritis, or even children with neurological disabilities.\u0026nbsp;\u003Cbr\u003EYoung and his team are now running clinical trials to measure how well the AI-powered exoskeleton supports people in a wide range of everyday activities.\u003C\/p\u003E\u003Cp\u003E\u201cThere\u2019s no such thing as an \u2018average\u2019 user,\u201d Young said. \u201cThe real challenge is designing technology that can adapt to the full spectrum of human mobility.\u201d\u003C\/p\u003E\u003Cp\u003EIf Georgia Tech\u2019s exoskeleton can rise to that challenge, the promise goes well beyond the lab. It could mean a world where technology doesn\u2019t just help people walk \u2014 it learns to walk with them.\u003C\/p\u003E\u003Cp\u003EInseung Kang, who holds a B.S., M.S., and Ph.D. from Georgia Tech, is the paper\u2019s lead author and now an assistant professor of mechanical engineering at Carnegie Mellon University. He explained that the real promise is in what comes next.\u0026nbsp;\u003Cbr\u003E\u003Cbr\u003E\u201cWe\u2019ve developed a system that can adjust to a person\u2019s walking style in just minutes. But the potential is even greater. Imagine an exoskeleton that keeps learning with you over your lifetime, adjusting as your body and mobility change. Think of it as a robot companion that understands how you walk and gives you the right assistance every step of the way.\u201d\u003Cbr\u003E\u003Cbr\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003EAaron Young is affiliated with Georgia Tech\u2019s\u0026nbsp;\u003C\/em\u003E\u003Ca href=\u0022https:\/\/research.gatech.edu\/robotics\u0022\u003E\u003Cem\u003EInstitute for Robotics and Intelligent Machines\u003C\/em\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003EThis research was primarily funded by a grant (DP2HD111709-01)\u0026nbsp;from the National Institutes of Health New Innovator Award Program. \u003C\/em\u003EGeorgia Tech researchers have created the first lung-on-a-chip with a functioning immune system, allowing it to respond to infections much like a real human lung. The breakthrough, published in \u003Cem\u003ENature Biomedical Engineering\u003C\/em\u003E, provides a more accurate way to study diseases, test therapies, and reduce reliance on animal models. With potential applications in conditions from influenza to cancer, the technology opens the door to personalized medicine that predicts how individual patients will respond to treatment.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech researchers have developed an AI-powered hip exoskeleton that adapts in real time to a stroke survivor\u2019s changing gait, reducing errors by 70% and helping patients walk with greater ease and confidence. Unlike traditional devices that require constant manual tuning, the system learns each person\u2019s unique stride within minutes and continues adjusting as they move. The breakthrough could extend beyond stroke recovery, offering personalized mobility support for people of all ages and conditions.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech\u0027s AI-fueled exoskeleton adapts to every step, helping patients relearn to walk with less effort and more confidence."}],"uid":"36410","created_gmt":"2025-09-18 15:26:54","changed_gmt":"2025-09-24 15:08:59","author":"mazriel3","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-09-18T00:00:00-04:00","iso_date":"2025-09-18T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678071":{"id":"678071","type":"video","title":"The Robotic Breakthrough That Could Help Stroke Survivors Reclaim Their Stride","body":"\u003Cp\u003EGeorgia Tech\u0027s AI-fueled exoskeleton adapts to every step, helping patients relearn to walk with less effort and more confidence.\r\n\r\nTraditional robotic exoskeleton models require extensive manual calibration, but Aaron Young, associate professor in the George W. Woodruff School of Mechanical Engineering, and his team developed AI-driven software that automatically adapts to each user\u2019s gait. By using a neural network, the system continuously monitors and adjusts support with each step, gradually syncing with the wearer\u2019s unique movement. In this study, the team used a hip exoskeleton that delivers torque at the hip joint to help stroke survivors walk more easily.\u003C\/p\u003E","created":"1758208325","gmt_created":"2025-09-18 15:12:05","changed":"1758208325","gmt_changed":"2025-09-18 15:12:05","video":{"youtube_id":"RPHz2mU9sBA","video_url":"https:\/\/youtu.be\/RPHz2mU9sBA"}}},"media_ids":["678071"],"groups":[{"id":"66220","name":"Neuro"},{"id":"1188","name":"Research Horizons"}],"categories":[{"id":"138","name":"Biotechnology, Health, Bioengineering, Genetics"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"194701","name":"go-resarchnews"},{"id":"13169","name":"autonomous robots"},{"id":"98751","name":"College of Engineering; George W. Woodruff School of Mechanical Engineering"},{"id":"172970","name":"go-neuro"}],"core_research_areas":[{"id":"39441","name":"Bioengineering and Bioscience"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EMichelle Azriel Sr. Writer - Editor\u003C\/p\u003E","format":"limited_html"}],"email":["mazriel3@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"685002":{"#nid":"685002","#data":{"type":"news","title":"Two IC Faculty Receive NSF CAREER for Robotics and AR\/VR Initiatives","body":[{"value":"\u003Cp\u003EPractice may not make perfect for robots, but new machine learning models from Georgia Tech are allowing them to improve their skillsets to more effectively assist humans in the real world.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/faculty.cc.gatech.edu\/~danfei\/\u0022\u003E\u003Cstrong\u003EDanfei Xu\u003C\/strong\u003E\u003C\/a\u003E, an assistant professor in \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003E\u003Cstrong\u003EGeorgia Tech\u2019s School of Interactive Computing\u003C\/strong\u003E\u003C\/a\u003E, is introducing new models that provide robots with \u201con-the-job\u201d training.\u003C\/p\u003E\u003Cp\u003EThe National Science Foundation (NSF) awarded Xu its CAREER award given to early career faculty. The award will enable Xu to expand his research and refine his models, which could accelerate the process of robot deployment and alleviate manufacturers from the burden of achieving perfection.\u003C\/p\u003E\u003Cp\u003E\u201cThe main problem we\u2019re trying to tackle is how to allow robots to learn on the job,\u201d Xu said. \u201cHow should it self-improve based on the performance or the new requirements or new user preferences in each home or working environment? You cannot expect a robot manufacturer to program all of that.\u003C\/p\u003E\u003Cp\u003E\u201cThe challenging thing about robotics is that the robot must get feedback from the physical environment. It must try to solve a problem to understand the limits of its abilities so it can decide how to improve its own performance.\u201d\u003C\/p\u003E\u003Cp\u003EAs with humans, Xu views practice as the most effective way for a robot to improve a skill. His models train the robot to identify the point at which it failed in its task performance.\u003C\/p\u003E\u003Cp\u003E\u201cIt identifies that skill and sets up an environment where it can practice,\u201d he said. \u201cIf it needs to improve opening a drawer, it will navigate itself to the drawer and practice opening it.\u201d\u003C\/p\u003E\u003Cp\u003EThe models allow the robot to split tasks into smaller parts and evaluate its own skill level using reward functions. Cooking dinner, for example, can be divided into steps like turning on the stove and opening the fridge, which are necessary to achieve the overall goal.\u003C\/p\u003E\u003Cp\u003E\u201cPlanning is a complex problem because you must predict what\u2019s going to happen in the physical world,\u201d Xu said. \u201cWe use machine learning techniques that our group has developed over the past two years, using generated models to generate positive futures. They\u2019re very good at modeling long-horizon phenomena.\u003C\/p\u003E\u003Cp\u003E\u201cThe robot knows when it\u2019s failed because there\u2019s a value that tells it how well it performed the task and whether it received its reward. While we don\u2019t know how to tell the robot why it failed, we have ways for it to improve its skills based on that measurement.\u201d\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EOne of the biggest barriers that keeps many robots from being made available for public use is the pressure on manufacturers to make the robot as close to perfect as possible at deployment. Xu said it\u2019s more practical to accept that robots will have learning gaps that need to be filled and to implement more efficient real-world learning models.\u003C\/p\u003E\u003Cp\u003E\u201cWe work under the pressure of getting everything correct before deployment,\u201d he said. \u201cWe need to meet the basic safety requirements, but in terms of competence, it is difficult to get that perfect at deployment. This takes some of the pressure off because it will be able to self-adapt.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EVirtual Workspace for Data Workers\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/ivi.cc.gatech.edu\/people.html\u0022\u003E\u003Cstrong\u003EYalong Yang\u003C\/strong\u003E\u003C\/a\u003E, another assistant professor in the School of IC, also received the NSF CAREER Award for a research proposal that will design augmented and virtual reality (AR\/VR) workspaces for data workers.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIn 10 years, I envision everyone will use AR\/VR in their office, and it will replace their laptop or their monitor,\u201d Yang said.\u003C\/p\u003E\u003Cp\u003EYang said he is also working with Google on the project and using Google Gemini to bring conventional applications to immersive space, with data tools being the most complicated systems to re-design for immersive environments.\u003C\/p\u003E\u003Cp\u003EThe immersive workspace and interface will also enable teams of data workers to collaborate and share their data in real-time.\u003C\/p\u003E\u003Cp\u003E\u201cI want to support the end-to-end process,\u201d Yang said. \u201cWe have visualization tools for data, but it\u2019s not enough. Data science is a pipeline \u2014 from collecting data to processing, visualizing, modeling and then communicating. If you only support one, people will need to switch to other platforms for the other steps.\u201d\u003C\/p\u003E\u003Cp\u003EYang also noted that prior research has shown that VR can enhance cognitive abilities, such as memory and attention and support multitasking. The results of his project could lead to maximizing worker efficiency without them feeling strained.\u003C\/p\u003E\u003Cp\u003E\u201cWe all have a cognitive limit in our working memory. Using AR\/VR can increase those limits and process more information. We can expand people\u2019s spatial ability to help them build a better mental model of the data presented to them.\u201d\u003C\/p\u003E\u003Cp\u003EYang was also recently named a \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/tiktok-photoshop-generative-ai-could-bring-millions-apps-3d-reality\u0022\u003E\u003Cstrong\u003E2025 Google Research Scholar\u003C\/strong\u003E\u003C\/a\u003E as he seeks to build a new artificial intelligence (AI) tool that converts mobile apps into 3D immersive environments.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ETwo assistant professors in Georgia Tech\u2019s School of Interactive Computing \u2014 Danfei Xu and Yalong Yang \u2014 have each won NSF CAREER Awards for their respective research in robotics and AR\/VR initiatives. Xu\u2019s work will develop machine learning models that let robots learn \u201con the job,\u201d adapting from feedback and failure in real-world environments rather than being perfectly preprogrammed. Yang\u2019s project aims to build immersive AR\/VR workspaces to support data workers across the full data pipeline, including a collaboration with Google to bring conventional apps into immersive environments.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Two Georgia Tech professors, Danfei Xu and Yalong Yang, have received the prestigious NSF CAREER award for their research in robotics, which focuses on teaching robots to self-improve, and in augmented and virtual reality (AR\/VR), which aims to create imm"}],"uid":"36530","created_gmt":"2025-09-17 18:24:23","changed_gmt":"2025-09-17 18:28:51","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-09-17T00:00:00-04:00","iso_date":"2025-09-17T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678055":{"id":"678055","type":"image","title":"ICRA-2025_86A9079-Enhanced-NR.jpg","body":null,"created":"1758133475","gmt_created":"2025-09-17 18:24:35","changed":"1758133475","gmt_changed":"2025-09-17 18:24:35","alt":"Danfei Xu","file":{"fid":"262033","name":"ICRA-2025_86A9079-Enhanced-NR.jpg","image_path":"\/sites\/default\/files\/2025\/09\/17\/ICRA-2025_86A9079-Enhanced-NR.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/09\/17\/ICRA-2025_86A9079-Enhanced-NR.jpg","mime":"image\/jpeg","size":132463,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/09\/17\/ICRA-2025_86A9079-Enhanced-NR.jpg?itok=Dt9A0bu8"}}},"media_ids":["678055"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"191934","name":"National Science Foundation (NSF)"},{"id":"7842","name":"NSF CAREER Award"},{"id":"188776","name":"go-research"},{"id":"9153","name":"Research Horizons"},{"id":"145251","name":"virtual reality"},{"id":"1597","name":"Augmented Reality"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"684700":{"#nid":"684700","#data":{"type":"news","title":"Georgia Tech Team Designing Robot Guide Dog to Assist the Visually Impaired","body":[{"value":"\u003Cp\u003EPeople who are visually impaired and cannot afford or care for service animals might have a practical alternative in a robotic guide dog being developed at Georgia Tech.\u003C\/p\u003E\u003Cp\u003EBefore launching its prototype, a research team within Georgia Tech\u2019s School of Interactive Computing, led by Professor \u003Cstrong\u003EBruce Walker\u003C\/strong\u003E and Assistant Professor \u003Cstrong\u003ESehoon Ha\u003C\/strong\u003E, is working to improve its methods and designs based on research within blind and visually impaired (BVI) communities.\u003C\/p\u003E\u003Cp\u003E\u201cThere\u2019s been research on the technical aspects and functionality of robotic guide dogs, but not a lot of emphasis on the aesthetics or form factors,\u201d said \u003Cstrong\u003EAvery\u003C\/strong\u003E \u003Cstrong\u003EGong\u003C\/strong\u003E, a recent master\u2019s graduate who worked in Walker\u2019s lab. \u201cWe wanted to fill this gap.\u201d\u003C\/p\u003E\u003Cp\u003ETraining a guide dog can cost up to $50,000, and while there are nonprofit organizations that can cover these costs for potential owners, there is still a gap between the amount of available guide dogs and BVI individuals who need them. Not all BVI individuals are able to care for a dog and feed it. The dog also has fewer than 10 working years before it needs replacement.\u003C\/p\u003E\u003Cp\u003EGong co-authored a paper on the design implications of the robotic guide dog that was presented at the 2025 International Conference on Robotics and Automation (ICRA) in Atlanta in May.\u003C\/p\u003E\u003Cp\u003EThe consensus among the study\u2019s participants indicates they prefer a robotic guide dog that:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003Eresembles a real dog and appears approachable\u003C\/li\u003E\u003Cli\u003Ehas a clear identifier of being a guide dog, such as a vest\u003C\/li\u003E\u003Cli\u003Ehas built-in GPS and Bluetooth connectivity\u003C\/li\u003E\u003Cli\u003Ehas control options such as voice command\u003C\/li\u003E\u003Cli\u003Ehas soft textures without feeling furry\u003C\/li\u003E\u003Cli\u003Ehas long battery life and self-charging capability\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003E\u201cA lot of people said they didn\u2019t want the dog to look too cute or appealing because it would draw too much attention,\u201d said \u003Cstrong\u003EAviv Cohav\u003C\/strong\u003E, another lead author of the paper and recent master\u2019s graduate.\u003C\/p\u003E\u003Cp\u003E\u201cMany people have issues with taking their guide dog to places, whether it\u2019s little kids wanting to play with the dog or people not liking dogs or people being scared of them, and that reflects on the owners themselves. We wanted to look at what would be a good balance between having a functional robot that wouldn\u2019t scare people away or be a distraction.\u201d\u003C\/p\u003E\u003Cp\u003EThe researchers also had to consider the perspectives of sighted individuals and how society at large might view a robotic guide dog.\u003C\/p\u003E\u003Cp\u003EAn example of this is the amount of noise the dog makes while walking. The owner needs to hear the dog is active, but the clanky sound many off-the-shelf robots make could create disturbances in indoor spaces that amplify sounds. To offset the noise, the team developed algorithms that allow the robot to move more quietly.\u003C\/p\u003E\u003Cp\u003EWalker and his lab have examined similar scenarios that must take public perception into account.\u003C\/p\u003E\u003Cp\u003E\u201cWe like to think of Georgia Tech as going the extra mile,\u201d Walker said. \u201cLet\u2019s not just make a robot, but a robot that\u2019s going to fit into society.\u003C\/p\u003E\u003Cp\u003E\u201cTo have impact, the technologies we produce must be produced with society in mind. This is a holistic design that considers the users and all the people with whom the users interact.\u201d\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ETaery Kim\u003C\/strong\u003E, a computer science Ph.D. student, began working on the concept of a robotic guide dog when she came to Georgia Tech in 2022. She and Ha, her advisor, have authored papers on building the robot\u2019s navigation and safety components.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cWhen I started, I thought it would be as simple as giving the guide dog a command to take me to Starbucks or the grocery store, and it would just take me,\u201d Kim said. \u201cBut the user must give waypoint directions \u2014 \u2018go left here,\u2019 \u2018turn right,\u2019 \u2018go forward,\u2019 \u2018stop.\u2019 Detailed commands must be delivered to the dog.\u201d\u003C\/p\u003E\u003Cp\u003EWhile a real dog has naturally enhanced senses of hearing and smell that can\u2019t be replicated, technology can provide interconnected safety features during an emergency. The researchers envision a camera system equipped with a 360-degree field of view, computer vision algorithms that detect obstacles or hazards, and voice recognition that recognizes calls for help. An SOS function could automatically call 911 at the owner\u2019s request or if the owner is unresponsive.\u003C\/p\u003E\u003Cp\u003EKim said the robot should also have explainability features to enhance communication with the owner. For example, if the robot suddenly stops or ignores an owner\u2019s commands, it should tell the owner that it\u2019s detecting a hazard in their path.\u003C\/p\u003E\u003Cp\u003EManufacturing a robot at scale would initially be expensive, but the researchers believe the cost would eventually be offset because of its longevity. BVI individuals may only need to purchase one during their lifetime.\u003C\/p\u003E\u003Cp\u003ETo introduce a prototype, the multidisciplinary research team recognizes that it needs to enlist experts from other fields to adequately address the various implications and research gaps inherent in the project.\u003C\/p\u003E\u003Cp\u003EWalker said the teams welcome additional partners who are keen to tackle challenges ranging from design and engineering to battery life to human-robot interaction.\u003C\/p\u003E\u003Cp\u003ETeam member \u003Cstrong\u003EJ. Taery Kim\u003C\/strong\u003E was supported by the National Science Foundation\u0027s Graduate Research Fellowship Program (NSF GRFP) under Grant No. DGE-2039655.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech researchers from the School of Interactive Computing are using survey information from individuals who are blind or visually impaired (BVI) to develop a robotic service dog.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Researchers rely on feedback from blind and visually impaired (BVI) communities to create service animal prototype."}],"uid":"32045","created_gmt":"2025-09-10 12:57:59","changed_gmt":"2025-09-17 16:44:07","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-09-10T00:00:00-04:00","iso_date":"2025-09-10T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"677956":{"id":"677956","type":"image","title":"Georgia Tech researchers test their prototype of a robotic guide dog. Photo by Terence Rushin\/College of Computing.","body":null,"created":"1757509562","gmt_created":"2025-09-10 13:06:02","changed":"1757509562","gmt_changed":"2025-09-10 13:06:02","alt":"Georgia Tech researchers test their prototype of a robotic guide dog. Photo by Terence Rushin\/College of Computing.","file":{"fid":"261920","name":"Robotic-Seeing-Eye-Dog_86A0019-Enhanced-NR.jpg","image_path":"\/sites\/default\/files\/2025\/09\/10\/Robotic-Seeing-Eye-Dog_86A0019-Enhanced-NR.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/09\/10\/Robotic-Seeing-Eye-Dog_86A0019-Enhanced-NR.jpg","mime":"image\/jpeg","size":221759,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/09\/10\/Robotic-Seeing-Eye-Dog_86A0019-Enhanced-NR.jpg?itok=WEOIHeFO"}},"677957":{"id":"677957","type":"image","title":"A graphic depicts design considerations for the prototype.","body":null,"created":"1757509677","gmt_created":"2025-09-10 13:07:57","changed":"1757509677","gmt_changed":"2025-09-10 13:07:57","alt":"A graphic depicts design considerations for the prototype.","file":{"fid":"261921","name":"Robotic-Dog-Story-01-20-.jpg","image_path":"\/sites\/default\/files\/2025\/09\/10\/Robotic-Dog-Story-01-20-.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/09\/10\/Robotic-Dog-Story-01-20-.jpg","mime":"image\/jpeg","size":109946,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/09\/10\/Robotic-Dog-Story-01-20-.jpg?itok=VSx4JbmF"}}},"media_ids":["677956","677957"],"related_links":[{"url":"https:\/\/youtu.be\/4CzDPxaVWkI?feature=shared","title":"VIDEO: Robotic guide dogs could reshape the future for the blind and visually impaired"}],"groups":[{"id":"1278","name":"College of Sciences"},{"id":"1188","name":"Research Horizons"},{"id":"443951","name":"School of Psychology"}],"categories":[{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"10199","name":"Daily Digest"},{"id":"181991","name":"Georgia Tech News Center"},{"id":"187915","name":"go-researchnews"},{"id":"188087","name":"go-irim"},{"id":"667","name":"robotics"},{"id":"172970","name":"go-neuro"}],"core_research_areas":[{"id":"193656","name":"Neuro Next Initiative"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen, Communications Officer\u003Cbr\u003ESchool of Interactive Computing\u003C\/p\u003E\u003Cp\u003Enathan.deen@cc.gatech.edu\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"683686":{"#nid":"683686","#data":{"type":"news","title":"Research Combining Humans, Robots, and Unicycles Receives NSF Award","body":[{"value":"\u003Cp\u003EResearch into tailored assistive and rehabilitative devices has seen recent advancements but the goal remains out of reach due to the sparsity of data on how humans learn complex balance tasks. To address this gap, a collaborating team of interdisciplinary faculty from Florida State University and Georgia Tech have been awarded ~$798,000 by the NSF to launch a study to better understand human motor learning as well as gain greater understanding into human robot interaction dynamics during the learning process.\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;Led by PI:\u0026nbsp;\u003Ca href=\u0022https:\/\/rthmlab.wixsite.com\/taylorgambon\u0022\u003ETaylor Higgins\u003C\/a\u003E, Assistant Professor, FAMU-FSU Department of Mechanical Engineering, partnering with Co-PIs\u0026nbsp;\u003Ca href=\u0022https:\/\/www.shreyaskousik.com\/\u0022\u003EShreyas Kousik\u003C\/a\u003E, Assistant Professor, Georgia Tech, George W. Woodruff School of Mechanical Engineering, and\u0026nbsp;\u003Ca href=\u0022https:\/\/annescollege.fsu.edu\/faculty-staff\/dr-brady-decouto\u0022\u003EBrady DeCouto,\u003C\/a\u003E Assistant Professor, FSU\u0026nbsp;Anne Spencer Daves College of Education, Health, and Human Sciences, the research will use the acquisition of unicycle riding skill by participants to gain a better grasp on human motor learning in tasks requiring balance and complex movement in space. Although it might sound a bit odd, the fact that most people don\u2019t know how to ride a unicycle, and the fact that it requires balance, mean that the data will cover the learning process from novice to skilled across the participant pool.\u003C\/p\u003E\u003Cp\u003EUsing data acquired from human participants, the team will develop a \u201crobotics assistive unicycle\u201d that will be used in the training of the next pool of novice unicycle riders. \u0026nbsp;This is to gauge if, and how rapidly, human motor learning outcomes improve with the assistive unicycle. The participants that engage with the robotic unicycle will also give valuable insight into developing effective human-robot collaboration strategies.\u003C\/p\u003E\u003Cp\u003EThe fact that deciding to get on a unicycle requires a bit of bravery might not be great for the participants, but it\u2019s great for the research team. The project will also allow exploration into the interconnection between anxiety and human motor learning to discover possible alleviation strategies, thus increasing the likelihood of positive outcomes for future patients and consumers of these devices.\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EAuthor\u003Cbr\u003E-Christa M. Ernst\u003C\/p\u003E\u003Cp\u003EThis Article Refers to NSF Award # 2449160\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":[{"value":"Trio from Florida State University and Georgia Tech aim to develop better assistive and rehabilitative technologies and strategies using novel approach."}],"field_summary":[{"value":"\u003Cp\u003EA collaborating team of interdisciplinary faculty from Florida State University and Georgia Tech have been awarded ~$798,000 by the NSF to launch a study to better understand human motor learning as well as gain greater understanding into human robot interaction dynamics during the learning process.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Novel research to improve tailored assistive and rehabilitative devices wins NSF Grant"}],"uid":"27863","created_gmt":"2025-08-08 19:35:55","changed_gmt":"2025-08-12 14:15:37","author":"Christa Ernst","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-08-08T00:00:00-04:00","iso_date":"2025-08-08T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"677632":{"id":"677632","type":"image","title":"Kousik-NSF-Award-News-Graphic.png","body":null,"created":"1754681767","gmt_created":"2025-08-08 19:36:07","changed":"1754681767","gmt_changed":"2025-08-08 19:36:07","alt":"Graphic of person using an assistive device thinking about how a robot could hep learn riding a unicycle","file":{"fid":"261548","name":"Kousik-NSF-Award-News-Graphic.png","image_path":"\/sites\/default\/files\/2025\/08\/08\/Kousik-NSF-Award-News-Graphic.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/08\/08\/Kousik-NSF-Award-News-Graphic.png","mime":"image\/png","size":267611,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/08\/08\/Kousik-NSF-Award-News-Graphic.png?itok=mwCCwIQv"}}},"media_ids":["677632"],"groups":[{"id":"545781","name":"Institute for Data Engineering and Science"},{"id":"142761","name":"IRIM"},{"id":"1188","name":"Research Horizons"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"138","name":"Biotechnology, Health, Bioengineering, Genetics"},{"id":"145","name":"Engineering"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"78841","name":"human-robot interaction"},{"id":"5525","name":"assistive technologies"},{"id":"187915","name":"go-researchnews"},{"id":"187582","name":"go-ibb"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39441","name":"Bioengineering and Bioscience"},{"id":"193656","name":"Neuro Next Initiative"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cdiv\u003E\u003Cstrong\u003EChrista M. Ernst\u003C\/strong\u003E\u003C\/div\u003E\u003Cdiv\u003EResearch Communications Program Manager\u003C\/div\u003E\u003Cdiv\u003EKlaus Advance Computing Building 1120E | 266 Ferst Drive | Atlanta GA | 30332\u003C\/div\u003E\u003Cdiv\u003E\u003Cstrong\u003ETopic Expertise: Robotics | Data Sciences | Semiconductor Design \u0026amp; Fab\u003C\/strong\u003E\u003C\/div\u003E\u003Cdiv\u003Echrista.ernst@research.gatech.edu\u003C\/div\u003E","format":"limited_html"}],"email":["christa.ernst@research.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"682404":{"#nid":"682404","#data":{"type":"news","title":"Researchers Say Stress \u201cSweet Spot\u201d Can Improve Remote Operators\u0027 Performance","body":[{"value":"\u003Cp\u003EMilitary drone pilots, disaster search and rescue teams, and astronauts stationed on the International Space Station are often required to remotely control robots while maintaining their concentration for hours at a time.\u003C\/p\u003E\u003Cp\u003EGeorgia Tech roboticists are attempting to identify the most stressful periods that human teleoperators experience while performing tasks remotely. A novel study provides new insights into determining when a teleoperator needs to operate at a high level of focus and which parts of the task can be delegated to robot automation.\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing Associate Professor \u003Cstrong\u003EMatthew\u003C\/strong\u003E \u003Cstrong\u003EGombolay\u003C\/strong\u003E calls it the \u201csweet spot\u201d of human ingenuity and robotic precision. Gombolay and students from his \u003Ca href=\u0022https:\/\/core-robotics.gatech.edu\/\u0022\u003E\u003Cstrong\u003ECORE Robotics Lab\u003C\/strong\u003E\u003C\/a\u003Econducted a novel study that measures stress and workload on human teleoperators.\u003C\/p\u003E\u003Cp\u003EGombolay said it can inform military officials on how to strategically implement task automation and maximize human teleoperator performance.\u003C\/p\u003E\u003Cp\u003EHumans continue to hand over more tasks to robots to perform, but Gombolay said that some functions will still require human input and oversight for the foreseeable future.\u003C\/p\u003E\u003Cp\u003ESpecific applications, such as space exploration, commercial and military aviation, disaster relief, and search and rescue, pose substantial safety concerns. Astronauts stationed on the International Space Station, for example, manually control robots that bring in supplies, move cargo, and make structural repairs.\u003C\/p\u003E\u003Cp\u003E\u201cIt\u2019s brutal from a psychological perspective,\u201d Gombolay said.\u003C\/p\u003E\u003Cp\u003EThe question often asked about automating a task in these fields is, at what point can a robot be trusted more than a human?\u003C\/p\u003E\u003Cp\u003EA recent paper by Gombolay and his current and former students \u2014 \u003Cstrong\u003ESam\u003C\/strong\u003E \u003Cstrong\u003EYi\u003C\/strong\u003E \u003Cstrong\u003ETing\u003C\/strong\u003E, \u003Cstrong\u003EErin\u003C\/strong\u003E \u003Cstrong\u003EHedlund\u003C\/strong\u003E-\u003Cstrong\u003EBotti\u003C\/strong\u003E, and \u003Cstrong\u003EManisha\u003C\/strong\u003E \u003Cstrong\u003ENatarajan\u003C\/strong\u003E \u2014 sheds new light on the debate. The paper was published in the IEEE Robotics and Automation Letters and will be presented at the International Conference on Robotics and Automation in Atlanta.\u003C\/p\u003E\u003Cp\u003EThe NASA-funded study can identify which aspects of tedious, time-consuming tasks can be automated and which require human supervision. If roboticists can pinpoint the elements of a task that cause the least stress, they can automate these components and enable humans to oversee the more challenging aspects.\u003C\/p\u003E\u003Cp\u003E\u201cIf we\u2019re talking about repetitive tasks, robots do better with that, so if you can automate it, you should,\u201d said Ting, a former grad student and lead author of the paper. \u201cI don\u2019t think humans enjoy doing repetitive tasks. We can move toward a better future with automation.\u201d\u003C\/p\u003E\u003Cp\u003EMilitary officials, for example, could measure the stress of remote drone pilots and know which times during a pilot\u2019s shift require the highest level of attention.\u003C\/p\u003E\u003Cp\u003E\u201cWe can get a sense of how stressed you are and create models of how divided your attention is and the performance rate of the tasks you\u2019re doing,\u201d Gombolay said.\u003C\/p\u003E\u003Cp\u003E\u201cIt can be a low-stress or high-stress situation depending on the stakes and what\u2019s going on with you personally. Are you well-caffeinated? Well-rested? Is there stress from home you\u2019re bringing with you to the workplace? The goal is to predict how good your task performance will be. If it indicates it might be poor, we may need to outsource work to other people or create a safe space for the operator to destress.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EThe Stress Test\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EFor their study, the researchers cut a small river-shaped path into a medium-density fiberboard. The exercise required the 24 participants to use a remote robotic arm to navigate through the path from one end to the other without touching the edges.\u003C\/p\u003E\u003Cp\u003EThe experiment grew more challenging as new stress conditions and workload requirements were introduced. The changing conditions required the test participants to multitask to complete the assignment.\u003C\/p\u003E\u003Cp\u003EGombolay said the study supports the Yerkes-Dodson Law, which states that moderate levels of stress increase human performance.\u003C\/p\u003E\u003Cp\u003EThe experiment showed that operators felt overwhelmed and performed poorly when multitasking was introduced. Too much stress led to poor performance, but a moderate amount of stress induced more engagement and enhanced teleoperator focus.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ETing said finding that ideal stress zone can lead to a higher performance rating.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cYou would think the more stressed you are, the more your performance decreases,\u201d Ting said. \u201cMost people didn\u2019t react that way. As stress increased, performance increased, but when you increased workload and gave them more to do, that\u2019s when you started seeing deteriorating performance.\u201d\u003C\/p\u003E\u003Cp\u003EGombolay said no stress can be just as detrimental as too much stress. Performing a task without stress tends to cause teleoperators to become disinterested, especially if it is repetitive and time-consuming.\u003C\/p\u003E\u003Cp\u003E\u201cNo stress led to complacency,\u201d Gombolay said. \u201cThey weren\u2019t as engaged in completing the task.\u003C\/p\u003E\u003Cp\u003E\u201cIf your excitement is too low, you get so bored you can\u2019t muster the cognitive energy to reason about robot operation problems.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EThe Human Factor\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003ERoboticists have made significant leaps in recent years to remove teleoperators from the equation. Still, Gombolay said it\u2019s too early to tell whether robots can be trusted with any task that a human can perform.\u003C\/p\u003E\u003Cp\u003E\u201cWe\u2019re a long way from full autonomy,\u201d he said. \u201cThere\u2019s a lot that robots still can\u2019t do without a human operator. Search and rescue operations, if a building collapses, we don\u2019t have much training data for robots to go through rubble by themselves to rescue people. There are ethical needs for humans to be able to supervise or take direct control of robots.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EResearchers at Georgia Tech are exploring the relationship between stress levels and the performance of remote robot operators. They found a moderate level of of stress can enhance performance and keep operators engaged and focused.\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech researchers say there\u0027s a \u0022sweet spot\u0022 of stress that can enhance performance of remote robot operators such as drone pilots and astronauts."}],"uid":"36530","created_gmt":"2025-05-15 13:08:48","changed_gmt":"2025-07-15 15:05:39","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-05-13T00:00:00-04:00","iso_date":"2025-05-13T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"147","name":"Military Technology"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"},{"id":"8862","name":"Student Research"}],"keywords":[],"core_research_areas":[{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"682890":{"#nid":"682890","#data":{"type":"news","title":"Tech Researchers Tabbed to Build AI Systems for Medical Robots in South Korea","body":[{"value":"\u003Cp\u003EOverwhelmed doctors and nurses struggling to provide adequate patient care in South Korea are getting support from Georgia Tech and Korean-based researchers through an AI-powered robotic medical assistant.\u003C\/p\u003E\u003Cp\u003ETop South Korean research institutes have enlisted Georgia Tech researchers \u003Cstrong\u003ESehoon\u003C\/strong\u003E \u003Cstrong\u003EHa\u003C\/strong\u003E and \u003Cstrong\u003EJennifer G.\u003C\/strong\u003E \u003Cstrong\u003EKim\u003C\/strong\u003E to develop artificial intelligence (AI) to help the humanoid assistant navigate hospitals and interact with doctors, nurses, and patients.\u003C\/p\u003E\u003Cp\u003EHa and Kim will partner with Neuromeka, a South Korean robotics company, on a five-year, 10 billion won (about $7.2 million US) grant from the South Korean government. Georgia Tech will receive about $1.8 million of the grant.\u003C\/p\u003E\u003Cp\u003EHa and Kim, assistant professors in the School of Interactive Computing, will lead Tech\u2019s efforts and also work with researchers from the Korea Advanced Institute of Science and Technology and the Electronics and Telecommunications Research Institute.\u003C\/p\u003E\u003Cp\u003ENeuromeka has built industrial robots since its founding in 2013 and recently decided to expand into humanoid service robots.\u003C\/p\u003E\u003Cp\u003ELee, the group leader of the humanoid medical assistant project, said he fielded partnership requests from many academic researchers. Ha and Kim stood out as an ideal match because of their robotics, AI, and human-computer interaction expertise.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EFor Ha, the project is an opportunity to test navigation and control algorithms he\u2019s developed through research that earned him the National Science Foundation CAREER Award. Ha combines computer simulation and real-world training data to make robots more deployable in high-stress, chaotic environments.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cDr. Ha has everything we want to put into our system, including his navigation policies,\u201d Lee said. \u201cHe works with robots and AI, and there weren\u2019t many candidates in that space. We needed a collaborator who can create the software and has experience running it on robots.\u201d\u003C\/p\u003E\u003Cp\u003EHa said he is already considering how his algorithms could scale beyond hospitals and become a universal means of robot navigation in unstructured real-world environments.\u003C\/p\u003E\u003Cp\u003E\u201cFor now, we\u2019re focusing on a customized navigation model for Korean environments, but there are ways to transfer the data set to different environments, such as the U.S. or European healthcare systems,\u201d Ha said.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThe final product can be deployed to other systems and industries. It can help industrial workers at factories, retail stores, any place where workers can get overwhelmed by a high volume of tasks.\u201d\u003C\/p\u003E\u003Cp\u003EKim will focus on making the robot\u2019s design and interaction features more human. She\u2019ll develop a large-language model (LLM) AI system to communicate with patients, nurses, and doctors. She\u2019ll also develop an app that will allow users to input their commands and queries.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThis project is not just about controlling robots, which is why Dr. Kim\u2019s expertise in human-computer interaction design through natural language was essential.,\u201d Lee said.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EKim is interviewing stakeholders from three South Korean hospitals to identify service and care pain points. The issues she\u2019s identified so far relate to doctor-patient communication, a lack of emotional support for patients, and an excessive number of small tasks that consume nurses\u2019 time.\u003C\/p\u003E\u003Cp\u003E\u201cOur goal is to develop this robot in a very human-centered way,\u201d she said. \u201cOne way is to give patients a way to communicate about the quality of their care and how the robot can support their emotional well-being.\u003C\/p\u003E\u003Cp\u003E\u201cWe found that patients often hesitate to ask busy nurses for small things like getting a cup of water. We believe this is an area a robot can support.\u201d\u003C\/p\u003E\u003Cp\u003EThe robot\u2019s hardware will be built in Korea, while Ha and Kim will develop the software in the U.S.\u003C\/p\u003E\u003Cp\u003EJong-hoon Park, CEO of Neuromeka, said in a press release the goal is to have a commercialized product as soon as possible.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cThrough this project, we will solve problems that existing collaborative robots could not,\u201d Park said. \u201cWe expect the medical AI humanoid robot technology being developed will contribute to reducing the daily work burden of medical and healthcare workers in the field.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech researchers Sehoon Ha and Jennifer Kim are working with South Korean institutions to create an AI-powered medical assistant robot. This five-year project, funded by a $7.2 million grant from the South Korean government, aims to alleviate the workload of healthcare professionals in South Korea by enabling the robot to navigate hospitals and interact with staff and patients.\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech researchers are collaborating with South Korean research institutes on a five-year grant to develop an AI-powered humanoid medical assistant to help doctors and nurses in South Korea."}],"uid":"36530","created_gmt":"2025-06-25 19:49:57","changed_gmt":"2025-06-25 19:55:15","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-06-25T00:00:00-04:00","iso_date":"2025-06-25T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"677282":{"id":"677282","type":"image","title":"IMG_4499-copy.jpg","body":"\u003Cp\u003E\u003Cem\u003ESchool of Interactive Computing Assistant Professor Sehoon Ha, Neuromeka researchers Joonho Lee and Yunho Kim, School of IC Assistant Professor Jennifer Kim, and Electronics and Telecommunications Research Institute researcher Dongyeop Kang, are collaborating to develop a medical assistant robot to support doctors and nurses in Korea. Photo by Nathan Deen\/College of Computing.\u003C\/em\u003E\u003C\/p\u003E","created":"1750881009","gmt_created":"2025-06-25 19:50:09","changed":"1750881009","gmt_changed":"2025-06-25 19:50:09","alt":"Researchers","file":{"fid":"261166","name":"IMG_4499-copy.jpg","image_path":"\/sites\/default\/files\/2025\/06\/25\/IMG_4499-copy.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/06\/25\/IMG_4499-copy.jpg","mime":"image\/jpeg","size":126414,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/06\/25\/IMG_4499-copy.jpg?itok=v92OOgVu"}}},"media_ids":["677282"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"194606","name":"Artificial Intelligence"},{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"192863","name":"go-ai"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"9153","name":"Research Horizons"},{"id":"187915","name":"go-researchnews"},{"id":"78681","name":"medical robotics"},{"id":"194391","name":"AI in Healthcare"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"682761":{"#nid":"682761","#data":{"type":"news","title":"Georgia Tech Team Takes Second Place at ICRA Robot Teleoperation Contest","body":[{"value":"\u003Cp\u003EAn algorithmic breakthrough from School of Interactive Computing researchers that\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/new-algorithm-teaches-robots-through-human-perspective\u0022\u003E\u003Cstrong\u003Eearned a Meta partnership\u003C\/strong\u003E\u003C\/a\u003Edrew more attention at the IEEE International Conference on Robotics and Automation (ICRA).\u003C\/p\u003E\u003Cp\u003EMeta announced in February its partnership with the labs of professors\u0026nbsp;\u003Ca href=\u0022https:\/\/faculty.cc.gatech.edu\/~danfei\/\u0022\u003E\u003Cstrong\u003EDanfei Xu\u003C\/strong\u003E\u003C\/a\u003E and\u0026nbsp;\u003Ca href=\u0022https:\/\/faculty.cc.gatech.edu\/~judy\/\u0022\u003E\u003Cstrong\u003EJudy Hoffman\u003C\/strong\u003E\u003C\/a\u003E on a novel computer vision-based algorithm called EgoMimic. It enables robots to learn new skills by imitating human tasks from first-person video footage captured by Meta\u2019s Aria smart glasses.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EXu\u2019s\u0026nbsp;\u003Ca href=\u0022https:\/\/rl2.cc.gatech.edu\/\u0022\u003E\u003Cstrong\u003ERobot Learning and Reasoning Lab (RL2)\u003C\/strong\u003E\u003C\/a\u003E displayed EgoMimic in action at ICRA May 19-23 at the World Congress Center in Atlanta.\u003C\/p\u003E\u003Cp\u003ELawrence Zhu, Pranav Kuppili, and Patcharapong \u201cElmo\u201d Aphiwetsa \u2014 students from Xu\u2019s lab \u2014 used Egomimic to compete in a robot teleoperation contest at ICRA. The team finished second in the event titled What Bimanual Teleoperation and Learning from Demonstration Can Do Today, earning a $10,000 cash prize.\u003C\/p\u003E\u003Cp\u003ETeams were challenged to perform tasks by remotely controlling a robot gripper. The robot had to fold a tablecloth, open a vacuum-sealed container, place an object into the container, and then reseal it in succession without any errors.\u003C\/p\u003E\u003Cp\u003ETeams completed the tasks as many times as possible in 30 minutes, earning points for each successful attempt.\u003C\/p\u003E\u003Cp\u003EThe competition also offered different challenge levels that increased the points awarded. Teams could directly operate the robot with a full workstation view and receive one point for each task completion. Or, as the RL2 team chose, teams could opt for the second challenge level.\u003C\/p\u003E\u003Cp\u003EThe second level required an operator to control the task with no view of the workstation except for what was provided to through a video feed. The RL2 team completed the task seven times and received double points for the challenge level.\u003C\/p\u003E\u003Cp\u003EThe third challenge level required teams to operate remotely from another location. At this level, teams could earn four times the number of points for each successful task completed. The fourth level challenged teams to deploy an algorithm for task performance and awarded eight points for each completion.\u003C\/p\u003E\u003Cp\u003EUsing two of Meta\u2019s Quest wireless controllers, Zhu controlled the robot under the direction of Aphiwetsa, while Kuppili monitored the coding from his laptop.\u003C\/p\u003E\u003Cp\u003E\u201cIt\u2019s physically difficult to teleoperate for half an hour,\u201d Zhu said. \u201cMy hands were shaking from holding the controllers in the air for that long.\u201d\u003C\/p\u003E\u003Cp\u003EBeing in constant communication with Aphiwetsa helped him stay focused throughout the contest.\u003C\/p\u003E\u003Cp\u003E\u201cI helped him strategize the teleoperation and noticed he could skip some of the steps in the folding,\u201d Aphiwetsa said. \u201cThere were many ways to do it, so I just told him what he could fix and how to do it faster.\u201d\u003C\/p\u003E\u003Cp\u003EZhu said he and his team had intended to tackle the fourth challenge level with the EgoMimic algorithm. However, due to unexpected time constraints, they decided to switch to the second level the day before the competition due to unexpected time constraints.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cI think we realized the day before the competition training the robot on our model would take a huge amount of time,\u201d Zhu said. \u201cWe decided to go for the teleoperation and started practicing.\u201d\u003C\/p\u003E\u003Cp\u003EHe said the team wants to tackle the highest challenge level and use a training model for next year\u2019s ICRA competition in Vienna, Austria.\u003C\/p\u003E\u003Cp\u003EICRA is the world\u2019s largest robotics conference, and\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/georgia-tech-leads-robotics-world-converges-atlanta-icra-2025\u0022\u003E\u003Cstrong\u003EAtlanta hosted the event\u003C\/strong\u003E\u003C\/a\u003E for the third time in its history, drawing a record-breaking attendance of over 7,000.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EStudents from Georgia Tech\u0027s Robot Learning and Reasoning Lab earned second place and a $10,000 cash prize in a robot teleoperation contest at the 2025 International Conference on Robotics and Automation in Atlanta. The RL2 lab announced a partnership with Meta in February on a novel computer vision-based algorithm called EgoMimic. It enables robots to learn new skills by imitating human tasks from first-person video footage captured by Meta\u2019s Aria smart glasses.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A Georgia Tech team earned second place in the ICRA Robot Teleoperation Contest for their EgoMimic algorithm, which allows robots to learn skills by mimicking human tasks from first-person video."}],"uid":"36530","created_gmt":"2025-06-11 15:24:42","changed_gmt":"2025-06-12 11:52:56","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-06-11T00:00:00-04:00","iso_date":"2025-06-11T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"677223":{"id":"677223","type":"image","title":"IMG_4291-2-copy.jpg","body":null,"created":"1749729142","gmt_created":"2025-06-12 11:52:22","changed":"1749729142","gmt_changed":"2025-06-12 11:52:22","alt":"ICRA","file":{"fid":"261102","name":"IMG_4291-2-copy.jpg","image_path":"\/sites\/default\/files\/2025\/06\/12\/IMG_4291-2-copy.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/06\/12\/IMG_4291-2-copy.jpg","mime":"image\/jpeg","size":151809,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/06\/12\/IMG_4291-2-copy.jpg?itok=Ag2Xn9Oj"}}},"media_ids":["677223"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"152","name":"Robotics"},{"id":"193158","name":"Student Competition Winners (academic, innovation, and research)"}],"keywords":[{"id":"181920","name":"cc-research; ic-ai-ml; ic-robotics"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"192863","name":"go-ai"},{"id":"187915","name":"go-researchnews"},{"id":"9153","name":"Research Horizons"},{"id":"167585","name":"student competition"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"682424":{"#nid":"682424","#data":{"type":"news","title":"Rule the Pool This Summer and Make the Biggest Splash","body":[{"value":"\u003Cp\u003EWant to create the biggest splash in the pool this summer? Forget the bellyflop and the cannonball.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cPopping the Manu\u201d will make you a winner.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EGeorgia Tech researchers studied dives by the M\u0101ori, the indigenous people of New Zealand, who have made Manu jumping a cultural tradition. By hitting the water in a \u201cV\u201d shape, then quickly extending their bodies underwater, they\u2019ve perfected the art of huge splashes.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ESee a video on how to make the splash and \u003Ca href=\u0022https:\/\/coe.gatech.edu\/news\/2025\/05\/rule-pool-summer-and-make-biggest-splash\u0022\u003Eread the entire story on the College of Engineering homepage\u003C\/a\u003E.\u0026nbsp;\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":[{"value":"Georgia Tech roboticists explain the physics of epic pool jumps and the New Zealanders who have mastered them"}],"field_summary":[{"value":"\u003Cp\u003EGeorgia Tech researchers studied dives by the M\u0101ori, the indigenous people of New Zealand, who have made Manu jumping a cultural tradition. By hitting the water in a \u201cV\u201d shape, then quickly extending their bodies underwater, they\u2019ve perfected the art of huge splashes.\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"By hitting the water in a \u201cV\u201d shape, then quickly extending their bodies underwater, the M\u0101ori have perfected the art of huge splashes. "}],"uid":"27560","created_gmt":"2025-05-16 15:41:33","changed_gmt":"2025-05-16 16:37:41","author":"Jason Maderer","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-05-16T00:00:00-04:00","iso_date":"2025-05-16T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"677084":{"id":"677084","type":"video","title":"Make a Big Splash in the Pool","body":"\u003Cp\u003EGeorgia Tech researchers learned the physics of epic pool jumps and the New Zealanders who have mastered them.\u003C\/p\u003E","created":"1747412201","gmt_created":"2025-05-16 16:16:41","changed":"1747412201","gmt_changed":"2025-05-16 16:16:41","video":{"youtube_id":"POda_NwypSM","video_url":"https:\/\/www.youtube.com\/watch?v=POda_NwypSM"}}},"media_ids":["677084"],"groups":[{"id":"1237","name":"College of Engineering"}],"categories":[{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"188776","name":"go-research"}],"core_research_areas":[],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJason Maderer\u003Cbr\u003ECollege of Engineering\u003Cbr\u003Emaderer@gatech.edu\u003C\/p\u003E","format":"limited_html"}],"email":["maderer@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"681961":{"#nid":"681961","#data":{"type":"news","title":"Thesis on Human-Centered AI Earns Honors from International Computing Organization","body":[{"value":"\u003Cp\u003EA Georgia Tech alum\u2019s dissertation introduced ways to make artificial intelligence (AI) more accessible, interpretable, and accountable. Although it\u2019s been a year since his doctoral defense,\u0026nbsp;\u003Ca href=\u0022https:\/\/zijie.wang\/\u0022\u003E\u003Cstrong\u003EZijie (Jay) Wang\u003C\/strong\u003E\u003C\/a\u003E\u2019s (Ph.D. ML-CSE 2024) work continues to resonate with researchers.\u003C\/p\u003E\u003Cp\u003EWang is a recipient of the\u0026nbsp;\u003Ca href=\u0022https:\/\/medium.com\/sigchi\/announcing-the-2025-acm-sigchi-awards-17c1feaf865f\u0022\u003E\u003Cstrong\u003E2025 Outstanding Dissertation Award from the Association for Computing Machinery Special Interest Group on Computer-Human Interaction (ACM SIGCHI)\u003C\/strong\u003E\u003C\/a\u003E. The award recognizes Wang for his lifelong work on democratizing human-centered AI.\u003C\/p\u003E\u003Cp\u003E\u201cThroughout my Ph.D. and industry internships, I observed a gap in existing research: there is a strong need for practical tools for applying human-centered approaches when designing AI systems,\u201d said Wang, now a safety researcher at OpenAI.\u003C\/p\u003E\u003Cp\u003E\u201cMy work not only helps people understand AI and guide its behavior but also provides user-friendly tools that fit into existing workflows.\u201d\u003C\/p\u003E\u003Cp\u003E[Related: \u003Ca href=\u0022https:\/\/sites.gatech.edu\/research\/chi-2025\/\u0022\u003EGeorgia Tech College of Computing Swarms to Yokohama, Japan, for CHI 2025\u003C\/a\u003E]\u003C\/p\u003E\u003Cp\u003EWang\u2019s dissertation presented techniques in visual explanation and interactive guidance to align AI models with user knowledge and values. The work culminated from years of research, fellowship support, and internships.\u003C\/p\u003E\u003Cp\u003EWang\u2019s most influential projects formed the core of his dissertation. These included:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003E\u003Ca href=\u0022https:\/\/poloclub.github.io\/cnn-explainer\/\u0022\u003E\u003Cstrong\u003ECNN Explainer\u003C\/strong\u003E\u003C\/a\u003E: an open-source tool developed for deep-learning beginners. Since its release in July 2020, more than 436,000 global visitors have used the tool.\u003C\/li\u003E\u003Cli\u003E\u003Ca href=\u0022https:\/\/poloclub.github.io\/diffusiondb\/\u0022\u003E\u003Cstrong\u003EDiffusionDB\u003C\/strong\u003E\u003C\/a\u003E: a first-of-its-kind large-scale dataset that lays a foundation to help people better understand generative AI. This work could lead to new research in detecting deepfakes and designing human-AI interaction tools to help people more easily use these models.\u003C\/li\u003E\u003Cli\u003E\u003Ca href=\u0022https:\/\/interpret.ml\/gam-changer\/\u0022\u003E\u003Cstrong\u003EGAM Changer\u003C\/strong\u003E\u003C\/a\u003E: an interface that empowers users in healthcare, finance, or other domains to edit ML models to include knowledge and values specific to their domain, which improves reliability.\u003C\/li\u003E\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.jennwv.com\/papers\/gamcoach.pdf\u0022\u003E\u003Cstrong\u003EGAM Coach\u003C\/strong\u003E\u003C\/a\u003E: an interactive ML tool that could help people who have been rejected for a loan by automatically letting an applicant know what is needed for them to receive loan approval. \u003C\/li\u003E\u003Cli\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/new-tool-teaches-responsible-ai-practices-when-using-large-language-models\u0022\u003E\u003Cstrong\u003EFarsight\u003C\/strong\u003E\u003C\/a\u003E: a tool that alerts developers when they write prompts in large language models that could be harmful and misused. \u0026nbsp;\u003C\/li\u003E\u003C\/ul\u003E\u003Cp\u003E\u201cI feel extremely honored and lucky to receive this award, and I am deeply grateful to many who have supported me along the way, including Polo, mentors, collaborators, and friends,\u201d said Wang, who was advised by School of Computational Science and Engineering (CSE) Professor\u0026nbsp;\u003Ca href=\u0022https:\/\/poloclub.github.io\/polochau\/\u0022\u003E\u003Cstrong\u003EPolo Chau\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003E\u201cThis recognition also inspired me to continue striving to design and develop easy-to-use tools that help everyone to easily interact with AI systems.\u201d\u003C\/p\u003E\u003Cp\u003ELike Wang, Chau advised Georgia Tech alumnus\u0026nbsp;\u003Ca href=\u0022https:\/\/fredhohman.com\/\u0022\u003EFred Hohman\u003C\/a\u003E (Ph.D. CSE 2020).\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/alumnus-building-legacy-through-dissertation-and-mentorship\u0022\u003EHohman won the ACM SIGCHI Outstanding Dissertation Award in 2022\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/poloclub.github.io\/\u0022\u003EChau\u2019s group\u003C\/a\u003E synthesizes machine learning (ML) and visualization techniques into scalable, interactive, and trustworthy tools. These tools increase understanding and interaction with large-scale data and ML models.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EChau is the associate director of corporate relations for the Machine Learning Center at Georgia Tech. Wang called the School of CSE his home unit while a student in the ML program under Chau.\u003C\/p\u003E\u003Cp\u003EWang is one of five recipients of this year\u2019s award to be presented at the 2025 Conference on Human Factors in Computing Systems (\u003Ca href=\u0022https:\/\/chi2025.acm.org\/\u0022\u003ECHI 2025\u003C\/a\u003E). The conference occurs April 25-May 1 in Yokohama, Japan.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003ESIGCHI is the world\u2019s largest association of human-computer interaction professionals and practitioners. The group sponsors or co-sponsors 26 conferences, including CHI.\u003C\/p\u003E\u003Cp\u003EWang\u2019s outstanding dissertation award is the latest recognition of a career decorated with achievement.\u003C\/p\u003E\u003Cp\u003EMonths after graduating from Georgia Tech,\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/research-ai-safety-lands-recent-graduate-forbes-30-under-30\u0022\u003EForbes named Wang to its 30 Under 30 in Science for 2025\u003C\/a\u003E for his dissertation. Wang was one of 15 Yellow Jackets included in nine different 30 Under 30 lists and the only Georgia Tech-affiliated individual on the 30 Under 30 in Science list.\u003C\/p\u003E\u003Cp\u003EWhile a Georgia Tech student, Wang earned recognition from big names in business and technology. He received the\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/student-named-apple-scholar-connecting-people-machine-learning\u0022\u003EApple Scholars in AI\/ML Ph.D. Fellowship in 2023\u003C\/a\u003E and was in the\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/georgia-tech-machine-learning-students-earn-jp-morgan-ai-phd-fellowships\u0022\u003E2022 cohort of the J.P. Morgan AI Ph.D. Fellowships Program\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003EAlong with the CHI award, Wang\u2019s dissertation earned him awards this year at banquets across campus. The\u0026nbsp;\u003Ca href=\u0022https:\/\/bpb-us-e1.wpmucdn.com\/sites.gatech.edu\/dist\/0\/283\/files\/2025\/03\/2025-Sigma-Xi-Research-Award-Winners.pdf\u0022\u003EGeorgia Tech chapter of Sigma Xi presented Wang with the Best Ph.D. Thesis Award\u003C\/a\u003E. He also received the College of Computing\u2019s Outstanding Dissertation Award.\u003C\/p\u003E\u003Cp\u003E\u201cGeorgia Tech attracts many great minds, and I\u2019m glad that some, like Jay, chose to join our group,\u201d Chau said. \u201cIt has been a joy to work alongside them and witness the many wonderful things they have accomplished, and with many more to come in their careers.\u201d\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EA Georgia Tech alum\u2019s dissertation introduced ways to make artificial intelligence (AI) more accessible, interpretable, and accountable. Although it\u2019s been a year since his doctoral defense,\u0026nbsp;\u003Ca href=\u0022https:\/\/zijie.wang\/\u0022\u003E\u003Cstrong\u003EZijie (Jay) Wang\u003C\/strong\u003E\u003C\/a\u003E\u2019s (Ph.D. ML-CSE 2024) work continues to resonate with researchers.\u003C\/p\u003E\u003Cp\u003EWang is a recipient of the\u0026nbsp;\u003Ca href=\u0022https:\/\/medium.com\/sigchi\/announcing-the-2025-acm-sigchi-awards-17c1feaf865f\u0022\u003E\u003Cstrong\u003E2025 Outstanding Dissertation Award from the Association for Computing Machinery Special Interest Group on Computer-Human Interaction (ACM SIGCHI)\u003C\/strong\u003E\u003C\/a\u003E. The award recognizes Wang for his lifelong work on democratizing human-centered AI.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":" Zijie (Jay) Wang (Ph.D. ML-CSE 2024) is a recipient of the 2025 Outstanding Dissertation Award from the Association for Computing Machinery Special Interest Group on Computer-Human Interaction (ACM SIGCHI)."}],"uid":"36319","created_gmt":"2025-04-22 14:24:46","changed_gmt":"2025-04-22 14:29:07","author":"Bryant Wine","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-04-17T00:00:00-04:00","iso_date":"2025-04-17T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"676903":{"id":"676903","type":"image","title":"Jay-Wang-SIGCHI-Dissertation-Award.jpg","body":null,"created":"1745331896","gmt_created":"2025-04-22 14:24:56","changed":"1745331896","gmt_changed":"2025-04-22 14:24:56","alt":"Zijie (Jay) Wang CHI 2025","file":{"fid":"260750","name":"Jay-Wang-SIGCHI-Dissertation-Award.jpg","image_path":"\/sites\/default\/files\/2025\/04\/22\/Jay-Wang-SIGCHI-Dissertation-Award.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/04\/22\/Jay-Wang-SIGCHI-Dissertation-Award.jpg","mime":"image\/jpeg","size":99526,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/04\/22\/Jay-Wang-SIGCHI-Dissertation-Award.jpg?itok=_QvwIP00"}},"673947":{"id":"673947","type":"image","title":"Farsight CHI.jpg","body":null,"created":"1714954253","gmt_created":"2024-05-06 00:10:53","changed":"1714954253","gmt_changed":"2024-05-06 00:10:53","alt":"CHI 2024 Farsight","file":{"fid":"257404","name":"Farsight CHI.jpg","image_path":"\/sites\/default\/files\/2024\/05\/05\/Farsight%20CHI.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/05\/05\/Farsight%20CHI.jpg","mime":"image\/jpeg","size":139358,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/05\/05\/Farsight%20CHI.jpg?itok=6genJVjw"}}},"media_ids":["676903","673947"],"related_links":[{"url":"https:\/\/www.cc.gatech.edu\/news\/thesis-human-centered-ai-earns-honors-international-computing-organization","title":"Thesis on Human-Centered AI Earns Honors from International Computing Organization"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50877","name":"School of Computational Science and Engineering"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"155","name":"Congressional Testimony"},{"id":"143","name":"Digital Media and Entertainment"},{"id":"131","name":"Economic Development and Policy"},{"id":"42911","name":"Education"},{"id":"144","name":"Energy"},{"id":"145","name":"Engineering"},{"id":"154","name":"Environment"},{"id":"42921","name":"Exhibitions"},{"id":"42891","name":"Georgia Tech Arts"},{"id":"179356","name":"Industrial Design"},{"id":"129","name":"Institute and Campus"},{"id":"132","name":"Institute Leadership"},{"id":"194248","name":"International Education"},{"id":"146","name":"Life Sciences and Biology"},{"id":"147","name":"Military Technology"},{"id":"148","name":"Music and Music Technology"},{"id":"149","name":"Nanotechnology and Nanoscience"},{"id":"42931","name":"Performances"},{"id":"150","name":"Physics and Physical Sciences"},{"id":"151","name":"Policy, Social Sciences, and Liberal Arts"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"},{"id":"133","name":"Special Events and Guest Speakers"},{"id":"193157","name":"Student Honors and Achievements"},{"id":"8862","name":"Student Research"}],"keywords":[{"id":"654","name":"College of Computing"},{"id":"166983","name":"School of Computational Science and Engineering"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"181991","name":"Georgia Tech News Center"},{"id":"10199","name":"Daily Digest"},{"id":"9153","name":"Research Horizons"},{"id":"187915","name":"go-researchnews"},{"id":"192863","name":"go-ai"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39431","name":"Data Engineering and Science"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EBryant Wine, Communications Officer\u003Cbr\u003E\u003Ca href=\u0022mailto:bryant.wine@cc.gatech.edu\u0022\u003Ebryant.wine@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"680875":{"#nid":"680875","#data":{"type":"news","title":"Securing Tomorrow\u2019s Autonomous Robots Today","body":[{"value":"\u003Cp\u003EEvery year, people in California risk their lives battling wildfires, but in the future, machines powered by artificial intelligence will be on the front lines, not firefighters.\u003C\/p\u003E\u003Cp\u003EHowever, this new generation of self-thinking robots will need security protocols to ensure they aren\u2019t susceptible to hackers. To integrate such robots into society, they must come with assurances that they will behave safely around humans.\u003C\/p\u003E\u003Cp\u003EIt begs the question: can you guarantee the safety of something that doesn\u2019t exist yet? It\u2019s something Assistant Professor \u003Ca href=\u0022https:\/\/glenchou.github.io\/\u0022\u003E\u003Cstrong\u003EGlen Chou\u003C\/strong\u003E\u003C\/a\u003E hopes to accomplish by developing algorithms that will enable autonomous systems to learn and adapt while acting with safety and security assurances.\u003C\/p\u003E\u003Cp\u003EHe plans to launch research initiatives, in collaboration with the \u003Ca href=\u0022https:\/\/scp.cc.gatech.edu\/\u0022\u003E\u003Cstrong\u003ESchool of Cybersecurity and Privacy\u003C\/strong\u003E\u003C\/a\u003E and the \u003Ca href=\u0022https:\/\/ae.gatech.edu\/\u0022\u003E\u003Cstrong\u003EDaniel Guggenheim School of Aerospace Engineering\u003C\/strong\u003E\u003C\/a\u003E, to secure this new technological frontier as it develops.\u003C\/p\u003E\u003Cp\u003E\u201cTo operate in uncertain real-world environments, robots and other autonomous systems need to leverage and adapt a complex network of perception and control algorithms to turn sensor data into actions,\u201d he said. \u201cTo obtain realistic assurances, we must do a joint safety and security analysis on these sensors and algorithms simultaneously, rather than one at a time.\u201d\u003C\/p\u003E\u003Cp\u003EThis end-to-end method would proactively look for flaws in the robot\u2019s systems rather than wait for them to be exploited. This would lead to intrinsically robust robotic systems that can recover from failures.\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/new-algorithm-teaches-robots-through-human-perspective\u0022\u003E[RELATED: New Algorithm Teaches Robots Through Human Perspective]\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003EChou said this research will be helpful in other domains, including advanced space exploration. If a space rover is sent to one of Saturn\u2019s moons, for example, it needs to be able to act and think independently of scientists on Earth.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EAside from fighting fires and exploring space, this technology could perform maintenance in nuclear reactors, automatically maintain the power grid, and make autonomous surgery safer. It could also bring assistive robots into the home, enabling higher standards of care.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThis is a challenging domain where safety, security, and privacy concerns are paramount due to frequent, close contact with humans.\u003C\/p\u003E\u003Cp\u003EThis will start in the newly established \u003Ca href=\u0022https:\/\/trustworthyrobotics.github.io\/\u0022\u003E\u003Cstrong\u003ETrustworthy Robotics Lab\u003C\/strong\u003E\u003C\/a\u003E at Georgia Tech, which Chou directs. He and his Ph.D. students will design principled algorithms that enable general-purpose robots and autonomous systems to operate capably, safely, and securely with humans while remaining resilient to real-world failures and uncertainty.\u003C\/p\u003E\u003Cp\u003EChou earned dual bachelor\u2019s degrees in electrical engineering and computer sciences as well as mechanical engineering from the University of California, Berkeley, in 2017, a master\u2019s and Ph.D. in electrical and computer engineering from the University of Michigan in 2019 and 2022, respectively.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EHe was a postdoc at the Massachusetts Institute of Technology Computer Science \u0026amp; Artificial Intelligence Laboratory before joining Georgia Tech in November 2024. He received the National Defense Science and Engineering Graduate fellowship program, NSF Graduate Research fellowships, and was named a Robotics: Science and Systems Pioneer in 2022.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EThe Trustworthy Robotics Lab is a new interdisciplinary venture led by School of Cybersecurity \u0026amp; Privacy Assistant Professor \u003Cstrong\u003EGlen\u003C\/strong\u003E \u003Cstrong\u003EChou\u003C\/strong\u003E. The lab\u0027s mission is to enable robots and autonomous systems to operate safely with humans while remaining resilient to real-world challenges.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"The Trustworthy Robotics Lab enables robots and autonomous systems to operate safely with humans while remaining resilient to real-world challenges."}],"uid":"32045","created_gmt":"2025-03-04 16:55:18","changed_gmt":"2025-03-26 01:18:28","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-03-04T00:00:00-05:00","iso_date":"2025-03-04T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"676448":{"id":"676448","type":"image","title":"Georgia Tech Assistant Professor Glen Chou with the School of Cybersecurity and Privacy works through an equation on a transparent writing board.","body":"\u003Cp\u003EAssistant Professor \u003Ca href=\u0022https:\/\/glenchou.github.io\/\u0022\u003E\u003Cstrong\u003EGlen Chou\u003C\/strong\u003E\u003C\/a\u003E is launching research initiatives to develop algorithms enabling autonomous systems to learn and adapt while acting with safety and security assurances. Photo by Terence Rushin, College of Computing\u003C\/p\u003E","created":"1741107406","gmt_created":"2025-03-04 16:56:46","changed":"1741107406","gmt_changed":"2025-03-04 16:56:46","alt":"Georgia Tech Assistant Professor Glen Chou with the School of Cybersecurity and Privacy works through an equation on a transparent writing board.","file":{"fid":"260240","name":"Glen-Header-Image.jpeg","image_path":"\/sites\/default\/files\/2025\/03\/04\/Glen-Header-Image.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/03\/04\/Glen-Header-Image.jpeg","mime":"image\/jpeg","size":25313,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/03\/04\/Glen-Header-Image.jpeg?itok=MAoJRnb5"}}},"media_ids":["676448"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"187915","name":"go-researchnews"},{"id":"181991","name":"Georgia Tech News Center"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"78271","name":"IRIM"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"145171","name":"Cybersecurity"},{"id":"39521","name":"Robotics"}],"news_room_topics":[{"id":"71881","name":"Science and Technology"}],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJ.P. Popham, Communications Officer\u003C\/p\u003E\u003Cp\u003EGeorgia Tech\u003C\/p\u003E\u003Cp\u003ESchool of Cybersecurity \u0026amp; Privacy\u003C\/p\u003E\u003Cp\u003Ejohn.popham@cc.gatech.edu\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"680735":{"#nid":"680735","#data":{"type":"news","title":"New Algorithms Developed at Georgia Tech are Lunar Bound","body":[{"value":"\u003Cp\u003EIn the past five years, five lunar landers have launched into space, marking a series of first successful landings in decades. The future will see more of these type of missions, including \u003Ca href=\u0022https:\/\/www.nasa.gov\/humans-in-space\/artemis\/\u0022\u003E\u003Cstrong\u003ENASA\u2019s Artemis program\u003C\/strong\u003E\u003C\/a\u003E and various private ventures. These missions need reliable and quick navigation abilities to successfully complete missions, especially if ground stations on Earth are overburdened or disconnected.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EGeorgia Tech\u2019s \u003Ca href=\u0022https:\/\/seal.ae.gatech.edu\/\u0022\u003E\u003Cstrong\u003ESpace Exploration and Analysis Laboratory\u003C\/strong\u003E\u003C\/a\u003E (SEAL) has developed new algorithms that are headed to the Moon, as part of the \u003Ca href=\u0022https:\/\/www.intuitivemachines.com\/im-2\u0022\u003E\u003Cstrong\u003EIntuitive Machine\u2019s\u003C\/strong\u003E\u003C\/a\u003E IM-2 mission. The mission is sending a Nova-C class lunar lander named Athena to the Moon\u2019s south pole region to test technologies and collect data that aim to enable future exploration. The mission is part of \u003Ca href=\u0022https:\/\/www.nasa.gov\/commercial-lunar-payload-services\/\u0022\u003E\u003Cstrong\u003ENASA\u2019s Commercial Lunar Payload Services\u003C\/strong\u003E\u003C\/a\u003E (CLPS) initiative.\u003C\/p\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Ch3\u003E\u003Cstrong\u003ESEAL\u2019s Space Odyssey\u0026nbsp;\u003C\/strong\u003E\u003C\/h3\u003E\u003C\/div\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003ESEAL, led by AE professor \u003Ca href=\u0022https:\/\/ae.gatech.edu\/directory\/person\/john-christian\u0022\u003E\u003Cstrong\u003EJohn Christian\u003C\/strong\u003E\u003C\/a\u003E, collaborated with Intuitive Machines to develop algorithms to guide Athena to the Shackleton crater: a region known for its limited sunlight and cold temperatures. In coordination with \u003Ca href=\u0022https:\/\/www.spacex.com\/\u0022\u003E\u003Cstrong\u003ESpaceX\u003C\/strong\u003E\u003C\/a\u003E, launch of the company\u2019s IM-2 mission is targeted for a multi-day launch window that opens no earlier than February 26 from Launch Complex 39A at NASA\u2019s Kennedy Space Center in Florida.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EAthena will transport NASA\u0027s\u003Cstrong\u003E\u0026nbsp;\u003C\/strong\u003E\u003Ca href=\u0022https:\/\/www.nasa.gov\/mission\/polar-resources-ice-mining-experiment-1-prime-1\/\u0022\u003E\u003Cstrong\u003EPRIME-1\u003C\/strong\u003E\u003C\/a\u003E (Polar Resources Ice Mining Experiment-1) which includes two instruments: a drill and spectrometer. The Regolith and Ice Drill for Exploring New Terrain (TRIDENT) is designed to drill up to three feet of lunar surface to extract soil, while the mass spectrometer (MSOLO) will measure the amount of ice in the soil samples.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EAfter launch, Athena will separate from the rocket and begin a roughly five-to-four-day cruise to the Moon\u2019s orbit. The lander will orbit the Moon for approximately three to 1.5 days before its descent to the south pole.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EIn Fall 2022, Research Engineer \u003Cstrong\u003EAva Thrasher\u0026nbsp;\u003C\/strong\u003E(AE 2022, M.S. AE 2024)\u003Cstrong\u003E\u0026nbsp;\u003C\/strong\u003Ebegan working on IM-2, developing new algorithms to guide Athena to the Shackleton crater using optical terrain relative navigation (TRN). Her approach looked at developing a crater detection algorithm (CDA) using image processing techniques that capture crater center locations on the Moon which are then used to determine Athena\u0027s position estimations.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThen, she developed a crater identification algorithm (CIA) to match craters found in the image to a catalog of known lunar craters. By using CDA and CIA in tandem, Athena is able to estimate its location and orientation with a single photo, autonomously, and in real-time.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cWe wanted to strike a balance between creating something that would be done quickly on board, but also something that was reliable,\u201d she explained. \u201cWe ended up using simple crater geometry and knowledge of the sun angle to render what we expect a crater to look like in the image.\u201d\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe CDA finds craters by calculating a similarity score between the image and the rendered crater at each image pixel point. This process, also known as template matching, marks crater centers at points of very high similarity. CIA then uses these crater center locations to match them with known craters in a catalog. By matching pixel locations in an image to known three-dimensional positions on the Moon, the spacecraft is able to produce an estimation of its position.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EAfter two years of research and testing, Thrasher, Christian, and the Intuitive Machines team successfully demonstrated the CDA and CIA on synthetic imagery and Thrasher handed off the algorithms to Intuitive Machines to convert them into flight software for Athena.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EShe first got involved with optical navigation (OPNAV) research after she took AE 4342: Senior Design with Prof. Christian as an undergraduate student. \u201cI found optical navigation to be really interesting. I liked the idea of being able to figure out where you are and how you\u2019re moving in real-time based on a picture,\u201d she said. In Fall 2022, she started her first graduate semester at Tech and was a new member of SEAL, where she quickly began demonstrating the idea of detecting craters and prototyping the CDA and CIA programmed into Athena. \u0026nbsp;\u003C\/p\u003E\u003Cp\u003EAfter she graduated with her master\u2019s degree in aerospace engineering in May 2024, \u0026nbsp;she loved what she did so much, that she decided to stay and work as a full-time research engineer in SEAL. Now, she\u2019s gearing up to see her work make its way to the Moon.\u003C\/p\u003E\u003Cp\u003E\u201cIt\u0027s been really exciting and humbling to contribute to the massive task of putting a lander on the Moon. I never really appreciated the scale of work and collaboration needed to make it happen until I was lucky enough to be a part of it. I\u0027ll certainly be watching the launch and tracking the mission with great anticipation of both the engineering and scientific results,\u201d said Thrasher.\u0026nbsp;\u003C\/p\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Ch3\u003E\u003Cstrong\u003EIM-1 Makes History\u003C\/strong\u003E\u003C\/h3\u003E\u003C\/div\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003EAs part of a multi-year collaboration, Christian helped \u003Ca href=\u0022https:\/\/www.ae.gatech.edu\/news\/2024\/02\/georgia-tech-algorithm-headed-moon\u0022\u003E\u003Cstrong\u003Edevelop a key navigation algorithm for Intuitive Machines\u2019 first space mission (IM-1\u003C\/strong\u003E\u003C\/a\u003E) which launched a Nova-C lunar lander named Odysseus to the Malapert A crater on the Moon\u2019s south pole region; about 11 miles away from IM-2\u2019s targeted Shackleton crater.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThe IM-1 mission launched from Kennedy Space Center on February 15, 2024 and soft-landed on the Moon on February 22, 2024---making Odysseus the first U.S. lunar landing since the Apollo program and the first-ever successful commercial lunar landing. Odysseus had a rougher-than-expected soft landing due to an anomaly with the altimeter that was supposed to provide insight into the lander\u2019s height above the lunar surface. In the absence of these altimeter measurements, Odysseus relied critically on the visual odometry technique that was jointly developed by Christian and Intuitive Machines.\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003EDespite these challenges, Odysseus captured images of the Moon during landing and operated on the lunar surface for 144 hours before entering standby mode.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EProf. Christian and SEAL have more projects on the horizon to develop new technologies for exploring our Moon, other planets, asteroids, and the solar system. These technologies will enable future scientific missions to safely explore challenging destinations and answer scientific questions that were impossible with yesterday\u2019s technology.\u0026nbsp;\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EGeorgia Tech\u2019s \u003Ca href=\u0022https:\/\/seal.ae.gatech.edu\/\u0022\u003E\u003Cstrong\u003ESpace Exploration and Analysis Laboratory\u003C\/strong\u003E\u003C\/a\u003E (SEAL) has developed new algorithms that are headed to the Moon, as part of the \u003Ca href=\u0022https:\/\/www.intuitivemachines.com\/im-2\u0022\u003E\u003Cstrong\u003EIntuitive Machine\u2019s\u003C\/strong\u003E\u003C\/a\u003E IM-2 mission. The mission is sending a Nova-C class lunar lander named Athena to the Moon\u2019s south pole region to test technologies and collect data that aim to enable future exploration. The mission is part of \u003Ca href=\u0022https:\/\/www.nasa.gov\/commercial-lunar-payload-services\/\u0022\u003E\u003Cstrong\u003ENASA\u2019s Commercial Lunar Payload Services\u003C\/strong\u003E\u003C\/a\u003E (CLPS) initiative.\u003C\/p\u003E\u003Cp\u003ESEAL, led by Professor \u003Cstrong\u003EJohn Christian\u003C\/strong\u003E, collaborated with Intuitive Machines to develop algorithms to guide Athena to the Shackleton crater: a region known for its limited sunlight and cold temperatures. Research Engineer \u003Cstrong\u003EAva Thrasher\u003C\/strong\u003E (AE 2022, M.S. AE 2024) led Georgia Tech\u0027s SEAL team on developing the algorithms used for Athena\u0027s flight software.\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"AE researchers have developed new algorithms to help Intuitive Machine\u2019s lunar lander find water ice on the Moon.  "}],"uid":"34736","created_gmt":"2025-02-26 16:19:31","changed_gmt":"2025-02-26 16:27:39","author":"Kelsey Gulledge","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-02-25T00:00:00-05:00","iso_date":"2025-02-25T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"676397":{"id":"676397","type":"image","title":"54284511327_9ca21c7337_o.jpg","body":"\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003EIntuitive Machines\u0027 IM-2 mission lunar lander, Athena, in the company\u0027s Lunar Production and Operations Center. Credit: Intuitive Machines\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cbr\u003E\u0026nbsp;\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E","created":"1740586783","gmt_created":"2025-02-26 16:19:43","changed":"1740586783","gmt_changed":"2025-02-26 16:19:43","alt":"Intuitive Machines\u0027 IM-2 mission lunar lander, Athena, in the company\u0027s Lunar Production and Operations Center. Credit: Intuitive Machines","file":{"fid":"260181","name":"54284511327_9ca21c7337_o.jpg","image_path":"\/sites\/default\/files\/2025\/02\/26\/54284511327_9ca21c7337_o.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/02\/26\/54284511327_9ca21c7337_o.jpg","mime":"image\/jpeg","size":5213520,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/02\/26\/54284511327_9ca21c7337_o.jpg?itok=-2RtZOQq"}},"676398":{"id":"676398","type":"image","title":"Christian-John.jpg","body":null,"created":"1740586840","gmt_created":"2025-02-26 16:20:40","changed":"1740586840","gmt_changed":"2025-02-26 16:20:40","alt":"Headshot of John Christian, AE School Professor","file":{"fid":"260182","name":"Christian-John.jpg","image_path":"\/sites\/default\/files\/2025\/02\/26\/Christian-John.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/02\/26\/Christian-John.jpg","mime":"image\/jpeg","size":1385478,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/02\/26\/Christian-John.jpg?itok=E0GH0VXB"}},"676399":{"id":"676399","type":"image","title":"HeadShotThrasher.JPG","body":null,"created":"1740586878","gmt_created":"2025-02-26 16:21:18","changed":"1740586878","gmt_changed":"2025-02-26 16:21:18","alt":"Headshot of Ava Thrasher, AE School alumna and research engineer","file":{"fid":"260183","name":"HeadShotThrasher.JPG","image_path":"\/sites\/default\/files\/2025\/02\/26\/HeadShotThrasher.JPG","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/02\/26\/HeadShotThrasher.JPG","mime":"image\/jpeg","size":630760,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/02\/26\/HeadShotThrasher.JPG?itok=P_w4muA9"}},"676401":{"id":"676401","type":"image","title":"AAS_2024_CraterDetection_final-2.png","body":"\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003EIllustration of the steps used to detect and identify craters to ultimately determine the vehicles state estimation. Credit: Georgia Tech\u0026nbsp;\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E\u003Cdiv\u003E\u003Cbr\u003E\u0026nbsp;\u003C\/div\u003E","created":"1740587067","gmt_created":"2025-02-26 16:24:27","changed":"1740587067","gmt_changed":"2025-02-26 16:24:27","alt":"Illustration of the steps used to detect and identify craters to ultimately determine the vehicles state estimation. Credit: Georgia Tech ","file":{"fid":"260185","name":"AAS_2024_CraterDetection_final-2.png","image_path":"\/sites\/default\/files\/2025\/02\/26\/AAS_2024_CraterDetection_final-2.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/02\/26\/AAS_2024_CraterDetection_final-2.png","mime":"image\/png","size":201361,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/02\/26\/AAS_2024_CraterDetection_final-2.png?itok=neltaeuF"}}},"media_ids":["676397","676398","676399","676401"],"groups":[{"id":"660364","name":"Aerospace Engineering"},{"id":"1237","name":"College of Engineering"},{"id":"1188","name":"Research Horizons"}],"categories":[{"id":"136","name":"Aerospace"},{"id":"130","name":"Alumni"},{"id":"42911","name":"Education"},{"id":"144","name":"Energy"},{"id":"145","name":"Engineering"},{"id":"154","name":"Environment"},{"id":"146","name":"Life Sciences and Biology"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EKelsey Gulledge\u003C\/p\u003E","format":"limited_html"}],"email":["kelsey.gulledge@aerospace.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"680585":{"#nid":"680585","#data":{"type":"news","title":"New Algorithm Teaches Robots Through Human Perspective","body":[{"value":"\u003Cp\u003EA new data creation paradigm and algorithmic breakthrough from Georgia Tech has laid the groundwork for humanoid assistive robots to help with laundry, dishwashing, and other household chores. The framework enables these robots to learn new skills by mimicking actions from first-person videos of everyday activities.\u003C\/p\u003E\u003Cp\u003ECurrent training methods limit robots from being produced at the necessary scale to put a robot in every home, said \u003Cstrong\u003ESimar\u003C\/strong\u003E \u003Cstrong\u003EKareer\u003C\/strong\u003E, a Ph.D. student in the School of Interactive Computing.\u003C\/p\u003E\u003Cp\u003E\u201cTraditionally, collecting data for robotics means creating demonstration data,\u201d Kareer said. \u201cYou operate the robot\u2019s joints with a controller to move it and achieve the task you want, and you do this hundreds of times while recording sensor data, then train your models. This is slow and difficult. The only way to break that cycle is to detach the data collection from the robot itself.\u201d\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/youtu.be\/ckGUsdFX9pU?si=7qmGR1D5P_iPAVMt\u0022\u003E\u003Cstrong\u003E[VIDEO: Meta Shares EgoMimic Case Study Video]\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003EOther fields, such as computer vision and natural language processing (NLP), already leverage training data passively culled from the internet to create powerful generative AI and large-language models (LLMs).\u003C\/p\u003E\u003Cp\u003EMany roboticists, however, have shifted toward interventions that allow individual users to teach their robots how to perform tasks. Kareer believes a similar source of passive data can be established to enable practical generalized training that scales the production of humanoid robots.\u003C\/p\u003E\u003Cp\u003EThis is why Kareer collaborated with School of IC Assistant Professor \u003Cstrong\u003EDanfei\u003C\/strong\u003E \u003Cstrong\u003EXu\u003C\/strong\u003E and his \u003Ca href=\u0022https:\/\/rl2.cc.gatech.edu\/\u0022\u003E\u003Cstrong\u003ERobot Learning and Reasoning Lab\u003C\/strong\u003E\u003C\/a\u003E to develop EgoMimic, an algorithmic framework that leverages data from egocentric videos.\u003C\/p\u003E\u003Cp\u003EMeta\u2019s Ego4D dataset inspired Kareer\u2019s project. The benchmark dataset, released in 2023, consists of first-person videos of humans performing daily activities. This open-source data set trains AI models from a first-person human perspective.\u003C\/p\u003E\u003Cp\u003E\u201cWhen I looked at Ego4D, I saw a dataset that\u2019s the same as all the large robot datasets we\u2019re trying to collect, except it\u2019s with humans,\u201d Kareer said. \u201cYou just wear a pair of glasses, and you go do things. It doesn\u2019t need to come from the robot. It should come from something more scalable and passively generated, which is us.\u201d\u003C\/p\u003E\u003Cp\u003EKareer acquired a pair of Meta\u2019s Project Aria research glasses, which contain a rich sensor suite and can record video from a first-person perspective through external RGB and SLAM cameras.\u003C\/p\u003E\u003Cp\u003EKareer recorded himself folding a shirt while wearing the glasses and repeated the process. He did the same with other tasks such as placing a toy in a bowl and groceries into a bag. Then, he constructed a humanoid robot with pincers for hands and attached the glasses to the top to mimic a first-person viewpoint.\u003C\/p\u003E\u003Cp\u003EThe robot performed each task repeatedly for two hours. Kareer said building a traditional training algorithm would take days of teleoperating and recording robot sensory data. For his project, he only needed to gather a baseline of sensory data to ensure performance improvement.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EKareer bridged the gap between the two training sets with the EgoMimic algorithm. The robot\u2019s task performance rating increased by as much as 400% among various tasks with just 90 minutes of recorded footage. It also showed the ability to perform these tasks in unseen environments.\u003C\/p\u003E\u003Cp\u003EIf enough people wear Aria glasses or other smart glasses while performing daily tasks, it can create the passive data bank needed to train robots on a massive scale.\u003C\/p\u003E\u003Cp\u003EThis type of data collection can enable nearly endless possibilities for roboticists to help humans achieve more in their everyday lives. Humanoid robots can be produced and trained at an industrial level and be able to perform tasks the same way humans do.\u003C\/p\u003E\u003Cp\u003E\u201cThis work is most applicable to jobs that you can get a humanoid robot to do,\u201d Kareer said. \u201cIn whatever industry we are allowed to collect egocentric data, we can develop humanoid robots.\u201d\u003C\/p\u003E\u003Cp\u003EKareer will present his paper on EgoMimic at the 2025 IEEE Engineers\u2019 International Conference on Robotics and Automation (ICRA), which will take place from May 19 to 23 in Atlanta. The paper was co-authored by Xu and School of IC Assistant Professor \u003Cstrong\u003EJudy\u003C\/strong\u003E \u003Cstrong\u003EHoffman\u003C\/strong\u003E, fellow Tech students \u003Cstrong\u003EDhruv\u003C\/strong\u003E \u003Cstrong\u003EPatel\u003C\/strong\u003E, \u003Cstrong\u003ERyan\u003C\/strong\u003E \u003Cstrong\u003EPunamiya\u003C\/strong\u003E, \u003Cstrong\u003EPranay\u003C\/strong\u003E \u003Cstrong\u003EMathur\u003C\/strong\u003E, and \u003Cstrong\u003EShuo\u003C\/strong\u003E \u003Cstrong\u003ECheng\u003C\/strong\u003E, and \u003Cstrong\u003EChen\u003C\/strong\u003E \u003Cstrong\u003EWang\u003C\/strong\u003E, a Ph.D. student at Stanford.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EInspired by a dataset created by Meta, a Georgia Tech Ph.D. student is bringing a new perspective to robotics training.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Inspired by a dataset created by Meta, a Georgia Tech Ph.D. student is bringing a new perspective to robotics training."}],"uid":"32045","created_gmt":"2025-02-19 15:00:13","changed_gmt":"2025-02-19 20:20:46","author":"Ben Snedeker","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-02-19T00:00:00-05:00","iso_date":"2025-02-19T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"676332":{"id":"676332","type":"image","title":"Georgia Tech Ph.D. student Simar Kareer is revolutionizing how robots are trained.","body":null,"created":"1739977597","gmt_created":"2025-02-19 15:06:37","changed":"1739977597","gmt_changed":"2025-02-19 15:06:37","alt":"Georgia Tech Ph.D. student Simar Kareer is revolutionizing how robots are trained.","file":{"fid":"260101","name":"Simar Kareer_86A7668 (1).jpg","image_path":"\/sites\/default\/files\/2025\/02\/19\/Simar%20Kareer_86A7668%20%281%29.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/02\/19\/Simar%20Kareer_86A7668%20%281%29.jpg","mime":"image\/jpeg","size":118241,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/02\/19\/Simar%20Kareer_86A7668%20%281%29.jpg?itok=jakxURZ2"}}},"media_ids":["676332"],"related_links":[{"url":"https:\/\/youtu.be\/ckGUsdFX9pU?si=b-J_aUjaDNpMpq2b","title":"Project Aria Case Study: Introducing EgoMimic by the Georgia Institute of Technology"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"10199","name":"Daily Digest"},{"id":"181991","name":"Georgia Tech News Center"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EBen Snedeker, Communication Manager\u003C\/p\u003E\u003Cp\u003EGeorgia Tech College of Computing\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}},"680526":{"#nid":"680526","#data":{"type":"news","title":"Securing Tomorrow\u2019s Autonomous Robots Today","body":[{"value":"\u003Cp\u003EMen and women in California put their lives on the line when battling wildfires every year, but there is a future where machines powered by artificial intelligence are on the front lines, not firefighters.\u003C\/p\u003E\u003Cp\u003EHowever, this new generation of self-thinking robots would need security protocols to ensure they aren\u2019t susceptible to hackers. To integrate such robots into society, they must come with assurances that they will behave safely around humans.\u003C\/p\u003E\u003Cp\u003EIt begs the question: can you guarantee the safety of something that doesn\u2019t exist yet? It\u2019s something Assistant Professor Glen Chou hopes to accomplish by developing algorithms that will enable autonomous systems to learn and adapt while acting with safety and security assurances.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EHe plans to launch research initiatives, in collaboration with the School of Cybersecurity and Privacy and the Daniel Guggenheim School of Aerospace Engineering, to secure this new technological frontier as it develops.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cTo operate in uncertain real-world environments, robots and other autonomous systems need to leverage and adapt a complex network of perception and control algorithms to turn sensor data into actions,\u201d he said. \u201cTo obtain realistic assurances, we must do a joint safety and security analysis on these sensors and algorithms simultaneously, rather than one at a time.\u201d\u003C\/p\u003E\u003Cp\u003EThis end-to-end method would proactively look for flaws in the robot\u2019s systems rather than wait for them to be exploited. This would lead to intrinsically robust robotic systems that can recover from failures.\u003C\/p\u003E\u003Cp\u003EChou said this research will be useful in other domains, including advanced space exploration. If a space rover is sent to one of Saturn\u2019s moons, for example, it needs to be able to act and think independently of scientists on Earth.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EAside from fighting fires and exploring space, this technology could perform maintenance in nuclear reactors, automatically maintain the power grid, and make autonomous surgery safer. It could also bring assistive robots into the home, enabling higher standards of care.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThis is a challenging domain where safety, security, and privacy concerns are paramount due to frequent, close contact with humans.\u003C\/p\u003E\u003Cp\u003EThis will start in the newly established Trustworthy Robotics Lab at Georgia Tech, which Chou directs. He and his Ph.D. students will design principled algorithms that enable general-purpose robots and autonomous systems to operate capably, safely, and securely with humans while remaining resilient to real-world failures and uncertainty.\u003C\/p\u003E\u003Cp\u003EChou earned dual bachelor\u2019s degrees in electrical engineering and computer sciences as well as mechanical engineering from University of California Berkeley in 2017, a master\u2019s and Ph.D. in electrical and computer engineering from the University of Michigan in 2019 and 2022, respectively. He was a postdoc at MIT Computer Science \u0026amp; Artificial Intelligence Laboratory prior to joining Georgia Tech in November 2024. He is a recipient of the National Defense Science and Engineering Graduate fellowship program, NSF Graduate Research fellowships, and was named a Robotics: Science and Systems Pioneer in 2022.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EAssistant Professor Glen Chou is leading research to ensure the security and safety of future autonomous robots, which could one day fight wildfires, explore space, and assist in critical environments like nuclear reactors and hospitals. His work at Georgia Tech\u2019s Trustworthy Robotics Lab focuses on developing algorithms that allow robots to learn, adapt, and operate securely in uncertain real-world conditions. By integrating safety and security analyses, Chou aims to create resilient robotic systems that can proactively address vulnerabilities. His research, conducted in collaboration with cybersecurity and aerospace engineering experts, could revolutionize autonomous technology across multiple domains.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Assistant Professor Glen Chou is leading research to ensure the security and safety of future autonomous robots, which could one day fight wildfires, explore space, and assist in critical environments like nuclear reactors and hospitals."}],"uid":"36253","created_gmt":"2025-02-17 13:42:40","changed_gmt":"2025-02-17 13:53:01","author":"John Popham","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-02-14T00:00:00-05:00","iso_date":"2025-02-14T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"676301":{"id":"676301","type":"image","title":"Glen Header Image.jpeg","body":null,"created":"1739799782","gmt_created":"2025-02-17 13:43:02","changed":"1739799782","gmt_changed":"2025-02-17 13:43:02","alt":"Man writing on glass with a marker ","file":{"fid":"260058","name":"Glen Header Image.jpeg","image_path":"\/sites\/default\/files\/2025\/02\/17\/Glen%20Header%20Image.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/02\/17\/Glen%20Header%20Image.jpeg","mime":"image\/jpeg","size":1811476,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/02\/17\/Glen%20Header%20Image.jpeg?itok=Cuy2sVvz"}}},"media_ids":["676301"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"660367","name":"School of Cybersecurity and Privacy"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"145","name":"Engineering"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"192863","name":"go-ai"},{"id":"187991","name":"go-robotics"},{"id":"10199","name":"Daily Digest"},{"id":"188776","name":"go-research"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"182941","name":"cc-research; ic-cybersecurity; ic-hcc"},{"id":"1404","name":"Cybersecurity"},{"id":"181920","name":"cc-research; ic-ai-ml; ic-robotics"},{"id":"182191","name":"areospace systems analysis"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"145171","name":"Cybersecurity"},{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"},{"id":"193657","name":"Space Research Initiative"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cdiv\u003E\u003Cp\u003EJohn (JP) Popham\u0026nbsp;\u003Cbr\u003ECommunications Officer II\u0026nbsp;\u003Cbr\u003ECollege of Computing | School of Cybersecurity and Privacy\u003C\/p\u003E\u003C\/div\u003E","format":"limited_html"}],"email":["jpopham3@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"675467":{"#nid":"675467","#data":{"type":"news","title":"Using Deep Learning Techniques to Improve Liver Disease Diagnosis and Treatment","body":[{"value":"\u003Cp\u003EHepatic, or liver, disease affects more than 100 million people in the U.S. About 4.5 million adults (1.8%) have been diagnosed with liver disease, but it is estimated that between 80 and 100 million adults in the U.S. have undiagnosed fatty liver disease in varying stages. Over time, undiagnosed and untreated hepatic diseases can lead to cirrhosis, a severe scarring of the liver that cannot be reversed.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EMost hepatic diseases are chronic conditions that will be present over the life of the patient, but early detection improves overall health and the ability to manage specific conditions over time. Additionally, assessing patients over time allows for effective treatments to be adjusted as necessary. The standard protocol for diagnosis, as well as follow-up tissue assessment, is a biopsy after the return of an abnormal blood test, but biopsies are time-consuming and pose risks for the patient. Several non-invasive imaging techniques have been developed to assess the stiffness of liver tissue, an indication of scarring, including magnetic resonance elastography (MRE).\u003C\/p\u003E\u003Cp\u003EMRE combines elements of ultrasound and MRI imaging to create a visual map showing gradients of stiffness throughout the liver and is increasingly used to diagnose hepatic issues. MRE exams, however, can fail for many reasons, including patient motion, patient physiology, imaging issues, and mechanical issues such as improper wave generation or propagation in the liver. Determining the success of MRE exams depends on visual inspection of technologists and radiologists. With increasing work demands and workforce shortages, providing an accurate, automated way to classify image quality will create a streamlined approach and reduce the need for repeat scans.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EProfessor\u0026nbsp;\u003Ca href=\u0022https:\/\/www.biorobotics.gatech.edu\/wp\/\u0022\u003EJun Ueda\u003C\/a\u003E in the George W. Woodruff School of Mechanical Engineering and robotics Ph.D. student Heriberto Nieves, working with a team from the Icahn School of Medicine at Mount Sinai, have successfully applied deep learning techniques for accurate, automated quality control image assessment. The research,\u0026nbsp;\u003Ca href=\u0022https:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/jmri.29490\u0022\u003E\u201cDeep Learning-Enabled Automated Quality Control for Liver MR Elastography: Initial Results,\u201d\u003C\/a\u003E was published in the\u003Cem\u003E Journal of Magnetic Resonance Imaging\u003C\/em\u003E.\u003C\/p\u003E\u003Cp\u003EUsing five deep learning training models, an accuracy of 92% was achieved by the best-performing ensemble on retrospective MRE images of patients with varied liver stiffnesses. The team also achieved a return of the analyzed data within seconds. The rapidity of image quality return allows the technician to focus on adjusting hardware or patient orientation for re-scan in a single session, rather than requiring patients to return for costly and timely re-scans due to low-quality initial images.\u003C\/p\u003E\u003Cp\u003EThis new research is a step toward streamlining the review pipeline for MRE using deep learning techniques, which have remained unexplored compared to other medical imaging modalities.\u0026nbsp; The research also provides a helpful baseline for future avenues of inquiry, such as assessing the health of the spleen or kidneys. It may also be applied to automation for image quality control for monitoring non-hepatic conditions, such as breast cancer or muscular dystrophy, in which tissue stiffness is an indicator of initial health and disease progression. Ueda, Nieves, and their team hope to test these models on Siemens Healthineers magnetic resonance scanners within the next year.\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EPublication\u003C\/strong\u003E\u003Cbr\u003ENieves-Vazquez, H.A., Ozkaya, E., Meinhold, W., Geahchan, A., Bane, O., Ueda, J. and Taouli, B. (2024), Deep Learning-Enabled Automated Quality Control for Liver MR Elastography: Initial Results. J Magn Reson Imaging.\u0026nbsp;\u003Ca href=\u0022https:\/\/doi.org\/10.1002\/jmri.29490\u0022\u003Ehttps:\/\/doi.org\/10.1002\/jmri.29490\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EPrior Work\u003C\/strong\u003E\u0026nbsp;\u003Cbr\u003E\u003Ca href=\u0022https:\/\/research.gatech.edu\/robotically-precise-diagnostics-and-therapeutics-degenerative-disc-disorder\u0022\u003ERobotically Precise Diagnostics and Therapeutics for Degenerative Disc Disorder\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ERelated Material\u003C\/strong\u003E\u003Cbr\u003E\u003Ca href=\u0022https:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/jmri.29492\u0022\u003EEditorial for \u201cDeep Learning-Enabled Automated Quality Control for Liver MR Elastography: Initial Results\u201d\u003C\/a\u003E\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EProfessor\u0026nbsp;\u003Ca href=\u0022https:\/\/www.biorobotics.gatech.edu\/wp\/\u0022\u003EJun Ueda\u003C\/a\u003E in the George W. Woodruff School of Mechanical Engineering and robotics Ph.D. student Heriberto Nieves, working with a team from the Icahn School of Medicine at Mount Sinai, have successfully applied deep learning techniques for accurate, automated quality control image assessment.\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"With increasing work demands and workforce shortages, providing an accurate, automated way to classify image quality will create a streamlined approach and reduce the need for repeat scans. "}],"uid":"27863","created_gmt":"2024-07-15 19:33:24","changed_gmt":"2024-07-17 15:20:20","author":"Christa Ernst","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-07-15T00:00:00-04:00","iso_date":"2024-07-15T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"674351":{"id":"674351","type":"image","title":"Ueda MRE News","body":"\u003Cp\u003EProfessor\u0026nbsp;\u003Ca href=\u0022https:\/\/www.biorobotics.gatech.edu\/wp\/\u0022\u003EJun Ueda\u003C\/a\u003E in the George W. Woodruff School of Mechanical Engineering and robotics Ph.D. student Heriberto Nieves.\u003C\/p\u003E","created":"1721071536","gmt_created":"2024-07-15 19:25:36","changed":"1721071827","gmt_changed":"2024-07-15 19:30:27","alt":"Professor\u00a0Jun Ueda in the George W. Woodruff School of Mechanical Engineering and robotics Ph.D. student Heriberto Nieves.","file":{"fid":"257851","name":"Heriberto and Ueda DL-MRE 6 half sized.png","image_path":"\/sites\/default\/files\/2024\/07\/15\/Heriberto%20and%20Ueda%20DL-MRE%206%20half%20sized.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/07\/15\/Heriberto%20and%20Ueda%20DL-MRE%206%20half%20sized.png","mime":"image\/png","size":4165537,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/07\/15\/Heriberto%20and%20Ueda%20DL-MRE%206%20half%20sized.png?itok=2FyY2iUP"}}},"media_ids":["674351"],"groups":[{"id":"142761","name":"IRIM"},{"id":"1292","name":"Parker H. Petit Institute for Bioengineering and Bioscience (IBB)"},{"id":"1188","name":"Research Horizons"}],"categories":[{"id":"138","name":"Biotechnology, Health, Bioengineering, Genetics"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"81491","name":"Institute for Robotics and Intelligent Machines (IRIM)"},{"id":"11689","name":"Institute for Bioengineeirng and Bioscience"},{"id":"594","name":"college of engineering"},{"id":"98751","name":"College of Engineering; George W. Woodruff School of Mechanical Engineering"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"9540","name":"Bioengineering and Bioscience"},{"id":"97611","name":"research news"},{"id":"188087","name":"go-irim"},{"id":"187915","name":"go-researchnews"},{"id":"192863","name":"go-ai"},{"id":"187423","name":"go-bio"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39441","name":"Bioengineering and Bioscience"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EChrista M. Ernst |\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EResearch Communications Program Manager |\u0026nbsp;\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ETopic Expertise: Robotics, Data Sciences, Semiconductor Design \u0026amp; Fab |\u0026nbsp;\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/research.gatech.edu\/\u0022 rel=\u0022noopener noreferrer\u0022 target=\u0022_blank\u0022\u003E\u003Cstrong\u003EResearch @ the Georgia Institute of Technology\u003C\/strong\u003E\u003C\/a\u003E\u003C\/p\u003E","format":"limited_html"}],"email":["christa.ernst@research.gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"675021":{"#nid":"675021","#data":{"type":"news","title":" Ph.D. Student Wins Best Paper at Robotics Conference","body":[{"value":"\u003Cp\u003EAsk a person to find a frying pan, and they will most likely go to the kitchen. Ask a robot to do the same, and you may get numerous responses, depending on how the robot is trained.\u003C\/p\u003E\u003Cp\u003ESince humans often associate objects in a home with the room they are in, Naoki Yokoyama thinks robots that navigate human environments to perform assistive tasks should mimic that reasoning.\u003C\/p\u003E\u003Cp\u003ERoboticists have employed natural language models to help robots mimic human reasoning over the past few years. However, Yokoyama, a Ph.D. student in robotics, said these models create a \u201cbottleneck\u201d that prevents agents from picking up on visual cues such as room type, size, d\u00e9cor, and lighting.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EYokoyama presented a new framework for semantic reasoning at the Institute of Electrical and Electronic Engineers (IEEE) \u003Ca href=\u0022https:\/\/www.ieee-ras.org\/conferences-workshops\/fully-sponsored\/icra\u0022\u003E\u003Cstrong\u003EInternational Conference on Robotics and Automation\u003C\/strong\u003E\u003C\/a\u003E (ICRA) last month in Yokohama, Japan. ICRA is the world\u2019s largest robotics conference.\u003C\/p\u003E\u003Cp\u003EYokoyama earned a best paper award in the Cognitive Robotics category with his \u003Ca href=\u0022http:\/\/naoki.io\/portfolio\/vlfm\u0022\u003E\u003Cstrong\u003EVision-Language Frontier Maps (VLFM) proposal\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003EAssistant Professor Sehoon Ha and Associate Professor Dhruv Batra from the School of Interactive Computing advised Yokoyama on the paper. Yokoyama authored the paper while interning at the Boston Dynamics\u2019 \u003Ca href=\u0022https:\/\/theaiinstitute.com\/\u0022\u003E\u003Cstrong\u003EAI Institute\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003E\u201cI think the cognitive robotic category represents a significant portion of submissions to ICRA nowadays,\u201d said Yokoyama, whose family is from Japan. \u201cI\u2019m grateful that our work is being recognized among the best in this field.\u201d\u003C\/p\u003E\u003Cp\u003EInstead of natural language models, Yokoyama used a renowned vision-language model called BLIP-2 and tested it on a Boston Dynamics \u201cSpot\u201d robot in home and office environments.\u003C\/p\u003E\u003Cp\u003E\u201cWe rely on models that have been trained on vast amounts of data collected from the web,\u201d Yokoyama said. \u201cThat allows us to use models with common sense reasoning and world knowledge. It\u2019s not limited to a typical robot learning environment.\u201d\u003C\/p\u003E\u003Ch6\u003E\u003Cstrong\u003EWhat is Blip-2?\u003C\/strong\u003E\u003C\/h6\u003E\u003Cp\u003EBLIP-2 matches images to text by assigning a score that evaluates how well the user input text describes the content of an image. The model removes the need for the robot to use object detectors and language models.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EInstead, the robot uses BLIP-2 to extract semantic values from RGB images with a text prompt that includes the target object.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EBLIP-2 then teaches the robot to recognize the room type, distinguishing the living room from the bathroom and the kitchen. The robot learns to associate certain objects with specific rooms where it will likely find them.\u003C\/p\u003E\u003Cp\u003EFrom here, the robot creates a value map to determine the most likely locations for a target object, Yokoyama said.\u003C\/p\u003E\u003Cp\u003EYokoyama said this is a step forward for intelligent home assistive robots, enabling users to find objects \u2014 like missing keys \u2014 in their homes without knowing an item\u2019s location.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIf you\u2019re looking for a pair of scissors, the robot can automatically figure out it should head to the kitchen or the office,\u201d he said. \u201cEven if the scissors are in an unusual place, it uses semantic reasoning to work through each room from most probable location to least likely.\u201d\u003C\/p\u003E\u003Cp\u003EHe added that the benefit of using a VLM instead of an object detector is that the robot will include visual cues in its reasoning.\u003C\/p\u003E\u003Cp\u003E\u201cYou can look at a room in an apartment, and there are so many things an object detector wouldn\u2019t tell you about that room that would be informative,\u201d he said. \u201cYou don\u2019t want to limit yourself to a textual description or a list of object classes because you\u2019re missing many semantic visual cues.\u201d\u003C\/p\u003E\u003Cp\u003EWhile other VLMs exist, Yokoyama chose BLIP-2 because the model:\u003C\/p\u003E\u003Cul\u003E\u003Cli\u003EAccepts any text length and isn\u2019t limited to a small set of objects or categories.\u003C\/li\u003E\u003Cli\u003EAllows the robot to be pre-trained on vast amounts of data collected from the internet.\u003C\/li\u003E\u003Cli\u003EHas proven results that enable accurate image-to-text matching.\u003C\/li\u003E\u003C\/ul\u003E\u003Ch6\u003E\u003Cstrong\u003EHome, Office, and Beyond\u003C\/strong\u003E\u003C\/h6\u003E\u003Cp\u003EYokoyama also tested the Spot robot to navigate a more challenging office environment. Office spaces tend to be more homogenous and harder to distinguish from one another than rooms in a home.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cWe showed a few cases in which the robot will still work,\u201d Yokoyama said. \u201cWe tell it to find a microwave, and it searches for the kitchen. We tell it to find a potted plant, and it moves toward an area with windows because, based on what it knows from BLIP-2, that\u2019s the most likely place to find the plant.\u201d\u003C\/p\u003E\u003Cp\u003EYokoyama said as VLM models continue to improve, so will robot navigation. The increase in the number of VLM models has caused robot navigation to steer away from traditional physical simulations.\u003C\/p\u003E\u003Cp\u003E\u201cIt shows how important it is to keep an eye on the work being done in computer vision and natural language processing for getting robots to perform tasks more efficiently,\u201d he said. \u201cThe current research direction in robot learning is moving toward more intelligent and higher-level reasoning. These foundation models are going to play a key role in that.\u201d\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003ETop photo by Kevin Beasley\/College of Computing.\u003C\/em\u003E\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ERoboticists have employed natural language models to help robots mimic human reasoning over the past few years. However, Yokoyama, a Ph.D. student in robotics, said these models create a \u201cbottleneck\u201d that prevents agents from picking up on visual cues such as room type, size, d\u00e9cor, and lighting.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EYokoyama presented a new framework for semantic reasoning at the Institute of Electrical and Electronic Engineers (IEEE) \u003Ca href=\u0022https:\/\/www.ieee-ras.org\/conferences-workshops\/fully-sponsored\/icra\u0022\u003E\u003Cstrong\u003EInternational Conference on Robotics and Automation\u003C\/strong\u003E\u003C\/a\u003E (ICRA) last month in Yokohama, Japan. ICRA is the world\u2019s largest robotics conference.\u003C\/p\u003E\u003Cp\u003EYokoyama earned a best paper award in the Cognitive Robotics category with his \u003Ca href=\u0022http:\/\/naoki.io\/portfolio\/vlfm\u0022\u003E\u003Cstrong\u003EVision-Language Frontier Maps (VLFM) proposal\u003C\/strong\u003E\u003C\/a\u003E.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Yokoyama presented a new framework for semantic reasoning for robots at the IEEE International Conference on Robotics and Automation, where he won best paper in the Cognitive Robotics category."}],"uid":"36530","created_gmt":"2024-06-06 14:26:46","changed_gmt":"2024-06-06 14:40:32","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-06-06T00:00:00-04:00","iso_date":"2024-06-06T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"674146":{"id":"674146","type":"image","title":"208A9469.jpg","body":null,"created":"1717684031","gmt_created":"2024-06-06 14:27:11","changed":"1717684031","gmt_changed":"2024-06-06 14:27:11","alt":"Three students kneeling around a spot robot","file":{"fid":"257622","name":"208A9469.jpg","image_path":"\/sites\/default\/files\/2024\/06\/06\/208A9469.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/06\/06\/208A9469.jpg","mime":"image\/jpeg","size":153459,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/06\/06\/208A9469.jpg?itok=E1iUHz3L"}}},"media_ids":["674146"],"groups":[{"id":"50876","name":"School of Interactive Computing"},{"id":"1188","name":"Research Horizons"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"135","name":"Research"},{"id":"152","name":"Robotics"},{"id":"193157","name":"Student Honors and Achievements"},{"id":"8862","name":"Student Research"}],"keywords":[{"id":"192863","name":"go-ai"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"10199","name":"Daily Digest"},{"id":"187915","name":"go-researchnews"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ENathan Deen\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003ESchool of Interactive Computing\u003C\/p\u003E","format":"limited_html"}],"email":["ndeen6@gatech.edu"],"slides":[],"orientation":[],"userdata":""}},"674367":{"#nid":"674367","#data":{"type":"news","title":"Why Can\u2019t Robots Outrun Animals?","body":[{"value":"\u003Cp\u003ERobots that can run, jump, and even talk have shifted from the stuff of science fiction to reality in the past few decades. Yet even in robots specialized for specific movements like running, animals are still able to outmaneuver the most advanced robotic developments.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Tech\u2019s \u003Ca href=\u0022https:\/\/physics.gatech.edu\/user\/simon-sponberg\u0022 rel=\u0022noreferrer noopener\u0022 target=\u0022_blank\u0022\u003ESimon Sponberg\u003C\/a\u003E recently collaborated with researchers at the \u003Ca href=\u0022https:\/\/www.washington.edu\/\u0022 rel=\u0022noreferrer noopener\u0022 target=\u0022_blank\u0022\u003EUniversity of Washington\u003C\/a\u003E, \u003Ca href=\u0022https:\/\/www.sfu.ca\/\u0022 rel=\u0022noreferrer noopener\u0022 target=\u0022_blank\u0022\u003ESimon Fraser University\u003C\/a\u003E, \u003Ca href=\u0022https:\/\/www.colorado.edu\/\u0022 rel=\u0022noreferrer noopener\u0022 target=\u0022_blank\u0022\u003EUniversity of Colorado Boulder\u003C\/a\u003E, and \u003Ca href=\u0022https:\/\/www.sri.com\/\u0022 rel=\u0022noreferrer noopener\u0022 target=\u0022_blank\u0022\u003EStanford Research Institute\u003C\/a\u003E to answer one deceptively complex question: Why can\u2019t robots outrun animals?\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThis work is about trying to understand how, despite have some really amazing robots, there still seems to be a gulf between the capabilities of animal movement and what we can engineer,\u201d says Sponberg, who is Dunn Family Associate Professor in the \u003Ca href=\u0022https:\/\/physics.gatech.edu\/\u0022 rel=\u0022noreferrer noopener\u0022 target=\u0022_blank\u0022\u003ESchool of Physics\u003C\/a\u003E and \u003Ca href=\u0022https:\/\/biosciences.gatech.edu\/\u0022 rel=\u0022noreferrer noopener\u0022 target=\u0022_blank\u0022\u003ESchool of Biological Sciences\u003C\/a\u003E.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERecently published in \u003Cem\u003E\u003Ca href=\u0022https:\/\/www.science.org\/doi\/10.1126\/scirobotics.adi9754\u0022 rel=\u0022noreferrer noopener\u0022 target=\u0022_blank\u0022\u003EScience Robotics\u003C\/a\u003E,\u003C\/em\u003E their study systematically examines a suite of biological and robotic runners to figure out how to further advance our best robotic designs.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cIn robotics design we are often very component focused \u2014 we are used to having to establish specifications for the parts that we need and then finding the best component solution,\u201d said Sponberg, who also serves on the executive committee for Georgia Tech\u0027s \u003Ca href=\u0022neuro.gatech.edu\u0022\u003ENeuro Next Initiative\u003C\/a\u003E. \u201cThis is of course not how evolution works. We wondered if we systematically analyzed the performance of animals in the same component way that we design robots, if we might see an obvious gap.\u201d\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe gap turns out not to be in the function of individual robotic components, but rather the ability of those components to work together in the seamless way biological components do, highlighting a field of opportunity for new research in robotic development.\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u201cThis means that the frontier is not necessarily figuring out how to design better motors or sensors or controllers,\u201d says Sponberg, \u201cbut rather how to integrate them together \u2014 this is where biology really excels.\u201d\u0026nbsp;\u003C\/p\u003E\r\n\r\n\u003Ch4\u003E\u003Cstrong\u003ERead more about man versus machine and the future of bioinspired robotics \u003Ca href=\u0022https:\/\/www.ece.uw.edu\/spotlight\/why-animals-can-outrun-robots\/\u0022\u003Ehere\u003C\/a\u003E.\u003C\/strong\u003E\u003C\/h4\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":[{"value":"Georgia Tech Researcher Collaborates to Advance Bioinspired Design"}],"field_summary":[{"value":"\u003Cp\u003EGeorgia Tech Researcher Simon Sponberg collaborates to ask why robotic advancements have yet to outpace animals \u2014 and look at what we can learn from biology to engineer new robotic designs.\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech Researcher Simon Sponberg collaborates to ask why robotic advancements have yet to outpace animals \u2014 and look at what we can learn from biology to engineer new robotic designs."}],"uid":"35575","created_gmt":"2024-04-24 19:31:58","changed_gmt":"2024-05-02 20:25:23","author":"adavidson38","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2024-05-02T00:00:00-04:00","iso_date":"2024-05-02T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"673838":{"id":"673838","type":"image","title":"mCLARI_Spider.jpg","body":"\u003Cp\u003ECan this small robot outrun a spider? Photo Credit: Animal Inspired Movement and Robotics Lab, CU Boulder.\u003C\/p\u003E\r\n","created":"1713987354","gmt_created":"2024-04-24 19:35:54","changed":"1713987354","gmt_changed":"2024-04-24 19:35:54","alt":"Can this small robot outrun a spider? Photo Credit: Animal Inspired Movement and Robotics Lab, CU Boulder.","file":{"fid":"257286","name":"mCLARI_Spider.jpg","image_path":"\/sites\/default\/files\/2024\/04\/24\/mCLARI_Spider.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2024\/04\/24\/mCLARI_Spider.jpg","mime":"image\/jpeg","size":3554930,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2024\/04\/24\/mCLARI_Spider.jpg?itok=wDPfHkwN"}}},"media_ids":["673838"],"related_links":[{"url":"https:\/\/research.gatech.edu\/georgia-tech-partners-15m-nsf-grant-explore-muscle-dynamics","title":"Georgia Tech Partners on $15M NSF Grant to Explore Muscle Dynamics"},{"url":"https:\/\/research.gatech.edu\/edge-georgia-tech-professors-awarded-curci-grants-emerging-bio-research-0","title":"On The Edge: Georgia Tech Professors Awarded Curci Grants for Emerging Bio Research"},{"url":"https:\/\/research.gatech.edu\/feature\/ultrafast-flight","title":"How Insects Evolved to Ultrafast Flight (And Back)"}],"groups":[{"id":"66220","name":"Neuro"},{"id":"1292","name":"Parker H. Petit Institute for Bioengineering and Bioscience (IBB)"},{"id":"1188","name":"Research Horizons"},{"id":"1278","name":"College of Sciences"},{"id":"1275","name":"School of Biological Sciences"},{"id":"126011","name":"School of Physics"}],"categories":[{"id":"138","name":"Biotechnology, Health, Bioengineering, Genetics"},{"id":"146","name":"Life Sciences and Biology"},{"id":"150","name":"Physics and Physical Sciences"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"188087","name":"go-irim"},{"id":"172970","name":"go-neuro"},{"id":"192253","name":"cos-neuro"},{"id":"187423","name":"go-bio"},{"id":"187915","name":"go-researchnews"},{"id":"181469","name":"bioinspired design"},{"id":"193266","name":"cos-research"}],"core_research_areas":[{"id":"193656","name":"Neuro Next Initiative"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022mailto:audra.davidson@research.gatech.edu\u0022\u003EAudra Davidson\u003C\/a\u003E\u003C\/strong\u003E\u003Cbr \/\u003E\r\nResearch Communications Program Manager\u003Cbr \/\u003E\r\nNeuro Next Initiative\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["audra.davidson@research.gatech.edu"],"slides":[],"orientation":[],"userdata":""}}}