{"685002":{"#nid":"685002","#data":{"type":"news","title":"Two IC Faculty Receive NSF CAREER for Robotics and AR\/VR Initiatives","body":[{"value":"\u003Cp\u003EPractice may not make perfect for robots, but new machine learning models from Georgia Tech are allowing them to improve their skillsets to more effectively assist humans in the real world.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/faculty.cc.gatech.edu\/~danfei\/\u0022\u003E\u003Cstrong\u003EDanfei Xu\u003C\/strong\u003E\u003C\/a\u003E, an assistant professor in \u003Ca href=\u0022https:\/\/ic.gatech.edu\/\u0022\u003E\u003Cstrong\u003EGeorgia Tech\u2019s School of Interactive Computing\u003C\/strong\u003E\u003C\/a\u003E, is introducing new models that provide robots with \u201con-the-job\u201d training.\u003C\/p\u003E\u003Cp\u003EThe National Science Foundation (NSF) awarded Xu its CAREER award given to early career faculty. The award will enable Xu to expand his research and refine his models, which could accelerate the process of robot deployment and alleviate manufacturers from the burden of achieving perfection.\u003C\/p\u003E\u003Cp\u003E\u201cThe main problem we\u2019re trying to tackle is how to allow robots to learn on the job,\u201d Xu said. \u201cHow should it self-improve based on the performance or the new requirements or new user preferences in each home or working environment? You cannot expect a robot manufacturer to program all of that.\u003C\/p\u003E\u003Cp\u003E\u201cThe challenging thing about robotics is that the robot must get feedback from the physical environment. It must try to solve a problem to understand the limits of its abilities so it can decide how to improve its own performance.\u201d\u003C\/p\u003E\u003Cp\u003EAs with humans, Xu views practice as the most effective way for a robot to improve a skill. 
His models train the robot to identify the point at which it failed in its task performance.\u003C\/p\u003E\u003Cp\u003E\u201cIt identifies that skill and sets up an environment where it can practice,\u201d he said. \u201cIf it needs to improve opening a drawer, it will navigate itself to the drawer and practice opening it.\u201d\u003C\/p\u003E\u003Cp\u003EThe models allow the robot to split tasks into smaller parts and evaluate its own skill level using reward functions. Cooking dinner, for example, can be divided into steps like turning on the stove and opening the fridge, which are necessary to achieve the overall goal.\u003C\/p\u003E\u003Cp\u003E\u201cPlanning is a complex problem because you must predict what\u2019s going to happen in the physical world,\u201d Xu said. \u201cWe use machine learning techniques that our group has developed over the past two years, using generative models to generate possible futures. They\u2019re very good at modeling long-horizon phenomena.\u003C\/p\u003E\u003Cp\u003E\u201cThe robot knows when it\u2019s failed because there\u2019s a value that tells it how well it performed the task and whether it received its reward. While we don\u2019t know how to tell the robot why it failed, we have ways for it to improve its skills based on that measurement.\u201d\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EOne of the biggest barriers that keeps many robots from being made available for public use is the pressure on manufacturers to make the robot as close to perfect as possible at deployment. Xu said it\u2019s more practical to accept that robots will have learning gaps that need to be filled and to implement more efficient real-world learning models.\u003C\/p\u003E\u003Cp\u003E\u201cWe work under the pressure of getting everything correct before deployment,\u201d he said. \u201cWe need to meet the basic safety requirements, but in terms of competence, it is difficult to get that perfect at deployment. 
This takes some of the pressure off because it will be able to self-adapt.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EVirtual Workspace for Data Workers\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/ivi.cc.gatech.edu\/people.html\u0022\u003E\u003Cstrong\u003EYalong Yang\u003C\/strong\u003E\u003C\/a\u003E, another assistant professor in the School of IC, also received the NSF CAREER Award for a research proposal that will design augmented and virtual reality (AR\/VR) workspaces for data workers.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIn 10 years, I envision everyone will use AR\/VR in their office, and it will replace their laptop or their monitor,\u201d Yang said.\u003C\/p\u003E\u003Cp\u003EYang said he is also working with Google on the project and using Google Gemini to bring conventional applications to immersive space, with data tools being the most complicated systems to redesign for immersive environments.\u003C\/p\u003E\u003Cp\u003EThe immersive workspace and interface will also enable teams of data workers to collaborate and share their data in real time.\u003C\/p\u003E\u003Cp\u003E\u201cI want to support the end-to-end process,\u201d Yang said. \u201cWe have visualization tools for data, but it\u2019s not enough. Data science is a pipeline \u2014 from collecting data to processing, visualizing, modeling and then communicating. If you only support one, people will need to switch to other platforms for the other steps.\u201d\u003C\/p\u003E\u003Cp\u003EYang also noted that prior research has shown that VR can enhance cognitive abilities, such as memory and attention, and support multitasking. The results of his project could help maximize worker efficiency without leaving them feeling strained.\u003C\/p\u003E\u003Cp\u003E\u201cWe all have a cognitive limit in our working memory. Using AR\/VR can increase those limits and help us process more information. 
We can expand people\u2019s spatial ability to help them build a better mental model of the data presented to them.\u201d\u003C\/p\u003E\u003Cp\u003EYang was also recently named a \u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/news\/tiktok-photoshop-generative-ai-could-bring-millions-apps-3d-reality\u0022\u003E\u003Cstrong\u003E2025 Google Research Scholar\u003C\/strong\u003E\u003C\/a\u003E as he seeks to build a new artificial intelligence (AI) tool that converts mobile apps into 3D immersive environments.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ETwo assistant professors in Georgia Tech\u2019s School of Interactive Computing \u2014 Danfei Xu and Yalong Yang \u2014 have each won NSF CAREER Awards for their respective research in robotics and AR\/VR initiatives. Xu\u2019s work will develop machine learning models that let robots learn \u201con the job,\u201d adapting from feedback and failure in real-world environments rather than being perfectly preprogrammed. 
Yang\u2019s project aims to build immersive AR\/VR workspaces to support data workers across the full data pipeline, including a collaboration with Google to bring conventional apps into immersive environments.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Two Georgia Tech professors, Danfei Xu and Yalong Yang, have received the prestigious NSF CAREER award for their research in robotics, which focuses on teaching robots to self-improve, and in augmented and virtual reality (AR\/VR), which aims to create immersive workspaces for data workers."}],"uid":"36530","created_gmt":"2025-09-17 18:24:23","changed_gmt":"2025-09-17 18:28:51","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-09-17T00:00:00-04:00","iso_date":"2025-09-17T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"678055":{"id":"678055","type":"image","title":"ICRA-2025_86A9079-Enhanced-NR.jpg","body":null,"created":"1758133475","gmt_created":"2025-09-17 18:24:35","changed":"1758133475","gmt_changed":"2025-09-17 18:24:35","alt":"Danfei Xu","file":{"fid":"262033","name":"ICRA-2025_86A9079-Enhanced-NR.jpg","image_path":"\/sites\/default\/files\/2025\/09\/17\/ICRA-2025_86A9079-Enhanced-NR.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/09\/17\/ICRA-2025_86A9079-Enhanced-NR.jpg","mime":"image\/jpeg","size":132463,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/09\/17\/ICRA-2025_86A9079-Enhanced-NR.jpg?itok=Dt9A0bu8"}}},"media_ids":["678055"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"153","name":"Computer Science\/Information Technology and Security"},{"id":"152","name":"Robotics"}],"keywords":[{"id":"191934","name":"National Science Foundation (NSF)"},{"id":"7842","name":"NSF CAREER 
Award"},{"id":"188776","name":"go-research"},{"id":"9153","name":"Research Horizons"},{"id":"145251","name":"virtual reality"},{"id":"1597","name":"Augmented Reality"}],"core_research_areas":[{"id":"39501","name":"People and Technology"},{"id":"39521","name":"Robotics"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}}}