{"683581":{"#nid":"683581","#data":{"type":"news","title":"From TikTok to Photoshop: Generative AI Could Bring Millions of Apps Into 3D Reality","body":[{"value":"\u003Cp\u003EThe idea of people experiencing their favorite mobile apps as immersive 3D environments took a step closer to reality with a new Google-funded research iniative at Georgia Tech.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EA new approach proposed by Tech researcher Yalong Yang uses generative artificial intelligence (GenAI) technologies to convert almost any mobile or web-based app into a 3D environment.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EThat includes application software programs from Microsoft and Adobe as well as any social media (Tiktok), entertainment (Spotify), banking (PayPal), or food service app (Uber Eats) and everything in between.\u003C\/p\u003E\u003Cp\u003EYang aims to make the 3D environments compatible with augmented and virtual reality (AR\/VR) headsets and smart glasses. He believes his research could be a breakthrough in spatial computing and change how humans interact with their favorite apps and computer systems in general.\u003C\/p\u003E\u003Cp\u003E\u201cWe\u2019ll be able to turn around and see things we want, and we can grab them and put them together,\u201d said Yang, an assistant professor in the School of Interactive Computing. \u201cWe\u2019ll no longer use a mouse to scroll or the keyboard to type, but we can do more things like physical navigation.\u201d\u003C\/p\u003E\u003Cp\u003EYang\u2019s proposal recently earned him recognition as a 2025 Google Research Scholar. Along with converting popular social apps, his platform will be able to instantly render Photoshop, MS Office, and other workplace applications in 3D for AR\/VR devices.\u003C\/p\u003E\u003Cp\u003E\u201cWe have so many applications installed in our machines to complete all the various types of work we do,\u201d he said. 
\u201cWe use Photoshop for photo editing, Premiere Pro for video editing, Word for writing documents. We want to create an AR\/VR ecosystem that has all these things available in one interface with all apps working cohesively to support multitasking.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EFilling The Gap With AI\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EJust as Google\u2019s Veo and OpenAI\u2019s Sora use generative AI to create video clips, Yang believes it can be used to create interactive, immersive environments for any Android or Apple app.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cA critical gap in AR\/VR is that we do not have all those existing applications, and redesigning all those apps will take forever,\u201d he said. \u201cIt\u2019s urgent that we have a complete ecosystem in VR to enable us to do the work we need to do. Instead of recreating everything from scratch, we need a way to convert these applications into immersive formats.\u201d\u003C\/p\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cdiv\u003E\u003Cp\u003EThe Google Play Store boasts 3.5 million apps for Android devices, while the Apple App Store includes 1.8 million apps for iOS users.\u003C\/p\u003E\u003Cp\u003EMeanwhile, there are fewer than 10,000 apps available on the latest Meta Quest 3 headset, leaving a gap of millions of apps that will need 3D conversion.\u003C\/p\u003E\u003Cp\u003E\u201cWe envision a one-click app, and the (Android Package Kit) file output will be a Meta APK that you can install on your Meta Quest 3,\u201d he said.\u003C\/p\u003E\u003Cp\u003EYang said major tech companies like Apple have the resources to redesign their apps into 3D formats. However, small- to mid-sized companies that have created apps either do not have that ability or would take years to do so.\u003C\/p\u003E\u003Cp\u003EThat\u2019s where generative AI can help. 
Yang plans to use it to convert source code from web-based and mobile apps into WebXR.\u003C\/p\u003E\u003Cp\u003EWebXR is a set of application programming interfaces (APIs) that enables developers to create AR\/VR experiences within web browsers.\u003C\/p\u003E\u003Cp\u003E\u201cWe start with web-based content,\u201d he said. \u201cA lot of things are already based on the web, so we want to convert that user interface into WebXR.\u201d\u003C\/p\u003E\u003Ch4\u003E\u003Cstrong\u003EBuilding New Worlds\u003C\/strong\u003E\u003C\/h4\u003E\u003Cp\u003EThe process for converting mobile apps would be similar.\u003C\/p\u003E\u003Cp\u003E\u201cAndroid uses an XML description file to define its user-interface (UI) elements. It\u2019s very much like HTML on a web page. We believe we can use that as our input and map the elements to their desired location in a 3D environment. AI is great at translating one language to another \u2014 JavaScript to C-sharp, for example \u2014 so that can help us in this process.\u201d\u003C\/p\u003E\u003Cp\u003EIf generative AI can create environments, the next step would be to create a seamless user experience.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u201cIn a normal desktop or mobile application, we can only see one thing at a time, and it\u2019s the same for a lot of VR headsets with one application occupying everything. To live in a multi-task environment, we can\u2019t just focus on one thing because we need to keep switching our tasks, so how do we break all the elements down and let them float around and create a spatial view of them surrounding the user?\u201d\u003C\/p\u003E\u003Cp\u003EAlong with Assistant Professor Cindy Xiong, Yang is one of two researchers in the School of IC to be named a 2025 Google Research Scholar.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EFour researchers from the College of Computing have received the award. 
The other two are Ryan Shandler from the School of Cybersecurity and Privacy and Victor Fung from the School of Computational Science and Engineering.\u003C\/p\u003E\u003C\/div\u003E\u003C\/div\u003E\u003C\/div\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EA new Google-funded research project at Georgia Tech, led by Assistant Professor Yalong Yang, is using generative AI to convert existing mobile and web apps into 3D environments. This initiative aims to bridge the \u0022critical gap\u0022 in AR\/VR ecosystems by allowing millions of apps to be adapted for headsets without a lengthy redesign process. The goal is to create a seamless, multitasking environment where all apps can work cohesively in a single interface, transitioning from traditional mouse and keyboard interactions to physical navigation.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"A new Google-funded research project at Georgia Tech is using generative AI to convert millions of existing mobile and web apps into 3D experiences for augmented and virtual reality."}],"uid":"36530","created_gmt":"2025-08-06 14:17:28","changed_gmt":"2025-08-06 14:23:34","author":"Nathan Deen","boilerplate_text":"","field_publication":"","field_article_url":"","location":"Atlanta, GA","dateline":{"date":"2025-08-06T00:00:00-04:00","iso_date":"2025-08-06T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"677592":{"id":"677592","type":"image","title":"AdobeStock_628967696_Editorial_Use_Only.jpeg","body":null,"created":"1754489856","gmt_created":"2025-08-06 14:17:36","changed":"1754489856","gmt_changed":"2025-08-06 
14:17:36","alt":"apps","file":{"fid":"261505","name":"AdobeStock_628967696_Editorial_Use_Only.jpeg","image_path":"\/sites\/default\/files\/2025\/08\/06\/AdobeStock_628967696_Editorial_Use_Only.jpeg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2025\/08\/06\/AdobeStock_628967696_Editorial_Use_Only.jpeg","mime":"image\/jpeg","size":113784,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2025\/08\/06\/AdobeStock_628967696_Editorial_Use_Only.jpeg?itok=11V_kbBq"}}},"media_ids":["677592"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"1188","name":"Research Horizons"},{"id":"50876","name":"School of Interactive Computing"}],"categories":[{"id":"135","name":"Research"}],"keywords":[{"id":"194701","name":"go-resarchnews"},{"id":"192863","name":"go-ai"},{"id":"9153","name":"Research Horizons"},{"id":"187812","name":"artificial intelligence (AI)"},{"id":"192390","name":"generative AI"},{"id":"1597","name":"Augmented Reality"},{"id":"145251","name":"virtual reality"},{"id":"34741","name":"mobile app"},{"id":"167543","name":"social media"},{"id":"190091","name":"Google AI"},{"id":"184554","name":"Google Research Award"},{"id":"172013","name":"Faculty Awards and Honors"},{"id":"77571","name":"3D"}],"core_research_areas":[{"id":"193655","name":"Artificial Intelligence at Georgia Tech"},{"id":"39501","name":"People and Technology"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}}}