PhD Defense by Chaojian Li
Title: Enabling Ubiquitous 3D Intelligence via Multi-Granular Algorithm-Hardware Synergy
Date: Monday, April 28, 2025
Time: 9:00 AM - 11:00 AM ET
Location:
- In-Person: Klaus 1212, Klaus Advanced Computing Building
- Online: https://gatech.zoom.us/j/93627642270?pwd=7dLqCyY9TiEku7DJ0QXX8WZUYCGDul.1
Chaojian Li
Ph.D. Candidate
School of Computer Science, College of Computing
Georgia Institute of Technology
Committee:
Dr. Yingyan (Celine) Lin (Advisor), College of Computing, Georgia Institute of Technology
Dr. Prasanna Balaprakash, Oak Ridge National Laboratory
Dr. Greg Eisenhauer, College of Computing, Georgia Institute of Technology
Dr. Josiah Hester, College of Computing, Georgia Institute of Technology
Dr. Hyesoon Kim, College of Computing, Georgia Institute of Technology
Dr. Ling Liu, College of Computing, Georgia Institute of Technology
Abstract:
3D intelligence is emerging as one of the next frontiers in artificial intelligence, extending beyond text and image processing to enable richer and more immersive experiences. However, realizing this promise poses significant computational and memory challenges, particularly for real-time applications on resource-constrained edge devices. Achieving ubiquitous 3D intelligence requires overcoming challenges related to efficiency, accessibility, and adaptability—enabling “every application on every device, all at once.”
To address these challenges, my dissertation demonstrates how multi-granular algorithm–hardware co-design, combined with the development of supporting research infrastructure, can overcome these limitations. First, I introduce Instant-3D, which addresses the efficiency challenge. Instant-3D is a hardware–algorithm co-design that optimizes both memory usage and access regularity for bottleneck operators, enabling instant on-device 3D reconstruction. Next, I present MixRT, which tackles the accessibility challenge. MixRT leverages operator-level heterogeneity to fully utilize commonly available hardware resources on modern GPUs, enabling real-time rendering across a wide range of edge devices, from mobile phones to laptops. Then, I introduce Uni-Render, which targets the adaptability challenge. Uni-Render is a unified neural rendering accelerator that dynamically adjusts dataflows to meet diverse rendering metric requirements, achieving real-time rendering across five different models with a single accelerator consuming approximately five watts. Finally, I briefly discuss my contributions to building the supporting research infrastructure and conclude with a vision for the future of ubiquitous 3D intelligence. I also explore how the proposed innovations and infrastructure can be extended beyond 3D intelligence to advance more efficient, accessible, and adaptable AI.