PhD Proposal by John Lambert


Title: Deep Learning for Building and Validating Geometric and Semantic Maps

Date: Thursday, September 23, 2021
Time: 6:30 am (EDT)
Location (Virtual): https://bluejeans.com/746169224/5701

John Lambert
Ph.D. Student in Computer Science
School of Interactive Computing, College of Computing

Georgia Institute of Technology


Dr. James Hays (Co-Advisor), School of Interactive Computing, Georgia Institute of Technology

Dr. Frank Dellaert (Co-Advisor), School of Interactive Computing, Georgia Institute of Technology

Dr. Zsolt Kira, School of Interactive Computing, Georgia Institute of Technology

Dr. Cedric Pradalier, School of Interactive Computing, Georgia Institute of Technology, Lorraine

Dr. Simon Lucey, Australian Institute for Machine Learning, University of Adelaide


Mapping the world is the key to making spatial artificial intelligence a reality in the near future. Spatial AI, or embodied intelligence that allows robots to operate autonomously in 3D space, enables safety-critical awareness of a robot's surroundings. However, current methods for building and validating geometric maps are limited by their high computational runtimes and lack of spatial completeness under sparse views. Likewise, current methods for creating and verifying semantic maps are also limited, especially by their inaccuracy in out-of-domain environments and their inability to recognize when real-world changes have rendered a map out-of-date.


This dissertation research introduces new algorithms for building and validating geometric and semantic maps using deep learning, with three original contributions. First, I introduce a new deep learning-based algorithm that enables creating complete and accurate 2D geometric maps of indoor environments under wide camera baselines with little overlap. Second, I introduce a deep learning-based system for global structure from motion (SfM), alleviating the high levels of noise that have plagued the accuracy of global SfM techniques for years and led to a reliance upon slow, incremental SfM algorithms. Finally, I introduce new algorithms for building accurate semantic maps with a single model in multiple, highly diverse domains, and for accurately detecting real-world changes that render portions of these high-definition (HD) maps obsolete. Along the way, in order to satisfy the demands of these data-driven, deep learning approaches, I contribute three large-scale datasets toward solving these problems: the Argoverse Datasets, the MSeg Dataset, and the Trust but Verify (TbV) Dataset.



