PhD Defense by Nathaniel Moore Glaser

Dear faculty members and fellow students,

 

You are cordially invited to attend my dissertation defense on Friday, November 17th.

 

Title: Collaborative Perception and Planning for Multi-View and Multi-Robot Systems

 

Date: Friday, November 17th, 2023

Time: 1:00 PM - 3:00 PM EST

Location: Coda C1315 ("Grant Park") or via Zoom

 

Nathaniel Moore Glaser

Robotics PhD Candidate

School of Electrical and Computer Engineering

Georgia Institute of Technology

 

Committee:

Dr. Zsolt Kira (Advisor) - School of Interactive Computing, Georgia Institute of Technology

Dr. James Hays - School of Interactive Computing, Georgia Institute of Technology

Dr. Patricio Vela - School of Electrical and Computer Engineering, Georgia Institute of Technology

Dr. Pratap Tokekar - Department of Computer Science, University of Maryland

Dr. Milutin Pajovic - Senior Research Scientist, Analog Devices

 

Abstract:

The field of robotics has historically focused on egocentric, single-agent systems.  However, robots in such systems are susceptible to single points of failure.  For instance, a single sensor failure or adverse environmental condition can render an isolated robot "blind".  On the other hand, robots in multi-agent systems have the opportunity to overcome potentially dangerous blind spots, via communication and collaboration with their peers.  In this dissertation, we address these communication-critical settings with Collaborative Perception and Planning for Multi-View and Multi-Robot Systems.

 

First, we develop several learned communication and spatial registration schemes for immediate collaboration.  These schemes allow us to efficiently communicate and align visual observations between moving agents at a single instant in time.  We demonstrate improved egocentric semantic segmentation accuracy for a swarm of obstruction-prone aerial quadrotors.  Second, we extend our methods to short-term collaboration, where robots compress short sequences of observations into easily communicable representations, such as maps and costmaps.  In addition to conserving bandwidth, our methods (1) reduce task duration for multi-robot mapping and (2) improve safety for planning, especially in self-driving settings where egocentric vision has potentially fatal blind spots.  Third, we stress-test our methods on large-scale, long-term collaboration.  In the setting of production-scale robotic farming, we show how collaborative perception can handle large numbers of heterogeneous robots by establishing correspondences among ambiguous data over long stretches of time.  We hope that our work on collaborative perception will help transition robotics from single-agent systems to robust and efficient multi-agent capabilities.
