
PhD Proposal by Johnathan Corbin


Title: Formulating Multi-Agent Safety-Critical Control as a Feasible Resource Allocation Problem

 

Date: Friday May 15th, 2026

Time: 1:00pm - 2:00pm ET

Location: Guggenheim 244

Virtual: https://teams.microsoft.com/meet/295530221364067?p=XAsX2uMHAJ40THTLPb

 

Johnathan Corbin

Robotics Ph.D. Student

Daniel Guggenheim School of Aerospace Engineering

Georgia Institute of Technology

 

Committee:

Dr. Jonathan Rogers (Advisor)

Daniel Guggenheim School of Aerospace Engineering

Georgia Institute of Technology

 

Dr. Sarah H. Q. Li

Daniel Guggenheim School of Aerospace Engineering

Georgia Institute of Technology

 

Dr. Anirban Mazumdar

George W. Woodruff School of Mechanical Engineering

Georgia Institute of Technology 

 

Dr. Matthew Hale

School of Electrical and Computer Engineering

Georgia Institute of Technology

 

Dr. Sean Wilson

Robotics and Autonomous Systems Division

Georgia Tech Research Institute

 

 

Abstract:

In heterogeneous multi-agent systems, ensuring safety requires two distinct decisions. The first is determining what corrective action is needed. The second is assigning which agents should take it. Standard control barrier function (CBF) formulations combine these into a single centralized quadratic program, distributing corrective effort by geometric proximity with no regard for agent capability, privacy, or accumulated burden. As a result, actuator-limited agents may be unable to maintain safety, and there is no principled way to adjust how responsibility is distributed among agents.
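To illustrate the standard formulation described above (a sketch only, not the proposal's method: the single-constraint closed form and the two-agent collision-avoidance setup are assumptions), the centralized CBF quadratic program minimally perturbs a nominal joint control subject to one linear safety constraint. With a single constraint the QP has a closed-form projection, which makes the proximity-based split of corrective effort explicit:

```python
import numpy as np

def cbf_qp_joint(u_nom, a, b):
    """Closed-form solution of  min ||u - u_nom||^2  s.t.  a @ u + b >= 0.
    If the nominal control already satisfies the constraint it is returned
    unchanged; otherwise it is projected onto the constraint boundary."""
    slack = a @ u_nom + b
    if slack >= 0:
        return u_nom
    return u_nom - (slack / (a @ a)) * a

# Two planar single-integrator agents with pairwise barrier
# h = ||p1 - p2||^2 - d_min^2 and CBF condition  h_dot >= -alpha * h.
p1, p2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
d_min, alpha = 1.5, 1.0
dp = p1 - p2
h = dp @ dp - d_min**2
a = np.concatenate([2 * dp, -2 * dp])  # gradient of h_dot w.r.t. joint control
b = alpha * h
u_nom = np.concatenate([[1.0, 0.0], [-1.0, 0.0]])  # head-on nominal motion
u = cbf_qp_joint(u_nom, a, b)
# The correction splits evenly between the two agents (their constraint
# gradients have equal magnitude), regardless of actuator limits.
```

Note how the projection distributes the correction purely by the geometry of the constraint gradient; an agent with tighter actuator limits receives the same share as a more capable one, which is the failure mode the abstract identifies.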

 

This proposal reformulates multi-agent safety-critical control as a feasible resource allocation problem through a two-stage architecture. The first stage compresses multi-constraint safety into a single, dynamically feasible "safety deficit" using integral augmentation, log-sum-exponential composition, and optimal-decay CBFs. The second stage introduces "avoidance credit," an allocable resource that decouples the physics of safety from the economics of effort distribution through a modular interface. The interface admits any allocation mechanism satisfying a small set of conditions, enabling properties such as agent privacy and long-horizon fairness. As long as the allocation satisfies these conditions, the system is guaranteed to maintain safety within the actuator limits of the agents.
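The abstract does not give the exact composition or allocation mechanism, so the following is an illustrative sketch under stated assumptions: a log-sum-exp soft minimum (one common way to compose multiple barriers into a single scalar, as in the first stage) and a hypothetical capacity-proportional rule standing in for one mechanism the allocation interface might admit. All function names and the allocation rule are assumptions, not the proposal's formulation:

```python
import numpy as np

def lse_composite(h, k=10.0):
    """Smooth under-approximation of min_i h_i via log-sum-exp:
    h_c = -(1/k) * log(sum_i exp(-k * h_i)) <= min_i h_i,
    so enforcing h_c >= 0 enforces every h_i >= 0 with a single scalar
    constraint. Larger k tightens the approximation."""
    h = np.asarray(h, dtype=float)
    m = h.min()
    # shift by the minimum for numerical stability of the exponentials
    return m - np.log(np.sum(np.exp(-k * (h - m)))) / k

def allocate_credit(demand, capacity):
    """Toy allocation mechanism: split a scalar corrective demand among
    agents in proportion to actuator capacity. Feasible whenever
    demand <= capacity.sum(); each share then stays within its limit."""
    capacity = np.asarray(capacity, dtype=float)
    return demand * capacity / capacity.sum()

h_vals = [0.5, 1.2, 2.0]                  # individual barrier values
h_c = lse_composite(h_vals, k=10.0)       # single composite scalar
caps = np.array([1.0, 3.0, 2.0])          # per-agent actuator capacity
shares = allocate_credit(demand=3.0, capacity=caps)
```

Any allocation rule that keeps each share within its agent's capacity while summing to the required demand would satisfy the feasibility role the abstract assigns to the interface; the proportional rule above is simply the shortest such example.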

