PhD Proposal by Maxwell Asselmeier
Title: Perception-informed Semantic Autonomy for Legged Systems
Date: Wednesday, May 20th, 2026
Time: 10:00 AM – 12:00 PM ET
Location: Klaus 1212 (Zoom link)
Maxwell Asselmeier
Robotics Ph.D. Student
Woodruff School of Mechanical Engineering
Georgia Institute of Technology
Committee:
Dr. Ye Zhao (co-advisor) – Woodruff School of Mechanical Engineering, Georgia Institute of Technology
Dr. Patricio A. Vela (co-advisor) – School of Electrical and Computer Engineering, Georgia Institute of Technology
Dr. Sehoon Ha – School of Interactive Computing, Georgia Institute of Technology
Dr. Lu Gan – Daniel Guggenheim School of Aerospace Engineering, Georgia Institute of Technology
Dr. Zak Kingston – Department of Computer Science, Purdue University
Abstract:
In this proposal, we will discuss two perception-informed and semantically aware autonomy frameworks: one for Quadrupedal Navigation (QuadNav) and one for Bipedal Loco-manipulation (BiLocoManip). By presenting these two frameworks, we will demonstrate that the task, environment, and embodiment at hand are crucial aspects that should inform the design of an autonomy stack. Within QuadNav, we decompose the task of navigation into global and local levels, with the local level further decomposed into torso-, foot-, and joint-level reasoning. Torso-level decision-making is performed by QuadGap, a gap-based local planner, while foot- and joint-level planning is performed by QuadPiPS, a bi-level graph search and trajectory optimization framework. These framework levels are coordinated through a learned experience heuristic. Within BiLocoManip, we decompose the task of whole-body loco-manipulation into task planning over a long horizon, contact planning over a short horizon, and joint control in the immediate future. At the task level, a Vision Language Model (VLM) synthesizes planning commands according to environmental affordances; these commands inform sampling during Model Predictive Path Integral (MPPI) control through a learned world model. Joint-level commands are ultimately tracked by a whole-body reinforcement learning policy.
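For readers unfamiliar with MPPI, the sampling-based update mentioned in the abstract can be sketched generically as follows. This is a minimal illustration of the standard MPPI algorithm, not the BiLocoManip implementation: the toy dynamics, cost, and all parameters here are placeholder assumptions (in the proposed framework, a learned world model would take the place of the `dynamics` function, and the VLM's commands would shape the cost or sampling distribution).

```python
import numpy as np

def mppi_step(x0, u_nom, dynamics, cost, K=256, sigma=0.5, lam=1.0, rng=None):
    """One MPPI update: sample K perturbed control sequences, roll each
    out through the dynamics, and reweight the nominal controls by the
    exponentiated (softmin) trajectory costs."""
    rng = np.random.default_rng(rng)
    H, m = u_nom.shape
    eps = rng.normal(0.0, sigma, size=(K, H, m))  # control perturbations
    costs = np.zeros(K)
    for k in range(K):
        x = x0.copy()
        for t in range(H):
            u = u_nom[t] + eps[k, t]
            x = dynamics(x, u)        # a learned world model would go here
            costs[k] += cost(x, u)
    beta = costs.min()
    w = np.exp(-(costs - beta) / lam)  # softmin weights over rollouts
    w /= w.sum()
    # Weighted average of perturbations updates the nominal control plan.
    return u_nom + np.einsum("k,khm->hm", w, eps)

# Toy example: drive a 1-D double integrator toward the origin.
dt = 0.1
dyn = lambda x, u: np.array([x[0] + dt * x[1], x[1] + dt * u[0]])
cst = lambda x, u: x[0]**2 + 0.1 * x[1]**2 + 0.01 * u[0]**2
u_new = mppi_step(np.array([1.0, 0.0]), np.zeros((20, 1)), dyn, cst, rng=0)
```

In a receding-horizon loop, only the first control of `u_new` is executed, the horizon is shifted, and the update repeats; this is the sense in which MPPI acts as a short-horizon controller beneath the long-horizon task planner.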
Status
- Workflow status: Published
- Created by: Tatianna Richardson
- Created: 05/08/2026
- Modified By: Tatianna Richardson
- Modified: 05/08/2026