
PhD Proposal by Nitish Sontakke


Title: Scalable Motion Imitation and Adaptive Sim-to-Real Transfer for Robot Control

Date: Thursday, 11th December 2025

Time: 10:00 AM-12:00 PM

Zoom: Link

 

Nitish Sontakke

Ph.D. Student

School of Interactive Computing

Georgia Institute of Technology

 

Committee members

Dr. Sehoon Ha (advisor): School of Interactive Computing, Georgia Institute of Technology

Dr. Greg Turk: School of Interactive Computing, Georgia Institute of Technology

Dr. Harish Ravichandar: School of Interactive Computing, Georgia Institute of Technology

Dr. Rajat Kumar: Sr. Applied Scientist, Amazon Robotics

 

Abstract

Deep reinforcement learning (RL) combined with motion imitation has emerged as a powerful paradigm for learning control policies, enabling robots to acquire diverse motor skills by tracking reference motions. These methods, however, face two fundamental challenges: obtaining or generating high-quality and diverse reference motions at scale, and transferring policies trained in simulation to hardware, where dynamics mismatches cause failure. In this thesis proposal, we present a suite of techniques that address these limitations by advancing the state-of-the-art in two key areas: scalable motion imitation and sim-to-real transfer.

First, focusing on motion generation, we introduce a two-stage learning framework that eliminates the need for expensive motion capture data. Using RL with simplified dynamics, our approach first generates feasible motion plans automatically, which are then tracked by an imitation policy that ensures real-world operability. Next, to scale this approach across hundreds of diverse behaviors and prevent catastrophic forgetting, we develop Adaptive Motion Sampling, which allows the policy to retain previously mastered skills while continuing to make progress on new and challenging ones. For hardware deployment, we devise two complementary sim-to-real approaches. BayRnTune expedites transfer to hardware by intelligently adjusting simulation parameters to match reality, allowing the robot to build upon its existing knowledge rather than relearning from scratch for every adjustment. EnvMimic uses RL to model residual dynamics from a small number of real-world trajectories. When applied to quadrupedal and buoyancy-assisted robots, our methods demonstrate significant improvements in real-world performance while using data efficiently. Finally, we propose extending these techniques to humanoid robotics, investigating how a minimal but highly diverse set of motions, combined with adaptive transfer strategies, can enable the practical deployment of versatile controllers. Together, this body of research establishes a framework for progressive refinement, spanning automated motion generation, scalable imitation, and data-efficient real-world transfer. By unifying these stages, our approach significantly reduces the human effort and data requirements traditionally associated with deploying learned control policies on physical robots.
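To give a flavor of the curriculum-style sampling the abstract alludes to, the sketch below shows one simple way a trainer might pick which motion clip to imitate next: spend most samples on clips with high recent tracking error, while reserving some probability for replaying already-mastered clips so they are not forgotten. This is a hypothetical illustration only, not the actual Adaptive Motion Sampling algorithm; all names and parameters (`errors`, `replay_frac`) are invented for this example.

```python
import random

def sample_motion(errors, replay_frac=0.3):
    """Pick the index of the next motion clip to train on.

    errors: recent tracking error per clip (higher = harder for the policy).
    replay_frac: chance of replaying an easier, already-mastered clip,
                 guarding against catastrophic forgetting.
    Illustrative sketch only; not taken from the proposal.
    """
    n = len(errors)
    if random.random() < replay_frac:
        # Replay branch: uniformly revisit the easier half of the clips.
        ranked = sorted(range(n), key=lambda i: errors[i])
        return random.choice(ranked[: max(1, n // 2)])
    # Focus branch: sample clips with probability proportional to error.
    total = sum(errors)
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, e in enumerate(errors):
        acc += e
        if r <= acc:
            return i
    return n - 1
```

In practice such a scheme would update `errors` from rollout statistics each iteration, so the sampling distribution shifts as skills are mastered.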

 

