PhD Defense by Yunbo Zhang

Title: Generating Physically Based Animation For Multi-entity Interactions Using Deep Reinforcement Learning


Date: Friday, July 26, 2024

Time: 2:30pm - 4:30pm ET

Location: KLAUS 3126 / Zoom


Yunbo Zhang

Ph.D. Candidate

School of Interactive Computing

College of Computing

Georgia Institute of Technology


Committee:

Dr. Greg Turk (advisor), School of Interactive Computing, Georgia Institute of Technology

Dr. Sehoon Ha, School of Interactive Computing, Georgia Institute of Technology

Dr. Jie Tan, School of Interactive Computing, Georgia Institute of Technology

Dr. Yuting Ye, Reality Labs Research, Meta Inc.

Dr. Charlie Kemp, Hello Robot Inc.


Abstract

Physics-based animations offer an immersive experience for various applications such as films, video games, and virtual reality by enabling physical interactions between characters and their environments. Unlike traditional kinematic animation, physics-based animation enhances realism, allowing a character in a kitchen to dexterously handle utensils and food, or enabling accurate physical interactions in multiplayer role-playing games. However, designing controllers for these animations is challenging, as traditional methods require extensive tuning and lack agility and generality. Recent advancements in Deep Reinforcement Learning (DRL) have significantly improved the generation of complex locomotion and interaction skills, allowing for more natural and adaptable character motions. Despite these achievements, challenges remain in simulating general physical interactions. This thesis investigates several challenging physical interaction scenarios and uses DRL to produce high-quality animations for different types of physical interactions.


We first investigate a manipulation problem in which a controlled tool is used to dexterously manipulate different types of amorphous non-rigid materials. We build both a fast simulation environment for amorphous materials and a training pipeline that produces control signals for a tool that manipulates the materials. We demonstrate the framework on several common manipulation tasks, including gathering materials into a pile, spreading materials evenly on a flat surface, and tossing and flipping a patty in a pan.


For the second type of interaction, we consider the challenge of controlling a simulated hand to perform dexterous in-hand manipulation of various types of rigid objects. We demonstrate a simulation and training pipeline that generates a hand controller capable of manipulating a rigid object by following input motion capture trajectories. Additionally, to transfer a policy to a variety of objects with different shapes, we introduce a novel curriculum learning algorithm that trains a collection of policies to reproduce the same manipulation skill on different types of objects.


Lastly, we consider the scenario where a full-body character performs general physical interactions with another character or with the surrounding environment. We use a novel RL reward formulation that captures the quality of physical interaction between a controlled character and others. We demonstrate that this reward can be used to train a policy that controls a character to imitate a specific type of physical interaction with differently scaled characters while preserving the content of the interaction. We also introduce a second approach that uses a two-stage training pipeline to train a single control policy that generalizes the learned motion, enabling the character to interact with other characters. With the new formulation, we no longer need frame-by-frame guidance from motion capture data at inference time, and the character is able to perform a specified physical interaction skill with other, differently sized characters.
