
Ph.D. Defense by Yin Li


Title: Learning Embodied Models of Actions from First Person Video

 

Yin Li

Ph.D. Candidate

School of Interactive Computing

College of Computing

Georgia Institute of Technology

 

Date: Friday, August 4, 2017

Time: 2:30pm-4:30pm EDT

Location: GVU Cafe, 2nd Floor TSRB

 

Committee:

-----------------------

Prof. James M. Rehg (Advisor), School of Interactive Computing, Georgia Institute of Technology

Prof. Irfan Essa, School of Interactive Computing, Georgia Institute of Technology

Prof. James Hays, School of Interactive Computing, Georgia Institute of Technology

Prof. Serge Belongie, Department of Computer Science, Cornell University & Cornell Tech

Prof. Kristen Grauman, Department of Computer Science, University of Texas at Austin

 

Abstract:

-----------------------

Advances in sensor miniaturization, low-power computing, and battery technology have enabled the first generation of mainstream wearable cameras. Millions of hours of video are captured by these devices every year, creating a record of our daily visual experiences at an unprecedented scale. This presents a major opportunity to develop new capabilities and products based on computer vision. Meanwhile, computer vision is at a tipping point: major progress has been made over the last few years in both visual recognition and 3D reconstruction. The stage is set for a grand challenge that moves our field away from narrowly focused benchmarks and toward "in the wild", long-term, open-world problems in visual analytics and embedded sensing.

 

My thesis work focuses on the automatic analysis of visual data captured by wearable cameras, known as First Person Vision (FPV). My goal is to develop novel embodied representations for first person activity recognition. More specifically, I propose to leverage first person visual cues, including body motion, hand locations, and egocentric gaze, to understand the camera wearer's attention and actions. These cues are naturally "embodied": they derive from the purposive body movements of the camera wearer and capture the concept of action within its context.

 

To this end, I have investigated three important aspects of first person actions. First, I led the effort to develop a new FPV dataset of meal-preparation tasks, which establishes by far the largest benchmark for FPV action recognition, gaze estimation, and hand segmentation. Second, I present a method to estimate egocentric gaze in the context of actions; this work demonstrates for the first time that egocentric gaze can be reliably estimated using only head motion and hand locations, without the need for object or action cues. Finally, I develop methods that incorporate first person visual cues for recognizing actions and predicting body motion in FPV. This embodied representation significantly improves the accuracy of FPV action recognition, and I also present preliminary results on body motion prediction.

 

