
PhD Defense by Vinay Bettadapura


Ph.D. Defense of Dissertation Announcement

Title: LEVERAGING CONTEXTUAL CUES FOR DYNAMIC SCENE UNDERSTANDING

Vinay Bettadapura

Ph.D. Candidate

School of Interactive Computing

College of Computing

Georgia Institute of Technology

http://www.vbettadapura.com/

Date: Tuesday, November 24th, 2015

Time: 12:00 Noon to 2:00 PM EST

Location: CCB 153 (next to the Dean's office)

COMMITTEE:

Dr. Irfan Essa, School of Interactive Computing, Georgia Tech (Advisor)

Dr. Gregory Abowd, School of Interactive Computing, Georgia Tech

Dr. Thad Starner, School of Interactive Computing, Georgia Tech

Dr. Rahul Sukthankar, Google Research

Dr. Caroline Pantofaru, Google Research

ABSTRACT

Environments with people are complex, with many activities and events that need to be represented and explained. The goal of scene understanding is either to determine what the objects and people in such complex and dynamic environments are doing, or to capture the overall happenings of the scene, such as its highlights. The context within which these activities and events unfold provides key insights that cannot be derived by studying the activities and events alone. In this thesis, we show that this rich contextual information can be successfully leveraged, along with the video data, to support dynamic scene understanding.

We categorize and study four types of contextual cues: (1) spatio-temporal context, (2) egocentric context, (3) geographic context, and (4) environmental context, and we show that each improves dynamic scene understanding tasks across several application domains.

We start by presenting data-driven techniques that enrich spatio-temporal context by augmenting Bag-of-Words models with temporal, local, and global causality information, and we show that this improves activity recognition, anomaly detection, and scene assessment from videos.

Next, we leverage the egocentric context derived from sensor data captured by first-person point-of-view devices to perform field-of-view localization and thereby understand the user’s focus of attention. We demonstrate single- and multi-user field-of-view localization in both indoor and outdoor environments, with applications in augmented reality, event understanding, and the study of social interactions.

Next, we show how geographic context can be leveraged to make challenging “in-the-wild” object recognition tasks more tractable, using food recognition in restaurants as a case study.

Finally, we study the environmental context obtained from dynamic scenes such as sporting events, which take place in responsive environments such as stadiums and gymnasiums, and show that it can be successfully used to address the challenging task of automatically generating basketball highlights. We perform comprehensive user studies on 25 full-length NCAA games and demonstrate the effectiveness of environmental context in producing highlights comparable to those produced by ESPN.
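To give a flavor of the first idea, the sketch below augments a standard Bag-of-Words histogram with counts of temporal bigrams (ordered pairs of consecutive events). This is a minimal illustrative sketch, not the models developed in the thesis: the codeword vocabulary, the toy event sequence, and the function name are all assumptions made for the example.

from collections import Counter

def bow_with_temporal_bigrams(codewords, vocab):
    """Feature vector for a sequence of discretized event codewords:
    standard Bag-of-Words counts augmented with temporal bigram counts
    (ordered pairs of consecutive codewords). Illustrative sketch only."""
    unigrams = Counter(codewords)                     # classic orderless BoW
    bigrams = Counter(zip(codewords, codewords[1:]))  # consecutive ordered pairs
    # Fixed layout: |V| unigram bins followed by |V|^2 bigram bins.
    feats = [unigrams[w] for w in vocab]
    feats += [bigrams[(a, b)] for a in vocab for b in vocab]
    return feats

# Toy example: quantized motion events from a hypothetical video clip.
vocab = ["reach", "grasp", "pour", "stir"]
events = ["reach", "grasp", "pour", "stir", "pour", "stir"]
print(bow_with_temporal_bigrams(events, vocab))

Appending ordered co-occurrence counts is the simplest way to let an orderless Bag-of-Words representation see temporal structure; the thesis goes further by discovering temporal, local, and global causality information in a data-driven way.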
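Similarly, for the geographic-context case study, the hypothetical sketch below shows how a GPS fix can shrink the label space of an “in-the-wild” food classifier to the menus of nearby restaurants. The restaurant data, search radius, and function names are invented for illustration; in practice this kind of pruning would be paired with a learned visual classifier.

import math

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def candidate_dishes(gps, restaurants, radius_km=0.15):
    """Geographic context for food recognition: only dishes on the menus
    of restaurants within radius_km of the diner's GPS fix need to be
    scored by the visual classifier."""
    labels = set()
    for location, menu in restaurants:
        if haversine_km(gps, location) <= radius_km:
            labels.update(menu)
    return labels

# Hypothetical data: (location, menu) pairs for restaurants in the area.
restaurants = [
    ((33.7757, -84.3963), ["pad thai", "green curry"]),
    ((33.7490, -84.3880), ["margherita pizza", "lasagna"]),
]
print(candidate_dishes((33.7756, -84.3960), restaurants))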
