
Ph.D. Defense by Jianwei Yang


Title: Structured Visual Understanding, Generation and Reasoning

 

Jianwei Yang

Ph.D. Candidate in Computer Science

School of Interactive Computing

Georgia Institute of Technology

https://www.cc.gatech.edu/~jyang375/

 

Date: Thursday, January 2nd, 2020

Time: 4:00-6:00 PM (EST)

Location: Coda C1003 Adair

BlueJeans: https://bluejeans.com/998872971

 

 

Committee:

Dr. Devi Parikh (Advisor), School of Interactive Computing, Georgia Institute of Technology

Dr. Dhruv Batra, School of Interactive Computing, Georgia Institute of Technology

Dr. David Crandall, School of Informatics, Computing and Engineering, Indiana University

Dr. Stefan Lee, School of Electrical Engineering and Computer Science, Oregon State University

Dr. Judy Hoffman, School of Interactive Computing, Georgia Institute of Technology

 

Abstract:

 

The world around us is highly structured. In the real world, multiple objects usually coexist in a scene and interact with each other in predictable ways (e.g., a mug on a table, a keyboard below a computer monitor); likewise, a single object usually consists of multiple components in some structured configuration (e.g., a person has different body parts). These structures manifest themselves in the visual data that captures the world around us, and thus can provide a strong inductive bias for various vision tasks. In this talk, I will discuss how to integrate such structural priors into different tasks, including visual understanding, generation, and reasoning. Specifically,

 

  1. I will talk about how to integrate structural priors into a visual system to improve its understanding ability at different levels. In particular, I will discuss how we leverage relational sparsity and context in images for better scene graph generation;
  2. I will show how we can exploit the structure in images to address the dual problem, visual generation. Specifically, I will explain how we can generate images in a compositional manner, generating background and foreground objects separately and composing them together;
  3. I will discuss how to use structured semantic visual representations as an interface bridging visual perception and reasoning to address vision-and-language tasks. Specifically, I will present a meta-active-learning framework in which the agent reasons over a symbolic scene graph to generate informative questions to ask an oracle, thereby improving its visual system incrementally.

 

Across these different levels of tasks, we demonstrate that modeling the structures in visual data and the associated text can not only improve model performance but also increase model transparency. In the end, I will briefly discuss the challenges in this domain and extensions of these recent works.

Status

  • Workflow Status: Published
  • Created By: Tatianna Richardson
  • Created: 01/02/2020
  • Modified By: Tatianna Richardson
  • Modified: 01/02/2020
