
PhD Proposal by Jianwei Yang


Title: Modeling Structure for Visual Understanding and Generation


Date: Thursday, November 29, 2018

Time: 11:30AM - 1:00PM (ET)

Location: TSRB 223


Jianwei Yang

Ph.D. Student in Computer Science

School of Interactive Computing

Georgia Institute of Technology

https://www.cc.gatech.edu/~jyang375/


Committee:

Dr. Devi Parikh (Advisor, School of Interactive Computing, Georgia Institute of Technology)

Dr. Dhruv Batra (School of Interactive Computing, Georgia Institute of Technology)

Dr. David J. Crandall (School of Informatics, Computing, and Engineering, Indiana University)

Dr. Stefan Lee (School of Interactive Computing, Georgia Institute of Technology)


Abstract:


The world around us is highly structured. Objects interact with each other in predictable ways (e.g., mugs are often on tables, keyboards are often below computer monitors, the sky is in the background, grass is often green). This structure manifests itself in the visual data that captures the world around us and in the text that describes it. The goal of this thesis is to leverage this structure in our visual world for visual understanding and for its dual problem, visual generation, both with and without interaction with language. Specifically, this thesis makes the following contributions.


On the visual understanding side:

1)  Proposed an effective approach for scene graph generation that learns to compute the "relationship-ness" between objects and prunes the dense graph accordingly before performing graph labeling.

2)  Proposed a language-based meta-active-learning framework in which an agent learns to ask informative questions of a human/oracle based on a structured representation of the scene, and then incrementally updates its visual recognition models.
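The pruning idea in the first contribution can be sketched in a few lines. The relatedness score below is a hypothetical stand-in based on spatial proximity; in the actual approach this score would be learned from object features by a network.

```python
from itertools import combinations

def relatedness(obj_a, obj_b):
    # Stand-in score: objects whose centers are close are assumed more
    # likely to be related. Each object is (name, (x, y)) for simplicity.
    (_, (xa, ya)), (_, (xb, yb)) = obj_a, obj_b
    return 1.0 / (1.0 + abs(xa - xb) + abs(ya - yb))

def prune_graph(objects, keep_top_k):
    """Score every candidate object pair, keep only the top-k most
    related edges; only these survive to the graph-labeling stage."""
    candidates = list(combinations(range(len(objects)), 2))
    scored = sorted(
        candidates,
        key=lambda ij: relatedness(objects[ij[0]], objects[ij[1]]),
        reverse=True,
    )
    return scored[:keep_top_k]

objects = [("mug", (1, 1)), ("table", (1, 2)), ("sky", (9, 9))]
edges = prune_graph(objects, keep_top_k=1)
# The surviving edge links the two nearby objects.
print([(objects[i][0], objects[j][0]) for i, j in edges])  # [('mug', 'table')]
```

The point of the pruning step is efficiency: labeling every edge of the dense graph is quadratic in the number of objects, while most pairs carry no meaningful relationship.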

On the visual generation side:

1)  Proposed a new model for image generation that accounts for the layered structure of images: it generates the image background and foreground step by step and composes them into a single image with the proper spatial configuration.

2)  In the proposed work, I will further leverage this layered structure in images and text for visual generation conditioned on language. Specifically, given a description of an image, the model learns to extract the structure of the sentence and then generates the image layer by layer so that the generated image is consistent with the given description.
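The background-then-foreground composition in the first contribution can be illustrated with a toy sketch. Here the "layers" are plain character grids rather than generated images, so only the compositing step is shown; the model names and grid representation are illustrative, not from the thesis.

```python
def make_background(width, height, fill="."):
    # Stand-in for a generated background layer: a uniform canvas.
    return [[fill] * width for _ in range(height)]

def compose(canvas, foreground, top, left):
    """Paste a foreground layer onto the canvas at a chosen spatial
    location, overwriting the background cells it covers."""
    for r, row in enumerate(foreground):
        for c, cell in enumerate(row):
            canvas[top + r][left + c] = cell
    return canvas

# Generate the background first, then place a foreground object.
canvas = make_background(6, 3)
canvas = compose(canvas, [["m", "u", "g"]], top=1, left=2)
print("\n".join("".join(row) for row in canvas))
```

Generating each layer separately and composing them makes the spatial configuration explicit, rather than asking a single network to produce the whole scene at once.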

Status

  • Workflow Status: Published
  • Created By: Tatianna Richardson
  • Created: 11/26/2018
  • Modified By: Tatianna Richardson
  • Modified: 11/26/2018
