PhD Proposal by Aishwarya Agrawal

Title: Visual Question Answering and Beyond

 

Date: Wednesday, November 28, 2018

Time: 1:00 PM - 2:30 PM (ET)

Location: CCB 345

 

Aishwarya Agrawal

Ph.D. Student

School of Interactive Computing

College of Computing

Georgia Institute of Technology

 

Committee:

Dr. Dhruv Batra (Advisor, School of Interactive Computing, Georgia Institute of Technology)

Dr. Devi Parikh (School of Interactive Computing, Georgia Institute of Technology)

Dr. James Hays (School of Interactive Computing, Georgia Institute of Technology)

Dr. C. Lawrence Zitnick (Research Lead, Facebook AI Research, Menlo Park)

Dr. Oriol Vinyals (Research Scientist, Google DeepMind, London)

 

Abstract:

In this thesis, I will present my work on a multi-modal AI task called Visual Question Answering (VQA) -- given an image and a natural language question about the image (e.g., "What kind of store is this?", "Is it safe to cross the street?"), the machine's task is to automatically produce an accurate natural language answer ("bakery", "yes"). Applications of VQA include aiding visually impaired users in understanding their surroundings, helping analysts examine large quantities of surveillance data, teaching children through interactive demos, interacting with personal AI assistants, and making visual social media content more accessible.

 

Specifically, I will present the following -- 

1) how to create a large-scale dataset and define evaluation metrics for free-form and open-ended VQA, 

2) how to develop techniques for characterizing the behavior of VQA models, and

3) how to build VQA models that are less driven by language biases in training data and are more visually grounded, by proposing -- 

        a) a new evaluation protocol,  

        b) a new model architecture, and 

        c) a novel objective function.

 

In the proposed work, I will study how to build agents that can not only "see" and "talk" but also "act". Specifically, I will study how to train agents to follow language instructions grounded in visual data (e.g., 'Add a red sphere', 'Add a large cylinder') and execute actions to generate scenes that are consistent with the given instructions.

 
