PhD Proposal by Aishwarya Agrawal

Event Details
  • Date/Time:
    • Wednesday, November 28, 2018
      1:00 pm - 2:30 pm (ET)
  • Location: CCB 345

Summary Sentence: Visual Question Answering and Beyond


Title: Visual Question Answering and Beyond


Date: Wednesday, November 28 2018

Time: 1:00PM - 2:30PM (ET)

Location: CCB 345


Aishwarya Agrawal

Ph.D. Student

School of Interactive Computing

College of Computing

Georgia Institute of Technology



Dr. Dhruv Batra (Advisor, School of Interactive Computing, Georgia Institute of Technology)

Dr. Devi Parikh (School of Interactive Computing, Georgia Institute of Technology)

Dr. James Hays (School of Interactive Computing, Georgia Institute of Technology)

Dr. C. Lawrence Zitnick (Research Lead, Facebook AI Research, Menlo Park)

Dr. Oriol Vinyals (Research Scientist, Google DeepMind, London)



In this thesis, I will present my work on a multi-modal AI task called Visual Question Answering (VQA): given an image and a natural language question about the image (e.g., "What kind of store is this?", "Is it safe to cross the street?"), the machine's task is to automatically produce an accurate natural language answer ("bakery", "yes"). Applications of VQA include aiding visually impaired users in understanding their surroundings, aiding analysts in examining large quantities of surveillance data, teaching children through interactive demos, interacting with personal AI assistants, and making visual social media content more accessible.


Specifically, I will present the following:

1) how to create a large-scale dataset and define evaluation metrics for free-form and open-ended VQA, 

2) how to develop techniques for characterizing the behavior of VQA models, and

3) how to build VQA models that are less driven by language biases in training data and are more visually grounded, by proposing:

        a) a new evaluation protocol,  

        b) a new model architecture, and 

        c) a novel objective function.
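The open-ended evaluation mentioned in item 1 can be illustrated with a short sketch. The VQA benchmark scores a predicted answer against the ten human-provided answers using a min(#matches / 3, 1) rule, so an answer given by at least three annotators counts as fully correct. This is a simplified sketch: the function name is my own, and the benchmark's answer normalization (lowercasing, article and punctuation stripping) and the averaging over annotator subsets are omitted here.

```python
def vqa_accuracy(predicted, human_answers):
    """Score one predicted answer against the 10 human answers
    for a question, using the min(#matches / 3, 1) consensus rule:
    an answer given by 3 or more annotators is 100% correct.

    Note: real evaluation also normalizes answers (lowercasing,
    stripping articles/punctuation), which is omitted in this sketch.
    """
    matches = sum(1 for a in human_answers if a == predicted)
    return min(matches / 3.0, 1.0)

# Example: 3 of 10 annotators said "bakery", so "bakery" scores 1.0,
# while an answer matching only 2 annotators scores 2/3.
print(vqa_accuracy("bakery", ["bakery"] * 3 + ["store"] * 7))
```

The division by 3 rather than 10 is what makes the metric forgiving of the natural disagreement among annotators on free-form answers.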


In my proposed work, I will study how to build agents that can not only "see" and "talk", but can also "act". Specifically, I will study how we can train agents to follow language instructions grounded in visual data (e.g., "Add a red sphere", "Add a large cylinder") and execute actions to generate scenes that are consistent with the given instruction.

