PhD Defense by Yash Goyal

Event Details
  • Date/Time:
    • Monday, March 9, 2020
      11:00 am - 1:00 pm (ET)
  • Location: Coda 1215 Midtown

Summary Sentence: Towards Transparent and Grounded Visual AI Systems


Title: Towards Transparent and Grounded Visual AI Systems


Yash Goyal
Ph.D. Candidate in Computer Science
School of Interactive Computing
Georgia Institute of Technology

Date: Monday, March 9th, 2020
Time: 11:00 am to 1:00 pm (ET)
Location: Coda 1215 Midtown

Committee:
Dr. Dhruv Batra (Advisor; School of Interactive Computing, Georgia Institute of Technology & Facebook AI Research)
Dr. Devi Parikh (School of Interactive Computing, Georgia Institute of Technology & Facebook AI Research)
Dr. Mark Riedl (School of Interactive Computing, Georgia Institute of Technology)
Dr. Trevor Darrell (University of California, Berkeley)
Dr. Stefan Lee (Oregon State University)


My research goal is to build transparent and grounded AI systems. In my thesis, I try to answer the question "Do deep visual models make their decisions for the right reasons?" in two ways:


1. Visual grounding. Grounding is essential for building reliable and generalizable systems that are not driven by dataset biases. In the context of Visual Question Answering (VQA), we expect models to look at the right regions of the image while answering a question. We address this issue of visual grounding in VQA by proposing a) two new benchmarking datasets to test visual grounding, and b) a new VQA model that is visually grounded by design.


2. Transparency. Transparency in AI systems can help system designers find the failure modes of their systems and can help teach humans. We developed techniques for generating explanations from deep models that give us insight into what they base their decisions on. Specifically, we generate counterfactual visual explanations and show how such explanations can be used to teach humans.
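To illustrate the idea behind counterfactual visual explanations (not the thesis implementation itself), the core question is: which region of the query image would have to change, and to what, for the model to predict a different class? A minimal sketch, assuming spatial feature maps of shape (H, W, D) and a hypothetical `classify` function mapping a feature map to class scores, is an exhaustive search over single-cell swaps between the query and a distractor image:

```python
import numpy as np

def counterfactual_edit(query_feats, distractor_feats, classify, target_class):
    """Find the single spatial-cell swap that most increases the target
    (distractor) class score: replace one cell of the query image's
    feature map with one cell from the distractor image's feature map.

    query_feats, distractor_feats: arrays of shape (H, W, D)
    classify: maps an (H, W, D) feature map to a vector of class scores
    Returns (query_cell, distractor_cell, best_target_score).
    """
    H, W, _ = query_feats.shape
    distractor_flat = distractor_feats.reshape(H * W, -1)
    best = (None, None, -np.inf)
    for i in range(H * W):          # cell in the query map to replace
        for j in range(H * W):      # candidate cell from the distractor map
            edited = query_feats.copy().reshape(H * W, -1)
            edited[i] = distractor_flat[j]
            score = classify(edited.reshape(H, W, -1))[target_class]
            if score > best[2]:
                best = (i, j, score)
    return best
```

The highlighted pair of cells serves as the explanation: "if this region of the query image looked like that region of the distractor image, the model would have predicted the distractor class."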


Most interpretability work relies on correlations to generate explanations, which may or may not reflect the true underlying mechanism in the deep model. This can have disastrous consequences in high-risk domains such as healthcare and ethics. To prevent this, we need to reason about the causal relationship between explanations and model decisions. Towards the end of the talk, I will present my recent work on generating causal explanations.

Additional Information

In Campus Calendar

Graduate Studies

Invited Audience
Faculty/Staff, Public, Graduate students, Undergraduate students
PhD Defense
  • Created By: Tatianna Richardson
  • Workflow Status: Published
  • Created On: Mar 6, 2020 - 3:57pm
  • Last Updated: Mar 6, 2020 - 3:57pm