PhD Defense by Yash Goyal

Title: Towards Transparent and Grounded Visual AI Systems

----------------

Yash Goyal
Ph.D. Candidate in Computer Science
School of Interactive Computing
Georgia Institute of Technology
https://www.cc.gatech.edu/~ygoyal3/

Date: Monday, March 9th, 2020
Time: 11:00 am to 1:00 pm (ET)
Location: Coda 1215 (Midtown)

Committee:
----------------
Dr. Dhruv Batra (Advisor; School of Interactive Computing, Georgia Institute of Technology & Facebook AI Research)
Dr. Devi Parikh (School of Interactive Computing, Georgia Institute of Technology & Facebook AI Research)
Dr. Mark Riedl (School of Interactive Computing, Georgia Institute of Technology)
Dr. Trevor Darrell (University of California, Berkeley)
Dr. Stefan Lee (Oregon State University)

Abstract:
----------------

My research goal is to build transparent and grounded AI systems. In my thesis, I try to answer the question "Do deep visual models make their decisions for the right reasons?" in two ways:

1. Visual grounding. Grounding is essential for building reliable, generalizable systems that are not driven by dataset biases. In the context of Visual Question Answering (VQA), we expect models to look at the relevant regions of the image while answering a question. We address visual grounding in VQA by proposing (a) two new benchmark datasets that test visual grounding, and (b) a new VQA model that is visually grounded by design. (A toy sketch of one way to quantify grounding appears after this list.)

2. Transparency. Transparency in AI systems can help system designers identify failure modes and can be used to teach humans. We developed techniques for generating explanations from deep models that give insight into what the models base their decisions on. Specifically, we generate counterfactual visual explanations and show how such explanations can be used to teach humans. (See the second sketch below.)
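
As a toy illustration of the grounding idea in item 1 (a minimal sketch, not code from the thesis): one simple way to quantify visual grounding is to rank-correlate a VQA model's spatial attention map with a human-annotated importance map for the same image-question pair. The attention grids below are randomly generated placeholders.

```python
# Hypothetical grounding check: rank-correlate a model's spatial attention
# with a human-annotated importance map over the same grid of regions.
import numpy as np
from scipy.stats import spearmanr

def grounding_score(model_attention: np.ndarray,
                    human_attention: np.ndarray) -> float:
    """Spearman rank correlation between model and human attention.
    Higher values suggest the model attends to the regions humans
    consider relevant for answering the question."""
    assert model_attention.shape == human_attention.shape
    rho, _ = spearmanr(model_attention.ravel(), human_attention.ravel())
    return float(rho)

# Toy example with random 14x14 attention grids (placeholders).
rng = np.random.default_rng(0)
model_att = rng.random((14, 14))
human_att = rng.random((14, 14))
print(f"grounding score: {grounding_score(model_att, human_att):.3f}")
```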

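The sketch below illustrates the counterfactual idea from item 2 in miniature: search for a small edit to the input (here, swapping one region of a query "image" with a region from a distractor) that flips the model's decision. The classifier and "images" are synthetic stand-ins chosen so that a flip exists; this is not the method presented in the talk.

```python
# Toy counterfactual explanation: find one region swap between a query
# "image" and a distractor that changes the classifier's prediction.
import numpy as np

def toy_classifier(features: np.ndarray) -> int:
    """Dummy classifier over a grid of region features: class 1 if the
    mean feature value exceeds 0.5, else class 0."""
    return int(features.mean() > 0.5)

def counterfactual_swap(query: np.ndarray, distractor: np.ndarray):
    """Exhaustively search for a single (query, distractor) region pair
    whose swap flips the prediction on the query."""
    original = toy_classifier(query)
    h, w = query.shape
    for qi in range(h):
        for qj in range(w):
            for di in range(h):
                for dj in range(w):
                    edited = query.copy()
                    edited[qi, qj] = distractor[di, dj]
                    if toy_classifier(edited) != original:
                        return (qi, qj), (di, dj)
    return None  # no single-region counterfactual exists

rng = np.random.default_rng(1)
query = rng.uniform(0.42, 0.48, size=(2, 2))       # just below threshold
distractor = rng.uniform(0.90, 1.00, size=(2, 2))  # confidently above it
print(counterfactual_swap(query, distractor))
```

Real images need learned region features and a smarter search than this exhaustive loop, but the objective is the same: the smallest edit that changes the decision.
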
Most interpretability work relies on correlations to generate explanations, which may or may not reflect the true underlying mechanism in the deep model. This can have serious consequences in high-stakes domains such as healthcare. To prevent this, we need to reason about the causal relationship between explanations and model decisions. Towards the end of the talk, I will present my recent work on generating causal explanations.
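
As a hedged, toy illustration of this causal framing (not the actual method from the talk): instead of correlating features with outputs, one can intervene on a concept through a generative process and measure the change in the model's expected output. Everything below is a synthetic stand-in.

```python
# Toy causal effect of a binary concept on a classifier's output:
# E[f(x) | do(concept=1)] - E[f(x) | do(concept=0)].
import numpy as np

rng = np.random.default_rng(2)

def generate(concept: int, n: int = 10_000) -> np.ndarray:
    """Synthetic 'image' generator: 8 pixel values whose brightness
    shifts when the binary concept is switched on."""
    return rng.normal(loc=0.5 * concept, scale=1.0, size=(n, 8))

def classifier(x: np.ndarray) -> np.ndarray:
    """Toy classifier score: sigmoid of mean pixel intensity."""
    return 1.0 / (1.0 + np.exp(-x.mean(axis=1)))

# Intervene on the concept (held fixed by construction) and compare the
# classifier's average output under the two settings.
effect = classifier(generate(1)).mean() - classifier(generate(0)).mean()
print(f"estimated causal concept effect: {effect:.3f}")
```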
