PhD Defense by Ramprasaath R. Selvaraju

Title: Explaining Model Decisions and Correcting them via Human Feedback

 

Ramprasaath R. Selvaraju

 

Ph.D. Candidate in Computer Science

School of Interactive Computing

Georgia Institute of Technology

http://ramprs.github.io/

 

Date: Monday, March 23rd, 2020

Time: 12:00-2:00 PM (EST)

BlueJeans: https://bluejeans.com/4346160082

**Note: this defense is remote-only due to the institute's guidelines on COVID-19**

 

Committee:

 

Dr. Devi Parikh (Advisor), School of Interactive Computing, Georgia Institute of Technology

Dr. Dhruv Batra, School of Interactive Computing, Georgia Institute of Technology

Dr. Judy Hoffman, School of Interactive Computing, Georgia Institute of Technology

Dr. Stefan Lee, School of Electrical Engineering and Computer Science, Oregon State University

Dr. Been Kim, Google Brain

 

Abstract:

 

Deep networks have enabled unprecedented breakthroughs in a variety of computer vision tasks. While these models achieve superior performance, their increasing complexity and lack of decomposability into individually intuitive components make them hard to interpret. Consequently, when today's intelligent systems fail, they fail spectacularly, giving no warning or explanation.

 

Towards the goal of making deep networks interpretable, trustworthy, and unbiased, in this thesis I will present my work on building algorithms that provide explanations for decisions made by deep networks in order to:

 

1. understand/interpret why the model did what it did,

2. correct unwanted biases learned by AI models, and

3. encourage human-like reasoning in AI.

 

 

Status

  • Workflow Status: Published
  • Created By: Tatianna Richardson
  • Created: 03/20/2020
  • Modified By: Tatianna Richardson
  • Modified: 03/20/2020
