
PhD Defense by Arjun Chandrasekaran


Title: Towards Natural Human-AI Interactions in Vision and Language

 

Arjun Chandrasekaran

Computer Science PhD Student

College of Computing

Georgia Institute of Technology

http://www.prism.gatech.edu/~arjun9/

 

Date: Monday, October 21st, 2019

Time: 12pm to 2pm (EDT)

Location: Coda 1215 “Midtown” (756 West Peachtree St NW)

 

Committee:

----------------

Dr. Devi Parikh (Advisor, College of Computing, Georgia Institute of Technology)

Dr. Dhruv Batra (College of Computing, Georgia Institute of Technology)

Dr. Sonia Chernova (College of Computing, Georgia Institute of Technology)

Dr. Mark Riedl (College of Computing, Georgia Institute of Technology)

Dr. Mohit Bansal (Department of Computer Science, University of North Carolina at Chapel Hill)

 

 

Abstract:

----------------

Inter-human interaction is a rich form of communication. Human interactions typically leverage a good theory of mind and involve pragmatics, story-telling, humor, sarcasm, empathy, and sympathy. Recently, we have seen a tremendous increase in the frequency of, and the modalities through which, humans interact with AI. Despite this, current human-AI interactions lack many of the features that characterize inter-human interactions. Towards the goal of developing AI that can interact with humans naturally (as other humans do), I take a two-pronged approach, investigating the ways in which both the AI and the human can adapt to each other's characteristics and capabilities. In my research, I study aspects of human interactions, such as humor, story-telling, and humans' abilities to understand and collaborate with an AI. Specifically, in the vision and language modalities:

  1. To improve the AI's ability to adapt its interactions to a human, we build computational models of (i) humor manifested in static images, (ii) contextual, multi-modal humor, and (iii) temporal understanding of the elements of a story.
  2. To improve the capabilities of a collaborative human-AI team, we study (i) a lay person's predictions about the behavior of an AI in a given situation, and (ii) the extent to which interpretable explanations from an AI can improve the performance of a human-AI team.

Through this work, I demonstrate that aspects of human interactions (such as certain forms of humor and story-telling) can be modeled with reasonable success using computational models based on neural networks. On the other hand, I also show that a lay person can successfully predict the outputs and failures of a deep neural network. Finally, I present evidence suggesting that a lay person who has access to interpretable explanations from the model can collaborate more effectively with a neural network on a goal-driven task.

