
PhD Proposal by Sarah Wiegreffe


Title: Interpreting Neural Networks for and with Natural Language

Date: Thursday, October 14th, 2021
Time: 12:00-2:00pm (ET)
Location (virtual): https://bluejeans.com/1547682366

Sarah Wiegreffe
PhD Student in Computer Science
School of Interactive Computing
College of Computing
Georgia Institute of Technology

Committee:
Dr. Mark Riedl (advisor, School of Interactive Computing, Georgia Institute of Technology)
Dr. Alan Ritter (School of Interactive Computing, Georgia Institute of Technology)
Dr. Wei Xu (School of Interactive Computing, Georgia Institute of Technology)
Dr. Noah Smith (Paul G. Allen School of Computer Science & Engineering, University of Washington)
Dr. Sameer Singh (Bren School of Information and Computer Sciences, University of California at Irvine)

Abstract:

In the past decade, natural language processing (NLP) systems have come to be built almost exclusively on a backbone of large neural models. As the landscape of feasible tasks has widened due to the capabilities of these models, the space of applications has also widened to include subfields with real-world consequences, such as fact-checking, fake news detection, and medical decision support. The increasing size and nonlinearity of these models result in an opacity that hinders efforts by machine learning practitioners and lay users alike to understand their internals and derive meaning or trust from their predictions. The field of explainable NLP has emerged as an active area for remedying this opacity, with a focus on providing textual explanations meaningful to human users. In this thesis, I propose test suites for evaluating the quality of model explanations under two definitions of meaning: faithfulness and human acceptability. I use these evaluation methods to investigate the utility of two explanation forms and three model architectures. I additionally propose two methods to improve explanation quality: one that improves the faithfulness of extractive explanations, and one that increases the human acceptability of free-text explanations.

