
PhD Proposal by Frederick Hohman

Title: Interactive Scalable Interfaces for Machine Learning Interpretability

Fred Hohman
School of Computational Science & Engineering
College of Computing
Georgia Institute of Technology

Date: Monday, December 2, 2019
Time: 4:30pm - 6pm (EST)
Location: Coda 114 (first floor conference room)

Committee
Duen Horng (Polo) Chau - Advisor, Georgia Tech, Computational Science & Engineering
Alex Endert - Georgia Tech, Interactive Computing
Chao Zhang - Georgia Tech, Computational Science & Engineering
Nathan Hodas - Pacific Northwest National Lab
Scott Davidoff - NASA Jet Propulsion Lab
Steven Drucker - Microsoft Research

Abstract
Data-driven paradigms now solve the world's hardest problems. Instead of authoring a function through exact and explicit rules, machine learning techniques learn rules from data. Unfortunately, these rules are often unknown both to the people who train models and to the people the models impact. This is the purpose and promise of machine learning interpretability. But what is interpretability? What are the rules that modern, complex models learn? And how can we best represent and communicate them to people? Since machine learning now impacts people's daily lives, we answer these questions by taking a human-centered approach: designing and developing interactive interfaces that enable interpretability at scale and for everyone. This thesis focuses on:

(1) Operationalizing machine learning interpretability: We conduct user research to understand what practitioners want from interpretability, instantiate these findings in an interactive interface, and use it as a design probe into the emerging practice of interpreting models (Gamut). We also show how different complementary mediums, such as visualization and verbalization, can be used for explanations (TeleGam).

(2) Scaling deep learning interpretability: We reveal critical yet missing areas of deep learning interpretability research (Interrogative Survey). For example, neural network explanations typically focus on single instances or neurons, but predictions are often computed from millions of weights optimized over millions of instances, so such explanations can easily miss the bigger picture. We present an interactive interface that scalably summarizes and visualizes what features a deep learning model has learned and how those features interact to make predictions (Summit).

(3) Communicating interpretability with interactive articles: Due to rapid democratization, machine learning now impacts everyone. We use interactive articles, a new medium for communicating complex ideas, to explain interpretability and teach people about machine learning's capabilities and limitations, while developing new publishing platforms (Parametric Press). We propose a framework for how interactive articles can be effective, based on a comprehensive survey that draws from diverse research disciplines and critical reflections (Interactive Articles).

Our work has made a significant impact on industry and society: our visualizations have been deployed and demoed at Microsoft and built into widely used interpretability toolkits, our research is supported by NASA, and our interactive articles have been read by hundreds of thousands of people.
