PhD Proposal by Terrance Law

Title: Automated Yet Transparent Data Insights


Terrance Law

Ph.D. Student in Computer Science

School of Interactive Computing

Georgia Institute of Technology

https://terrancelaw.github.io


Date: Monday, May 4th, 2020

Time: 3:00 pm to 5:00 pm (EDT)

BlueJeans: https://primetime.bluejeans.com/a2m/live-event/zegywwpc


Committee:

Dr. Alex Endert (advisor), School of Interactive Computing, Georgia Institute of Technology

Dr. John Stasko (advisor), School of Interactive Computing, Georgia Institute of Technology

Dr. Duen Horng (Polo) Chau, School of Computational Science and Engineering, Georgia Institute of Technology

Dr. Jian Zhao, Cheriton School of Computer Science, University of Waterloo


Abstract:

Visualization systems have begun to employ functionality that automatically extracts and communicates data insights. For example, given a sales data set, these systems may automatically generate textual descriptions of insights such as "Your market share of motorcycles is the highest in Georgia" and "The sales of motorcycles in Georgia have been decreasing over the past six months." Researchers have been investigating various applications of automated data insights, such as exploratory data analysis, visualization interpretation, and question answering. However, such investigations have been detached from the workflows of visualization users in practice.


A practical concern with these automatically generated data insights is a lack of transparency: systems that recommend automated insights often do not reveal how or why the insights are generated. In a typical scenario, visualization users manually mine insights from data and rigorously assess their validity. With this validation, they can confidently communicate the insights to other stakeholders and use them to inform decisions. When insights are generated automatically, however, visualization users tend to be skeptical because they do not know how the insights were derived. Users may be unwilling to trust insights that they cannot explain when communicating findings or making important decisions. This opaqueness can hinder the adoption of automated insights.


My work investigates the need for transparency in automated data insights and the effectiveness of explanations in supporting transparency. First, I proposed an organizational framework of the types and purposes of automated insights based on a systematic review of 20 systems that recommend them. I then interviewed 23 professional visualization users from 19 organizations to understand how they perceive automated insight tools. Next, I will conduct a crowdsourced study to investigate the effectiveness of why and why-not explanations in promoting user trust, understanding, and knowledge transfer. My thesis research is expected to offer a more structured understanding of automated data insights and to provide guidance to designers who aim to create transparent and trustworthy automated insight tools.
