<node id="689044">
  <nid>689044</nid>
  <type>event</type>
  <uid>
    <user id="27707"><![CDATA[27707]]></user>
  </uid>
  <created>1773934301</created>
  <changed>1773934343</changed>
  <title><![CDATA[PhD Defense by Alexander Bendeck]]></title>
  <body><![CDATA[<p>Title: Large Language Models as Computational Engines and Virtual Domain Experts for Visual Data Analysis</p><p>Date: Thursday, April 2, 2026<br>Time: 3-5pm Eastern time (U.S.)<br>Location: Technology Square Research Building (TSRB) 334<br>Virtual meeting (hybrid): https://gatech.zoom.us/j/5618662383?pwd=dTB2YjB5WnRiaHhFaHZITVNQeFJVUT09</p><p>Alexander Bendeck<br>Ph.D. Candidate in Computer Science<br>School of Interactive Computing<br>Georgia Institute of Technology</p><p>Committee<br>Dr. John Stasko (Advisor) - School of Interactive Computing, Georgia Institute of Technology<br>Dr. Alex Endert - School of Interactive Computing, Georgia Institute of Technology<br>Dr. Clio Andris - School of City and Regional Planning, Georgia Institute of Technology<br>Dr. Cindy Xiong Bearfield - School of Interactive Computing, Georgia Institute of Technology<br>Dr. Ross Maciejewski - School of Computing and Augmented Intelligence, Arizona State University</p><p>Abstract<br>Advances in generative artificial intelligence have led to the development of pre-trained large language models (LLMs) that are widely available and broadly useful. For data visualization researchers, LLMs’ vast domain knowledge and computational power hold promise for extending existing research threads in exciting directions. However, well-documented hallucination and inconsistency issues with LLMs can inhibit visualization system performance and erode user trust. We also have limited formal understanding of LLMs’ ability to help analysts with specific tasks.</p><p>In my thesis work, I study the potential use of LLMs as “virtual domain experts” during visual data analysis. This work has two main goals: first, to evaluate how well LLMs apply their knowledge bases to data- and chart-centric tasks; and second, to study user satisfaction and trust in LLM-powered visualization systems. I address the first goal through an empirical evaluation of the GPT-4V multimodal language model on a suite of visualization literacy tasks, characterizing LLM performance at reading and understanding visualizations. In subsequent work, I address both goals by assessing LLMs’ domain knowledge and generative capabilities on two specific tasks: question answering and data integration. For each task, I present formative studies, empirical evaluations, and design probes using proof-of-concept visualization systems, exploring both technical and human-centered perspectives on the use of LLMs during visual data analysis.</p>]]></body>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Large Language Models as Computational Engines and Virtual Domain Experts for Visual Data Analysis]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p>Large Language Models as Computational Engines and Virtual Domain Experts for Visual Data Analysis</p>]]></value>
    </item>
  </field_summary>
  <field_time>
    <item>
      <value><![CDATA[2026-04-02T15:00:00-04:00]]></value>
      <value2><![CDATA[2026-04-02T17:00:00-04:00]]></value2>
      <rrule><![CDATA[]]></rrule>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_time>
  <field_fee>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_fee>
  <field_extras>
      </field_extras>
  <field_audience>
          <item>
        <value><![CDATA[Public]]></value>
      </item>
      </field_audience>
  <field_media>
      </field_media>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_location>
    <item>
      <value><![CDATA[Technology Square Research Building (TSRB) 334]]></value>
    </item>
  </field_location>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_phone>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_phone>
  <field_url>
    <item>
      <url><![CDATA[]]></url>
      <title><![CDATA[]]></title>
            <attributes><![CDATA[]]></attributes>
    </item>
  </field_url>
  <field_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_email>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <links_related>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>221981</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[Graduate Studies]]></item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>1788</tid>
        <value><![CDATA[Other/Miscellaneous]]></value>
      </item>
      </field_categories>
  <field_keywords>
          <item>
        <tid>100811</tid>
        <value><![CDATA[PhD Defense]]></value>
      </item>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
