{"690155":{"#nid":"690155","#data":{"type":"event","title":"PhD Defense by Seongmin Lee","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003ETitle\u003C\/strong\u003E: Visual and Algorithmic Explanations to Fortify AI Safety\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EDate\u003C\/strong\u003E: Monday, May 18, 2026\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ETime\u003C\/strong\u003E: 1PM to 3PM Eastern Time (US)\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ELocation\u003C\/strong\u003E: Coda 114 (1st floor conference room; just walk in, no special access needed)\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EVirtual Meeting\u003C\/strong\u003E:\u0026nbsp;\u003Ca href=\u0022https:\/\/gatech.zoom.us\/j\/91061621484\u0022\u003Ehttps:\/\/gatech.zoom.us\/j\/91061621484\u003C\/a\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ESeongmin Lee\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003ECS Ph.D. 
Candidate\u003C\/p\u003E\u003Cp\u003ESchool of Computational Science and Engineering\u003C\/p\u003E\u003Cp\u003ECollege of Computing\u003C\/p\u003E\u003Cp\u003EGeorgia Institute of Technology\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022https:\/\/seongmin.xyz\/\u0022\u003Ehttps:\/\/seongmin.xyz\/\u003C\/a\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ECommittee\u003C\/strong\u003E:\u003C\/p\u003E\u003Cp\u003EDr. Duen Horng (Polo) Chau - Advisor, Georgia Tech, School of Computational Science \u0026amp; Engineering\u003C\/p\u003E\u003Cp\u003EDr. Alex Endert - Georgia Tech, School of Interactive Computing\u003C\/p\u003E\u003Cp\u003EDr. Chao Zhang - Georgia Tech, School of Computational Science \u0026amp; Engineering\u003C\/p\u003E\u003Cp\u003EDr. Judy Hoffman - University of California, Irvine, Donald Bren School of Information and Computer Sciences\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EDr. 
Oliver Brdiczka - Adobe, Adobe Firefly\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EAbstract\u003C\/strong\u003E:\u003C\/p\u003E\u003Cp\u003EAs modern AI systems, such as diffusion-based generative models or large language models (LLMs), continue to grow in scale, complexity, and societal impact, understanding and mitigating their risks has become increasingly urgent yet challenging due to their black-box nature.\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EMy thesis addresses this critical challenge by developing novel visualizations and algorithms that help people understand the reasons and mechanisms behind AI behaviors, and take actionable steps to mitigate risks. Our work is organized into three complementary thrusts:\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003E(1) Attribute risks.\u003C\/em\u003E We begin by investigating how to uncover the underlying causes of AI risks. We present the first survey bridging LLM interpretation and safety. Building on an insight from our survey that training data can offer intuitive explanations for LLM generations, we develop LLM Attributor, which visually reveals the training data sources behind LLM-generated text, offering a novel way to diagnose unsafe outputs.\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003E(2) Explain failure.\u003C\/em\u003E While interpretation algorithms reveal causes of AI risks, their impact depends on how effectively they are communicated. To fill this gap, we introduce interactive visualizations that explain complex model mechanisms to broad audiences. Diffusion Explainer helps non-experts understand modern generative AI, outperforming traditional tools in user studies with 56 participants. 
Extending visualization to non-generative models, VisCUIT empowers experts to explore the mechanisms behind classifier failures.\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003E(3) Guide mitigation.\u003C\/em\u003E To reduce risks, we introduce CRAYON, a simple yet powerful algorithm that helps classifiers overcome reliance on irrelevant features using yes-no annotations; large-scale human evaluations with 5,893 participants show its superiority over 12 methods, even those requiring complex annotations, across three datasets. Extending to modern LLMs, we develop the SHINE algorithm to determine whether hallucinations stem from limited model knowledge or flawed generation strategies. SHINE effectively differentiates faithful text from two types of hallucinations across three LLMs, and outperforms seven hallucination detection methods across four datasets and four LLMs.\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003EMy PhD research develops practical, innovative, human-centered solutions for research problems grounded in real-world needs, from advancing AI education to improving LLM safety, leveraging close partnerships with leading companies like Google, Adobe, Cisco, JPMorgan Chase, ADP, and Avast. My work has made significant impacts across academia, industry, and society: Diffusion Explainer and its follow-up work Transformer Explainer have reached over 638k users in 210+ countries and have been integrated into university AI courses (e.g., MIT, Columbia). 
My research has been recognized with honors including the Korean Honor Scholarship, NCWIT AiC Collegiate Award Finalist, and IEEE VIS Best Poster Award.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EVisual and Algorithmic Explanations to Fortify AI Safety\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Visual and Algorithmic Explanations to Fortify AI Safety"}],"uid":"27707","created_gmt":"2026-05-05 20:07:19","changed_gmt":"2026-05-05 20:07:32","author":"Tatianna Richardson","boilerplate_text":"","field_publication":"","field_article_url":"","field_event_time":{"event_time_start":"2026-05-18T13:00:00-04:00","event_time_end":"2026-05-18T15:00:00-04:00","event_time_end_last":"2026-05-18T15:00:00-04:00","gmt_time_start":"2026-05-18 17:00:00","gmt_time_end":"2026-05-18 19:00:00","gmt_time_end_last":"2026-05-18 19:00:00","rrule":null,"timezone":"America\/New_York"},"location":"Coda 114 ","extras":[],"groups":[{"id":"221981","name":"Graduate Studies"}],"categories":[],"keywords":[{"id":"100811","name":"Phd Defense"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[{"id":"1788","name":"Other\/Miscellaneous"}],"invited_audience":[{"id":"78771","name":"Public"}],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}}}