{"689941":{"#nid":"689941","#data":{"type":"event","title":"SCS Visitor Seminar- Ilias Diakonikolas","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003ETalk Title: \u003C\/strong\u003EAlgorithmic Foundations of Robust Learning\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ESpeaker: \u0026nbsp;\u003C\/strong\u003EIlias Diakonikolas, Professor, The University of Wisconsin\u2013Madison\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EAbstract:\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003ERobustness is a basic requirement for trustworthy machine learning, yet achieving it efficiently in high dimensions has long been a fundamental challenge. For decades, the prevailing view was that learning algorithms with strong robustness guarantees necessarily come with prohibitive computational cost, creating a sharp tension between statistical guarantees and algorithmic tractability. This talk describes a research program aimed at overcoming this barrier through an algorithmic theory of robust learning.\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;I will describe two interconnected threads within this research program. The first develops a unified framework for efficient robust high-dimensional estimation, including the first polynomial-time algorithms for several fundamental unsupervised learning tasks under adversarial corruption. The second studies supervised learning under noisy labels, with an emphasis on learning predictors with low-dimensional latent representations. I will conclude by discussing future directions, including robustness beyond worst-case corruption and the efficient learning of richer nonlinear representations.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EBio:\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003Elias Diakonikolas is the Lubar Professor in the Department of Computer Sciences at UW Madison. He obtained a Diploma in electrical and computer engineering from the National Technical University of Athens and a Ph.D. 
in computer science from Columbia University, where he was advised by Mihalis Yannakakis. Before moving to UW, he was an Andrew and Erna Viterbi Early Career Chair at USC and a faculty member at the University of Edinburgh. Prior to that, he was a Simons postdoctoral fellow in theoretical computer science at the University of California, Berkeley. His research is on the algorithmic foundations of massive data sets, in particular on designing efficient algorithms for fundamental problems in machine learning. He is a recipient of the ACM Grace Murray Hopper Award, a Sloan Fellowship, an NSF CAREER Award, a Romnes Faculty Fellowship, a Google Faculty Research Award, a Marie Curie Fellowship, best paper awards at NeurIPS and COLT, the IBM Research Pat Goldberg Best Paper Award, and an honorable mention in the George Nicholson competition from the INFORMS society. With Daniel Kane, Ilias wrote the textbook \u0022Algorithmic High-Dimensional Robust Statistics\u0022, published by Cambridge University Press.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003E\u003Cstrong\u003ETalk Title: \u003C\/strong\u003EAlgorithmic Foundations of Robust Learning\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ESpeaker: \u003C\/strong\u003EIlias Diakonikolas, Professor, The University of Wisconsin\u2013Madison\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"SCS Visitor Seminar - Ilias Diakonikolas, Professor, The University of Wisconsin\u2013Madison"}],"uid":"36532","created_gmt":"2026-04-21 20:35:11","changed_gmt":"2026-04-21 20:35:11","author":"Morgan Usry","boilerplate_text":"","field_publication":"","field_article_url":"","field_event_time":{"event_time_start":"2026-04-28T11:00:00-04:00","event_time_end":"2026-04-28T12:00:00-04:00","event_time_end_last":"2026-04-28T12:00:00-04:00","gmt_time_start":"2026-04-28 15:00:00","gmt_time_end":"2026-04-28 
16:00:00","gmt_time_end_last":"2026-04-28 16:00:00","rrule":null,"timezone":"America\/New_York"},"location":"KACB 2447","extras":[],"groups":[{"id":"47223","name":"College of Computing"},{"id":"322011","name":"College of Computing Events"},{"id":"50875","name":"School of Computer Science"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[{"id":"194683","name":"Talk"}],"invited_audience":[{"id":"78761","name":"Faculty\/Staff"},{"id":"177814","name":"Postdoc"},{"id":"174045","name":"Graduate students"},{"id":"78751","name":"Undergraduate students"}],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}},"689818":{"#nid":"689818","#data":{"type":"event","title":"HotCSE Seminar: ShengYun (Anthony) Peng","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EName: \u003C\/strong\u003ESchool of CSE CS Ph.D. Candidate ShengYun (Anthony) Peng\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EDate:\u0026nbsp;\u003C\/strong\u003EWednesday, April 29, 2026, at 12:00 p.m.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ELocation:\u003C\/strong\u003E\u0026nbsp;Coda, Room 114 (\u003Ca href=\u0022https:\/\/www.google.com\/maps\/place\/Coda\/@33.7752651,-84.3876426,15z\/data=!4m6!3m5!1s0x88f5046677950223:0x7fd1ad077b382c98!8m2!3d33.7752651!4d-84.3876426!16s%2Fg%2F11c6lvs7sl?entry=ttu\u0022\u003EGoogle Maps link\u003C\/a\u003E)\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003ELunch provided!\u003C\/em\u003E\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ETitle:\u0026nbsp;\u003C\/strong\u003ESafety Alignment of Generative Foundation Models\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EAbstract\u003C\/strong\u003E: Modern LLMs are safety-aligned through supervised finetuning (SFT) and reinforcement learning from human feedback (RLHF) to mitigate harmful, undesirable, or disallowed outputs. 
Despite ongoing progress, LLMs still exhibit critical safety gaps: models can be jailbroken into revealing harmful content, often overrefuse benign queries, and fail to maintain safety under adversarial scenarios. My dissertation research advances the safety alignment of generative foundation models by developing principled tools, architectures, and training methods that strengthen their robustness and reliability at scale. Specifically, this thesis focuses on three complementary thrusts: a) Understanding and shaping the safety landscape of LLMs, b) Internalizing safety in agentic reasoning intelligence, and c) Grounding safety and robustness in multimodal perception.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EBio\u003C\/strong\u003E: Anthony is a CS Ph.D. candidate at Georgia Tech working with Polo Chau. His thesis research has elevated foundational AI efforts at Nvidia, Meta, IBM, Intel, and ADP via internships and collaborations, and has resulted in several first-author publications and awards at NeurIPS, ACL, ICCV, EMNLP, CVPR, and BMVC. His research has contributed to the AI foundation of multiple funded industry research grants totaling over $1.4M. 
Learn more about him at \u003Ca href=\u0022https:\/\/shengyun-peng.github.io\/\u0022\u003Eshengyun-peng.github.io\u003C\/a\u003E.\u003C\/p\u003E\u003Cp\u003E\u003Cem\u003E\u003Cstrong\u003EAbout HotCSE\u003C\/strong\u003E\u003C\/em\u003E\u003C\/p\u003E\u003Cp\u003EHotCSE is an academic seminar series that brings Ph.D. students in Computational Science and Engineering together to discuss interesting topics, including high-performance computing, machine learning, data analysis, simulation, computational sustainability, and medical informatics.\u003C\/p\u003E\u003Cp\u003EThe talks have always been enjoyable and range from quite informal to formal, conference-style presentations. Either chalk or slides can be used to help people follow your talk. It is also a great forum to practice conference talks and bounce around new ideas.\u003C\/p\u003E\u003Cp\u003EThe talks are currently sponsored by the School of Computational Science and Engineering. The goal of these talks is slightly broader than CSE itself: we want to bring in more people from outside CSE to discuss their related work here.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003E\u003Cstrong\u003EName: \u003C\/strong\u003ESchool of CSE CS Ph.D. 
Candidate ShengYun (Anthony) Peng\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EDate:\u0026nbsp;\u003C\/strong\u003EWednesday, April 29, 2026, at 12:00 p.m.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ELocation:\u003C\/strong\u003E\u0026nbsp;Coda, Room 114 (\u003Ca href=\u0022https:\/\/www.google.com\/maps\/place\/Coda\/@33.7752651,-84.3876426,15z\/data=!4m6!3m5!1s0x88f5046677950223:0x7fd1ad077b382c98!8m2!3d33.7752651!4d-84.3876426!16s%2Fg%2F11c6lvs7sl?entry=ttu\u0022\u003EGoogle Maps link\u003C\/a\u003E)\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ETitle:\u0026nbsp;\u003C\/strong\u003ESafety Alignment of Generative Foundation Models\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Seminar Title: Safety Alignment of Generative Foundation Models"}],"uid":"36319","created_gmt":"2026-04-17 12:22:27","changed_gmt":"2026-04-17 12:30:29","author":"Bryant Wine","boilerplate_text":"","field_publication":"","field_article_url":"","field_event_time":{"event_time_start":"2026-04-29T12:00:00-04:00","event_time_end":"2026-04-29T13:00:00-04:00","event_time_end_last":"2026-04-29T13:00:00-04:00","gmt_time_start":"2026-04-29 16:00:00","gmt_time_end":"2026-04-29 17:00:00","gmt_time_end_last":"2026-04-29 17:00:00","rrule":null,"timezone":"America\/New_York"},"location":"Coda, Room 114","extras":["free_food"],"hg_media":{"679985":{"id":"679985","type":"image","title":"Anthony-Peng.jpg","body":null,"created":"1776428985","gmt_created":"2026-04-17 12:29:45","changed":"1776428985","gmt_changed":"2026-04-17 12:29:45","alt":"ShengYun Anthony Peng 
HotCSE","file":{"fid":"264214","name":"Anthony-Peng.jpg","image_path":"\/sites\/default\/files\/2026\/04\/17\/Anthony-Peng.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/04\/17\/Anthony-Peng.jpg","mime":"image\/jpeg","size":17786,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/04\/17\/Anthony-Peng.jpg?itok=XAafvFai"}}},"media_ids":["679985"],"related_links":[{"url":"https:\/\/hotcse.gatech.edu\/","title":"HotCSE"}],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50877","name":"School of Computational Science and Engineering"}],"categories":[],"keywords":[],"core_research_areas":[],"news_room_topics":[],"event_categories":[{"id":"1795","name":"Seminar\/Lecture\/Colloquium"}],"invited_audience":[{"id":"194945","name":"Alumni"},{"id":"78761","name":"Faculty\/Staff"},{"id":"177814","name":"Postdoc"},{"id":"78771","name":"Public"},{"id":"174045","name":"Graduate students"},{"id":"78751","name":"Undergraduate students"}],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ECSE Graduate Student Association\u003C\/p\u003E\u003Cp\u003Ecse-gsa@cc.gatech.edu\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}