{"689451":{"#nid":"689451","#data":{"type":"event","title":"PhD Defense by Kartik Sharma","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003ETitle:\u0026nbsp;\u003C\/strong\u003EAdapting Models to User Intent for Safe and Controllable Generation\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EDate:\u003C\/strong\u003E\u0026nbsp;Tuesday, April 21, 2026\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ETime:\u003C\/strong\u003E\u0026nbsp;2:00 p.m. - 4:00 p.m. ET\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ELocation (in-person + virtual):\u003C\/strong\u003E\u0026nbsp;Coda C1303 Glenwood [\u003Ca href=\u0022https:\/\/gatech.zoom.us\/j\/95115725740?pwd=ahXrtkkVogbVHFNqxJB7u4esvxFaMi.1\u0022\u003EVirtual Link\u003C\/a\u003E]\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EKartik Sharma\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003EPh.D. candidate\u003C\/p\u003E\u003Cp\u003ESchool of Computational Science and Engineering\u003C\/p\u003E\u003Cp\u003ECollege of Computing\u003C\/p\u003E\u003Cp\u003EGeorgia Institute of Technology\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ECommittee:\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003EDr. Srijan Kumar (Advisor, Computational Science and Engineering, Georgia Institute of Technology)\u003C\/p\u003E\u003Cp\u003EDr. Rakshit Trivedi (Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology)\u003C\/p\u003E\u003Cp\u003EDr. Chao Zhang (Computational Science and Engineering, Georgia Institute of Technology)\u003C\/p\u003E\u003Cp\u003EDr. Bo Dai (Computational Science and Engineering, Georgia Institute of Technology)\u003C\/p\u003E\u003Cp\u003EDr. 
Polo Chau (Computational Science and Engineering, Georgia Institute of Technology)\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EAbstract\u003C\/strong\u003E:\u003C\/p\u003E\u003Cp\u003EAs artificial intelligence (AI) is increasingly deployed in high-stakes, open-world environments, the demand for systems that are both highly capable and strictly trustworthy has become paramount. Modern generative models require high test-time plasticity to remain responsive to complex and dynamic human intentions. However, this same architectural flexibility inadvertently leaves them vulnerable to manipulation by bad actors and to unintended changes. To ensure AI remains an empowering tool rather than a fragile liability, this dissertation presents a comprehensive framework for adapting models safely to user intent. First, this work introduces interpretable test-time mechanisms that empower users to precisely steer model behavior. These interventions are designed to enforce hard constraints in graph diffusion models and to guide large language and imitation learning models toward exemplary and directed behavior descriptions. However, because unconstrained plasticity inherently alters a model\u2019s risk profile, the research then systematically diagnoses these systems, exposing vulnerabilities in dynamic graph models and multimodal language models. To safely support user-driven controllability without succumbing to these adversarial risks, the final phase demonstrates how structural adaptivity of system prompts can protect a language model\u2019s core without compromising its benign behavior. 
Ultimately, this research works towards resolving the modern stability-plasticity dilemma, transforming vulnerable networks into deeply steerable and secure extensions of human intent.\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EAdapting Models to User Intent for Safe and Controllable Generation\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Adapting Models to User Intent for Safe and Controllable Generation"}],"uid":"27707","created_gmt":"2026-04-03 19:51:55","changed_gmt":"2026-04-03 19:52:20","author":"Tatianna Richardson","boilerplate_text":"","field_publication":"","field_article_url":"","field_event_time":{"event_time_start":"2026-04-21T14:00:00-04:00","event_time_end":"2026-04-21T16:00:00-04:00","event_time_end_last":"2026-04-21T16:00:00-04:00","gmt_time_start":"2026-04-21 18:00:00","gmt_time_end":"2026-04-21 20:00:00","gmt_time_end_last":"2026-04-21 20:00:00","rrule":null,"timezone":"America\/New_York"},"location":"Coda C1303 Glenwood ","extras":[],"groups":[{"id":"221981","name":"Graduate Studies"}],"categories":[],"keywords":[{"id":"100811","name":"Phd Defense"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[{"id":"1788","name":"Other\/Miscellaneous"}],"invited_audience":[{"id":"78771","name":"Public"}],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}}}