{"689163":{"#nid":"689163","#data":{"type":"event","title":"PhD Defense by Bolin Lai","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003ETitle: Multimodal Human Behavior Modeling: From Understanding to Generation\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EDate: Tuesday, March 31st\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ETime: 3:00-5:00pm ET\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ERemote Link:\u0026nbsp;\u003C\/strong\u003E\u003Ca href=\u0022https:\/\/gatech.zoom.us\/j\/96560653822?pwd=PKFEAdNbnxP79Qua7qddx0MZ6qeIxo.1\u0026amp;from=addon\u0022\u003Ehttps:\/\/gatech.zoom.us\/j\/96560653822?pwd=PKFEAdNbnxP79Qua7qddx0MZ6qeIxo.1\u0026amp;from=addon\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EBolin Lai\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003EMachine Learning PhD Student\u003C\/p\u003E\u003Cp\u003ESchool of Electrical and Computer Engineering\u003Cbr\u003EGeorgia Institute of Technology\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ECommittee\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003E1. Dr. James Rehg (Advisor, CS, UIUC)\u003C\/p\u003E\u003Cp\u003E2. Dr. Zsolt Kira (Advisor, IC, Georgia Tech)\u003C\/p\u003E\u003Cp\u003E3. Dr. James Hays (IC, Georgia Tech)\u003C\/p\u003E\u003Cp\u003E4. Dr. Judy Hoffman (IC, Georgia Tech)\u003C\/p\u003E\u003Cp\u003E5. Dr. Humphrey Shi (IC, Georgia Tech)\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EAbstract\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003EHuman behavior modeling is a critical step toward developing AI agents that can assist us in various tasks. In contrast with learning about objects, scenes, and textures, human behaviors are inherently purposeful, guided by underlying intentions and goals. 
Additionally, human behaviors involve precise and adaptive interactions with the environment, characterized by fine-grained and nuanced control. These two key differences call for innovative approaches that allow AI models to understand the intentions behind our behaviors and to capture the nuances of our actions across different tasks. In my thesis, I will elaborate on my research on leveraging multimodal inputs to capture underlying intentions and enable precise control over human actions in both understanding and generation problems. My thesis comprises four chapters: audio-visual gaze anticipation, multimodal social behavior understanding, text-guided egocentric action generation, and training-free text-image conditioned action generation. The ultimate goal of my research is to enable AI models to better understand and interact with people, paving the way toward human-centric artificial intelligence.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EMultimodal Human Behavior Modeling: From Understanding to Generation\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Multimodal Human Behavior Modeling: From Understanding to Generation"}],"uid":"27707","created_gmt":"2026-03-24 18:34:30","changed_gmt":"2026-03-24 18:34:30","author":"Tatianna Richardson","boilerplate_text":"","field_publication":"","field_article_url":"","field_event_time":{"event_time_start":"2026-03-31T15:00:00-04:00","event_time_end":"2026-03-31T17:00:00-04:00","event_time_end_last":"2026-03-31T17:00:00-04:00","gmt_time_start":"2026-03-31 19:00:00","gmt_time_end":"2026-03-31 21:00:00","gmt_time_end_last":"2026-03-31 21:00:00","rrule":null,"timezone":"America\/New_York"},"location":"ZOOM","extras":[],"groups":[{"id":"221981","name":"Graduate Studies"}],"categories":[],"keywords":[{"id":"100811","name":"Phd Defense"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[{"id":"1788","name":"Other\/Miscellaneous"}],"invited_audience":[{"id":"78771","name":"Public"}],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}}}