{"688715":{"#nid":"688715","#data":{"type":"event","title":"Ph.D. Dissertation Defense - Benjamin Reichman","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003ETitle: \u003C\/strong\u003EEmotions in Large Language Models\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ECommittee\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003E1. Dr. Larry Heck (Advisor), School of Electrical and Computer Engineering, Georgia Tech\u003C\/p\u003E\u003Cp\u003E2. Dr. Kartik Goyal, School of Interactive Computing, Georgia Tech\u003C\/p\u003E\u003Cp\u003E3. Dr. Zsolt Kira, School of Interactive Computing, Georgia Tech\u003C\/p\u003E\u003Cp\u003E4. Dr. David Anderson, School of Electrical and Computer Engineering, Georgia Tech\u003C\/p\u003E\u003Cp\u003E5. Dr. May Wang, Wallace H. Coulter Department of Biomedical Engineering, Georgia Tech\u003C\/p\u003E\u003Cp\u003E6. Dr. Michael Wick, Oracle Labs Burlington\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003EThrough their training process, large language models (LLMs) learn and store a great deal of general knowledge. However, this knowledge follows a long-tail distribution, with many facts appearing only infrequently. This poses a challenge when a query requires information from the long tail; it is on such queries that LLMs tend to hallucinate. Retrieval-augmented generation (RAG) improves an LLM\u0027s ability to answer such questions by retrieving the needed information and adding it to the LLM\u0027s context. Part of this thesis examines this retrieval algorithm and analyzes how it works. A crucial component of RAG is the retrieval corpus itself. Most RAG benchmarks use Wikipedia or Wikipedia-like texts as their retrieval corpus, and these texts are written in a neutral, factual tone. However, when RAG systems retrieve internet-based content, they encounter text with diverse tones and linguistic styles, introducing challenges for downstream tasks.
This thesis addresses this problem by constructing and validating datasets that introduce sarcasm and emotional variation into retrieved passages, and by developing methods that enable LLMs to better comprehend such pragmatically inflected inputs. In doing so, it explores both prompt-based and translation-based approaches for adapting the tone of text, and analyzes how emotions are represented in LLMs\u2019 latent spaces, showing how these insights can be leveraged to improve RAG reading performance.\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Emotions in Large Language Models"}],"uid":"28475","created_gmt":"2026-03-03 21:46:21","changed_gmt":"2026-03-03 21:47:37","author":"Daniela Staiculescu","boilerplate_text":"","field_publication":"","field_article_url":"","field_event_time":{"event_time_start":"2026-03-13T09:00:00-04:00","event_time_end":"2026-03-13T11:00:00-04:00","event_time_end_last":"2026-03-13T11:00:00-04:00","gmt_time_start":"2026-03-13 13:00:00","gmt_time_end":"2026-03-13 15:00:00","gmt_time_end_last":"2026-03-13 15:00:00","rrule":null,"timezone":"America\/New_York"},"location":"Room 523A, TSRB","extras":[],"groups":[{"id":"434381","name":"ECE Ph.D. Dissertation Defenses"}],"categories":[],"keywords":[{"id":"100811","name":"Phd Defense"},{"id":"1808","name":"graduate students"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[{"id":"1788","name":"Other\/Miscellaneous"}],"invited_audience":[{"id":"78771","name":"Public"}],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}}}