{"689474":{"#nid":"689474","#data":{"type":"event","title":"School of CSE Seminar Series: Abhinav Bhatele","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003ESpeaker:\u003C\/strong\u003E\u0026nbsp;Abhinav Bhatele, associate professor at the University of Maryland\u003Cbr\u003E\u003Cstrong\u003EDate and Time:\u003C\/strong\u003E\u0026nbsp;April 17, 2:00-3:00 p.m.\u003Cbr\u003E\u003Cstrong\u003ELocation:\u003C\/strong\u003E\u0026nbsp;Coda 114\u003Cbr\u003E\u003Cstrong\u003EHost:\u003C\/strong\u003E\u0026nbsp;Rich Vuduc\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ETitle:\u003C\/strong\u003E\u0026nbsp;\u003Cem\u003EBreaking the Scaling Wall in Distributed Deep Learning\u003C\/em\u003E\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EAbstract:\u003C\/strong\u003E Significant advances in computer architecture (development of extremely powerful server-class GPUs) and parallel computing (scalable libraries for dense and sparse linear algebra) have contributed to the ongoing AI revolution. In particular, distributed training of deep neural networks (DNNs) relies on scalable matrix multiplication algorithms and efficient communication on high-speed interconnects. Pre-training and fine-tuning large language models (LLMs) with hundreds of billions to trillions of parameters and graph neural networks (GNNs) on extremely large graphs require hundreds to tens of thousands of GPUs. However, such training often suffers from significant scaling bottlenecks such as high communication overheads and load imbalance.\u003C\/p\u003E\u003Cp\u003EIn this talk, I will present several systems research directions that directly impact AI model training. First, I will describe my group\u0027s work in using a three-dimensional parallel algorithm for matrix multiplication in large-scale LLM training. Second, I will demonstrate the application of the same algorithm to full-graph and mini-batch GNN training when working with extremely large graphs. 
Finally, I will also discuss the need for scalable collective communication routines for large-scale DNN training.\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003EBio:\u003C\/strong\u003E Abhinav Bhatele is an associate professor in the Department of Computer Science and director of the \u003Ca href=\u0022https:\/\/pssg.cs.umd.edu\/\u0022\u003EParallel Software and Systems Group\u003C\/a\u003E at the University of Maryland, College Park. His research interests are broadly in systems and AI, with a focus on parallel computing and distributed AI. He has published research in parallel programming models and runtimes, network design and simulation, applications of machine learning to parallel systems, parallel deep learning, and analyzing, visualizing, modeling, and optimizing the performance of parallel software and systems. Abhinav has received best paper awards at Euro-Par 2009, IPDPS 2013, IPDPS 2016, and PDP 2024, and a best poster award at SC 2023. He was selected as a recipient of the \u003Ca href=\u0022http:\/\/www.ieee-tcsc.org\/early.php\u0022\u003EIEEE TCSC Award for Excellence in Scalable Computing (Early Career)\u003C\/a\u003E in 2014, the \u003Ca href=\u0022https:\/\/www.llnl.gov\/news\/laboratory-researchers-recognized-accomplishments-early-and-mid-career-0\u0022\u003ELLNL Early and Mid-Career Recognition\u003C\/a\u003E award in 2018, the NSF CAREER award in 2021, the \u003Ca href=\u0022http:\/\/www.ieee-tcsc.org\/middle.php\u0022\u003EIEEE TCSC Award for Excellence in Scalable Computing (Middle Career)\u003C\/a\u003E in 2023, and the \u003Ca href=\u0022https:\/\/cs.illinois.edu\/about\/awards\/alumni-awards\/alumni-awards-past-recipients\/66697\u0022\u003EUIUC CS Early Career Academic Achievement Alumni Award\u003C\/a\u003E in 2024.\u003C\/p\u003E\u003Cp\u003EAbhinav received a B.Tech. degree in Computer Science and Engineering from I.I.T. Kanpur, India, in May 2005, and M.S. and Ph.D. 
degrees in Computer Science from the University of Illinois at Urbana-Champaign in 2007 and 2010, respectively. He was a post-doc and later a computer scientist in the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory from 2011 to 2019. Abhinav was an associate editor of the IEEE Transactions on Parallel and Distributed Systems (TPDS) from 2022 to 2024. He was one of the General Chairs of IEEE Cluster 2022 and Research Papers Chair of ISC 2023.\u003C\/p\u003E","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003E\u003Cstrong\u003ESpeaker:\u003C\/strong\u003E\u0026nbsp;Abhinav Bhatele, associate professor at the University of Maryland\u003Cbr\u003E\u003Cstrong\u003EDate and Time:\u003C\/strong\u003E\u0026nbsp;April 17, 2:00-3:00 p.m.\u003Cbr\u003E\u003Cstrong\u003ELocation:\u003C\/strong\u003E\u0026nbsp;Coda 114\u003Cbr\u003E\u003Cstrong\u003EHost:\u003C\/strong\u003E\u0026nbsp;Rich Vuduc\u003C\/p\u003E\u003Cp\u003E\u003Cstrong\u003ETitle:\u003C\/strong\u003E\u0026nbsp;\u003Cem\u003EBreaking the Scaling Wall in Distributed Deep Learning\u003C\/em\u003E\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"School of CSE hosts a seminar from University of Maryland Associate Professor Abhinav Bhatele"}],"uid":"36319","created_gmt":"2026-04-06 14:51:44","changed_gmt":"2026-04-06 14:56:35","author":"Bryant Wine","boilerplate_text":"","field_publication":"","field_article_url":"","field_event_time":{"event_time_start":"2026-04-17T14:00:00-04:00","event_time_end":"2026-04-17T15:00:00-04:00","event_time_end_last":"2026-04-17T15:00:00-04:00","gmt_time_start":"2026-04-17 18:00:00","gmt_time_end":"2026-04-17 19:00:00","gmt_time_end_last":"2026-04-17 19:00:00","rrule":null,"timezone":"America\/New_York"},"location":"Coda, Room 114","extras":[],"hg_media":{"679866":{"id":"679866","type":"image","title":"Abhinav-Bhatele.jpg","body":null,"created":"1775487284","gmt_created":"2026-04-06 
14:54:44","changed":"1775487284","gmt_changed":"2026-04-06 14:54:44","alt":"CSE Seminar Abhinav Bhatele","file":{"fid":"264076","name":"Abhinav-Bhatele.jpg","image_path":"\/sites\/default\/files\/2026\/04\/06\/Abhinav-Bhatele.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/2026\/04\/06\/Abhinav-Bhatele.jpg","mime":"image\/jpeg","size":32593,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/2026\/04\/06\/Abhinav-Bhatele.jpg?itok=_It-OtrX"}}},"media_ids":["679866"],"groups":[{"id":"47223","name":"College of Computing"},{"id":"50877","name":"School of Computational Science and Engineering"}],"categories":[],"keywords":[{"id":"166983","name":"School of Computational Science and Engineering"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[{"id":"1795","name":"Seminar\/Lecture\/Colloquium"}],"invited_audience":[{"id":"194945","name":"Alumni"},{"id":"78761","name":"Faculty\/Staff"},{"id":"177814","name":"Postdoc"},{"id":"78771","name":"Public"},{"id":"174045","name":"Graduate students"},{"id":"78751","name":"Undergraduate students"}],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003ERich Vuduc (richie@cc.gatech.edu)\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}