{"627708":{"#nid":"627708","#data":{"type":"news","title":"New Tricks for an Old Technique: Asynchronous Methods for Exascale Computing","body":[{"value":"\u003Cp\u003EIn the realm of high-performance computing (HPC), also known as supercomputing, the idea of \u0026ldquo;better, faster, stronger\u0026rdquo; is only as good as the number of tasks a computer can efficiently run at once or complete before moving on to the next step.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/dblp.org\/pers\/hd\/w\/Wolfson=Pou:Jordi\u0022\u003EJordi Wolfson-Pou\u003C\/a\u003E\u003C\/strong\u003E, a Ph.D. student in the School of Computational Science and Engineering (CSE), is an HPC researcher who has spent the past few months traveling the globe, presenting new insights on an old technique that aims to tackle synchronization bottlenecks in supercomputers.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThese new insights highlight the efficacy of using\u0026nbsp;asynchronous multigrid iterative methods for solving large linear systems on exascale computers. The researchers believe this approach can speed up computations in a variety of fields, such as physics and engineering.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Solving equations in physics and engineering often requires highly accurate solutions, which means very large problems need to be solved. This is where supercomputers come in. 
The next generation of supercomputers will be capable of doing calculations at the exascale and will certainly be fast, but synchronization will limit their speed,\u0026rdquo; said Wolfson-Pou.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EIterative methods are long-established computing techniques that start with an initial guess and generate a sequence of successively improved approximations to the true solution.\u0026nbsp;These methods have achieved remarkable results and lend themselves naturally to carrying out many calculations simultaneously. However, they typically require one or more synchronization points within each iteration.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026ldquo;Supercomputers are composed of many parallel processes doing calculations concurrently. If many processes have to synchronize, some may be idle while waiting for others to finish,\u0026rdquo; said Wolfson-Pou. \u0026ldquo;For example, this could be due to some processes having to do more calculations than others, or to the hardware one process uses being slower than the hardware another uses. 
In asynchronous methods, the faster processes simply move on to the next step using the most recently available information.\u0026rdquo;\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFrom Brazil to China, Wolfson-Pou presented the new observations he and CSE Professor\u0026nbsp;\u003Cstrong\u003E\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~echow\/\u0022\u003EEdmond Chow\u003C\/a\u003E\u0026nbsp;\u003C\/strong\u003Ediscovered while examining multigrid methods in an effort to understand how they can be executed asynchronously.\u003C\/p\u003E\r\n\r\n\u003Cp\u003ETheir findings are detailed in the paper,\u0026nbsp;\u003Ca href=\u0022https:\/\/www.cc.gatech.edu\/~echow\/pubs\/jwp-chow-ipdps19.pdf\u0022\u003E\u003Cem\u003EAsynchronous Multigrid Methods,\u003C\/em\u003E\u0026nbsp;\u003C\/a\u003Ewhich\u0026nbsp;was presented at the following:\u003C\/p\u003E\r\n\r\n\u003Col\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022http:\/\/www.ipdps.org\/ipdps2019\/2019-call-for-papers.html\u0022\u003EInternational Parallel and Distributed Processing Symposium\u003C\/a\u003E\u0026nbsp;(IPDPS), May 20-24, Rio de Janeiro, Brazil\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022https:\/\/iciam2019.org\/\u0022\u003EInternational Conference on Industrial and Applied Mathematics\u003C\/a\u003E\u0026nbsp;(ICIAM), July 15-19, Valencia, Spain\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022http:\/\/www.multigrid.org\/img2019\/\u0022\u003EInternational Multigrid Conference\u003C\/a\u003E\u0026nbsp;(IMG), August 11-16, Kunming, China\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Ca href=\u0022http:\/\/grandmaster.colorado.edu\/summit\/schedule.php\u0022\u003EAMG Summit\u003C\/a\u003E, September 30 \u0026ndash; October 3, Santa Fe, New Mexico\u003C\/li\u003E\r\n\u003C\/ol\u003E\r\n\r\n\u003Cp\u003EThe paper\u0026rsquo;s experimental results show that asynchronous multigrid can take less time than classical multigrid to converge to the 
solution.\u003C\/p\u003E\r\n","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":"","field_summary_sentence":[{"value":"CSE researchers present a new perspective on applying asynchronous methods to combat bottlenecks in exascale computing."}],"uid":"34540","created_gmt":"2019-10-17 13:19:20","changed_gmt":"2019-10-17 13:27:25","author":"Kristen Perez","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2019-10-17T00:00:00-04:00","iso_date":"2019-10-17T00:00:00-04:00","tz":"America\/New_York"},"extras":[],"hg_media":{"627707":{"id":"627707","type":"image","title":"Asynchronous Methods for HPC ","body":null,"created":"1571318114","gmt_created":"2019-10-17 13:15:14","changed":"1571318114","gmt_changed":"2019-10-17 13:15:14","alt":"Global-res and local-res partitionings for the Multadd example presented in Section IV for each step of the computation of the corrections e0 and e1. Arrows denote moving to the next step of the computation. Sync() denotes a synchronization point, where the list of threads passed to Sync() denotes the threads that synchronize. Blue Sync() denotes a synchronization for asynchronous multigrid, and red Sync() denotes a synchronization point for synchronous multigrid. 
","file":{"fid":"239017","name":"Screen Shot 2019-10-17 at 9.13.14 AM.png","image_path":"\/sites\/default\/files\/images\/Screen%20Shot%202019-10-17%20at%209.13.14%20AM.png","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/Screen%20Shot%202019-10-17%20at%209.13.14%20AM.png","mime":"image\/png","size":306882,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/Screen%20Shot%202019-10-17%20at%209.13.14%20AM.png?itok=yjQjYlN7"}}},"media_ids":["627707"],"groups":[{"id":"624060","name":"Center for High Performance Computing (CHiPC)"},{"id":"47223","name":"College of Computing"},{"id":"50877","name":"School of Computational Science and Engineering"}],"categories":[{"id":"134","name":"Student and Faculty"},{"id":"8862","name":"Student Research"},{"id":"135","name":"Research"}],"keywords":[{"id":"702","name":"hpc"},{"id":"181217","name":"cse-hpc"},{"id":"3427","name":"High performance computing"},{"id":"172914","name":"Exascale Computing"},{"id":"182689","name":"Jordi Wolfson-Pou"},{"id":"182690","name":"Edmond Chow"},{"id":"4305","name":"cse"},{"id":"11559","name":"CSE computational science engineering"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EKristen Perez\u003C\/p\u003E\r\n\r\n\u003Cp\u003ECommunications Officer\u003C\/p\u003E\r\n","format":"limited_html"}],"email":["kristen.perez@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}}}