<node id="627708">
  <nid>627708</nid>
  <type>news</type>
  <uid>
    <user id="34540"><![CDATA[34540]]></user>
  </uid>
  <created>1571318360</created>
  <changed>1571318845</changed>
  <title><![CDATA[New Tricks for an Old Technique: Asynchronous Methods for Exascale Computing]]></title>
  <body><![CDATA[<p>In the realm of high-performance computing (HPC), also known as supercomputing, the idea of &ldquo;better, faster, stronger&rdquo; is only as good as the number of tasks a computer can run efficiently at once before moving on to the next step.</p>

<p><strong><a href="https://dblp.org/pers/hd/w/Wolfson=Pou:Jordi">Jordi Wolfson-Pou</a></strong>, a Ph.D. student in the School of Computational Science and Engineering (CSE), is an HPC researcher who has spent the past few months traveling the globe, presenting new insights on an old solution that aims to tackle synchronization bottlenecks in supercomputers.</p>

<p>These new insights highlight the efficacy of using asynchronous multigrid iterative methods for solving large linear systems on exascale computers. The researchers believe this approach can speed up computations in a variety of fields, such as physics and engineering.</p>

<p>&ldquo;Solving equations in physics and engineering often requires highly accurate solutions, which means very large problems need to be solved. This is where supercomputers come in. The next generation of supercomputers will be capable of doing calculations at the exascale and will certainly be fast, but synchronization will limit their speed,&rdquo; said Wolfson-Pou.</p>

<p>Iterative methods are long-standing computing techniques that start with an initial guess and generate a sequence of increasingly accurate approximations to the true solution. These methods parallelize naturally, since many of their calculations can be carried out simultaneously. However, by their nature they require one or more synchronization points within each iteration.</p>
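<p>As a concrete illustration (a minimal sketch, not code from the paper), the classical Jacobi method below starts from a zero guess and repeatedly improves it; the function name <code>jacobi</code> and the small test system are hypothetical choices for this example. Every component of the new iterate reads only the previous iterate, which is why a parallel implementation must synchronize once per sweep.</p>

```python
def jacobi(A, b, tol=1e-10, max_iter=1000):
    """Jacobi iteration: start with a guess, repeatedly improve it.

    Every entry of x_new is computed from the *previous* iterate x,
    so in a parallel setting all processes must synchronize before
    the next sweep can begin.
    """
    n = len(b)
    x = [0.0] * n  # initial guess
    for _ in range(max_iter):
        # One full sweep; reads only the old iterate x.
        x_new = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# A small diagonally dominant system, so Jacobi is guaranteed to converge.
A = [[4.0, 1.0], [2.0, 5.0]]
b = [1.0, 2.0]
x = jacobi(A, b)  # converges toward [1/6, 1/3]
```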

<p>&ldquo;Supercomputers are composed of many parallel processes doing calculations concurrently. If many processes have to synchronize, some may be idle while waiting for others to finish,&rdquo; said Wolfson-Pou. &ldquo;For example, this could be because some processes have more calculations to do than others, or because the hardware one process runs on is slower than another&rsquo;s. In asynchronous methods, the faster processes simply move on to the next step using the most recently available information.&rdquo;</p>
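<p>To make the contrast concrete, here is a toy single-process simulation (an illustrative sketch under assumed names, not the paper's algorithm) of an asynchronous iteration: components are updated one at a time in random order, and each update reads whatever values are currently available rather than waiting for a full synchronized sweep.</p>

```python
import random

def async_jacobi_sim(A, b, tol=1e-10, max_updates=10000, seed=0):
    """Toy simulation of an asynchronous iteration (hypothetical sketch).

    A random "process" updates one component at a time, reading whatever
    values are currently in x -- there is no sweep-wide synchronization,
    mimicking fast processes that move on with the most recently
    available information.
    """
    rng = random.Random(seed)
    n = len(b)
    x = [0.0] * n  # initial guess
    for _ in range(max_updates):
        i = rng.randrange(n)  # a "process" picks its component to update
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
        # Stop once the residual ||Ax - b|| (max norm) is small enough.
        residual = max(
            abs(sum(A[k][j] * x[j] for j in range(n)) - b[k]) for k in range(n)
        )
        if residual < tol:
            break
    return x

# Same small diagonally dominant system as a synchronous Jacobi would solve.
A = [[4.0, 1.0], [2.0, 5.0]]
b = [1.0, 2.0]
x = async_jacobi_sim(A, b)
```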

<p>From Brazil to China, Wolfson-Pou presented the new observations he and CSE Professor&nbsp;<strong><a href="https://www.cc.gatech.edu/~echow/">Edmond Chow</a>&nbsp;</strong>discovered while examining multigrid methods in an effort to understand how they can be executed asynchronously.</p>

<p>Their findings are detailed in the paper,&nbsp;<a href="https://www.cc.gatech.edu/~echow/pubs/jwp-chow-ipdps19.pdf"><em>Asynchronous Multigrid Methods,</em>&nbsp;</a>which&nbsp;was presented at the following:</p>

<ol>
	<li><a href="http://www.ipdps.org/ipdps2019/2019-call-for-papers.html">International Parallel and Distributed Processing Symposium</a>&nbsp;(IPDPS), May 20-24, Rio de Janeiro, Brazil</li>
	<li><a href="https://iciam2019.org/">International Conference on Industrial and Applied Mathematics</a>&nbsp;(ICIAM), July 15-19, Valencia, Spain</li>
	<li><a href="http://www.multigrid.org/img2019/">International Multigrid Conference</a>&nbsp;(IMG), August 11-16, Kunming, China</li>
	<li><a href="http://grandmaster.colorado.edu/summit/schedule.php">AMG Summit</a>, September 30 &ndash; October 3, Santa Fe, New Mexico</li>
</ol>

<p>The paper&rsquo;s experimental results show that asynchronous multigrid can converge to a solution faster than classical multigrid.</p>
]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2019-10-17T00:00:00-04:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[CSE researchers present a new perspective on applying asynchronous methods to combat bottlenecks in exascale computing.]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_summary>
  <field_media>
          <item>
        <nid>
          <node id="627707">
            <nid>627707</nid>
            <type>image</type>
            <title><![CDATA[Asynchronous Methods for HPC ]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>239017</fid>
                  <filename><![CDATA[Screen Shot 2019-10-17 at 9.13.14 AM.png]]></filename>
                  <filepath><![CDATA[/sites/default/files/images/Screen%20Shot%202019-10-17%20at%209.13.14%20AM.png]]></filepath>
                  <file_full_path><![CDATA[http://hg.gatech.edu//sites/default/files/images/Screen%20Shot%202019-10-17%20at%209.13.14%20AM.png]]></file_full_path>
                  <filemime>image/png</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[Global-res and local-res partitionings for the Multadd example presented in Section IV for each step of the computation of the corrections e0 and e1. Arrows denote moving to the next step of the computation. Sync() denotes a synchronization point, where the list of threads passed to Sync() denotes the threads that synchronize. Blue Sync() denotes a synchronization for asynchronous multigrid, and red Sync() denotes a synchronization point for synchronous multigrid. ]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
      </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[kristen.perez@cc.gatech.edu]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[<p>Kristen Perez</p>

<p>Communications Officer</p>
]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <!--  TO DO: correct to not conflate categories and news room topics  -->
  <links_related> </links_related>
  <files> </files>
  <og_groups>
          <item>624060</item>
          <item>47223</item>
          <item>50877</item>
      </og_groups>
  <og_groups_both>
          <item>
        <![CDATA[Student and Faculty]]>
      </item>
          <item>
        <![CDATA[Student Research]]>
      </item>
          <item>
        <![CDATA[Research]]>
      </item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>134</tid>
        <value><![CDATA[Student and Faculty]]></value>
      </item>
          <item>
        <tid>8862</tid>
        <value><![CDATA[Student Research]]></value>
      </item>
          <item>
        <tid>135</tid>
        <value><![CDATA[Research]]></value>
      </item>
      </field_categories>
  <core_research_areas>
      </core_research_areas>
  <field_news_room_topics>
      </field_news_room_topics>
  <og_groups_both>
          <item><![CDATA[Center for High Performance Computing (CHiPC)]]></item>
          <item><![CDATA[College of Computing]]></item>
          <item><![CDATA[School of Computational Science and Engineering]]></item>
      </og_groups_both>
  <field_keywords>
          <item>
        <tid>702</tid>
        <value><![CDATA[hpc]]></value>
      </item>
          <item>
        <tid>181217</tid>
        <value><![CDATA[cse-hpc]]></value>
      </item>
          <item>
        <tid>3427</tid>
        <value><![CDATA[High performance computing]]></value>
      </item>
          <item>
        <tid>172914</tid>
        <value><![CDATA[Exascale Computing]]></value>
      </item>
          <item>
        <tid>182689</tid>
        <value><![CDATA[Jordi Wolfson-Pou]]></value>
      </item>
          <item>
        <tid>182690</tid>
        <value><![CDATA[Edmond Chow]]></value>
      </item>
          <item>
        <tid>4305</tid>
        <value><![CDATA[cse]]></value>
      </item>
          <item>
        <tid>11559</tid>
        <value><![CDATA[CSE computational science engineering]]></value>
      </item>
      </field_keywords>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
