
Georgia Tech's Parallel Computing Research Leads at SIAM PP18


Georgia Tech’s latest parallel computing research is being presented at the SIAM Conference on Parallel Processing for Scientific Computing (SIAM PP18), held at Waseda University in Tokyo this week, March 7-10. School of Computational Science and Engineering (CSE) Associate Professor Rich Vuduc is the current chair of the SIAM Activity Group on Supercomputing, which is co-sponsoring the conference this year alongside the Japan Society for Industrial and Applied Mathematics.

Many CSE faculty members and Ph.D. students have research accepted for this year’s program and will be presenting during the week’s meetings. Of the 17 research presentations from Georgia Tech at the conference, 13 include work by CSE researchers.

Parallel computing divides a large problem into smaller ones that can be solved simultaneously. This style of computation is inherent to high-performance computing (HPC), but it is drawing notice across other areas of computing research as well, because power and heat constraints have largely ended frequency scaling, the long-standing practice of improving performance by raising processor clock speeds, leaving parallelism as the main route to faster computation. (A short code sketch after the session listings below illustrates the divide-and-solve idea.)

“Parallel processing continues to have an increasingly big impact on physical simulation, data analytics, and artificial intelligence. The talks by Georgia Tech researchers reflect this with their particular focus on how to fundamentally advance the algorithms, data structures, and numerical methods underlying these domains,” said Vuduc.

This year’s work from Georgia Tech researchers spans four main theme areas: data and HPC, novel systems, algorithms and libraries, and large-scale simulations.

  Data and HPC: MS9, MS20, MS34, MS38, MS56, MS58, MS73
  Novel Systems: MS30, MS38, MS41, MS97
  Algorithms and Libraries: MS9, MS14, MS20, MS34, MS56, MS58, MS68, MS73, MS87, MS109, CP11
  Large-scale Simulations: MS102, MS50, MS68

The following sessions include presentations by CSE faculty and Ph.D. students.

MS9: Tensor Decomposition for High Performance Data Analytics – Part I of III
  “HiCOO: Hierarchical Storage of Sparse Tensors,” Jiajia Li, Jimeng Sun, Rich Vuduc
MS20: Tensor Decomposition for High Performance Data Analytics – Part II of III
  Organizer: Rich Vuduc
MS34: Architecture-Aware Graph Analytics – Part I of II
  “Scalable Graph Alignment on Modern Architectures,” Ümit V. Çatalyürek, Bora Ucar, Abdurrahman Yasar
MS38: Deep Learning from HPC Perspective: Opportunities and Challenges – Part II of II
  “Faster, Smaller, and More Energy-Efficient Inference Using Codebook-based Quantization and FPGAs,” Mikhail Isaev, Jeffrey Young, Rich Vuduc
MS41: Performance Engineering from the Node Level to the Extreme Scale – Part II of II
  “Designing an Algorithm with a Tuning Knob that Controls its Power Consumption,” Sara Karamati, Jeffrey Young, Rich Vuduc
MS56: Tensor Decomposition for High Performance Data Analytics – Part III of III
  Organizer: Rich Vuduc
MS58: Scalable and Dynamic Graph Algorithms
  “Graph Analysis: New Algorithm Models, New Architectures,” David A. Bader, Oded Green, Jason Riedy
MS68: Challenges in Parallel Adaptive Mesh Refinement – Part I of III
  “Structured and Unstructured Adaptivity in PETSc,” Toby Isaac
MS73: Theory Meets Practice for High Performance Computing – Part II of II
  “Faster Parallel Tensor Compression using Randomization,” Casey Battaglino
MS87: Innovative Methods for High Performance Iterative Solvers – Part I of II
  “ParILUT – A New Parallel Threshold ILU,” Edmond Chow
MS97: Emerging Architectural Support for Scientific Kernels – Part I of II
  “Sparse Tensor Decomposition on EMU Platform,” Srinivas Eswar, Jiajia Li, Richard Vuduc, Patrick Lavin, Jeffrey Young
MS102: Large-Scale Simulation in Geodynamics – Part I of II
  “Thermal Inversion in Subduction Zones,” Toby Isaac
MS109: Techniques for Developing Massively-Parallel Linear Solvers – Part I of II
  “ACHILES: An Asynchronous Iterative Sparse Linear Solver,” Edmond Chow
CP11: Algorithms
  Chair: Ümit Çatalyürek

The following sessions include presentations by other Georgia Tech faculty and Ph.D. students.

MS14: On Batched BLAS Standardization – Part II of II
  “Batched DGEMM Operations in Density Matrix Renormalization Group,” Arghya Chatterjee
MS30: Resilience for Extreme Scale Computing – Part III of IV
  “Resilience with Asynchronous Many Task (AMT) Programming Models,” Vivek Sarkar
MS50: Parallel Simulations in Life Sciences
  “Meshfree Simulations of Complex Flows Using General Finite Differences,” Yaroslav Vasyliv and Alexander Alexeev
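
To make the opening description concrete, here is a minimal sketch, not drawn from any of the conference talks, of how a larger problem can be split into smaller pieces and solved simultaneously. It uses Python’s standard multiprocessing pool; the data size, chunk size, and worker count are arbitrary choices for illustration.

    # Split one large summation into chunks and compute the chunks simultaneously.
    from multiprocessing import Pool

    def partial_sum(chunk):
        # Each worker solves one smaller piece of the larger problem.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunk_size = 250_000
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

        # Four worker processes solve the four smaller problems at the same time.
        with Pool(processes=4) as pool:
            partial_results = pool.map(partial_sum, chunks)

        # Combining the partial answers gives the solution to the original problem.
        print(sum(partial_results))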
