{"171141":{"#nid":"171141","#data":{"type":"news","title":"Keeneland Project Deploys New GPU Supercomputing System for the National Science Foundation","body":[{"value":"\u003Cp\u003E\u003Cstrong\u003EATLANTA \u2013 Nov. 14, 2012 \u2013\u003C\/strong\u003E Georgia Tech, along with partner research organizations on the Keeneland Project, including the University of Tennessee-Knoxville, the National Institute for Computational Sciences and Oak Ridge National Laboratory, announced today that the project has completed installation and acceptance of the Keeneland Full Scale System (KFS). This supercomputing system, which is available to the National Science Foundation (NSF) scientific community, is designed to meet the compute-intensive needs of a wide range of applications through the use of NVIDIA GPU technology. With this milestone, KFS becomes the most powerful GPU supercomputer available for research through NSF\u2019s Extreme Science and Engineering Discovery Environment (XSEDE) program.\u003C\/p\u003E\u003Cp\u003E\u201cKeeneland provides an important capability for the NSF computational science community,\u201d says Jeffrey Vetter, Principal Investigator and Project Director, who holds a joint appointment to Georgia Tech\u0027s College of Computing and Oak Ridge National Laboratory. \u201cMany users are running production science applications on GPUs with performance that would not be possible on other systems.\u201d\u003C\/p\u003E\u003Cp\u003EScientists will be able to use the resource to create breakthroughs in many fields of science. For the past 20 months, the Keeneland Initial Delivery System (KIDS) has been used for research in both computer science and computational science, and has included applications in astronomical sciences, atmospheric sciences, behavioral and neural sciences, biological and critical systems, materials research and mechanical and structural systems, along with many other application areas. 
Much of the research will continue on KFS.\u003C\/p\u003E\u003Cp\u003EKeeneland\u2019s early users note how the system\u2019s capabilities have significantly advanced their research application areas.\u003C\/p\u003E\u003Cp\u003E\u201cThe InfiniBand communication is now fast enough so that I can run my program on more GPUs to achieve better performance,\u201d says Jens Glaser, a post-doctoral associate in chemical engineering and materials science at the University of Minnesota. Glaser believes his research results demonstrate that KFS\u0027s hardware is a significant step forward in supercomputing.\u003C\/p\u003E\u003Cp\u003EAstrophysics researcher Jamie Lombardi, an associate professor in the Department of Physics at Allegheny College, says Keeneland is easily the fastest system he has used. Lombardi uses his hydrodynamics code StarSmasher to simulate the collision and merger of two stars. The dynamics of the gas are parallelized on the CPU cores, while the gravity calculations are parallelized on the GPUs.\u003C\/p\u003E\u003Cp\u003E\u201cRunning on one node of KFS is nearly a factor of three faster than running on one node of my local cluster,\u201d says Lombardi. \u201cThe availability of such a large number of nodes on KFS makes it possible for me to run higher resolution simulations than I have ever run before.\u201d\u003C\/p\u003E\u003Cp\u003EThe Keeneland Full Scale System is a 615 TFLOPS HP ProLiant SL250-based supercomputer with 264 nodes, each node containing two Intel Sandy Bridge processors, three NVIDIA M2090 GPU accelerators, and 32 GB of host memory, all linked by a Mellanox InfiniBand FDR interconnection network. KFS has delivered sustained performance of over a quarter of a petaflop (a petaflop is one quadrillion calculations per second) in initial testing. 
The system is also space-efficient, occupying about 400 square feet, including the space for in-row cooling and service areas.\u003C\/p\u003E\u003Cp\u003EDuring KFS installation and acceptance testing, the initial delivery system, KIDS, was used to begin production service for XSEDE users who had received allocations for Keeneland through a peer review process. KIDS was upgraded with newer GPUs and used for software and application development and for pre-production testing of codes that utilize the GPU accelerators in the Keeneland systems. Even before KFS entered production, XSEDE users had submitted allocation requests for more time than will be available over the system\u0027s entire lifecycle.\u003C\/p\u003E\u003Cp\u003E\u201cOur Keeneland Initial Delivery system has hosted over 130 projects and 200 users over the past two years,\u201d says Vetter. \u201cRequests for access to Keeneland have far outstripped the planned resource delivery, sometimes by as much as twice the availability.\u201d\u003C\/p\u003E\u003Cp\u003EThe Keeneland Project is a five-year Track 2D cooperative agreement, awarded by NSF under award OCI-0910735 in 2009 for the deployment of an innovative high performance computing system to the open science community. 
The Georgia Institute of Technology, University of Tennessee-Knoxville, the National Institute for Computational Sciences, and Oak Ridge National Laboratory manage the facility, perform education and outreach activities for advanced architectures, develop and deploy software tools for this class of architecture to ensure productivity, and team with early adopters to map their applications to Keeneland architectures.\u003C\/p\u003E\u003Cp\u003ETo learn more about Keeneland or XSEDE, visit \u003Ca href=\u0022http:\/\/keeneland.gatech.edu\u0022 target=\u0022_self\u0022\u003Ehttp:\/\/keeneland.gatech.edu\u003C\/a\u003E or \u003Ca href=\u0022https:\/\/www.xsede.org\/\u0022 target=\u0022_blank\u0022\u003Ehttps:\/\/www.xsede.org\/\u003C\/a\u003E, respectively.\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\u003Cp\u003E###\u003C\/p\u003E\u003Cp\u003E\u003Cbr \/\u003E\u003Cstrong\u003EContacts\u003C\/strong\u003E\u003C\/p\u003E\u003Cp\u003EJoshua Preston\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003ECollege of Computing at Georgia Tech\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022mailto:jpreston@cc.gatech.edu\u0022\u003Ejpreston@cc.gatech.edu \u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E678-231-0787\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003E\u003Cstrong\u003EATLANTA \u2013 Nov. 14, 2012 \u2013\u003C\/strong\u003E Georgia Tech, along with partner research organizations on the Keeneland Project, including the University of Tennessee-Knoxville, the National Institute for Computational Sciences and Oak Ridge National Laboratory, announced today that the project has completed installation and acceptance of the Keeneland Full Scale System (KFS). 
\u003Cem\u003ESource: Office of Communications\u003C\/em\u003E\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":"","uid":"27174","created_gmt":"2012-11-14 12:16:01","changed_gmt":"2016-10-08 03:13:10","author":"Mike Terrazas","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2012-11-14T00:00:00-05:00","iso_date":"2012-11-14T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"groups":[{"id":"1304","name":"High Performance Computing (HPC)"}],"categories":[],"keywords":[{"id":"4305","name":"cse"},{"id":"3427","name":"High performance computing"},{"id":"702","name":"hpc"},{"id":"50341","name":"jeffrey vetter"},{"id":"50331","name":"keeneland"},{"id":"166983","name":"School of Computational Science and Engineering"},{"id":"167322","name":"supercomputing"}],"core_research_areas":[{"id":"39431","name":"Data Engineering and Science"}],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EJosh Preston\u003C\/p\u003E\u003Cp\u003ECommunications Officer\u003C\/p\u003E\u003Cp\u003ECollege of Computing\u003C\/p\u003E\u003Cp\u003E678-231-0787\u003C\/p\u003E\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E","format":"limited_html"}],"email":["jpreston@cc.gatech.edu"],"slides":[],"orientation":[],"userdata":""}}}