{"62604":{"#nid":"62604","#data":{"type":"news","title":"Georgia Tech Engaged in $100 Million Next-Generation Computing Initiative","body":[{"value":"\u003Cp\u003EImagine that one of the world\u0027s most powerful high performance \ncomputers could be packed into a single rack just 24 inches wide and \npowered by a fraction of the electricity consumed by comparable current \nmachines.  That would allow an unprecedented amount of computing power \nto be installed on aircraft, carried onto the battlefield for commanders\n -- and made available to researchers everywhere.\u003C\/p\u003E\n\u003Cp\u003EPutting this computing power into a small and energy-efficient \npackage, and making it reliable and easier to program, are among the \ngoals of the new DARPA Ubiquitous High Performance Computing (UHPC) \ninitiative.  Georgia Tech researchers from three different units are \nsupporting key components of this $100 million challenge, which will \nrequire development of revolutionary approaches not bound by existing \ncomputing paradigms.\n\u003C\/p\u003E\n\u003Cp\u003EIf UHPC meets its ambitious eight-year goals, the new approaches and \ntechnologies it develops could redefine the way that computing systems \nare envisioned, designed and used.\n\u003C\/p\u003E\n\u003Cp\u003E\u0022The opportunity we have is to go far beyond the current product \nroadmaps,\u0022 said David Bader, a professor in Georgia Tech\u0027s School of \nComputational Science and Engineering.  \u0022We really have the opportunity \nto change the industry and to design our applications with new computing\n architectures.  For the first time in the history of computing, we will\n be able to work with a clean slate.\u0022\n\u003C\/p\u003E\n\u003Cp\u003ETo attain the program\u0027s ambitious goals, DARPA funded four groups -- \nled by NVIDIA Corp., Intel Corp., the Massachusetts Institute of \nTechnology and Sandia National Laboratories -- to develop UHPC \nprototypes.  
A fifth group, led by the Georgia Tech Research Institute \n(GTRI), will develop applications, benchmarking and metrics that will be\n used to drive UHPC system design considerations and support performance\n analysis of the developing system designs.\n\u003C\/p\u003E\n\u003Cp\u003E\u0022Our team is developing a set of five difficult problems of a size \nand scope that the machines they are talking about should be able to \naccomplish,\u0022 said Dan Campbell, a GTRI principal research engineer who \nis co-principal investigator of the benchmarking initiative.  \u0022Our \nchallenge is picking the right problems and specifying them at the right\n level of abstraction to allow innovation and properly represent what \nthe DoD will need in 2018.\u0022\n\u003C\/p\u003E\n\u003Cp\u003EThe five problems highlight the unique computing needs of the U.S. military:\n\u003C\/p\u003E\n\u003Cp\u003E\u2022 Analysis of the vast streams of data originating with widespread \nsensor systems, unmanned aerial vehicles and new generations of radar \nsystems.  The data will be analyzed for nuggets of useful information in\n ways that are not possible today.\n\u003C\/p\u003E\n\u003Cp\u003E\u2022 A dynamic graph challenge, in which many entities interact to \ncreate a problem of \u0022connecting the dots.\u0022  That could mean analyzing \nrelationships in social media to find possible adversaries, or \nunderstanding network traffic for cyber-security challenges.\n\u003C\/p\u003E\n\u003Cp\u003E\u2022 The decision tree, comparable to a chess game in which many \npossible interconnected options, each with complex implications, must be\n analyzed quickly.  
This could help field commanders or corporate CEOs \nmake better decisions.\n\u003C\/p\u003E\n\u003Cp\u003E\u2022 Materials shock and hydrodynamics issues, challenges important to improving future generations of materials.\n\u003C\/p\u003E\n\u003Cp\u003E\u2022 Molecular dynamics simulations, which use high-performance \ncomputers to understand interactions within very large molecular systems, such as\n proteins as they fold.\n\u003C\/p\u003E\n\u003Cp\u003E\u0022We need to be able to take in a lot more data and understand it a \nlot more thoroughly than we can now,\u0022 said Mark Richards, a principal \nresearch engineer in the Georgia Tech School of Electrical and Computer \nEngineering and co-principal investigator of the benchmarking team.  \n\u0022That might allow us to find adversaries we can\u0027t find now because we\u0027re\n unable to tease that information out of the data flow.\u0022\n\u003C\/p\u003E\n\u003Cp\u003EWhile the benefits of making such computing power widely available \nare obvious, how these machines will be designed, built and reliably \noperated is not.\n\u003C\/p\u003E\n\u003Cp\u003E\u0022Meeting these very ambitious program goals will pose significant \ntechnical challenges,\u0022 said Bader, who leads application development on \nthe NVIDIA team and is part of the benchmarking group.  \u0022The technology \nroadmaps in such areas as interconnection networks, microprocessor \ndesign and technology fabrication will be pushed to their limits.\u0022\n\u003C\/p\u003E\n\u003Cp\u003EMeeting power limitations of just 57 kilowatts per rack -- the amount\n of electricity produced by a portable military generator -- may be the \ntoughest among them.  The fastest computer currently in operation \nrequires seven megawatts of power.  \n\u003C\/p\u003E\n\u003Cp\u003E\u0022Reducing the power consumption means less energy per computation,\u0022 \nnoted Richards.  \u0022But as we lower the device voltage, we get closer to \nthe physical noise.  
That will allow more errors due to the physics of \nthe devices, and all kinds of things will have to be done to address \nthat.\u0022\n\u003C\/p\u003E\n\u003Cp\u003EAnd the entire machine will have to fit into a 24-inch wide, 78-inch high and 40-inch deep cabinet.\n\u003C\/p\u003E\n\u003Cp\u003EBut the physical implementation of the machines is just one part of \nthe challenge, Bader noted.  How people will work with them poses a \nperhaps more difficult challenge because it will require thinking about \ncomputers in a new way.\n\u003C\/p\u003E\n\u003Cp\u003E\u0022Over the past 20 or 30 years, we\u0027ve taken a single computing design \nand kept tweaking it through advances like miniaturizing parts,\u0022 he \nsaid.  \u0022But we really haven\u0027t changed the global nature of how the \nmachine works. To meet DARPA\u0027s power efficiency goals, we really will \nneed to change the way we program the machine.\u0022\n\u003C\/p\u003E\n\u003Cp\u003EThat also affects the humans who interact with these highly parallel \nmachines, which could have as many as a half-million separate threads \noperating at the same time.  DARPA\u0027s initial goal is to build machines \ncapable of petaflop speed -- a quadrillion operations per second -- which \ncould lead to the next generation of exascale computers a thousand \ntimes more capable.\n\u003C\/p\u003E\n\u003Cp\u003E\u0022We will need to find new ways of thinking about computers that will \nmake it feasible for humans to comprehend what is going on inside,\u0022 \nCampbell said. \u0022It\u0027s a huge programming challenge.\u0022\n\u003C\/p\u003E\n\u003Cp\u003ETo encourage collaboration in solving these complex problems, DARPA \nhas embraced the idea of open innovation.  It expects the organizations \nto work together on common critical topics, creating a collaborative \nenvironment to address the system challenges.  
New technology generated \nby the program -- believed to be today\u0027s largest DoD computing research \ninitiative -- is likely to move quickly into industry.\n\u003C\/p\u003E\n\u003Cp\u003E\u0022There is certainly an expectation among the companies that what they\n are doing in this project is going to change how we do mainstream \ncomputing,\u0022 Bader said. \u0022The technology transfer implications are \ncertainly obvious.\u0022\n\u003C\/p\u003E","summary":null,"format":"limited_html"}],"field_subtitle":[{"value":"DARPA Program Will Put Petascale Computer into a 24-inch Cabinet"}],"field_summary":[{"value":"\u003Cp\u003EGeorgia Tech researchers are engaged in a $100 million DARPA program to fit a high performance petaflop computer into a single rack just 24 inches wide and power it with a fraction of the electricity consumed by comparable current machines. \u003Cem\u003ESource: GT Research News\u003C\/em\u003E\u003C\/p\u003E","format":"limited_html"}],"field_summary_sentence":[{"value":"Georgia Tech is supporting a major new computing initiative."}],"uid":"27174","created_gmt":"2010-11-08 12:20:40","changed_gmt":"2016-10-08 03:07:42","author":"Mike Terrazas","boilerplate_text":"","field_publication":"","field_article_url":"","dateline":{"date":"2010-11-08T00:00:00-05:00","iso_date":"2010-11-08T00:00:00-05:00","tz":"America\/New_York"},"extras":[],"hg_media":{"62602":{"id":"62602","type":"image","title":"Georgia Tech UHPC researchers","body":null,"created":"1449176382","gmt_created":"2015-12-03 20:59:42","changed":"1475894544","gmt_changed":"2016-10-08 02:42:24","alt":"Georgia Tech UHPC 
researchers","file":{"fid":"191520","name":"tmv30679.jpg","image_path":"\/sites\/default\/files\/images\/tmv30679_0.jpg","image_full_path":"http:\/\/hg.gatech.edu\/\/sites\/default\/files\/images\/tmv30679_0.jpg","mime":"image\/jpeg","size":1219104,"path_740":"http:\/\/hg.gatech.edu\/sites\/default\/files\/styles\/740xx_scale\/public\/images\/tmv30679_0.jpg?itok=KFRzR5oB"}}},"media_ids":["62602"],"groups":[{"id":"47223","name":"College of Computing"}],"categories":[],"keywords":[{"id":"3427","name":"High performance computing"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[],"invited_audience":[],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[{"value":"\u003Cp\u003EStefany Sanders\u003C\/p\u003E\u003Cp\u003ECollege of Computing\u003C\/p\u003E\u003Cp\u003E\u003Ca href=\u0022mailto:stefany@cc.gatech.edu\u0022\u003Estefany@cc.gatech.edu\u003C\/a\u003E\u003C\/p\u003E\u003Cp\u003E404-312-6620\u003C\/p\u003E","format":"limited_html"}],"email":[],"slides":[],"orientation":[],"userdata":""}}}