Record-Breaking Simulation Boosts Rocket Science and Supercomputing to New Limits
Spaceflight is becoming safer, more frequent, and more sustainable thanks to the largest computational fluid flow simulation ever run.
Inspired by SpaceX’s Super Heavy booster, a team led by Georgia Tech’s Spencer Bryngelson and New York University’s Florian Schäfer modeled the turbulent interactions of a 33-engine rocket. Their experiment set new records: the largest fluid dynamics simulation ever performed, by a factor of 20, and the fastest, by more than a factor of four.
To construct such a massive model, the team ran its custom software on the world’s two fastest supercomputers, as well as the eighth fastest.
Applications from the simulation reach beyond rocket science. The same computing methods can model fluid mechanics in aerospace, medicine, energy, and other fields. At the same time, the work advances understanding of the current limits and future potential of computing.
The team finished as runners-up for the 2025 Gordon Bell Prize for its impactful, multi-domain research. Referred to as the Nobel Prize of supercomputing, the award was presented at the world’s top conference for high-performance computing (HPC) research.
“Fluid dynamics problems of this style, with shocks, turbulence, different interacting fluids, and so on, are a scientific mainstay that marshals our largest supercomputers,” said Bryngelson, an assistant professor with the School of Computational Science and Engineering (CSE).
“Larger and faster simulations that enable solutions to long-standing scientific problems, like the rocket propulsion problem, are always needed. With our work, perhaps we took a big dent out of that issue.”
The Super Heavy booster reflects the space industry’s move toward reusable multi-engine first-stage rockets that are easier to transport and more economical overall.
However, this shift creates research and testing challenges for new designs.
Each of Super Heavy’s 33 thrusters expels propellant at ten times the speed of sound. As individual engines reach extreme temperatures, pressures, and densities, their combined interactions with the airframe make such violent physics even more unpredictable.
Frequent physical experiments would be expensive and risky, so scientists rely on computer models to supplement the engineering process.
Bryngelson’s flagship Multicomponent Flow Code (MFC) software anchored the experiment. MFC is an open-source program that simulates fluid dynamics problems. Bryngelson’s lab has been modifying MFC since 2022 to run on more powerful computers and solve larger problems.
In computing terms, the MFC-enhanced model resolved the fluid flow on 200 trillion grid points with one quadrillion degrees of freedom. These figures eclipse previous record-setting benchmarks, which tallied on the order of 10 trillion to 30 trillion grid points.
This means MFC simulations provide greater detail and capture smaller-scale features than previous approaches. The rocket simulation also ran four times faster and achieved 5.7 times the energy efficiency of comparable methods.
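For a rough sense of scale, degrees of freedom follow from multiplying grid points by the number of flow variables stored at each point. A back-of-the-envelope sketch in Python (the five-variable count is an assumption typical of compressible flow solvers, not a figure from the paper):

    # Rough scale of the record run. Assumes ~5 flow variables per grid
    # point (density, three momentum components, energy); the actual
    # count in the team's setup may differ.
    grid_points = 200e12                  # 200 trillion grid points
    variables_per_point = 5               # assumed, not from the paper
    dof = grid_points * variables_per_point
    print(f"degrees of freedom: {dof:.0e}")                             # ~1e+15, one quadrillion
    print(f"vs. a 10-trillion-point record: {grid_points / 10e12:.0f}x")  # 20x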
Integrating information geometric regularization (IGR) into MFC played a key role in attaining these results. This new approach improved the simulation’s computational efficiency and overcame the challenge of simulating shock dynamics.
In fluid mechanics, shock waves occur when objects move faster than the speed of sound. Along with hampering the performance of airframes and propulsion systems, shocks have historically been difficult to simulate.
Computational scientists have traditionally accounted for shocks with empirical models based on artificial viscosity. These approaches mimic the microscopic dissipation that regularizes real shocks, but applying that dissipation at the grid scale tends to smear out the fine-scale features of the flow.
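A textbook illustration of the artificial-viscosity idea, generic and not MFC’s specific shock-capturing scheme, is the inviscid Burgers equation, whose solutions steepen into shocks; adding a small viscous term with coefficient \epsilon smooths each shock into a thin but computable layer:

    \partial_t u + u\,\partial_x u = 0
    \quad\longrightarrow\quad
    \partial_t u + u\,\partial_x u = \epsilon\,\partial_{xx} u

Choosing \epsilon large enough to stabilize the shock, however, also diffuses the flow structure around it, which is exactly the trade-off described above.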
Information geometry uses curved spaces to study concepts from statistics and information theory. IGR uses these tools to modify the underlying geometry of the fluid dynamics equations. Traveling through the modified geometry, the simulated fluid preserves shocks in a more natural way.
“When regularizing shocks to much larger scales relevant in these numerical simulations, conventional methods smear out important fine-scale details,” said Schäfer, an assistant professor at NYU’s Courant Institute of Mathematical Sciences.
“IGR introduces ideas from abstract math to CFD that allow creating modified paths that approach the singularity without ever reaching it. In the resulting fluid flow, shocks never become too spiky in simulations, but the fine-scale details do not smear out either.”
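Schematically, and simplifying the published IGR formulation considerably (the precise equations, including density weighting, appear in the IGR papers), the regularization leaves the viscosity alone and instead augments the pressure with an auxiliary entropic pressure \Sigma recovered from an elliptic, screened-Poisson-type equation driven by velocity gradients:

    p \;\rightarrow\; p + \Sigma,
    \qquad
    \Sigma - \alpha\,\Delta\Sigma = \alpha\, S(\nabla u)

Here \alpha sets the regularization length scale and S collects quadratic velocity-gradient terms. Because \Sigma comes from an elliptic solve rather than an added viscosity, shocks stay smooth at the grid scale without diffusing the surrounding flow.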
Simulating a model this large required the Georgia Tech researchers to run MFC on El Capitan and Frontier, the world's two fastest supercomputers.
The systems are two of only four exascale machines in existence, meaning each can perform at least one quintillion (a “1” followed by 18 zeros) calculations per second. If a person completed one simple math calculation every second, it would take about 30 billion years to reach one quintillion operations.
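The arithmetic behind that comparison, for the curious:

    # One hand calculation per second versus one quintillion per second.
    operations = 1e18                         # one quintillion
    seconds_per_year = 60 * 60 * 24 * 365.25
    print(f"{operations / seconds_per_year:.1e} years")   # ~3.2e+10, about 30 billion years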
Frontier is housed at Oak Ridge National Laboratory and debuted as the world’s first exascale supercomputer in 2022. El Capitan surpassed Frontier when Lawrence Livermore National Laboratory launched it in 2024.
To prepare MFC for performance on these machines, Bryngelson’s lab followed a methodical approach spanning years of hardware acquisition and software engineering.
In 2022, Bryngelson acquired an AMD MI210 GPU accelerator. Optimizing MFC on that component was a critical step toward preparing the software for exascale machines.
AMD hardware underpins both El Capitan and Frontier: the MI300A accelerated processing unit powers El Capitan, while Frontier uses the MI250X GPU.
After configuring MFC on the MI210 GPU, Bryngelson’s lab ran the software on Frontier for the first time during a 2023 hackathon. This confirmed the code was ready for full-scale deployment on exascale supercomputers based on AMD hardware.
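MFC itself is written in Fortran and offloads work to GPUs through compiler directives. As a loose Python illustration of the single-source portability idea, not MFC’s actual implementation, the sketch below runs the same stencil update on a CPU or a GPU by swapping the array backend:

    # Illustrative only: one finite-difference kernel, two backends.
    # NumPy runs on the CPU; CuPy, if installed, runs the same code on a GPU.
    import numpy as np
    try:
        import cupy as xp      # GPU backend
    except ImportError:
        xp = np                # CPU fallback

    def diffuse(u, nu=0.1, steps=100):
        """Explicit 1D diffusion on a periodic grid: u_t = nu * u_xx."""
        for _ in range(steps):
            u = u + nu * (xp.roll(u, 1) - 2 * u + xp.roll(u, -1))
        return u

    u0 = xp.zeros(1024)
    u0[512] = 1.0              # point source
    print(float(diffuse(u0).max()))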
In addition to El Capitan and Frontier, the simulation ran on Alps, the world’s eighth-fastest supercomputer, based at the Swiss National Supercomputing Centre. It is the largest available system featuring the NVIDIA GH200 Grace Hopper Superchip.
As with the AMD GPUs, Bryngelson acquired four GH200 superchips in 2024 and began adapting MFC to the latest hardware powering new supercomputers. Later that year, the Jülich Research Centre accepted Bryngelson’s group into an early access program to test JUPITER, a then-developing supercomputer based on the NVIDIA superchip.
While validating that its code worked on the GH200, the group earned a certificate for scaling efficiency and node performance. The early access project proved successful for JUPITER, which launched in 2025 as Europe’s fastest supercomputer and the fourth fastest in the world.
“Getting the level of hands-on experience with world-leading supercomputers and computing resources at Georgia Tech through this project has been a fantastic opportunity for a grad student,” said CSE Ph.D. student Ben Wilfong.
“To leverage these machines, I learned more advanced programming techniques that I’m glad to have in my tool belt for future projects. I also enjoyed the opportunity to work closely with and learn from industry experts from NVIDIA, AMD, and HPE/Cray.”
El Capitan, Frontier, JUPITER, and Alps maintained their rankings at the 2025 International Conference for High Performance Computing, Networking, Storage, and Analysis (SC25). Of note, the TOP500 announced at SC25 that JUPITER surpassed the exaflop threshold.
The SC Conference Series is one of two venues where the TOP500 announces updated supercomputer rankings every June and November. The TOP500 ranks and details the 500 most powerful supercomputers in the world.
The SC Conference Series serves as the venue where the Association for Computing Machinery (ACM) presents the Gordon Bell Prize. The annual award recognizes achievement in HPC research and application. The Tech-led team was among eight finalists for this year’s award.
Along with Bryngelson, Georgia Tech members included Ph.D. students Anand Radhakrishnan and Wilfong, postdoctoral researcher Daniel Vickers, alumnus Henry Le Berre (CS 2025), and undergraduate student Tanush Prathi.
Schäfer’s partnership with the group stems from his previous role as an assistant professor at Georgia Tech from 2021 to 2025.
Collaborators on the project included Nikolaos Tselepidis and Benedikt Dorschner from NVIDIA, Reuben Budiardja from ORNL, Brian Cornille from AMD, and Stephen Abbot from HPE. All were co-authors of the paper and named finalists for the Gordon Bell Prize.
“I’m elated that we have been nominated for such a prestigious award. It wouldn't have been possible without the combined and diligent efforts of our team,” Radhakrishnan said.
“I’m looking forward to presenting our work at SC25 and connecting with other researchers and fellow finalists while showcasing seminal work in the field of computing.”