Supercomputer supports SMR simulation – Nuclear Engineering International
Understanding of physical behavior inside an operating nuclear reactor can be improved through simulations on a supercomputer, says Jared Sagoff
Scientists hoping to build new generations of small modular reactors (SMRs) must be able to design these reactors and understand how they behave in simulated environments before they can be built. Large-scale, high-resolution models provide information that can reduce the costs of building a new intrinsically safe nuclear reactor.
Scientists from the US Department of Energy’s (DOE) Argonne National Laboratory (ANL) have collaborated to develop a new computer model that enables visualization of an entire reactor core at unprecedented resolution. The goal of the project, known as ExaSMR and carried out under the auspices of the DOE’s Exascale Computing Project, is to perform full multi-physics simulations on future state-of-the-art exascale supercomputers, including Aurora, which is expected to arrive at Argonne.
An update on the project’s progress was published in April in the journal Nuclear Engineering and Design; the team hopes it will inspire researchers to further integrate high-fidelity digital simulations into actual engineering designs.
Model in more detail
In a nuclear reactor, the swirls and eddies of coolant that flow around the fuel pins play an essential role in determining the thermal and hydraulic performance of the reactor. They also provide nuclear engineers with much-needed information on how best to design future nuclear reactor systems, both for normal operation and for stress tolerance.
A typical light water reactor core is made up of nuclear fuel assemblies, each containing several hundred individual fuel pins, which in turn are made of fuel pellets. Until now, the limits of raw computing power have constrained models to resolving only particular regions of the core. The new model, however, captures all of the individual pins in one of the very first full-core nuclear reactor simulations.
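A back-of-envelope calculation suggests why resolving every pin is so demanding. The sketch below is purely illustrative: the assembly count, pins per assembly, and cells per pin are assumed round numbers (264 pins is a typical 17x17 PWR-style assembly layout), not the actual sizes used in the ExaSMR model.

```python
# Illustrative sketch: why full-core, pin-resolved CFD needs exascale machines.
# All counts below are assumptions for illustration, not the study's values.

ASSEMBLIES = 37            # fuel assemblies in a small core (assumed)
PINS_PER_ASSEMBLY = 264    # pins per assembly (typical 17x17 PWR-style layout)
CELLS_PER_PIN = 2_000_000  # mesh cells to resolve flow around one pin (assumed)

pins = ASSEMBLIES * PINS_PER_ASSEMBLY
cells = pins * CELLS_PER_PIN

print(f"{pins:,} fuel pins -> roughly {cells:,} mesh cells for a full core")
```

With these assumed numbers the mesh already reaches tens of billions of cells, which is why earlier models were limited to small regions of the core.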
“As we move towards exascale computing, we will see more opportunities to reveal the large-scale dynamics of these complex structures in regimes that were previously inaccessible, giving us real insights that can reshape the way we address challenges in reactor design,” said Argonne nuclear engineer Jun Fang, an author of the study, which was carried out by the ExaSMR teams at Argonne and Prof. Elia Merzari’s group at Pennsylvania State University.
A key aspect of modeling SMR fuel assemblies is the presence of spacer grids. These grids play an important role in pressurized water reactors, such as the SMR considered here, because they create turbulence structures and improve the ability of the flow to remove heat from the fuel rods.
Instead of creating a computational mesh that resolves all of the local geometric details, the researchers developed a mathematical mechanism to replicate the overall impact of these structures on the coolant flow without sacrificing accuracy. In doing so, the researchers were able for the first time to successfully extend computational fluid dynamics (CFD) simulations to an entire SMR core.
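The idea of replacing resolved geometry with a lumped mathematical term can be illustrated with a classic hydraulics analogy: representing each spacer grid as a localized pressure-loss coefficient rather than meshing its vanes and straps. The sketch below is a minimal, hedged example of that general technique; the densities, velocities, and loss coefficients are illustrative assumptions, not values or methods from the Argonne study.

```python
# Hedged sketch: spacer grids modeled as lumped loss coefficients instead of
# resolved geometry. All numeric values are illustrative assumptions.

RHO = 720.0      # coolant density, kg/m^3 (illustrative)
V = 3.0          # axial coolant velocity, m/s (illustrative)
D_H = 0.012      # subchannel hydraulic diameter, m (illustrative)
F_DARCY = 0.02   # Darcy friction factor, assumed constant
K_GRID = 1.1     # pressure-loss coefficient per spacer grid (assumed)

def pressure_drop(length_m: float, n_grids: int) -> float:
    """Total pressure drop (Pa) over a channel of given length, with each
    spacer grid represented by a lumped loss coefficient K_GRID."""
    dyn = 0.5 * RHO * V ** 2                     # dynamic pressure, Pa
    friction = F_DARCY * (length_m / D_H) * dyn  # distributed wall friction
    grids = n_grids * K_GRID * dyn               # lumped spacer-grid losses
    return friction + grids

bare = pressure_drop(2.0, 0)        # bare channel, no grids
with_grids = pressure_drop(2.0, 5)  # same channel with five spacer grids
print(f"no grids: {bare:.0f} Pa, five grids: {with_grids:.0f} Pa")
```

The point of the analogy is that the grid's effect on the flow is captured by a single calibrated term instead of millions of extra mesh cells; the ExaSMR team applies the same philosophy (a momentum source replicating the grids' effect) inside a full 3D CFD solver.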
“The mechanisms by which coolant mixes throughout the core remain smooth and relatively consistent. This allows us to take advantage of high-fidelity simulations of turbulent flows in a section of the core to improve the accuracy of our core-scale computational approach,” said Dillon Shaver, senior nuclear engineer at Argonne.
The technical expertise contributed by the ExaSMR teams builds on Argonne’s history of breakthroughs in related research fields such as nuclear engineering and computer science.
Several decades ago, a group of scientists from Argonne, led by Paul Fischer, released a CFD flow-solver software package called Nek5000, which was revolutionary because it allowed users to simulate engineering fluid-flow problems with up to a million parallel threads. Recently, Nek5000 has been redesigned into a new solver called NekRS, which uses the power of graphics processing units (GPUs) to increase the computational speed of the model. “Having codes designed for this purpose gives us the ability to take full advantage of the raw computing power that the supercomputer offers us,” said Fang.
The team’s calculations were performed on supercomputers at the Argonne Leadership Computing Facility (ALCF), Oak Ridge Leadership Computing Facility (OLCF) and Argonne’s Laboratory Computing Resource Center (LCRC). ALCF and OLCF are facilities for users of the DOE Office of Science. The research is supported by the Exascale Computing Project, a collaborative effort of the DOE’s Office of Science and the National Nuclear Security Administration.
Jared Sagoff is a coordinating writer/editor at Argonne National Laboratory
Supercomputer at the ALCF (Photo credit: Argonne National Laboratory)