Science
07 February 2025

Deep Reinforcement Learning Reduces Turbulent Drag Effectively

Researchers reveal advanced techniques to control turbulent separation bubbles and improve energy efficiency.

Researchers have found deep reinforcement learning (DRL) to be more effective than classical methods at controlling turbulent separation bubbles, which strongly affect aerodynamic efficiency and energy consumption. The study reports that DRL can reduce the turbulent separation bubble (TSB) area by 9.0%, compared with 6.8% for classical periodic control strategies.

Controlling turbulent flow is imperative for enhancing aerodynamic performance, especially within the transport industry, where turbulence contributes significantly to energy costs and CO2 emissions. For example, aviation alone accounts for about 12% of total CO2 emissions within the transportation sector, with many of those emissions arising from the need to overcome turbulent drag.

The research employed computational fluid dynamics (CFD) simulations of TSBs under various conditions to evaluate the performance of both DRL and classical periodic control. A notable finding was the adaptability of the DRL agent, which enables unconstrained closed-loop control: the agent makes real-time actuation decisions based on what it learned during training.
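To make the closed-loop idea concrete, here is a minimal Python sketch of the agent-environment interaction such a controller relies on. Everything in it (the FlowEnv class, the linear policy, the toy dynamics) is an illustrative placeholder, not the study's actual solver or neural network.

```python
import numpy as np

# Minimal sketch of the closed-loop interaction behind DRL flow control.
# FlowEnv, the linear policy, and the toy dynamics are illustrative
# placeholders, not the study's actual setup.

class FlowEnv:
    """Toy stand-in for a CFD environment that reports a TSB-area signal."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.tsb_area = 1.0  # normalized bubble area

    def step(self, action):
        # Hypothetical dynamics: actuation (blowing/suction) shrinks the
        # bubble slightly, while turbulence adds random disturbances.
        self.tsb_area = float(np.clip(
            self.tsb_area - 0.01 * action + 0.005 * self.rng.standard_normal(),
            0.5, 1.5))
        observation = np.array([self.tsb_area])
        reward = -self.tsb_area  # smaller bubble -> higher reward
        return observation, reward

def policy(observation, weights):
    # Linear stand-in for the trained policy network.
    return float(np.tanh(weights @ observation))

env = FlowEnv()
obs = np.array([env.tsb_area])
weights = np.array([0.8])
for _ in range(100):
    action = policy(obs, weights)   # real-time decision from learned behavior
    obs, reward = env.step(action)  # the flow responds, closing the loop
```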

During the tests, DRL not only reduced the bubble area more effectively than classical methods, but also produced smoother, more stable control strategies, minimizing the disruptive fluctuations that arise under periodic control. This ability to maintain consistent control is particularly important for practical applications, since turbulence control techniques are typically challenged by the dynamic nature of the flows they act on.
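One generic way to obtain smoother actuation is to low-pass filter successive control commands; the smoothing rule below is an illustration of that general idea, not the mechanism described in the paper. Comparing square-wave periodic forcing against a smoothed signal makes the difference in step-to-step jumps explicit:

```python
import numpy as np

# Contrast the abrupt jumps of on/off periodic forcing with an
# exponentially smoothed control signal. The smoothing rule is a generic
# illustration, not the paper's mechanism.

steps = np.arange(200)
periodic = np.sign(np.sin(2 * np.pi * steps / 50))  # square-wave forcing

alpha = 0.1  # smoothing factor: smaller values give gentler transitions
smoothed = np.zeros(len(steps))
for t in range(1, len(steps)):
    smoothed[t] = (1 - alpha) * smoothed[t - 1] + alpha * periodic[t]

print("largest jump, periodic:", np.max(np.abs(np.diff(periodic))))  # 2.0
print("largest jump, smoothed:", np.max(np.abs(np.diff(smoothed))))  # ~0.2
```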

In the experiments, classical periodic control reduced the TSB area by approximately 6.8% and 15.7% on the fine and coarse grids, respectively, confirming that traditional methods remain effective. DRL nevertheless surpassed the periodic baseline on the fine-grid benchmark, reaching the 9.0% reduction noted above through its adaptive, closed-loop actuation.
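For context on how such percentages can be computed, a common way to quantify a TSB is as the area of reversed mean flow, i.e. where the mean streamwise velocity is negative. The sketch below uses synthetic velocity fields; only the area-then-percent-reduction bookkeeping mirrors the reported metric.

```python
import numpy as np

# Quantify a TSB as the area where the mean streamwise velocity is
# negative (reversed flow), then compute the relative reduction under
# control. The velocity fields below are synthetic placeholders.

def tsb_area(u_mean, dx, dy):
    """Approximate bubble area as the total area of cells with u < 0."""
    return np.count_nonzero(u_mean < 0.0) * dx * dy

dx = dy = 0.01  # uniform cell sizes (illustrative)
rng = np.random.default_rng(42)
u_baseline = rng.normal(0.3, 0.5, size=(128, 256))   # uncontrolled mean flow
u_controlled = u_baseline + 0.05                     # control re-energizes it

a0 = tsb_area(u_baseline, dx, dy)
a1 = tsb_area(u_controlled, dx, dy)
print(f"TSB area reduction: {100.0 * (a0 - a1) / a0:.1f}%")
```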

This work demonstrates DRL-based flow control at one of the highest Reynolds numbers reported to date, underscoring the performance advantages DRL can offer. The researchers have also open-sourced their SmartSOD2D framework, which couples GPU-accelerated CFD simulations with DRL training, providing a foundation for advanced DRL applications on high-performance computing systems.
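The sketch below illustrates the producer/consumer pattern that such CFD-DRL couplings follow: the solver publishes flow observations and waits for the agent's actuation. Frameworks like SmartSOD2D exchange tensors through an in-memory database on HPC systems; the Python queues here are only a stand-in for that transport layer, not the framework's actual API.

```python
import queue
import threading
import numpy as np

# Two concurrent components exchange tensors: the solver publishes
# observations and blocks on actions; the agent does the reverse. The
# queues stand in for the in-memory-database transport that frameworks
# such as SmartSOD2D use; none of this is SmartSOD2D's actual API.

obs_q, act_q = queue.Queue(), queue.Queue()

def solver(n_steps=5):
    state = np.ones(4)  # placeholder probe readings from the flow
    for _ in range(n_steps):
        obs_q.put(state.copy())  # publish observation to the agent
        action = act_q.get()     # block until the actuation arrives
        state += 0.1 * action    # toy update standing in for a CFD step

def agent(n_steps=5):
    for _ in range(n_steps):
        obs = obs_q.get()
        act_q.put(-0.5 * obs)    # placeholder for the policy network

threads = [threading.Thread(target=solver), threading.Thread(target=agent)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("coupled episode finished")
```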

Consequently, leveraging DRL could lead to vastly improved flow control techniques, resulting not only in enhanced operational efficiency across various transport modalities but also in significant long-term reductions in global emissions. With the ability to adapt its control actions on the fly, DRL has the potential to transform turbulent flow control, moving away from rigid methodologies toward ones capable of dynamic response.