
The Chinese Journal of Process Engineering ›› 2022, Vol. 22 ›› Issue (3): 318-328. DOI: 10.12034/j.issn.1009-606X.221118

• Research Article •

  • About the authors: Tao Hu (1995-), male, from Huaihua, Hunan Province, China; master's student in chemical engineering, E-mail: thu19@ipe.ac.cn. Limin Wang, corresponding author, E-mail: lmwang@ipe.ac.cn.

Multi-GPU simulation of turbulent square duct flow with the lattice Boltzmann method

Tao HU1,2,  Xing XIANG1,2,  Wei GE1,2,  Limin WANG1,2*   

  1. State Key Laboratory of Multiphase Complex Systems, Institute of Process Engineering, Chinese Academy of Sciences, Beijing 100190, China; 2. School of Chemical Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
  • Received: 2021-04-07 Revised: 2021-05-11 Online: 2022-03-28 Published: 2022-03-28
  • Contact: Limin Wang lmwang@ipe.ac.cn
  • Supported by:
    National Key R&D Program of China; National Natural Science Foundation of China; Key Research Program of Frontier Sciences, Chinese Academy of Sciences; National Numerical Wind Tunnel Project



Abstract: The Compute Unified Device Architecture (CUDA) and the Message Passing Interface (MPI) were used to implement a multi-GPU parallel lattice Boltzmann method (LBM) on the Mole-8.5E supercomputer. The accuracy and effectiveness of the multi-GPU parallel LBM algorithm were verified with three-dimensional lid-driven cavity flow. Using this parallel algorithm, large-scale simulations of fully developed turbulent square duct flow were carried out at friction Reynolds numbers Reτ of 300, 600, and 1200. The results showed that when the grid spacing was smaller than the viscous sublayer thickness (Δ+<5), the statistical error of the transfer characteristics near the wall was low and the simulation accuracy met the needs of engineering applications. At Reτ=300 and 600, results at different grid sizes Δ+ showed that the LBM statistics of the turbulence characteristics in the central region of the duct depended only weakly on the grid: at Reτ=600, the average errors of the mean streamwise velocity predicted by the LBM at Δ+=1.667, 3.750, and 6.250, relative to direct numerical simulation (DNS), were 1.357%, 2.994%, and 4.766%, respectively. The characteristics of turbulent square duct flow were then studied at Reτ=300, 600, and 1200 with corresponding grid sizes Δ+ of 0.833, 1.667, and 3.333. The secondary flows were successfully captured, and the predicted mean streamwise velocity, root-mean-square (rms) fluctuating velocity, and related trends were consistent with the literature, further verifying the reliability of the single-relaxation-time (SRT) LBM; the numerical results also provide a reference for understanding turbulent square duct flow at high Reτ.
These simulations demonstrate the potential of the multi-GPU parallel SRT LBM for ultra-large-scale grid computing and lay the foundation for the still larger numerical simulations required by practical engineering flows.
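The SRT (BGK) collide-and-stream update at the core of the method can be sketched in a few lines. The following is a minimal two-dimensional (D2Q9) illustration on a periodic grid, not the paper's three-dimensional multi-GPU implementation, which additionally involves CUDA kernels, MPI halo exchange between GPUs, and wall boundary conditions; all names and parameters here are illustrative.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and quadrature weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium f_i^eq."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau):
    """One SRT (BGK) collide-and-stream step on a periodic grid."""
    rho = f.sum(axis=0)                                  # density
    ux = (f*c[:, 0, None, None]).sum(axis=0)/rho         # momentum/rho
    uy = (f*c[:, 1, None, None]).sum(axis=0)/rho
    f += -(f - equilibrium(rho, ux, uy))/tau             # BGK collision
    for i in range(9):                                   # streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f

# Tiny demo: uniform density with a small sinusoidal velocity perturbation
nx = ny = 16
rho0 = np.ones((nx, ny))
ux0 = 0.01*np.sin(2*np.pi*np.arange(nx)/nx)[:, None]*np.ones((1, ny))
f = equilibrium(rho0, ux0, np.zeros((nx, ny)))
for _ in range(100):
    f = step(f, tau=0.8)
print(abs(f.sum() - nx*ny) < 1e-8)  # collision and streaming conserve mass
```

In a multi-GPU decomposition, each rank would own a sub-block of the grid and exchange one layer of boundary populations with its neighbors before each streaming step; the periodic `np.roll` above plays that role on a single domain.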

Key words: lattice Boltzmann method, multi-GPU parallelization, turbulent square duct flow, computational fluid dynamics, numerical simulation
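The wall-resolution criterion Δ+<5 and the quoted grid spacings are simple arithmetic in wall units: Δ+ = Δ·uτ/ν, and with Reτ = uτ·h/ν (h the duct half-width), a uniform grid of N cells across the half-width gives Δ+ = Reτ/N. A short sketch, where the cell counts N are hypothetical and chosen only to be consistent with the Δ+ values quoted in the abstract:

```python
# Wall units: Delta+ = Delta * u_tau / nu. With the friction Reynolds
# number Re_tau = u_tau * h / nu (h = duct half-width), a uniform grid
# of N cells across the half-width has Delta+ = Re_tau / N.
def delta_plus(re_tau, n_cells):
    """Grid spacing in wall units for a uniform mesh (illustrative)."""
    return re_tau / n_cells

# Re_tau = 600 with N = 360, 160, and 96 cells per half-width
# (hypothetical counts) reproduces the quoted spacings:
for n in (360, 160, 96):
    print(round(delta_plus(600, n), 3))  # 1.667, 3.75, 6.25
```

By this measure, all three spacings at Reτ=600 resolve (or nearly resolve) the viscous sublayer, which is consistent with the sub-5% velocity errors reported against DNS.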