
Physics > Fluid Dynamics

arXiv:2505.05716v1 (physics)
[Submitted on 9 May 2025]

Title: A framework for learning symbolic turbulence models from indirect observation data via neural networks and feature importance analysis


Authors: Chutian Wu, Xin-Lei Zhang, Duo Xu, Guowei He
Abstract: Learning symbolic turbulence models from indirect observation data is of significant interest, as it not only improves the accuracy of posterior predictions but also yields explicit model formulations with good interpretability. However, such learning typically resorts to gradient-free evolutionary algorithms, which can be relatively inefficient compared with gradient-based approaches, particularly when Reynolds-averaged Navier-Stokes (RANS) simulations are involved in the training process. In view of this difficulty, we propose a framework that uses neural networks and the associated feature importance analysis to improve the efficiency of symbolic turbulence modeling. In doing so, a gradient-based method can be used to efficiently learn neural-network-based representations of the Reynolds stress from indirect data, which are then transformed into simplified mathematical expressions via symbolic regression. Moreover, feature importance analysis is introduced to accelerate the convergence of symbolic regression by excluding insignificant input features. The proposed training strategy is tested on the flow in a square duct, where it correctly learns the underlying analytic models from indirect velocity data. The method is further applied to the flow over periodic hills, demonstrating that feature importance analysis can significantly improve training efficiency and yield symbolic turbulence models with satisfactory generalizability.
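The pipeline the abstract describes, training a neural-network Reynolds-stress representation with gradients, ranking input features by importance, and then distilling the network into a symbolic expression, can be illustrated on a toy problem. The sketch below is not the authors' code: the two "physical" features, the spurious feature, the synthetic target, permutation importance, and a linear fit over a hand-picked candidate library (standing in for full symbolic regression) are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's setting (assumed, not the authors' data):
# two relevant scalar features (e.g. strain/rotation invariants), one
# spurious feature, and a synthetic "Reynolds stress" target to recover.
n = 500
S = rng.uniform(-1.0, 1.0, n)   # relevant feature x0
W = rng.uniform(-1.0, 1.0, n)   # relevant feature x1
q = rng.uniform(-1.0, 1.0, n)   # spurious feature x2
X = np.stack([S, W, q], axis=1)
y = 0.3 * S**2 - 0.1 * S * W    # hidden analytic model

# Step 1: gradient-based training of a small neural-network representation.
W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(8000):
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2).ravel() - y
    g = (2.0 / n) * err[:, None]          # gradient of MSE w.r.t. prediction
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = (g @ W2.T) * (1.0 - h**2)        # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def nn(Z):
    return (np.tanh(Z @ W1 + b1) @ W2 + b2).ravel()

# Step 2: permutation feature importance on the trained network.
# Shuffling an input that matters increases the loss; shuffling x2 should not.
base = np.mean((nn(X) - y) ** 2)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(np.mean((nn(Xp) - y) ** 2) - base)

# Step 3: symbolic regression reduced to its simplest form -- a least-squares
# fit over a quadratic candidate library built only from retained features.
keep = [j for j, v in enumerate(importance) if v > 0.05 * max(importance)]
names, cols = ["1"], [np.ones(n)]
for a in range(len(keep)):
    for b in range(a, len(keep)):
        i, j = keep[a], keep[b]
        names.append(f"x{i}*x{j}")
        cols.append(X[:, i] * X[:, j])
coef, *_ = np.linalg.lstsq(np.stack(cols, axis=1), nn(X), rcond=None)
model = dict(zip(names, coef))
print("importance:", np.round(importance, 4))
print("symbolic model:", {k: round(v, 2) for k, v in model.items()})
```

The point of Step 2 is that every feature dropped from the library shrinks the search space of symbolic regression combinatorially, which is the efficiency gain the paper attributes to feature importance analysis; here the spurious feature x2 scores near zero and is excluded before the fit.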
Comments: 34 pages, 16 figures
Subjects: Fluid Dynamics (physics.flu-dyn)
Cite as: arXiv:2505.05716 [physics.flu-dyn]
  (or arXiv:2505.05716v1 [physics.flu-dyn] for this version)
  https://doi.org/10.48550/arXiv.2505.05716
arXiv-issued DOI via DataCite

Submission history

From: Chutian Wu
[v1] Fri, 9 May 2025 01:32:43 UTC (5,587 KB)