Physics > Fluid Dynamics
[Submitted on 9 May 2025]
Title: A framework for learning symbolic turbulence models from indirect observation data via neural networks and feature importance analysis
Abstract: Learning symbolic turbulence models from indirect observation data is of significant interest, as it not only improves the accuracy of posterior predictions but also yields explicit model formulations with good interpretability. However, such learning typically relies on gradient-free evolutionary algorithms, which can be inefficient compared with gradient-based approaches, particularly when Reynolds-averaged Navier-Stokes (RANS) simulations are involved in the training process. In view of this difficulty, we propose a framework that uses neural networks and the associated feature importance analysis to improve the efficiency of symbolic turbulence modeling. Specifically, a gradient-based method efficiently learns a neural network-based representation of the Reynolds stress from indirect data, which symbolic regression then transforms into simplified mathematical expressions. Moreover, feature importance analysis accelerates the convergence of symbolic regression by excluding insignificant input features. The proposed training strategy is first tested on the flow in a square duct, where it correctly recovers the underlying analytic models from indirect velocity data. The method is then applied to the flow over periodic hills, demonstrating that feature importance analysis can significantly improve training efficiency and learn symbolic turbulence models with satisfactory generalizability.
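The abstract's pipeline (learn a differentiable surrogate of the target quantity, rank input features by importance, then run symbolic regression over the pruned feature set) can be illustrated with a minimal sketch. Everything here is assumed for illustration: the synthetic target, the least-squares fit standing in for the neural-network Reynolds-stress representation, the pairwise-product candidate library standing in for a symbolic-regression search space, and the permutation-based importance score. None of it is the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for training data: the target depends only on two of
# five candidate features (hypothetical setup, not the paper's invariants).
X = rng.normal(size=(500, 5))
y = 2.0 * X[:, 0] + X[:, 0] * X[:, 1]

def library(X):
    """Candidate term library: linear terms plus pairwise products."""
    n = X.shape[1]
    cols = [X[:, i] for i in range(n)]
    cols += [X[:, i] * X[:, j] for i in range(n) for j in range(i, n)]
    return np.column_stack(cols)

# Surrogate model: a least-squares fit on the library, standing in for the
# gradient-trained neural-network representation of the Reynolds stress.
coef, *_ = np.linalg.lstsq(library(X), y, rcond=None)

def permutation_importance(X, y, coef):
    """Score each input feature by the error increase when it is shuffled."""
    base = np.mean((library(X) @ coef - y) ** 2)
    scores = []
    for i in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, i] = rng.permutation(Xp[:, i])
        scores.append(np.mean((library(Xp) @ coef - y) ** 2) - base)
    return np.array(scores)

scores = permutation_importance(X, y, coef)

# Prune insignificant features before the (expensive) symbolic search,
# mirroring the role of feature importance analysis in the framework.
keep = np.where(scores > 0.1 * scores.max())[0]
print(keep)  # only the truly influential features survive the pruning
```

Running this keeps only features 0 and 1, so a subsequent symbolic-regression search would operate on a 2-variable library instead of a 5-variable one, which is the convergence-acceleration effect the abstract describes.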