Statistics > Machine Learning

arXiv:2506.11761 (stat)
[Submitted on 13 Jun 2025]

Title: Using Deep Operators to Create Spatio-temporal Surrogates for Dynamical Systems under Uncertainty


Authors: Jichuan Tang, Patrick T. Brewick, Ryan G. McClarren, Christopher Sweet
Abstract: Spatio-temporal data, which consists of responses or measurements gathered at different times and positions, is ubiquitous across diverse applications of civil infrastructure. While scientific machine learning (SciML) methods have made significant progress in tackling the issue of response prediction for individual time histories, creating a full spatio-temporal surrogate remains a challenge. This study proposes a novel variant of deep operator networks (DeepONets), namely the full-field Extended DeepONet (FExD), to serve as a spatio-temporal surrogate that provides multi-output response predictions for dynamical systems. The proposed FExD surrogate model effectively learns the full solution operator across multiple degrees of freedom by enhancing the expressiveness of the branch network and expanding the predictive capabilities of the trunk network. The FExD surrogate is deployed to simultaneously capture the dynamics at several sensing locations along a testbed model of a cable-stayed bridge subjected to stochastic ground motions. The ensuing response predictions from the FExD are comprehensively compared against both a vanilla DeepONet and a modified spatio-temporal Extended DeepONet. The results demonstrate that the proposed FExD achieves both superior accuracy and computational efficiency, representing a significant advancement in operator learning for structural dynamics applications.
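The branch/trunk decomposition that the abstract refers to can be illustrated with a minimal vanilla-DeepONet sketch. This is not the authors' FExD; the network sizes, variable names, and use of untrained random weights are all illustrative assumptions. The branch network encodes the input function sampled at m sensors (e.g. a ground-motion history), the trunk network encodes a query coordinate such as time, and the prediction is the inner product of the two p-dimensional embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(sizes, rng):
    """Random (untrained, illustrative) weights for a small MLP."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_apply(params, x):
    """Forward pass: tanh hidden layers, linear output layer."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

# Assumed sizes: m sensor samples of the input function,
# p latent basis functions, d query-coordinate dimensions.
m, p, d = 50, 32, 1
branch = mlp_init([m, 64, p], rng)  # encodes the sampled input function
trunk = mlp_init([d, 64, p], rng)   # encodes the query location/time

def deeponet(u_samples, y):
    """G(u)(y) ~ sum_k branch_k(u) * trunk_k(y)."""
    b = mlp_apply(branch, u_samples)        # shape (p,)
    t = mlp_apply(trunk, np.atleast_2d(y))  # shape (n_query, p)
    return t @ b                            # shape (n_query,)

u = rng.standard_normal(m)                # a toy input time history
ys = np.linspace(0.0, 1.0, 5)[:, None]    # five query times
pred = deeponet(u, ys)
print(pred.shape)  # (5,)
```

The multi-output extension the paper pursues would, roughly speaking, have the networks emit enough latent coefficients to reconstruct responses at many degrees of freedom at once, rather than the single scalar output per query shown here.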
Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG)
Cite as: arXiv:2506.11761 [stat.ML]
  (or arXiv:2506.11761v1 [stat.ML] for this version)
  https://doi.org/10.48550/arXiv.2506.11761
arXiv-issued DOI via DataCite

Submission history

From: Patrick Brewick [view email]
[v1] Fri, 13 Jun 2025 13:16:09 UTC (8,651 KB)