Physics > Geophysics

arXiv:2509.14919v1 (physics)
[Submitted on 18 Sep 2025]

Title: Inspired by machine learning optimization: can gradient-based optimizers solve cycle skipping in full waveform inversion given sufficient iterations?


Authors: Xinru Mu, Omar M. Saad, Shaowen Wang, Tariq Alkhalifah
Abstract: Full waveform inversion (FWI) iteratively updates the velocity model by minimizing the difference between observed and simulated data. Because of the high computational cost and memory requirements of global optimization algorithms, FWI is typically implemented with local optimization methods. However, when the initial velocity model is inaccurate and low-frequency seismic data (e.g., below 3 Hz) are absent, the mismatch between simulated and observed data may exceed half a cycle, a phenomenon known as cycle skipping. In such cases, local optimization algorithms (e.g., gradient-based local optimizers) tend to converge to local minima, leading to inaccurate inversion results. In machine learning, neural network training is also an optimization problem prone to local minima. It often employs gradient-based optimizers with a relatively large learning rate (beyond the theoretical limits of local optimization, which are usually determined numerically by a line search), which allows the optimization to behave like a quasi-global optimizer. Consequently, after training for several thousand iterations, we can obtain a neural network model with strong generative capability. In this study, we likewise employ gradient-based optimizers with a relatively large learning rate for FWI. Results from both synthetic and field-data experiments show that FWI may initially converge to a local minimum; however, with sufficient additional iterations, the inversion can gradually approach the global minimum, progressing slowly from the shallow subsurface to depth, and ultimately yield an accurate velocity model. Furthermore, numerical examples indicate that, given sufficient iterations, reasonable velocity inversion results can still be achieved even when low-frequency data below 5 Hz are missing.
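The following is a minimal, self-contained Python sketch (not the authors' code) of the mechanism the abstract describes: a cycle-skipping-prone misfit, here a toy 1-D problem of recovering the time shift of a Ricker wavelet, is minimized with a textbook Adam optimizer at two fixed learning rates. All function names, wavelet parameters, and learning-rate values are illustrative assumptions; the paper's actual experiments use full wave-equation FWI on synthetic and field data.

import numpy as np

t = np.linspace(0.0, 1.0, 1001)                # time axis, seconds

def wavelet(shift, freq=5.0):
    # Ricker wavelet centered at 0.5 s and delayed by `shift` (toy "simulated data")
    tau = t - 0.5 - shift
    a = (np.pi * freq * tau) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

d_obs = wavelet(0.0)                           # "observed data": the true shift is 0

def misfit(shift):
    # L2 waveform misfit; oscillatory in `shift`, so an initial guess more than
    # half a period away places the optimizer in a local basin (cycle skipping)
    r = wavelet(shift) - d_obs
    return 0.5 * float(np.sum(r * r))

def grad(shift, eps=1e-5):
    # central finite-difference gradient of the scalar misfit
    return (misfit(shift + eps) - misfit(shift - eps)) / (2.0 * eps)

def adam(shift0, lr, n_iter=5000, beta1=0.9, beta2=0.999, eps=1e-8):
    # textbook Adam update on the scalar unknown; `lr` is the learning rate
    x, m, v = shift0, 0.0, 0.0
    for k in range(1, n_iter + 1):
        g = grad(x)
        m = beta1 * m + (1.0 - beta1) * g
        v = beta2 * v + (1.0 - beta2) * g * g
        mhat = m / (1.0 - beta1 ** k)
        vhat = v / (1.0 - beta2 ** k)
        x -= lr * mhat / (np.sqrt(vhat) + eps)
    return x

x0 = 0.25   # initial shift > half a period (0.1 s at 5 Hz): cycle-skipping regime
for lr in (1e-4, 2e-2):
    x = adam(x0, lr)
    print(f"lr={lr:g}: recovered shift = {x:+.4f} s, misfit = {misfit(x):.4f}")

Comparing the recovered shifts and misfit values for the two runs illustrates the trade-off the paper exploits: small steps tend to keep the iterate in the basin where it starts, while relatively large steps let the optimizer traverse local basins over many iterations, at the cost of a noisier trajectory.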
Comments: 40 pages, 40 figures
Subjects: Geophysics (physics.geo-ph); Machine Learning (cs.LG)
Cite as: arXiv:2509.14919 [physics.geo-ph]
  (or arXiv:2509.14919v1 [physics.geo-ph] for this version)
  https://doi.org/10.48550/arXiv.2509.14919
arXiv-issued DOI via DataCite

Submission history

From: Xinru Mu
[v1] Thu, 18 Sep 2025 12:56:43 UTC (34,578 KB)