Mathematics > Numerical Analysis

arXiv:2503.05533 (math)
[Submitted on 7 Mar 2025 (v1), last revised 30 Sep 2025 (this version, v3)]

Title: Exploiting Inexact Computations in Multilevel Monte Carlo and Other Sampling Methods


Authors: Josef Martínek, Erin Carson, Robert Scheichl
Abstract: Multilevel sampling methods, such as multilevel and multifidelity Monte Carlo, multilevel stochastic collocation, or delayed acceptance Markov chain Monte Carlo, have become standard uncertainty quantification (UQ) tools for a wide class of forward and inverse problems. The underlying idea is to achieve faster convergence by leveraging a hierarchy of models, such as partial differential equation (PDE) or stochastic differential equation (SDE) discretisations with increasing accuracy. By optimally redistributing work among the levels, multilevel methods can achieve significant performance improvement compared to single level methods working with one high-fidelity model. Intuitively, approximate solutions on coarser levels can tolerate large computational error without affecting the overall accuracy. We show how this can be used in high-performance computing applications to obtain a significant performance gain. As a use case, we analyse the computational error in the standard multilevel Monte Carlo method and formulate an adaptive algorithm which determines a minimum required computational accuracy on each level of discretisation. We show two examples of how the inexactness can be converted into actual gains using an elliptic PDE with lognormal random coefficients. Using a low precision sparse direct solver combined with iterative refinement results in a simulated gain in memory references of up to $3.5\times$ compared to the reference double precision solver; while using a MINRES iterative solver, a practical speedup of up to $1.5\times$ in terms of FLOPs is achieved. These results provide a step in the direction of energy-aware scientific computing, with significant potential for energy savings.
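To make the core mechanism concrete, here is a minimal, illustrative Python sketch of a multilevel Monte Carlo estimator in which coarser levels are allowed looser solver tolerances. This is not the authors' adaptive algorithm: the toy scalar quantity of interest (standing in for a PDE/SDE solve), the tolerance schedule, the sample counts, and the names `sample_level` and `mlmc_estimate` are all hypothetical and chosen only for demonstration.

```python
# Minimal MLMC sketch with per-level inexactness (illustrative only, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)

def sample_level(level, tol):
    """One sample of the level-l correction Y_l = P_l - P_{l-1} for a toy model problem."""
    xi = rng.normal()  # shared random input couples the fine and coarse evaluations

    def quantity(n):
        # midpoint-rule average of a random integrand; stands in for a discretised solve
        x = (np.arange(n) + 0.5) / n
        val = np.mean(np.exp(np.sin(2 * np.pi * x) + 0.1 * xi))
        return val + tol * rng.uniform(-1.0, 1.0)  # inexact solve, error bounded by `tol`

    fine = quantity(2 ** (level + 3))
    if level == 0:
        return fine
    return fine - quantity(2 ** (level + 2))

def mlmc_estimate(n_samples, tols):
    """Telescoping sum of per-level sample means; coarse levels use looser tolerances."""
    return sum(
        np.mean([sample_level(l, tol) for _ in range(n)])
        for l, (n, tol) in enumerate(zip(n_samples, tols))
    )

if __name__ == "__main__":
    levels = 4
    n_samples = [max(2000 // 4 ** l, 10) for l in range(levels)]  # many cheap coarse samples
    tols = [1e-2 / 2 ** l for l in range(levels)]                 # loosest tolerance on level 0
    print("MLMC estimate:", mlmc_estimate(n_samples, tols))
```

The tolerance schedule reflects the intuition stated in the abstract: the error committed by an inexact solve on a coarse level only needs to be small relative to the discretisation error already present on that level, which is what an adaptive choice of minimum required accuracy per level can exploit.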
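The abstract also mentions a low precision sparse direct solver combined with iterative refinement. The sketch below shows classical mixed precision iterative refinement (single precision factorization, double precision residuals) on a small dense test matrix; the dense SciPy LU, the tolerance, and the test problem are assumptions for illustration and do not reproduce the paper's sparse solver setup.

```python
# Mixed precision iterative refinement sketch (illustrative; dense LU instead of a sparse solver).
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def refine_solve(A, b, tol=1e-10, max_iter=20):
    """Solve A x = b: factorize in single precision, refine with double precision residuals."""
    lu, piv = lu_factor(A.astype(np.float32))           # low precision factorization
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x                                    # residual computed in double precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        d = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
        x += d                                           # correction from the cheap low precision solve
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 200)) + 200 * np.eye(200)  # well conditioned test matrix
    b = rng.standard_normal(200)
    x = refine_solve(A, b)
    print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

The design idea is that the factorization, typically the dominant cost in both time and memory traffic, is performed in a cheaper precision, while the refinement loop restores accuracy; this is the general mechanism behind the kind of memory-reference savings quoted in the abstract, not a reproduction of its measurements.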
Subjects: Numerical Analysis (math.NA); Computation (stat.CO)
MSC classes: 65Y20, 65C05, 65C30, 65G20, 60-08
Cite as: arXiv:2503.05533 [math.NA]
  (or arXiv:2503.05533v3 [math.NA] for this version)
  https://doi.org/10.48550/arXiv.2503.05533
arXiv-issued DOI via DataCite

Submission history

From: Josef Martínek
[v1] Fri, 7 Mar 2025 16:00:52 UTC (805 KB)
[v2] Wed, 19 Mar 2025 13:06:11 UTC (801 KB)
[v3] Tue, 30 Sep 2025 14:14:42 UTC (88 KB)