Computer Science > Machine Learning

arXiv:2502.13105 (cs)
[Submitted on 18 Feb 2025 (v1), last revised 24 Sep 2025 (this version, v2)]

Title: Enhanced uncertainty quantification variational autoencoders for the solution of Bayesian inverse problems


Authors: Andrea Tonini, Luca Dede'
Abstract: Neural networks are a powerful tool for solving deterministic and Bayesian inverse problems in real time; in particular, variational autoencoders, a specialized type of neural network, enable the Bayesian estimation of model parameters and of their distribution from observational data, allowing real-time inverse uncertainty quantification. In this work, we build upon existing research [Goh, H. et al., Proceedings of Machine Learning Research, 2022] by proposing a novel loss function for training variational autoencoders for Bayesian inverse problems. When the forward map is affine, we provide a theoretical proof that the latent states of the variational autoencoder converge to the posterior distribution of the model parameters. We validate this theoretical result through numerical tests and compare the proposed variational autoencoder with the existing one from the literature in terms of both accuracy and generalization properties. Finally, we test the proposed variational autoencoder on a Laplace equation, comparing it with the original architecture and with Markov chain Monte Carlo.
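
The affine case is the natural setting for a convergence proof because it admits a closed-form benchmark: when the forward map is affine and the prior and observational noise are Gaussian, the posterior is itself Gaussian and known explicitly, giving an exact target against which the latent distribution of a variational autoencoder can be checked. The following is the standard linear-Gaussian posterior, stated in our own notation rather than the paper's:

\[
y = Au + b + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0,\Gamma), \qquad u \sim \mathcal{N}(\mu_0,\Sigma_0),
\]
\[
u \mid y \sim \mathcal{N}(\mu_{\mathrm{post}},\,\Sigma_{\mathrm{post}}), \qquad
\Sigma_{\mathrm{post}} = \bigl(A^{\top}\Gamma^{-1}A + \Sigma_0^{-1}\bigr)^{-1}, \qquad
\mu_{\mathrm{post}} = \Sigma_{\mathrm{post}}\bigl(A^{\top}\Gamma^{-1}(y-b) + \Sigma_0^{-1}\mu_0\bigr).
\]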
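As a rough illustration of the general setup described in the abstract (not the authors' architecture or loss function), the following PyTorch sketch trains an encoder that maps each observation y to a Gaussian over the unknown parameter u, uses a known affine forward map in place of a learned decoder, and minimizes an ELBO-style loss balancing data misfit against a KL term. All dimensions, the toy forward map, and the loss weighting are assumptions made for illustration only.

import torch
import torch.nn as nn

torch.manual_seed(0)
dim_u, dim_y, n_data = 2, 5, 4096

# Toy affine forward map F(u) = A u + b with additive Gaussian noise
# (illustrative; not the forward map used in the paper).
A = torch.randn(dim_y, dim_u)
b = torch.randn(dim_y)
noise_std = 0.1

u_true = torch.randn(n_data, dim_u)                    # prior samples u ~ N(0, I)
y_obs = u_true @ A.T + b + noise_std * torch.randn(n_data, dim_y)

class Encoder(nn.Module):
    """Maps an observation y to the mean and log-variance of q(u | y)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_y, 64), nn.Tanh())
        self.mean = nn.Linear(64, dim_u)
        self.logvar = nn.Linear(64, dim_u)
    def forward(self, y):
        h = self.net(y)
        return self.mean(h), self.logvar(h)

enc = Encoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)

for step in range(2000):
    mu, logvar = enc(y_obs)
    # Reparameterization trick: sample u from q(u | y).
    u_sample = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    # The known affine forward map plays the role of the decoder.
    y_pred = u_sample @ A.T + b
    # ELBO-style loss: Gaussian data misfit + KL(q(u|y) || N(0, I));
    # the weighting is an assumption, not the paper's loss function.
    misfit = ((y_pred - y_obs) ** 2).sum(dim=1).mean() / (2 * noise_std**2)
    kl = 0.5 * (mu**2 + logvar.exp() - logvar - 1).sum(dim=1).mean()
    loss = misfit + kl
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.3f}")

In this linear-Gaussian toy problem, the encoder's output for a given y can be compared directly against the closed-form posterior given above, which is the kind of check the affine convergence result makes possible.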
Comments: 23 pages, 9 figures
Subjects: Machine Learning (cs.LG); Numerical Analysis (math.NA)
MSC classes: 68T07, 62C10, 35R30
Cite as: arXiv:2502.13105 [cs.LG]
  (or arXiv:2502.13105v2 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2502.13105
arXiv-issued DOI via DataCite

Submission history

From: Andrea Tonini
[v1] Tue, 18 Feb 2025 18:17:49 UTC (7,259 KB)
[v2] Wed, 24 Sep 2025 06:58:30 UTC (3,376 KB)