Mathematics > Statistics Theory

arXiv:2504.05661v1 (math)
[Submitted on 8 Apr 2025]

Title: Online Bernstein-von Mises theorem


Authors: Jeyong Lee, Junhyeok Choi, Minwoo Chae
Abstract: Online learning is an inferential paradigm in which parameters are updated incrementally from sequentially available data, in contrast to batch learning, where the entire dataset is processed at once. In this paper, we assume that mini-batches from the full dataset become available sequentially. The Bayesian framework, which updates beliefs about unknown parameters after observing each mini-batch, is naturally suited for online learning. At each step, we update the posterior distribution using the current prior and new observations, with the updated posterior serving as the prior for the next step. However, this recursive Bayesian updating is rarely computationally tractable unless the model and prior are conjugate. When the model is regular, the updated posterior can be approximated by a normal distribution, as justified by the Bernstein-von Mises theorem. We adopt a variational approximation at each step and investigate the frequentist properties of the final posterior obtained through this sequential procedure. Under mild assumptions, we show that the accumulated approximation error becomes negligible once the mini-batch size exceeds a threshold depending on the parameter dimension. As a result, the sequentially updated posterior is asymptotically indistinguishable from the full posterior.
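The recursive update described in the abstract — the posterior at each step becomes the prior for the next — can be sketched in the one setting where it is exact rather than approximate: inferring the mean of Gaussian data with known variance under a normal prior. This conjugate toy model (not from the paper; all names and numbers below are illustrative) shows the property the paper establishes asymptotically: the sequentially updated posterior coincides with the full-data posterior.

```python
# Recursive Bayesian updating with a Gaussian posterior, in the
# conjugate case: data ~ N(theta, sigma2) with sigma2 known, prior
# theta ~ N(mean, 1/prec). Here the normal "approximation" is exact,
# so the online posterior matches the full-batch posterior exactly.

def update(mean, prec, batch, sigma2):
    """One online step: prior N(mean, 1/prec) -> posterior after a mini-batch."""
    batch_prec = len(batch) / sigma2              # precision contributed by the batch
    new_prec = prec + batch_prec
    new_mean = (prec * mean + sum(batch) / sigma2) / new_prec
    return new_mean, new_prec

sigma2 = 1.0
mean, prec = 0.0, 1.0                             # prior N(0, 1)
batches = [[1.2, 0.8, 1.1], [0.9, 1.3], [1.0, 1.2, 0.7, 1.1]]

# Sequential (online) updates: the posterior at step t is the prior at t+1.
for b in batches:
    mean, prec = update(mean, prec, b, sigma2)

# Full-batch posterior computed in one shot agrees (up to float round-off).
full = [x for b in batches for x in b]
full_mean, full_prec = update(0.0, 1.0, full, sigma2)
assert abs(mean - full_mean) < 1e-12
assert abs(prec - full_prec) < 1e-12
```

For a non-conjugate regular model this equality holds only approximately; the paper quantifies the accumulated approximation error when each step instead uses a variational Gaussian approximation.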
Comments: 107 pages, 1 figure
Subjects: Statistics Theory (math.ST)
MSC classes: 62F12, 62F15, 62E17, 62L12
ACM classes: G.3
Cite as: arXiv:2504.05661 [math.ST]
  (or arXiv:2504.05661v1 [math.ST] for this version)
  https://doi.org/10.48550/arXiv.2504.05661
arXiv-issued DOI via DataCite

Submission history

From: Jeyong Lee
[v1] Tue, 8 Apr 2025 04:22:56 UTC (431 KB)


