

Mathematics > Statistics Theory

arXiv:2306.00126 (math)
[Submitted on 31 May 2023]

Title: On Mixing Rates for Bayesian CART


Authors: Jungeum Kim, Veronika Rockova
Abstract: The success of Bayesian inference with MCMC depends critically on Markov chains rapidly reaching the posterior distribution. Despite the plenitude of inferential theory for posteriors in Bayesian non-parametrics, the convergence properties of MCMC algorithms that simulate from such ideal inferential targets are not thoroughly understood. This work focuses on the Bayesian CART algorithm, which forms a building block of Bayesian Additive Regression Trees (BART). We derive upper bounds on mixing times for typical posteriors under various proposal distributions. Exploiting the wavelet representation of trees, we provide sufficient conditions for Bayesian CART to mix well (polynomially) under certain hierarchical connectivity restrictions on the signal. We also derive a negative result showing that Bayesian CART (based on simple grow and prune steps) cannot reach deep isolated signals in faster than exponential mixing time. To remediate myopic tree exploration, we propose Twiggy Bayesian CART, which attaches/detaches entire twigs (not just single nodes) in the proposal distribution. We show polynomial mixing of Twiggy Bayesian CART without assuming that the signal is connected on a tree. Going further, we show that informed variants achieve even faster mixing. A thorough simulation study highlights discrepancies between spike-and-slab priors and Bayesian CART under a variety of proposals.
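To make the grow/prune dynamics described in the abstract concrete, the sketch below implements a minimal Metropolis-Hastings sampler for a dyadic Bayesian CART posterior. It is an illustration only, not the authors' implementation: it assumes covariates in [0,1) with splits at dyadic midpoints, a conjugate N(0, tau^2) prior on leaf means with known noise level sigma, and a CGM-style split prior alpha*(1+d)^(-beta); all names and parameter values are hypothetical. The Twiggy variant (attaching/detaching whole twigs) and the informed proposals mentioned in the abstract are not implemented here.

```python
# Minimal sketch of a grow/prune Metropolis-Hastings sampler for a dyadic
# Bayesian CART posterior. Illustrative assumptions: x in [0,1), dyadic splits,
# conjugate N(0, tau^2) prior on leaf means, known noise sd sigma.
import numpy as np

rng = np.random.default_rng(0)
sigma, tau = 0.5, 1.0                   # known noise sd; prior sd of leaf means
alpha, beta, max_depth = 0.95, 2.0, 6   # split prior p_split(d) = alpha*(1+d)^(-beta)

def depth(k):                           # heap index k -> depth of node k
    return k.bit_length() - 1

def leaf_of(x, tree):                   # descend the dyadic tree to the leaf containing x
    k = 1
    while k in tree:
        d = depth(k) + 1
        k = (1 << d) + int(x * (1 << d))
    return k

def leaves(tree):                       # children of internal nodes that are not internal
    return ({c for k in tree for c in (2 * k, 2 * k + 1)} - tree) or {1}

def prunable(tree):                     # internal nodes whose two children are both leaves
    return [k for k in tree if 2 * k not in tree and 2 * k + 1 not in tree]

def growable(tree):                     # leaves that may still be split
    return [k for k in leaves(tree) if depth(k) < max_depth]

def log_post(tree, x, y):               # log prior + log marginal likelihood (up to a constant)
    p = lambda d: alpha * (1.0 + d) ** (-beta)
    lp = sum(np.log(p(depth(k))) for k in tree)
    lp += sum(np.log(1.0 - p(depth(k))) for k in leaves(tree))
    assign = np.array([leaf_of(xi, tree) for xi in x])
    for k in leaves(tree):               # conjugate Gaussian marginal, leaf by leaf
        yk = y[assign == k]
        n, s = len(yk), yk.sum()
        lp += (-0.5 * n * np.log(2 * np.pi * sigma**2)
               - 0.5 * np.log(1.0 + n * tau**2 / sigma**2)
               - yk @ yk / (2 * sigma**2)
               + tau**2 * s**2 / (2 * sigma**2 * (sigma**2 + n * tau**2)))
    return lp

def mh_step(tree, x, y):                # one grow/prune Metropolis-Hastings update
    if rng.random() < 0.5:              # GROW: split a random eligible leaf
        cands = growable(tree)
        if not cands:
            return tree
        new = tree | {cands[rng.integers(len(cands))]}
        log_hastings = np.log(len(cands)) - np.log(len(prunable(new)))
    else:                               # PRUNE: collapse a random terminal split
        cands = prunable(tree)
        if not cands:
            return tree
        new = tree - {cands[rng.integers(len(cands))]}
        log_hastings = np.log(len(cands)) - np.log(len(growable(new)))
    log_a = log_post(new, x, y) - log_post(tree, x, y) + log_hastings
    return new if np.log(rng.random()) < log_a else tree

# Toy usage: a step-function signal that a dyadic tree can represent exactly.
x = rng.random(200)
y = np.where(x < 0.25, 2.0, -1.0) + sigma * rng.standard_normal(200)
tree = set()                            # start from the root-only tree
for _ in range(2000):
    tree = mh_step(tree, x, y)
print(sorted(tree))                     # internal nodes of the sampled tree
```

In this simple chain each step adds or removes a single node, which is exactly the kind of local move the abstract argues cannot reach deep isolated signals in less than exponential mixing time; a twig-level proposal in the spirit of Twiggy Bayesian CART would enlarge the candidate-move sets to attach or detach entire multi-node subtrees.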
Subjects: Statistics Theory (math.ST); Machine Learning (stat.ML)
Cite as: arXiv:2306.00126 [math.ST]
  (or arXiv:2306.00126v1 [math.ST] for this version)
  https://doi.org/10.48550/arXiv.2306.00126
arXiv-issued DOI via DataCite

Submission history

From: Jungeum Kim
[v1] Wed, 31 May 2023 19:04:28 UTC (9,534 KB)
