Computer Science > Machine Learning

arXiv:2510.19163 (cs)
[Submitted on 22 Oct 2025]

Title: Natural Gradient VI: Guarantees for Non-Conjugate Models

Authors: Fangyuan Sun, Ilyas Fatkhullin, Niao He
Abstract: Stochastic Natural Gradient Variational Inference (NGVI) is a widely used method for approximating posterior distributions in probabilistic models. Despite its empirical success and foundational role in variational inference, its theoretical underpinnings remain limited, particularly in the case of non-conjugate likelihoods. While NGVI has been shown to be a special instance of Stochastic Mirror Descent, and recent work has provided convergence guarantees using relative smoothness and strong convexity for conjugate models, these results do not extend to the non-conjugate setting, where the variational loss becomes non-convex and harder to analyze. In this work, we focus on mean-field parameterization and advance the theoretical understanding of NGVI in three key directions. First, we derive sufficient conditions under which the variational loss satisfies relative smoothness with respect to a suitable mirror map. Second, leveraging this structure, we propose a modified NGVI algorithm incorporating non-Euclidean projections and prove its global non-asymptotic convergence to a stationary point. Finally, under additional structural assumptions about the likelihood, we uncover hidden convexity properties of the variational loss and establish fast global convergence of NGVI to a global optimum. These results provide new insights into the geometry and convergence behavior of NGVI in challenging inference settings.
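As background for the mirror-descent view referenced in the abstract, the following is a minimal sketch of a stochastic mirror descent (SMD) step; the specific mirror map \(\phi\), constraint set \(\Lambda\), and step size \(\eta_t\) are illustrative assumptions rather than the paper's exact construction.

\[
\lambda_{t+1} \in \operatorname*{arg\,min}_{\lambda \in \Lambda} \Big\{ \eta_t \big\langle \widehat{g}_t, \lambda \big\rangle + D_{\phi}(\lambda, \lambda_t) \Big\},
\qquad
D_{\phi}(\lambda, \lambda') := \phi(\lambda) - \phi(\lambda') - \big\langle \nabla \phi(\lambda'), \lambda - \lambda' \big\rangle,
\]

where \(\widehat{g}_t\) is a stochastic gradient of the variational loss (the negative ELBO) at \(\lambda_t\) and \(D_{\phi}\) is the Bregman divergence induced by the mirror map \(\phi\). For an exponential-family variational distribution, choosing \(\phi\) as the negative entropy (the convex conjugate of the log-partition function) and running SMD on the expectation parameters recovers a natural-gradient step in the natural parameters, which is the sense in which NGVI is an instance of SMD. Restricting \(\Lambda\) to a feasible subset of parameters (for example, keeping variances positive in a mean-field Gaussian) turns the arg-min into a non-Euclidean (Bregman) projection, one way to realize the projection step mentioned in the abstract.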
Comments: NeurIPS 2025
Subjects: Machine Learning (cs.LG); Optimization and Control (math.OC)
MSC classes: 90C26, 90C15
ACM classes: G.1.6
Cite as: arXiv:2510.19163 [cs.LG]
  (or arXiv:2510.19163v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2510.19163
arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Fangyuan Sun
[v1] Wed, 22 Oct 2025 01:46:31 UTC (2,673 KB)