Computer Science > Machine Learning

arXiv:2506.24042 (cs)
[Submitted on 30 Jun 2025 (v1), last revised 13 Aug 2025 (this version, v2)]

Title: Faster Diffusion Models via Higher-Order Approximation

Authors: Gen Li, Yuchen Zhou, Yuting Wei, Yuxin Chen
Abstract: In this paper, we explore provable acceleration of diffusion models without any additional retraining. Focusing on the task of approximating a target data distribution in $\mathbb{R}^d$ to within $\varepsilon$ total-variation distance, we propose a principled, training-free sampling algorithm that requires only on the order of $$ d^{1+2/K} \varepsilon^{-1/K} $$ score function evaluations (up to logarithmic factors) in the presence of accurate scores, where $K>0$ is an arbitrary fixed integer. This result applies to a broad class of target data distributions, without the need for assumptions such as smoothness or log-concavity. Our theory is robust vis-à-vis inexact score estimation, degrading gracefully as the score estimation error increases, without demanding higher-order smoothness on the score estimates as assumed in previous work. The proposed algorithm draws insight from high-order ODE solvers, leveraging high-order Lagrange interpolation and successive refinement to approximate the integral derived from the probability flow ODE. More broadly, our work develops a theoretical framework towards understanding the efficacy of high-order methods for accelerated sampling.
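For intuition only, below is a minimal sketch of the general idea the abstract alludes to: fit a Lagrange interpolant through the most recent drift (score-based) evaluations of the probability-flow ODE and integrate that polynomial exactly over each step, so every score evaluation is reused across higher-order updates. This is not the paper's algorithm and omits its successive-refinement scheme and guarantees; the score function `score_fn`, the linear VP noise schedule `beta(t)`, the step grid, and the Gaussian toy check are all illustrative assumptions.

```python
# Illustrative sketch (NOT the paper's algorithm) of a K-step Lagrange-interpolation
# sampler for the probability-flow ODE in VP-SDE form:
#   dx/dt = -0.5 * beta(t) * (x + score(x, t)).
# score_fn, beta(t), and the step grid are assumptions made for this sketch.
import numpy as np


def lagrange_step_weights(nodes, t_lo, t_hi):
    """Exact integral over [t_lo, t_hi] of each Lagrange basis polynomial
    with the given interpolation nodes (computed via monomial coefficients)."""
    weights = []
    for i, ti in enumerate(nodes):
        basis = np.poly1d([1.0])
        for j, tj in enumerate(nodes):
            if j != i:
                basis = basis * np.poly1d([1.0, -tj]) * (1.0 / (ti - tj))
        antideriv = basis.integ()
        weights.append(antideriv(t_hi) - antideriv(t_lo))
    return weights


def sample_pf_ode(score_fn, x_T, t_grid, K=3):
    """Integrate the probability-flow ODE from t_grid[0] down to t_grid[-1],
    reusing up to K previous drift evaluations per step (Adams-Bashforth style)."""
    beta = lambda t: 0.1 + 19.9 * t                      # illustrative linear VP schedule
    drift = lambda x, t: -0.5 * beta(t) * (x + score_fn(x, t))

    x = np.array(x_T, dtype=float)
    hist_t, hist_f = [], []                              # past times / drift evaluations
    for t_cur, t_next in zip(t_grid[:-1], t_grid[1:]):
        hist_t.append(t_cur)
        hist_f.append(drift(x, t_cur))
        hist_t, hist_f = hist_t[-K:], hist_f[-K:]
        # Integrate the Lagrange interpolant of the drift over [t_cur, t_next];
        # with a single stored point this reduces to plain Euler.
        for w, f in zip(lagrange_step_weights(hist_t, t_cur, t_next), hist_f):
            x = x + w * f
    return x


if __name__ == "__main__":
    # Toy check with a standard-Gaussian target, whose exact score is -x:
    # the drift then vanishes, so samples should keep unit standard deviation.
    rng = np.random.default_rng(0)
    x_T = rng.standard_normal(1000)
    t_grid = np.linspace(1.0, 1e-3, 40)
    x_0 = sample_pf_ode(lambda x, t: -x, x_T, t_grid, K=3)
    print("sample std:", x_0.std())
```

In practice one would substitute a learned score (or noise-prediction) network for `score_fn` and a discretization matched to the noise schedule; the sketch only illustrates how reusing past score evaluations yields a higher-order update per score evaluation, which is the kind of mechanism the paper analyzes.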
Subjects: Machine Learning (cs.LG); Numerical Analysis (math.NA); Statistics Theory (math.ST); Machine Learning (stat.ML)
Cite as: arXiv:2506.24042 [cs.LG]
  (or arXiv:2506.24042v2 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2506.24042
arXiv-issued DOI via DataCite

Submission history

From: Yuchen Zhou
[v1] Mon, 30 Jun 2025 16:49:03 UTC (67 KB)
[v2] Wed, 13 Aug 2025 15:05:42 UTC (69 KB)