Mathematics > Optimization and Control

arXiv:2501.07863 (math)
[Submitted on 14 Jan 2025 (v1), last revised 5 Feb 2025 (this version, v5)]

Title: An accelerated gradient method with adaptive restart for convex multiobjective optimization problems

Authors:Hao Luo, Liping Tang, Xinmin Yang
Abstract: In this work, based on a continuous-time approach, we propose an accelerated gradient method with adaptive residual-based restart for convex multiobjective optimization problems. First, we rigorously derive the continuous limit of the multiobjective accelerated proximal gradient method of Tanabe et al. [Comput. Optim. Appl., 2023]. It is a second-order ordinary differential equation (ODE) that involves a special projection operator and can be viewed as an extension of the ODE of Su et al. [J. Mach. Learn. Res., 2016] for Nesterov acceleration. Then, we introduce a novel accelerated multiobjective gradient (AMG) flow with tailored time scaling that adapts automatically to the convex case and the strongly convex case, and we establish the exponential decay rate of a merit function along the solution trajectory of the AMG flow via Lyapunov analysis. After that, we consider an implicit-explicit time discretization and obtain an accelerated multiobjective gradient method whose subproblem is a convex quadratic program. Fast sublinear and linear rates are proved for convex and strongly convex problems, respectively. In addition, we present an efficient residual-based adaptive restart technique that overcomes the oscillation issue and significantly improves convergence. Numerical results validate the practical performance of the proposed method.
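The residual-based restart idea from the abstract can be illustrated in the simpler single-objective setting. The sketch below is a minimal Nesterov-type gradient iteration that resets its momentum whenever the gradient-mapping residual stops decreasing, damping the oscillations that plain acceleration exhibits on convex problems. It is an illustrative analogue only, not the authors' multiobjective method (which solves a convex quadratic programming subproblem at each step); the function name, step size, and the exact restart test are all assumptions made for this example.

```python
import numpy as np

def nesterov_restart(grad, x0, step, iters=500, tol=1e-10):
    """Nesterov accelerated gradient descent with a residual-based
    adaptive restart: the momentum counter is reset whenever the norm
    of the gradient-mapping residual fails to decrease."""
    x = np.asarray(x0, dtype=float)
    y = x.copy()               # extrapolated point
    k = 0                      # momentum counter, reset on restart
    prev_res = np.inf
    for _ in range(iters):
        g = grad(y)
        x_new = y - step * g                     # gradient step from y
        res = np.linalg.norm(x_new - y) / step   # gradient-mapping residual
        if res > prev_res:                       # oscillation detected
            k = 0                                # restart: kill momentum
        prev_res = res
        k += 1
        y = x_new + (k - 1) / (k + 2) * (x_new - x)  # Nesterov extrapolation
        x = x_new
        if res < tol:
            break
    return x
```

For example, minimizing the convex quadratic f(x) = 0.5 xᵀAx with A = diag(1, 10) and step size below 1/L = 0.1, the iteration drives the residual to the tolerance and returns a point near the minimizer at the origin.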
Subjects: Optimization and Control (math.OC); Numerical Analysis (math.NA)
Cite as: arXiv:2501.07863 [math.OC]
  (or arXiv:2501.07863v5 [math.OC] for this version)
  https://doi.org/10.48550/arXiv.2501.07863
arXiv-issued DOI via DataCite

Submission history

From: Hao Luo
[v1] Tue, 14 Jan 2025 05:59:37 UTC (659 KB)
[v2] Wed, 15 Jan 2025 05:27:42 UTC (659 KB)
[v3] Sat, 1 Feb 2025 17:16:26 UTC (660 KB)
[v4] Tue, 4 Feb 2025 14:06:20 UTC (660 KB)
[v5] Wed, 5 Feb 2025 14:24:30 UTC (660 KB)