
Mathematics > Optimization and Control

arXiv:2410.01410 (math)
[Submitted on 2 Oct 2024]

Title: On the Convergence of FedProx with Extrapolation and Inexact Prox


Authors: Hanmin Li, Peter Richtárik
Abstract: Enhancing the FedProx federated learning algorithm (Li et al., 2020) with server-side extrapolation, Li et al. (2024a) recently introduced the FedExProx method. Their theoretical analysis, however, relies on the assumption that each client computes a certain proximal operator exactly, an assumption that is virtually never satisfiable in practice. In this paper, we investigate the behavior of FedExProx without this exactness assumption in the smooth and globally strongly convex setting. We establish a general convergence result, showing that inexactness leads to convergence to a neighborhood of the solution. Additionally, we demonstrate that, with careful control, the adverse effects of this inexactness can be mitigated. By linking inexactness to biased compression (Beznosikov et al., 2023), we refine our analysis, highlighting the robustness of extrapolation to inexact proximal updates. We also examine the local iteration complexity required by each client to achieve the required level of inexactness using various local optimizers. Our theoretical insights are validated through comprehensive numerical experiments.
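The iteration described in the abstract can be illustrated with a small sketch. This is not the authors' code: it shows a FedExProx-style update, x_{k+1} = x_k + alpha * (mean_i prox_{gamma f_i}(x_k) - x_k), on quadratic clients that share a common minimizer (the interpolation regime), with each client's proximal step computed inexactly by a few gradient steps. The values of `gamma`, `alpha`, and `n_steps` are illustrative assumptions, not the paper's tunings.

```python
import numpy as np

# Hedged sketch of a FedExProx-style iteration with inexact prox.
# Clients hold f_i(x) = 0.5 x^T A_i x - b_i^T x with b_i = A_i x_star,
# so every f_i is smooth, strongly convex, and minimized at x_star.

rng = np.random.default_rng(0)
d, n_clients = 5, 4
x_star = rng.standard_normal(d)                     # common minimizer of every f_i

A = []
for _ in range(n_clients):
    M = rng.standard_normal((d, d))
    A.append(M @ M.T / d + np.eye(d))               # symmetric positive definite
b = [Ai @ x_star for Ai in A]

def inexact_prox(Ai, bi, x, gamma, n_steps=10):
    """Approximate prox_{gamma f_i}(x) by a few gradient steps on
    z -> f_i(z) + ||z - x||^2 / (2 gamma), instead of solving exactly."""
    z = x.copy()
    L = np.linalg.eigvalsh(Ai).max() + 1.0 / gamma  # smoothness of the subproblem
    for _ in range(n_steps):
        grad = Ai @ z - bi + (z - x) / gamma
        z -= grad / L
    return z

gamma, alpha = 1.0, 1.5                             # server extrapolation: alpha > 1
x = np.zeros(d)
for _ in range(200):
    avg = np.mean([inexact_prox(A[i], b[i], x, gamma)
                   for i in range(n_clients)], axis=0)
    x = x + alpha * (avg - x)                       # extrapolated server step

print(np.linalg.norm(x - x_star))                   # distance to the solution
```

Because each prox subproblem is warm-started at the current iterate, the prox error here shrinks along with the iterate error, so this particular sketch converges all the way; a fixed absolute level of inexactness would instead leave the iterates in a neighborhood of the solution, as the paper's general result describes.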
Comments: 36 pages, 6 figures
Subjects: Optimization and Control (math.OC); Artificial Intelligence (cs.AI)
MSC classes: 90C25
Cite as: arXiv:2410.01410 [math.OC]
  (or arXiv:2410.01410v1 [math.OC] for this version)
  https://doi.org/10.48550/arXiv.2410.01410
arXiv-issued DOI via DataCite

Submission history

From: Hanmin Li
[v1] Wed, 2 Oct 2024 10:42:27 UTC (4,971 KB)