
Computer Science > Machine Learning

arXiv:2509.13933 (cs)
[Submitted on 17 Sep 2025 (v1), last revised 19 Sep 2025 (this version, v2)]

Title: Adaptive Client Selection via Q-Learning-based Whittle Index in Wireless Federated Learning


Authors: Qiyue Li, Yingxin Liu, Hang Qi, Jieping Luo, Zhizhang Liu, Jingjin Wu
Abstract: We consider the client selection problem in wireless Federated Learning (FL), with the objective of reducing the total time required to achieve a certain level of learning accuracy. Since the server cannot observe the clients' dynamic states, which can change their computation and communication efficiency, we formulate client selection as a restless multi-armed bandit problem. We propose a scalable and efficient approach called Whittle Index Learning in Federated Q-learning (WILF-Q), which uses Q-learning to adaptively learn and update an approximated Whittle index associated with each client, and then selects the clients with the highest indices. Compared to existing approaches, WILF-Q does not require explicit knowledge of client state transitions or data distributions, making it well-suited for deployment in practical FL settings. Experimental results demonstrate that WILF-Q significantly outperforms existing baseline policies in terms of learning efficiency, providing a robust and efficient approach to client selection in wireless FL.
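The index-based selection rule described in the abstract can be sketched as follows. This is an illustrative two-timescale Q-learning approximation of the Whittle index for restless arms (clients), not the paper's actual WILF-Q implementation; all class and function names, state spaces, and learning rates here are hypothetical.

```python
import numpy as np

class WhittleQLearner:
    """Per-client (per-arm) learner: a Q-table over (state, action) plus a
    per-state estimate of the Whittle index. Illustrative sketch only."""

    def __init__(self, n_states, alpha=0.1, beta=0.01, gamma=0.9):
        self.Q = np.zeros((n_states, 2))   # actions: 0 = passive, 1 = active (selected)
        self.lam = np.zeros(n_states)      # per-state Whittle index estimate
        self.alpha, self.beta, self.gamma = alpha, beta, gamma

    def update(self, s, a, r, s_next):
        # The passive action earns the subsidy lam[s] instead of the real reward.
        reward = r if a == 1 else self.lam[s]
        td_target = reward + self.gamma * self.Q[s_next].max()
        self.Q[s, a] += self.alpha * (td_target - self.Q[s, a])
        # Slower timescale: move lam[s] toward the subsidy that makes the
        # learner indifferent between selecting and not selecting in state s.
        self.lam[s] += self.beta * (self.Q[s, 1] - self.Q[s, 0])

def select_clients(learners, states, k):
    """Server-side rule: pick the k clients with the highest index estimates
    at their currently observed (or estimated) states."""
    indices = [ln.lam[s] for ln, s in zip(learners, states)]
    return np.argsort(indices)[-k:][::-1]
```

In a wireless FL round, the server would call `select_clients` to choose which clients train locally, then feed each selected client's observed reward (e.g., negative round latency) back through `update`. The two-timescale structure (fast Q updates, slow index updates) is a standard way to learn Whittle indices without knowing the clients' state-transition dynamics, which matches the model-free property the abstract claims for WILF-Q.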
Subjects: Machine Learning (cs.LG); Distributed, Parallel, and Cluster Computing (cs.DC)
Cite as: arXiv:2509.13933 [cs.LG]
  (or arXiv:2509.13933v2 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2509.13933
arXiv-issued DOI via DataCite

Submission history

From: Jingjin Wu [view email]
[v1] Wed, 17 Sep 2025 13:04:14 UTC (145 KB)
[v2] Fri, 19 Sep 2025 05:24:50 UTC (171 KB)