
Computer Science > Distributed, Parallel, and Cluster Computing

arXiv:2501.00732 (cs)
[Submitted on 1 Jan 2025]

Title: Gradient Compression and Correlation Driven Federated Learning for Wireless Traffic Prediction


Authors: Chuanting Zhang, Haixia Zhang, Shuping Dang, Basem Shihada, Mohamed-Slim Alouini
Abstract: Wireless traffic prediction plays an indispensable role in cellular networks, enabling proactive adaptation of communication systems. Along this line, Federated Learning (FL)-based wireless traffic prediction at the edge has attracted enormous attention because it exempts clients from raw data transmission and enhances privacy protection. However, FL-based wireless traffic prediction methods still rely on heavy data transmissions between local clients and the server for local model updates. Besides, how to model the spatial dependencies of local clients under the FL framework remains unclear. To tackle this, we propose an innovative FL algorithm that employs gradient compression and correlation-driven techniques, effectively minimizing the data transmission load while preserving prediction accuracy. Our approach begins by introducing gradient sparsification into wireless traffic prediction, allowing for significant data compression during model training. We then apply error feedback and gradient tracking methods to mitigate the performance degradation caused by this compression. Moreover, we develop three tailored model aggregation strategies anchored in gradient correlation, enabling the capture of spatial dependencies across diverse clients. Experiments on two real-world datasets demonstrate that, by capturing the spatio-temporal characteristics of and correlations among local clients, the proposed algorithm outperforms state-of-the-art algorithms and can increase communication efficiency by up to two orders of magnitude without losing prediction accuracy. Code is available at https://github.com/chuanting/FedGCC.
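The abstract's first two ingredients, gradient sparsification with error feedback, can be illustrated with a minimal sketch. This is not the paper's implementation (see the linked repository for that); the class and function names are illustrative, and top-k magnitude selection is only one common choice of sparsifier. The idea: transmit only the k largest-magnitude gradient entries, and carry the dropped remainder forward as a residual so that, over rounds, no gradient information is permanently lost.

```python
import numpy as np

def sparsify_topk(grad, k):
    """Keep only the k largest-magnitude entries of a gradient vector;
    zero out the rest so only k values need to be transmitted."""
    sparse = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]  # indices of the k largest magnitudes
    sparse[idx] = grad[idx]
    return sparse

class ErrorFeedbackClient:
    """Accumulates the compression residual locally and re-injects it
    into the next round's gradient before compressing again."""
    def __init__(self, dim):
        self.residual = np.zeros(dim)

    def compress(self, grad, k):
        corrected = grad + self.residual      # add back previously dropped mass
        sparse = sparsify_topk(corrected, k)  # transmit only k nonzeros
        self.residual = corrected - sparse    # remember what was dropped
        return sparse
```

By construction, `sparse + residual` always equals the error-corrected gradient, which is what makes the compression lossless in the long run.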
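The third ingredient, correlation-driven aggregation, can also be sketched. The paper develops three tailored strategies; the version below is a single illustrative one, not any of them specifically: each client's update is weighted by how strongly its gradient correlates (cosine similarity) with the other clients' gradients, so clients with similar spatio-temporal traffic patterns reinforce each other. The softmax normalization of the correlation scores is an assumption for the sketch.

```python
import numpy as np

def correlation_weights(grads):
    """Weight each client by the mean cosine similarity of its gradient
    to all other clients' gradients, softmax-normalized to sum to 1."""
    G = np.stack(grads)
    unit = G / np.clip(np.linalg.norm(G, axis=1, keepdims=True), 1e-12, None)
    sim = unit @ unit.T              # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)       # ignore self-similarity
    score = sim.sum(axis=1) / (len(grads) - 1)
    w = np.exp(score)
    return w / w.sum()

def aggregate(grads):
    """Correlation-weighted average of the client gradients."""
    return np.average(np.stack(grads), axis=0, weights=correlation_weights(grads))
```

A client whose gradient points against the consensus direction receives a small weight, which is one way spatial dependencies among cells could enter the aggregation step.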
Comments: This paper has been accepted for publication by IEEE Transactions on Cognitive Communications and Networking (2024)
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC)
Cite as: arXiv:2501.00732 [cs.DC]
  (or arXiv:2501.00732v1 [cs.DC] for this version)
  https://doi.org/10.48550/arXiv.2501.00732
arXiv-issued DOI via DataCite
Related DOI: https://doi.org/10.1109/TCCN.2024.3524183

Submission history

From: Chuanting Zhang [view email]
[v1] Wed, 1 Jan 2025 05:28:58 UTC (9,131 KB)