
Computer Science > Machine Learning

arXiv:2507.18219v1 (cs)
[Submitted on 24 Jul 2025 (this version), latest version 5 Aug 2025 (v2)]

Title: FedSA-GCL: A Semi-Asynchronous Federated Graph Learning Framework with Personalized Aggregation and Cluster-Aware Broadcasting

Authors: Zhongzheng Yuan, Lianshuai Guo, Xunkai Li, Yinlin Zhu, Wenyu Wang, Meixia Qu
Abstract: Federated Graph Learning (FGL) is a distributed learning paradigm that enables collaborative training over large-scale subgraphs located on multiple local systems. However, most existing FGL approaches rely on synchronous communication, which is inefficient and often impractical in real-world deployments. Meanwhile, current asynchronous federated learning (AFL) methods are designed primarily for conventional tasks such as image classification and natural language processing, and do not account for the unique topological properties of graph data. Directly applying these methods to graph learning can result in semantic drift and representational inconsistency in the global model. To address these challenges, we propose FedSA-GCL, a semi-asynchronous federated framework that leverages both inter-client label distribution divergence and graph topological characteristics through a novel ClusterCast mechanism for efficient training. We evaluate FedSA-GCL on multiple real-world graph datasets using the Louvain and Metis graph partitioning algorithms, and compare it against 9 baselines. Extensive experiments demonstrate that our method achieves strong robustness and outstanding efficiency, outperforming the baselines by an average of 2.92% with the Louvain partitioning and by 3.4% with the Metis.
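To make the semi-asynchronous idea in the abstract concrete, below is a minimal generic sketch of a buffered, staleness-discounted server loop of the kind such frameworks build on. This is not the authors' FedSA-GCL algorithm (the abstract provides no pseudocode, and ClusterCast is not reproduced here); every name in it (SemiAsyncServer, staleness_weight, the buffer size K, the mixing rate alpha) is a hypothetical placeholder.

```python
# Illustrative sketch of a generic semi-asynchronous federated server:
# the server merges as soon as K client updates arrive (no waiting for all
# clients), discounting each update by how stale its base model is.
# Hypothetical names throughout; NOT the FedSA-GCL algorithm.
import numpy as np

def staleness_weight(staleness: int, alpha: float = 0.5) -> float:
    """Discount an update by how many global rounds old its base model is."""
    return alpha / (1.0 + staleness)

class SemiAsyncServer:
    def __init__(self, dim: int, buffer_size: int = 3):
        self.global_model = np.zeros(dim)   # flattened parameter vector
        self.round = 0
        self.buffer = []                    # (client_model, base_round) pairs
        self.buffer_size = buffer_size      # K: updates needed to trigger a merge

    def receive(self, client_model: np.ndarray, base_round: int) -> None:
        """Buffer a client's update; merge once K updates have arrived."""
        self.buffer.append((client_model, base_round))
        if len(self.buffer) >= self.buffer_size:
            self._merge()

    def _merge(self) -> None:
        # Mix each buffered update into the global model, down-weighting
        # updates that were computed from older global rounds.
        for model, base_round in self.buffer:
            w = staleness_weight(self.round - base_round)
            self.global_model = (1 - w) * self.global_model + w * model
        self.buffer.clear()
        self.round += 1

# Usage: three simulated clients push noisy updates based on round 0;
# the third arrival fills the buffer and triggers one merge.
server = SemiAsyncServer(dim=4)
rng = np.random.default_rng(0)
for base in (0, 0, 0):
    server.receive(server.global_model + rng.normal(size=4), base_round=base)
print(server.round, server.global_model)
```

The staleness discount is one common way to keep slow clients from dragging the global model backward; the paper's personalized aggregation and cluster-aware broadcasting would replace this uniform mixing with client- and topology-specific rules.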
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2507.18219 [cs.LG]
  (or arXiv:2507.18219v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2507.18219
arXiv-issued DOI via DataCite

Submission history

From: Xunkai Li
[v1] Thu, 24 Jul 2025 09:15:07 UTC (9,618 KB)
[v2] Tue, 5 Aug 2025 14:52:53 UTC (9,610 KB)