
Computer Science > Information Theory

arXiv:2510.03860 (cs)
[Submitted on 4 Oct 2025]

Title: Privacy Enhancement in Over-the-Air Federated Learning via Adaptive Receive Scaling


Authors:Faeze Moradi Kalarde, Ben Liang, Min Dong, Yahia A. Eldemerdash Ahmed, Ho Ting Cheng
Abstract: In Federated Learning (FL) with over-the-air aggregation, the quality of the signal received at the server critically depends on the receive scaling factors. While a larger scaling factor can reduce the effective noise power and improve training performance, it also compromises the privacy of devices by reducing uncertainty. In this work, we aim to adaptively design the receive scaling factors across training rounds to balance the trade-off between training convergence and privacy in an FL system under dynamic channel conditions. We formulate a stochastic optimization problem that minimizes the overall R\'enyi differential privacy (RDP) leakage over the entire training process, subject to a long-term constraint that ensures convergence of the global loss function. Our problem depends on unknown future information, and we observe that standard Lyapunov optimization is not applicable. Thus, we develop a new online algorithm, termed AdaScale, based on a sequence of novel per-round problems that can be solved efficiently. We further derive upper bounds on the dynamic regret and constraint violation of AdaScale, establishing that it achieves diminishing dynamic regret in terms of time-averaged RDP leakage while ensuring convergence of FL training to a stationary point. Numerical experiments on canonical classification tasks show that our approach effectively reduces RDP and DP leakages compared with state-of-the-art benchmarks without compromising learning performance.
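The abstract describes choosing a per-round receive scaling factor online, trading off privacy leakage against a long-term convergence (noise) constraint. The paper's actual AdaScale algorithm is not reproduced here; the following is only a generic virtual-queue sketch of that style of online trade-off, with illustrative proxies (leakage growing with the scaling factor, effective noise shrinking with it) and made-up parameters `V`, `noise_budget`, and the Rayleigh channel model, none of which are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50                              # number of training rounds
grid = np.linspace(0.1, 2.0, 40)    # candidate receive scaling factors
noise_budget = 1.0                  # long-term cap on average effective noise
Q = 0.0                             # virtual queue tracking constraint violation
V = 5.0                             # privacy-vs-constraint trade-off weight

leak_hist, noise_hist = [], []
for t in range(T):
    h = rng.rayleigh(1.0)           # per-round channel gain (illustrative)
    leakage = grid**2               # proxy: leakage grows with scaling factor
    noise = 1.0 / (h * grid)**2     # proxy: effective noise shrinks with it
    # per-round problem: weigh leakage against queue-scaled noise
    a = grid[np.argmin(V * leakage + Q * noise)]
    # virtual-queue drift update enforcing the long-term noise constraint
    Q = max(Q + 1.0 / (h * a)**2 - noise_budget, 0.0)
    leak_hist.append(a**2)
    noise_hist.append(1.0 / (h * a)**2)

print(f"avg leakage proxy: {np.mean(leak_hist):.3f}")
print(f"avg effective noise (budget {noise_budget}): {np.mean(noise_hist):.3f}")
```

As the queue `Q` grows after rounds that violate the noise budget, the per-round minimizer shifts toward larger scaling factors, accepting more leakage to restore the long-term constraint; this is the general mechanism, not the paper's specific per-round problem.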
Comments: 12 pages, 2 figures
Subjects: Information Theory (cs.IT); Signal Processing (eess.SP)
Cite as: arXiv:2510.03860 [cs.IT]
  (or arXiv:2510.03860v1 [cs.IT] for this version)
  https://doi.org/10.48550/arXiv.2510.03860
arXiv-issued DOI via DataCite

Submission history

From: Faeze Moradi Kalarde [view email]
[v1] Sat, 4 Oct 2025 16:15:19 UTC (169 KB)