Computer Science > Multiagent Systems

arXiv:2505.24113 (cs)
[Submitted on 30 May 2025]

Title: Distributed Neural Policy Gradient Algorithm for Global Convergence of Networked Multi-Agent Reinforcement Learning

Authors: Pengcheng Dai, Yuanqiu Mo, Wenwu Yu, Wei Ren
Abstract: This paper studies the networked multi-agent reinforcement learning (NMARL) problem, in which the agents aim to collaboratively maximize the discounted average cumulative reward. Unlike existing methods, whose expressive power is limited by linear function approximation, we propose a distributed neural policy gradient algorithm built on two newly designed neural networks that approximate the agents' Q-functions and policy functions, respectively. The algorithm consists of two key components: a distributed critic step and a decentralized actor step. In the distributed critic step, agents receive approximate Q-function parameters from their neighbors over a time-varying communication network and collaboratively evaluate the joint policy. In the decentralized actor step, by contrast, each agent updates its local policy parameters based solely on its own approximate Q-function. In the convergence analysis, we first establish global convergence of the joint policy evaluation in the distributed critic step, and then rigorously prove global convergence of the overall distributed neural policy gradient algorithm with respect to the objective function. Finally, the effectiveness of the proposed algorithm is demonstrated by comparing it with a centralized algorithm in simulations of a robot path-planning environment.
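The two-step structure described in the abstract can be illustrated with a minimal sketch, assuming a consensus-style critic update with Metropolis mixing weights over a randomly drawn time-varying graph and placeholder local gradients. All names, weights, and update rules below are illustrative stand-ins, not the authors' implementation:

```python
# Minimal sketch (an assumption, not the paper's code) of the loop described
# in the abstract: a distributed critic step, where agents mix Q-function
# parameters received from neighbors over a time-varying network, and a
# decentralized actor step, where each agent updates its policy using only
# its own approximate Q-function.
import numpy as np

rng = np.random.default_rng(0)
N, DIM = 4, 8                          # number of agents, parameter dimension
theta_q = rng.normal(size=(N, DIM))    # per-agent critic (Q-function) params
theta_pi = rng.normal(size=(N, DIM))   # per-agent actor (policy) params

def metropolis_weights(adj):
    """Doubly stochastic mixing matrix for one snapshot of the network."""
    deg = adj.sum(axis=1)
    W = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if adj[i, j] and i != j:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
    return W + np.diag(1.0 - W.sum(axis=1))

def local_critic_grad(q_i):
    """Hypothetical stand-in for agent i's local TD/critic gradient."""
    return -0.1 * q_i

def local_actor_grad(pi_i, q_i):
    """Hypothetical stand-in for a policy gradient from agent i's own Q."""
    return 0.05 * (q_i - pi_i)

alpha, beta = 0.5, 0.1                 # critic and actor step sizes
for t in range(100):
    # Distributed critic step: consensus mixing over a time-varying graph,
    # followed by a local stochastic-approximation update.
    links = rng.random((N, N)) < 0.5
    adj = np.triu(links, 1)
    adj = adj | adj.T                  # symmetric topology, no self-loops
    W = metropolis_weights(adj.astype(float))
    theta_q = W @ theta_q + alpha * np.array(
        [local_critic_grad(theta_q[i]) for i in range(N)])
    # Decentralized actor step: each agent uses only its own critic.
    for i in range(N):
        theta_pi[i] = theta_pi[i] + beta * local_actor_grad(
            theta_pi[i], theta_q[i])
```

With a doubly stochastic mixing matrix at each snapshot, the critic parameters are driven toward consensus across agents while each local step incorporates the agent's own information; the actor step requires no communication, matching the decentralized structure described in the abstract.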
Subjects: Multiagent Systems (cs.MA)
Cite as: arXiv:2505.24113 [cs.MA]
  (or arXiv:2505.24113v1 [cs.MA] for this version)
  https://doi.org/10.48550/arXiv.2505.24113
arXiv-issued DOI via DataCite

Submission history

From: Pengcheng Dai [view email]
[v1] Fri, 30 May 2025 01:23:14 UTC (5,297 KB)