
Computer Science > Robotics

arXiv:2506.00982v1 (cs)
[Submitted on 1 Jun 2025]

Title: Robust and Safe Multi-Agent Reinforcement Learning Framework with Communication for Autonomous Vehicles

Authors: Keshawn Smith, Zhili Zhang, H M Sabbir Ahmad, Ehsan Sabouni, Maniak Mondal, Song Han, Wenchao Li, Fei Miao
Abstract: Deep multi-agent reinforcement learning (MARL) has proven effective in simulation for many multi-robot problems. For autonomous vehicles, the development of vehicle-to-vehicle (V2V) communication technologies offers opportunities to further enhance system safety. However, zero-shot transfer of simulator-trained MARL policies to hardware dynamic systems remains challenging, and there have been few hardware demonstrations of how to leverage communication and shared information for MARL. The problem is complicated by discrepancies between simulated and physical states, system-state and model uncertainties, the practical design of shared information, and the need for safety guarantees in both simulation and hardware. This paper introduces RSR-RSMARL, a novel Robust and Safe MARL framework that supports Real-Sim-Real (RSR) policy adaptation for multi-agent systems with inter-agent communication, demonstrated in both simulation and on hardware. RSR-RSMARL formulates the MARL problem with state representations (including state information shared among agents) and action representations that account for real-system complexities. The MARL policy is trained with a robust MARL algorithm to enable zero-shot transfer to hardware despite the sim-to-real gap, and a safety-shield module based on Control Barrier Functions (CBFs) provides safety guarantees for each individual agent. Experimental results on F1/10th-scale autonomous vehicles with V2V communication demonstrate the ability of the RSR-RSMARL framework to enhance driving safety and coordination across multiple configurations. These findings emphasize the importance of jointly designing robust policy representations and modular safety architectures to enable scalable, generalizable RSR transfer in multi-agent autonomy.
Comments: 19 pages, 9 figures
Subjects: Robotics (cs.RO); Multiagent Systems (cs.MA)
Cite as: arXiv:2506.00982 [cs.RO]
  (or arXiv:2506.00982v1 [cs.RO] for this version)
  https://doi.org/10.48550/arXiv.2506.00982
arXiv-issued DOI via DataCite

Submission history

From: Keshawn Smith [view email]
[v1] Sun, 1 Jun 2025 12:29:53 UTC (14,461 KB)