Computer Science > Robotics

arXiv:2505.00671 (cs)
[Submitted on 1 May 2025]

Title: Multi-Constraint Safe Reinforcement Learning via Closed-form Solution for Log-Sum-Exp Approximation of Control Barrier Functions

Authors: Chenggang Wang, Xinyi Wang, Yutong Dong, Lei Song, Xinping Guan
Abstract: Ensuring safety both while training task policies with reinforcement learning (RL) and during their subsequent deployment has become a focal point in the field of safe RL. A central challenge in this area remains establishing theoretical safety guarantees for both the learning and deployment processes. Given the successful implementation of Control Barrier Function (CBF)-based safety strategies in a range of control-affine robotic systems, CBF-based safe RL holds significant promise for practical, real-world applications. However, integrating the two approaches presents several challenges. First, embedding safety optimization within the RL training pipeline requires that the optimization outputs be differentiable with respect to the input parameters, a condition commonly referred to as differentiable optimization, which is non-trivial to satisfy. Second, the differentiable optimization framework suffers from significant efficiency issues, especially when dealing with multi-constraint problems. To address these challenges, this paper presents a CBF-based safe RL architecture that mitigates both issues. The proposed approach constructs a continuous AND-logic approximation of the multiple constraints using a single composite CBF. Leveraging this approximation, a closed-form solution of the quadratic program is derived for the policy network in RL, thereby circumventing the need for differentiable optimization within the end-to-end safe RL pipeline. Owing to the closed-form solution, this strategy significantly reduces computational complexity while maintaining safety guarantees. Simulation results demonstrate that, in comparison to existing approaches relying on differentiable optimization, the proposed method significantly reduces training computational costs while ensuring provable safety throughout the training process.
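The abstract's two key ideas — a log-sum-exp composite CBF acting as a smooth AND over multiple constraints, and a closed-form solution of the resulting single-constraint CBF quadratic program — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the linear class-K term `alpha * h`, and the sharpness parameter `kappa` are choices made for the sketch.

```python
import numpy as np

def composite_cbf(h_values, kappa=10.0):
    """Log-sum-exp soft-min of the individual barrier values h_i(x).

    h(x) = -(1/kappa) * log(sum_i exp(-kappa * h_i(x)))
    smoothly under-approximates min_i h_i(x), so the single condition
    h(x) >= 0 conservatively approximates the AND of all h_i(x) >= 0.
    """
    h = np.asarray(h_values, dtype=float)
    m = np.min(h)  # shift for numerical stability of the log-sum-exp
    return m - np.log(np.sum(np.exp(-kappa * (h - m)))) / kappa

def composite_cbf_weights(h_values, kappa=10.0):
    """Softmax weights w_i = exp(-kappa*h_i) / sum_j exp(-kappa*h_j).

    The composite gradient is grad h = sum_i w_i * grad h_i, so the
    Lie derivatives Lf_h and Lg_h of the composite CBF are the same
    weighted combinations of the per-constraint Lie derivatives.
    """
    h = np.asarray(h_values, dtype=float)
    w = np.exp(-kappa * (h - np.min(h)))
    return w / np.sum(w)

def closed_form_safe_action(u_rl, Lf_h, Lg_h, h, alpha=1.0):
    """Closed-form solution of the single-constraint CBF-QP

        min_u ||u - u_rl||^2   s.t.   Lf_h + Lg_h @ u + alpha * h >= 0,

    i.e. the Euclidean projection of the RL action onto one half-space.
    Because the projection is analytic, no differentiable QP solver is
    needed inside the end-to-end training pipeline.
    Assumes Lg_h != 0 (the composite CBF has relative degree one).
    """
    u_rl = np.asarray(u_rl, dtype=float)
    Lg_h = np.asarray(Lg_h, dtype=float)
    slack = Lf_h + Lg_h @ u_rl + alpha * h
    if slack >= 0.0:
        return u_rl  # RL action already satisfies the CBF condition
    # Otherwise project onto the boundary of the safe half-space.
    return u_rl - slack * Lg_h / (Lg_h @ Lg_h)
```

Since `closed_form_safe_action` is a piecewise-affine, almost-everywhere-differentiable map of `u_rl`, gradients can flow through the safety filter during policy updates without a differentiable-optimization layer, which is the computational advantage the abstract claims.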
Subjects: Robotics (cs.RO); Systems and Control (eess.SY)
Cite as: arXiv:2505.00671 [cs.RO]
  (or arXiv:2505.00671v1 [cs.RO] for this version)
  https://doi.org/10.48550/arXiv.2505.00671
arXiv-issued DOI via DataCite

Submission history

From: Xinyi Wang
[v1] Thu, 1 May 2025 17:22:11 UTC (5,316 KB)