
Computer Science > Robotics

arXiv:2310.07263v3 (cs)
[Submitted on 11 Oct 2023 (v1), last revised 24 Apr 2025 (this version, v3)]

Title: CoPAL: Corrective Planning of Robot Actions with Large Language Models

Authors: Frank Joublin, Antonello Ceravola, Pavel Smirnov, Felix Ocker, Joerg Deigmoeller, Anna Belardinelli, Chao Wang, Stephan Hasler, Daniel Tanneberg, Michael Gienger
Abstract: In the pursuit of fully autonomous robotic systems capable of taking over tasks traditionally performed by humans, the complexity of open-world environments poses a considerable challenge. Addressing this imperative, this study contributes to the field of Large Language Models (LLMs) applied to task and motion planning for robots. We propose a system architecture that orchestrates a seamless interplay between multiple cognitive levels, encompassing reasoning, planning, and motion generation. At its core lies a novel replanning strategy that handles physically grounded, logical, and semantic errors in the generated plans. We demonstrate the efficacy of the proposed feedback architecture, particularly its impact on executability, correctness, and time complexity, via empirical evaluation in a simulated scenario (blocks world) and two intricate real-world scenarios (barman and pizza preparation).
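The corrective replanning the abstract describes can be pictured as a plan-validate-replan cycle. The following Python sketch illustrates that idea only; it is not the paper's implementation, and every name in it (plan_with_llm, validate, Feedback) is a hypothetical stand-in, reduced to the three error classes the abstract mentions.

```python
# Illustrative sketch of an LLM-based corrective replanning loop.
# All functions below are hypothetical stubs, not CoPAL's actual API.
from dataclasses import dataclass, field


@dataclass
class Feedback:
    """Errors collected from the lower cognitive levels."""
    physical: list[str] = field(default_factory=list)  # e.g. unreachable pose
    logical: list[str] = field(default_factory=list)   # e.g. precondition unmet
    semantic: list[str] = field(default_factory=list)  # e.g. wrong object chosen

    def empty(self) -> bool:
        return not (self.physical or self.logical or self.semantic)


def plan_with_llm(task: str, feedback: Feedback | None) -> list[str]:
    """Stub: query an LLM for an action sequence, optionally conditioned
    on the feedback gathered from the previous failed attempt."""
    if feedback is None:
        return ["pick(cup)", "pour(bottle, cup)"]
    # A real system would serialize the feedback into the next prompt here.
    return ["move_to(bottle)", "pick(cup)", "pour(bottle, cup)"]


def validate(plan: list[str]) -> Feedback:
    """Stub: check the plan against the simulator/executor and collect
    physically grounded, logical, and semantic errors."""
    fb = Feedback()
    if plan and not plan[0].startswith("move_to"):
        fb.physical.append("bottle out of reach: prepend a move_to action")
    return fb


def corrective_planning(task: str, max_replans: int = 3) -> list[str]:
    """Replan until the plan validates or the budget is exhausted."""
    feedback: Feedback | None = None
    for _ in range(max_replans):
        plan = plan_with_llm(task, feedback)
        feedback = validate(plan)
        if feedback.empty():
            return plan  # executable, correct plan found
    raise RuntimeError(f"no valid plan for {task!r} after {max_replans} attempts")


if __name__ == "__main__":
    print(corrective_planning("prepare a drink"))
```

In a deployed system the validation signal would come from the motion generator and the physical or simulated execution rather than a stub, and the feedback would be fed back into the reasoning and planning levels as the paper's architecture describes.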
Comments: IEEE International Conference on Robotics and Automation (ICRA) 2024
Subjects: Robotics (cs.RO); Artificial Intelligence (cs.AI)
Cite as: arXiv:2310.07263 [cs.RO]
  (or arXiv:2310.07263v3 [cs.RO] for this version)
  https://doi.org/10.48550/arXiv.2310.07263
arXiv-issued DOI via DataCite
Related DOI: https://doi.org/10.1109/ICRA57147.2024.10610434
DOI(s) linking to related resources

Submission history

From: Daniel Tanneberg
[v1] Wed, 11 Oct 2023 07:39:42 UTC (6,172 KB)
[v2] Fri, 14 Mar 2025 13:03:24 UTC (6,191 KB)
[v3] Thu, 24 Apr 2025 14:34:46 UTC (6,191 KB)
