Computer Science > Artificial Intelligence

arXiv:2409.02522v2 (cs)
[Submitted on 4 Sep 2024 (v1), last revised 23 Sep 2024 (this version, v2)]

Title: Cog-GA: A Large Language Models-based Generative Agent for Vision-Language Navigation in Continuous Environments


Authors: Zhiyuan Li, Yanfeng Lu, Yao Mu, Hong Qiao
Abstract: Vision-Language Navigation in Continuous Environments (VLN-CE) represents a frontier in embodied AI, requiring agents to navigate freely through unbounded 3D spaces guided solely by natural language instructions. The task poses distinct challenges in multimodal comprehension, spatial reasoning, and decision-making. To address these challenges, we introduce Cog-GA, a generative agent built on large language models (LLMs) and tailored to VLN-CE tasks. Cog-GA employs a dual-pronged strategy to emulate human-like cognitive processes. First, it constructs a cognitive map that integrates temporal, spatial, and semantic elements, thereby giving the LLM a form of spatial memory. Second, Cog-GA employs a predictive mechanism for waypoints, strategically optimizing the exploration trajectory to maximize navigational efficiency. Each waypoint is accompanied by a dual-channel scene description that separates environmental cues into 'what' and 'where' streams, analogous to the brain's two visual pathways. This segregation sharpens the agent's attentional focus, enabling it to discern the spatial information pertinent to navigation. A reflective mechanism complements these strategies by capturing feedback from prior navigation experiences, supporting continual learning and adaptive replanning. Extensive evaluations on VLN-CE benchmarks validate Cog-GA's state-of-the-art performance and its ability to simulate human-like navigation behaviors. This research contributes significantly to the development of strategic and interpretable VLN-CE agents.
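The abstract sketches Cog-GA's loop: maintain a cognitive map over time, space, and semantics; predict candidate waypoints; describe each scene through separate 'what' and 'where' channels; and reflect on feedback for replanning. As a rough orientation only, the Python sketch below shows how such a loop might be wired together. Every name in it (MapNode, CognitiveMap, call_llm, navigate) is an illustrative assumption, not the authors' code or API; the paper's actual prompts, map representation, and waypoint predictor differ.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    # Hypothetical sketch only: illustrates the cognitive-map / dual-channel /
    # reflection structure described in the abstract, not the paper's code.

    @dataclass
    class MapNode:
        step: int                      # temporal element: when it was visited
        position: Tuple[float, float]  # spatial element: where the waypoint is
        what: str                      # 'what' channel: objects and landmarks
        where: str                     # 'where' channel: spatial layout cues

    @dataclass
    class CognitiveMap:
        nodes: List[MapNode] = field(default_factory=list)

        def add(self, node: MapNode) -> None:
            self.nodes.append(node)

        def as_prompt(self) -> str:
            # Serialize the map so an LLM can condition on it as spatial memory.
            return "\n".join(
                f"[t={n.step}] at {n.position}: WHAT={n.what}; WHERE={n.where}"
                for n in self.nodes
            )

    def call_llm(prompt: str) -> str:
        # Stand-in for a real LLM API call; returns a canned decision here.
        return "advance to the most promising waypoint"

    def navigate(instruction: str, waypoints, max_steps: int = 10) -> List[str]:
        memory = CognitiveMap()
        reflections: List[str] = []
        for step, (pos, what, where) in enumerate(waypoints[:max_steps]):
            memory.add(MapNode(step, pos, what, where))
            prompt = (
                f"Instruction: {instruction}\n"
                f"Cognitive map:\n{memory.as_prompt()}\n"
                f"Reflections so far: {reflections}\n"
                "Predict the next waypoint."
            )
            decision = call_llm(prompt)
            # Reflective mechanism: store feedback for adaptive replanning.
            reflections.append(f"step {step}: {decision}")
        return reflections

    if __name__ == "__main__":
        demo = [((0.0, 0.0), "doorway, hallway", "ahead and slightly left")]
        print(navigate("Walk down the hall and stop at the door.", demo))

Serializing the map into the prompt is one simple way to give an LLM the spatial memory the abstract describes; the paper's own mechanism may be considerably richer.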
Subjects: Artificial Intelligence (cs.AI); Robotics (cs.RO)
Cite as: arXiv:2409.02522 [cs.AI]
  (or arXiv:2409.02522v2 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2409.02522
arXiv-issued DOI via DataCite

Submission history

From: Zhiyuan Li
[v1] Wed, 4 Sep 2024 08:30:03 UTC (10,557 KB)
[v2] Mon, 23 Sep 2024 03:18:27 UTC (10,559 KB)