
Computer Science > Computation and Language

arXiv:2509.16325 (cs)
[Submitted on 19 Sep 2025]

Title: Overhearing LLM Agents: A Survey, Taxonomy, and Roadmap

Authors: Andrew Zhu, Chris Callison-Burch
Abstract: Imagine AI assistants that enhance conversations without interrupting them: quietly providing relevant information during a medical consultation, seamlessly preparing materials as teachers discuss lesson plans, or unobtrusively scheduling meetings as colleagues debate calendars. While modern conversational LLM agents directly assist human users with tasks through a chat interface, we study this alternative paradigm for interacting with LLM agents, which we call "overhearing agents." Rather than demanding the user's attention, overhearing agents continuously monitor ambient activity and intervene only when they can provide contextual assistance. In this paper, we present the first analysis of overhearing LLM agents as a distinct paradigm in human-AI interaction and establish a taxonomy of overhearing agent interactions and tasks grounded in a survey of works on prior LLM-powered agents and exploratory HCI studies. Based on this taxonomy, we create a list of best practices for researchers and developers building overhearing agent systems. Finally, we outline the remaining research gaps and reveal opportunities for future research in the overhearing paradigm.
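The core loop the abstract describes — continuously monitoring ambient activity and intervening only when contextual assistance is possible — can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the `KNOWLEDGE` table and keyword match are hypothetical stand-ins for an LLM's relevance judgment.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    trigger: str   # the overheard utterance that prompted the suggestion
    content: str   # the unobtrusive assistance offered

# Hypothetical knowledge base; in a real system an LLM would decide
# whether and how to intervene given the ambient context.
KNOWLEDGE = {
    "schedule": "Found a shared free slot: Tuesday 2pm.",
    "lesson": "Drafted a handout matching the discussed lesson plan.",
}

def overhear(utterances):
    """Monitor ambient utterances; surface a suggestion only when relevant,
    never demanding the user's attention otherwise."""
    suggestions = []
    for text in utterances:
        for keyword, content in KNOWLEDGE.items():
            if keyword in text.lower():
                suggestions.append(Suggestion(trigger=text, content=content))
    return suggestions
```

Note the contrast with a chat agent: there is no prompt from the user at all; the agent stays silent on irrelevant input and volunteers help only on a match.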
Comments: 8 pages, 1 figure
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC)
Cite as: arXiv:2509.16325 [cs.CL]
  (or arXiv:2509.16325v1 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2509.16325
arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Andrew Zhu
[v1] Fri, 19 Sep 2025 18:11:04 UTC (212 KB)
