Computer Science > Human-Computer Interaction

arXiv:2510.17617 (cs)
[Submitted on 20 Oct 2025]

Title: ImaGGen: Zero-Shot Generation of Co-Speech Semantic Gestures Grounded in Language and Image Input

Authors: Hendric Voss, Stefan Kopp
Abstract: Human communication combines speech with expressive nonverbal cues such as hand gestures that serve manifold communicative functions. Yet, current generative approaches to gesture synthesis are restricted to simple, repetitive beat gestures that accompany the rhythm of speaking but do not contribute to communicating semantic meaning. This paper tackles a core challenge in co-speech gesture synthesis: generating iconic or deictic gestures that are semantically coherent with a verbal utterance. Such gestures cannot be derived from language input alone, which inherently lacks the visual meaning that is often carried autonomously by gestures. We therefore introduce a zero-shot system that generates gestures from a given language input and is additionally informed by imagistic input, without manual annotation or human intervention. Our method integrates an image analysis pipeline that extracts key object properties such as shape, symmetry, and alignment, together with a semantic matching module that links these visual details to the spoken text. An inverse kinematics engine then synthesizes iconic and deictic gestures and combines them with co-generated natural beat gestures for coherent multimodal communication. A comprehensive user study demonstrates the effectiveness of our approach. In scenarios where speech alone was ambiguous, gestures generated by our system significantly improved participants' ability to identify object properties, confirming their interpretability and communicative value. While challenges remain in representing complex shapes, our results highlight the importance of context-aware semantic gestures for creating expressive and collaborative virtual agents or avatars, marking a substantial step towards efficient, robust, embodied human-agent interaction. More information and example videos are available here: https://review-anon-io.github.io/ImaGGen.github.io/
Subjects: Human-Computer Interaction (cs.HC); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2510.17617 [cs.HC]
  (or arXiv:2510.17617v1 [cs.HC] for this version)
  https://doi.org/10.48550/arXiv.2510.17617
arXiv-issued DOI via DataCite

Submission history

From: Hendric Voss
[v1] Mon, 20 Oct 2025 15:01:56 UTC (11,188 KB)
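
The abstract describes a three-stage pipeline: an image analysis step that extracts coarse object properties (shape, symmetry, alignment), a semantic matching module that links those properties to words in the utterance, and an inverse kinematics engine that realizes iconic and deictic gestures alongside beat gestures. The following is a minimal, hypothetical Python sketch of how such a pipeline could be wired together; all names, fields, and thresholds are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the pipeline described in the abstract:
# image analysis -> semantic matching -> gesture specification.
# All interfaces below are assumptions; the paper does not publish this code.
from dataclasses import dataclass

@dataclass
class ObjectProperties:
    name: str            # object label from an upstream detector (assumed)
    width: float         # normalized bounding-box width
    height: float        # normalized bounding-box height
    symmetric: bool      # coarse left/right symmetry flag
    position: tuple      # (x, y) image-frame center, usable for deictic pointing

def extract_properties(detections):
    """Stand-in for the image-analysis stage: reduce raw detections
    (label + bounding box + a symmetry score) to coarse shape features."""
    props = []
    for d in detections:
        x0, y0, x1, y1 = d["bbox"]
        props.append(ObjectProperties(
            name=d["label"],
            width=x1 - x0,
            height=y1 - y0,
            symmetric=d.get("symmetry", 0.0) > 0.8,  # threshold is an assumption
            position=((x0 + x1) / 2.0, (y0 + y1) / 2.0),
        ))
    return props

def match_to_utterance(props, utterance):
    """Naive semantic matching: keep objects whose label occurs in the spoken
    text (a real system would use richer lexical or semantic matching)."""
    tokens = set(utterance.lower().split())
    return [p for p in props if p.name.lower() in tokens]

def plan_gestures(matched):
    """Turn matched objects into abstract gesture specifications; a downstream
    inverse-kinematics engine would realize these as arm/hand poses and blend
    them with co-generated beat gestures."""
    specs = []
    for p in matched:
        specs.append({"type": "deictic", "target": p.position})
        specs.append({
            "type": "iconic",
            "hands": "both" if p.symmetric else "right",
            "extent": (p.width, p.height),  # traced as a size/outline gesture
        })
    return specs

if __name__ == "__main__":
    detections = [{"label": "vase", "bbox": (0.4, 0.2, 0.6, 0.8), "symmetry": 0.9}]
    utterance = "I put the vase on the table"
    gesture_specs = plan_gestures(
        match_to_utterance(extract_properties(detections), utterance))
    print(gesture_specs)
```

The sketch stops at the level of abstract gesture specifications; in the system described above, these would be handed to the inverse kinematics engine and merged with beat gestures to produce the final animation.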