Computer Science > Multimedia

arXiv:2506.10010 (cs)
[Submitted on 8 May 2025]

Title: Multimodal Emotion Coupling via Speech-to-Facial and Bodily Gestures in Dyadic Interaction


Authors: Von Ralph Dane Marquez Herbuela, Yukie Nagai
Abstract: Human emotional expression emerges through coordinated vocal, facial, and gestural signals. While speech-face alignment is well established, the broader dynamics linking emotionally expressive speech to region-specific facial and hand motion remain critical for gaining deeper insight into how emotional and behavioral cues are communicated in real interactions. This coordination is further modulated by the structure of conversational exchange: sequential turn-taking creates stable temporal windows for multimodal synchrony, whereas simultaneous speech, often indicative of high-arousal moments, disrupts this alignment and reduces emotional clarity. Understanding these dynamics enhances real-time emotion detection by improving the accuracy of timing and synchrony across modalities in both human interactions and AI systems. This study examines multimodal emotion coupling using region-specific motion capture from dyadic interactions in the IEMOCAP corpus. Speech features included low-level prosody, MFCCs, and model-derived arousal, valence, and categorical emotions (Happy, Sad, Angry, Neutral), aligned with 3D facial and hand marker displacements. Expressive activeness was quantified through framewise displacement magnitudes, and speech-to-gesture prediction mapped speech features to facial and hand movements. Non-overlapping speech consistently elicited greater activeness, particularly in the lower face and mouth. Sadness showed increased expressivity during non-overlap, while anger suppressed gestures during overlaps. Predictive mapping revealed the highest accuracy for prosody and MFCCs in articulatory regions, while arousal and valence showed lower and more context-sensitive correlations. Notably, hand-speech synchrony was enhanced under low arousal and overlapping speech, but not for valence.
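The abstract describes two computational steps: quantifying expressive activeness as framewise displacement magnitudes of 3D markers, and mapping speech features (prosody, MFCCs) to facial and hand motion. The sketch below illustrates both steps under loose assumptions; the array shapes, the librosa-based feature set, the synthetic data, and the ridge-regression predictor are illustrative choices, not the authors' released pipeline or the exact IEMOCAP preprocessing.

```python
# Minimal sketch (assumptions, not the paper's code): framewise displacement
# magnitudes from 3D markers, and a simple speech-to-gesture predictive mapping.
import numpy as np
import librosa
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def framewise_displacement(markers):
    """markers: (T+1, M, 3) array of 3D marker positions over T+1 frames.
    Returns (T, M) per-marker displacement magnitudes between consecutive frames."""
    return np.linalg.norm(np.diff(markers, axis=0), axis=-1)

def speech_features(wav_path, sr=16000, hop=160):
    """Illustrative per-frame speech features: 13 MFCCs plus F0 and RMS energy
    as simple prosody proxies (the paper's exact prosody set may differ)."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=hop)  # (13, T)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr, hop_length=hop)       # (T,)
    rms = librosa.feature.rms(y=y, hop_length=hop)                      # (1, T)
    t = min(mfcc.shape[1], f0.shape[0], rms.shape[1])
    return np.vstack([mfcc[:, :t], f0[None, :t], rms[:, :t]]).T         # (T, 15)

# Toy example with synthetic data; in the study these would be time-aligned
# IEMOCAP speech frames and motion-capture markers grouped by facial/hand region.
rng = np.random.default_rng(0)
T = 500
X = rng.normal(size=(T, 15))                          # speech features per frame
markers = rng.normal(size=(T + 1, 10, 3)).cumsum(0)   # 10 hypothetical markers
y = framewise_displacement(markers)                   # (T, 10) activeness targets

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
pred = model.predict(X_te)
# Per-marker correlation between predicted and observed displacement magnitudes,
# analogous to reporting which regions are best predicted from speech features.
r = [np.corrcoef(pred[:, m], y_te[:, m])[0, 1] for m in range(y.shape[1])]
print(np.round(r, 3))
```

In the actual analysis, markers would be grouped by region (e.g., lower face, mouth, hands) and conditioned on overlap versus non-overlap segments and on emotion labels before comparing prediction accuracy across feature types.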
Subjects: Multimedia (cs.MM); Machine Learning (cs.LG); Sound (cs.SD); Audio and Speech Processing (eess.AS)
Cite as: arXiv:2506.10010 [cs.MM]
  (or arXiv:2506.10010v1 [cs.MM] for this version)
  https://doi.org/10.48550/arXiv.2506.10010
arXiv-issued DOI via DataCite

Submission history

From: Von Ralph Dane Herbuela
[v1] Thu, 8 May 2025 10:55:54 UTC (3,874 KB)