arXiv:2510.17660v1 (cs)
[Submitted on 20 Oct 2025]

Title: Muscle Anatomy-aware Geometric Deep Learning for sEMG-based Gesture Decoding


Authors: Adyasha Dash, Giulia Zappoli, Laya Das, Robert Riener
Abstract: Robust and accurate decoding of gestures from non-invasive surface electromyography (sEMG) is important for applications including spatial computing, healthcare, and entertainment, and has been actively pursued by researchers and industry. The majority of sEMG-based gesture decoding algorithms employ deep neural networks designed for Euclidean data, which may not be suitable for analyzing multi-dimensional, non-stationary time series with long-range dependencies such as sEMG. State-of-the-art sEMG-based decoding methods also exhibit high variability across subjects and sessions, requiring re-calibration and adaptive fine-tuning to maintain performance. To address these shortcomings, this work proposes a geometric deep learning model that learns on symmetric positive definite (SPD) manifolds and leverages unsupervised domain adaptation to desensitize the model to subjects and sessions. The model captures features in time and across sensors with multiple kernels, projects the features onto the SPD manifold, learns on the manifold, and projects back to Euclidean space for classification. It uses a domain-specific batch normalization layer to address variability between sessions, alleviating the need for re-calibration or fine-tuning. Experiments with publicly available benchmark gesture decoding datasets (Ninapro DB6, Flexwear-HD) demonstrate the superior generalizability of the model compared to Euclidean and other SPD-based models in the inter-session scenario, with improvements of up to 8.83 and 4.63 accuracy points, respectively. Detailed analyses reveal that the model extracts muscle-specific information for different tasks, and ablation studies highlight the importance of the modules introduced in this work. The proposed method advances the state of the art in sEMG-based gesture recognition and opens new research avenues for manifold-based learning on muscle signals.
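The abstract's pipeline — project sensor features onto the SPD manifold, then map back to Euclidean space for classification — can be illustrated with the standard log-Euclidean construction: a window's channel covariance matrix is SPD, and its matrix logarithm yields a flat vector a conventional classifier can consume. This is a minimal sketch of that general technique, not the authors' implementation; the function name, shapes, and regularization constant are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import logm

def spd_embed(window, eps=1e-6):
    """Map an sEMG window (channels, samples) to a Euclidean feature vector
    via the log-Euclidean projection of its channel covariance matrix."""
    cov = np.cov(window)                   # (C, C) sample covariance, SPD up to rank issues
    cov += eps * np.eye(cov.shape[0])      # regularize to guarantee positive definiteness
    log_cov = logm(cov).real               # matrix log: SPD manifold -> symmetric matrices
    iu = np.triu_indices(cov.shape[0])     # upper triangle suffices (matrix is symmetric)
    return log_cov[iu]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 256))          # hypothetical 8-channel, 256-sample window
feat = spd_embed(x)
print(feat.shape)                          # (36,) = 8 * 9 / 2 upper-triangular entries
```

In the paper's model this projection is learned end-to-end rather than fixed, and a domain-specific batch normalization layer standardizes statistics per session before classification; the sketch above only shows the geometric mapping itself.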
Subjects: Human-Computer Interaction (cs.HC)
Cite as: arXiv:2510.17660 [cs.HC]
  (or arXiv:2510.17660v1 [cs.HC] for this version)
  https://doi.org/10.48550/arXiv.2510.17660
arXiv-issued DOI via DataCite

Submission history

From: Adyasha Dash [view email]
[v1] Mon, 20 Oct 2025 15:32:47 UTC (1,263 KB)