
Computer Science > Data Structures and Algorithms

arXiv:2506.23906 (cs)
[Submitted on 30 Jun 2025 (v1), last revised 24 Sep 2025 (this version, v2)]

Title: Segmented Operations using Matrix Multiplications


Authors: Aleksandros Sobczyk, Giuseppe Sorrentino, Anastasios Zouzias
Abstract: Specialized computational units that perform small matrix multiplications as primitive operations are typically present in modern AI accelerators. However, these Matrix Multiplication Units (MMUs) are often underutilized by fundamental deep learning operations other than dense matrix multiplication. At the same time, the lack of a rigorous theoretical model of computation for such architectures obstructs algorithmic design. In this work, we propose MMV-RAM, a computational model which judiciously extends the Vector-RAM model with an additional MMU. We provide a detailed theoretical analysis and carefully balance the computational power between the matrix and vector units, guided by the circuit complexity lower bound that parity is not in AC^0. Given MMV-RAM, we proceed to algorithm design, starting with two fundamental parallel operations: segmented scan and segmented sum. By expressing them as compositions of elementary parallel primitives (e.g., segmented sum reduces to scan, compress, and vector differentiation), we can exploit MMUs to perform speculative blocked computations, ultimately leading to provable theoretical speed-ups over vector-only approaches. These results extend to other ubiquitous AI kernels, including the dense matrix product and the sparse matrix-vector product. As a case study, we implemented the proposed algorithms on the Ascend 910B AI accelerator, which contains matrix and vector cores, and evaluated them on synthetic and real-world datasets from various applications, including Large Language Models.
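The segmented-sum decomposition named in the abstract (scan, then compress, then vector differentiation) can be sketched in plain NumPy. This is only an illustration of the primitive composition, not the paper's MMU-accelerated implementation; the function name `segmented_sum` and the boolean start-flags encoding of segments are assumptions for the example.

```python
import numpy as np

def segmented_sum(x, flags):
    """Segmented sum via scan -> compress -> vector differentiation.

    x: array of values; flags: boolean array where True marks the
    first element of each segment (flags[0] must be True).
    """
    s = np.cumsum(x)  # scan: inclusive prefix sum over all values
    # a segment ends just before the next segment starts, and at the last element
    ends = np.flatnonzero(np.append(flags[1:], True))
    compressed = s[ends]  # compress: keep scan values at segment ends
    return np.diff(compressed, prepend=0)  # differentiate adjacent totals

x = np.array([1, 2, 3, 4, 5, 6])
f = np.array([True, False, True, False, False, True])
print(segmented_sum(x, f))  # segments [1,2], [3,4,5], [6] -> sums 3, 12, 6
```

Each stage is a data-parallel primitive, which is what lets the paper's blocked, speculative variants map the bulk of the work onto matrix units.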
Subjects: Data Structures and Algorithms (cs.DS); Computational Complexity (cs.CC); Distributed, Parallel, and Cluster Computing (cs.DC)
Cite as: arXiv:2506.23906 [cs.DS]
  (or arXiv:2506.23906v2 [cs.DS] for this version)
  https://doi.org/10.48550/arXiv.2506.23906
arXiv-issued DOI via DataCite

Submission history

From: Aleksandros Sobczyk
[v1] Mon, 30 Jun 2025 14:36:44 UTC (504 KB)
[v2] Wed, 24 Sep 2025 15:52:13 UTC (523 KB)

