Computer Science > Programming Languages

arXiv:2510.09932v1 (cs)
[Submitted on 11 Oct 2025]

Title: ACT: Automatically Generating Compiler Backends from Tensor Accelerator ISA Descriptions
Authors:Devansh Jain, Akash Pardeshi, Marco Frigo, Krut Patel, Kaustubh Khulbe, Jai Arora, Charith Mendis
Abstract: Tensor compilers play a key role in enabling high-performance implementations of deep learning workloads. These compilers rely on existing CPU and GPU code generation backends to generate device-specific code. Recently, many tensor accelerators (neural processing units) have been proposed to further accelerate these workloads. Compared to commodity hardware, however, most of the proposed tensor accelerators do not have compiler backends with code generation support. Moreover, the accelerator designs are subject to fast iteration cycles, making it difficult to manually develop compiler backends similar to commodity hardware platforms. Therefore, to increase adoption and enable faster software development cycles for novel tensor accelerator designs, we need to make the compiler backend construction process more agile. To address this gap, we introduce ACT, a compiler backend generator that automatically generates compiler backends for tensor accelerators, given just the instruction set architecture (ISA) descriptions. We first formally specify the compiler backend generation problem that introduces a novel specification for describing tensor accelerator ISAs. Next, we design ACT such that it supports user-programmable memories and complex parameterized instructions that are prevalent in tensor accelerators. ACT uses a novel parameterized equality saturation-based instruction selection phase and a constraint programming-based memory allocation phase. We prove that compiler backends generated by ACT are sound and complete. Finally, we generate compiler backends for three accelerator platforms from industry and academia, and show that they match or outperform code written using hand-optimized kernel libraries while maintaining low compilation overheads.
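The abstract mentions a constraint programming-based memory allocation phase for user-programmable (scratchpad) memories. As an illustrative sketch only — not ACT's actual formulation, which the paper defines — the core constraint can be phrased as: buffers whose live ranges overlap in time must receive non-overlapping byte offsets within the scratchpad capacity. A minimal backtracking solver over hypothetical `(name, size, start, end)` buffer tuples:

```python
# Sketch of scratchpad allocation as constraint solving (illustrative only).
# Each buffer is (name, size_bytes, live_start, live_end). Buffers that are
# live at the same time must occupy disjoint byte ranges within `capacity`.

def allocate(buffers, capacity):
    """Return {name: offset} satisfying all non-overlap constraints, or None."""
    placement = {}

    def overlaps_in_time(a, b):
        # Live ranges [start, end) intersect.
        return not (a[3] <= b[2] or b[3] <= a[2])

    def fits(i, offset):
        name, size, _, _ = buffers[i]
        if offset + size > capacity:
            return False
        for j in range(i):
            other = buffers[j]
            if overlaps_in_time(buffers[i], other):
                o = placement[other[0]]
                # Byte ranges [offset, offset+size) and [o, o+other_size) must
                # be disjoint when live ranges intersect.
                if not (offset + size <= o or o + other[1] <= offset):
                    return False
        return True

    def solve(i):
        if i == len(buffers):
            return True
        for offset in range(capacity + 1):
            if fits(i, offset):
                placement[buffers[i][0]] = offset
                if solve(i + 1):
                    return True
                del placement[buffers[i][0]]
        return False

    return placement if solve(0) else None
```

For example, three 4-byte buffers where A and C never coexist can share an 8-byte scratchpad (`A` and `C` reuse offset 0), while a 6-byte scratchpad is infeasible and the solver returns `None`. A production system would hand these constraints to a real CP solver rather than brute-force search; this sketch only shows the shape of the problem.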
Subjects: Programming Languages (cs.PL); Hardware Architecture (cs.AR)
Cite as: arXiv:2510.09932 [cs.PL]
  (or arXiv:2510.09932v1 [cs.PL] for this version)
  https://doi.org/10.48550/arXiv.2510.09932
arXiv-issued DOI via DataCite

Submission history

From: Devansh Jain
[v1] Sat, 11 Oct 2025 00:11:34 UTC (2,564 KB)

