Computer Science > Machine Learning

arXiv:2503.24358 (cs)
[Submitted on 31 Mar 2025 (v1), last revised 28 Jul 2025 (this version, v2)]

Title: SQuat: Subspace-orthogonal KV Cache Quantization

Authors: Hao Wang, Ligong Han, Kai Xu, Akash Srivastava
Abstract: The key-value (KV) cache accelerates LLM decoding by storing KV tensors from previously generated tokens. It reduces redundant computation at the cost of increased memory usage. To mitigate this overhead, existing approaches compress KV tensors into lower-bit representations; however, quantization errors can accumulate as more tokens are generated, potentially resulting in undesired outputs. In this paper, we introduce SQuat (Subspace-orthogonal KV cache quantization). It first constructs a subspace spanned by query tensors to capture the most critical task-related information. During key tensor quantization, it enforces that the difference between the (de)quantized and original keys remains orthogonal to this subspace, minimizing the impact of quantization errors on the attention mechanism's outputs. SQuat requires no model fine-tuning, no additional calibration dataset for offline learning, and is grounded in a theoretical framework we develop. Through numerical experiments, we show that our method reduces peak memory by 2.17× to 2.82×, improves throughput by 2.45× to 3.60×, and achieves more favorable benchmark scores than existing KV cache quantization algorithms.
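
To make the orthogonality constraint concrete, the following Python/NumPy sketch, under simplifying assumptions, illustrates the quantity SQuat controls: the component of a key's quantization error that lies inside the subspace spanned by query tensors. The toy 2-bit quantizer, the SVD-based basis, the rank of 8, and all function names are illustrative assumptions, not the authors' implementation, which enforces the constraint during quantization rather than merely measuring it afterwards.

```python
# Minimal, self-contained sketch (assumptions: NumPy, a toy 2-bit uniform
# quantizer, an SVD-based query subspace of rank 8). This is NOT the SQuat
# algorithm; it only measures how much of a key's quantization error falls
# inside vs. outside the query subspace.
import numpy as np

def query_subspace_basis(queries: np.ndarray, rank: int) -> np.ndarray:
    """Orthonormal basis (d x rank) for the dominant subspace spanned by queries (n x d)."""
    # Right singular vectors of the stacked queries span the query subspace.
    _, _, vt = np.linalg.svd(queries, full_matrices=False)
    return vt[:rank].T

def quantize_dequantize(x: np.ndarray, bits: int = 2) -> np.ndarray:
    """Toy per-tensor uniform round-to-nearest quantizer, for illustration only."""
    lo, hi = float(x.min()), float(x.max())
    scale = max(hi - lo, 1e-12) / (2 ** bits - 1)
    return np.round((x - lo) / scale) * scale + lo

def error_split(key: np.ndarray, basis: np.ndarray, bits: int = 2) -> tuple[float, float]:
    """Norm of the key's quantization error inside vs. outside the query subspace."""
    err = quantize_dequantize(key, bits) - key   # (de)quantized key minus original key
    inside = basis @ (basis.T @ err)             # projection onto the query subspace
    outside = err - inside                       # orthogonal complement
    return float(np.linalg.norm(inside)), float(np.linalg.norm(outside))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 64
    queries = rng.standard_normal((16, d))   # recent query tensors
    key = rng.standard_normal(d)             # one key tensor to be cached
    basis = query_subspace_basis(queries, rank=8)
    inside, outside = error_split(key, basis)
    # Driving `inside` toward zero keeps q^T(k_hat - k) ~ 0 for queries in the
    # subspace, so attention scores are nearly unchanged by quantization.
    print(f"error norm inside query subspace: {inside:.3f}, outside: {outside:.3f}")
```

In this sketch the projection only diagnoses the error; per the abstract, SQuat's contribution is a quantization procedure, with a supporting theoretical framework, that keeps that inside-subspace component near zero without fine-tuning or calibration data.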
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Information Theory (cs.IT)
Cite as: arXiv:2503.24358 [cs.LG]
  (or arXiv:2503.24358v2 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2503.24358
arXiv-issued DOI via DataCite

Submission history

From: Hao Wang
[v1] Mon, 31 Mar 2025 17:37:32 UTC (774 KB)
[v2] Mon, 28 Jul 2025 20:44:23 UTC (776 KB)