
Computer Science > Hardware Architecture

arXiv:2505.03782 (cs)
[Submitted on 30 Apr 2025]

Title: Exploration of Cryptocurrency Mining-Specific GPUs in AI Applications: A Case Study of CMP 170HX


Authors: Kangwei Xing
Abstract: This study systematically tests a compute-reuse scheme proposed by the open-source community, which disables a specific instruction class (fused multiply-add, FMA) through CUDA source-code modifications, on the NVIDIA CMP 170HX platform. Experimental results validate the effectiveness of this approach, partially restoring the GPU's computational capability in artificial intelligence (AI) tasks. Performance evaluations using open-source GPU benchmarks (OpenCL benchmark, mixbench) and AI benchmarks (LLAMA-benchmark) reveal that its FP32 floating-point performance exceeds 15 times the original capability, while large-language-model inference performance at certain precision levels improves more than threefold. Furthermore, based on an analysis of the hardware architecture, this paper proposes theoretical conjectures for further improving compute utilization through alternative adaptation pathways. Combining energy-efficiency ratios with cost models, it evaluates the reuse value of such obsolete GPUs in edge computing and lightweight AI-inference scenarios. The findings demonstrate that rationally reusing the residual compute of mining GPUs can significantly mitigate the environmental burden of electronic waste while offering cost-effective hardware solutions for low-budget computing scenarios.
Comments: 31 pages, 10 figures, 12 tables
Subjects: Hardware Architecture (cs.AR)
Cite as: arXiv:2505.03782 [cs.AR]
  (or arXiv:2505.03782v1 [cs.AR] for this version)
  https://doi.org/10.48550/arXiv.2505.03782
arXiv-issued DOI via DataCite

Submission history

From: Kangwei Xing [view email]
[v1] Wed, 30 Apr 2025 14:31:07 UTC (1,409 KB)

