
Computer Science > Machine Learning

arXiv:2506.00247v1 (cs)
[Submitted on 30 May 2025]

Title: Performance Analysis of Convolutional Neural Network By Applying Unconstrained Binary Quadratic Programming


Authors:Aasish Kumar Sharma, Sanjeeb Prashad Pandey, Julian M. Kunkel
Abstract: Convolutional Neural Networks (CNNs) are pivotal in computer vision and Big Data analytics but demand significant computational resources when trained on large-scale datasets. Conventional training via back-propagation (BP) with losses like Mean Squared Error or Cross-Entropy often requires extensive iterations and may converge sub-optimally. Quantum computing offers a promising alternative by leveraging superposition, tunneling, and entanglement to search complex optimization landscapes more efficiently. In this work, we propose a hybrid optimization method that combines an Unconstrained Binary Quadratic Programming (UBQP) formulation with Stochastic Gradient Descent (SGD) to accelerate CNN training. Evaluated on the MNIST dataset, our approach achieves a 10–15% accuracy improvement over a standard BP-CNN baseline while maintaining similar execution times. These results illustrate the potential of hybrid quantum-classical techniques in High-Performance Computing (HPC) environments for Big Data and Deep Learning. Fully realizing these benefits, however, requires a careful alignment of algorithmic structures with underlying quantum mechanisms.
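The two-stage idea sketched in the abstract (a UBQP/QUBO search step combined with ordinary SGD) can be illustrated with a toy example. The snippet below is not the authors' method or code: it casts the sign pattern of a single linear layer as a QUBO, solves it with classical simulated annealing as a stand-in for a quantum annealer, and then refines the weight magnitudes with plain SGD. The problem size, the synthetic data, and every function name are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's implementation): a hybrid
# QUBO-then-SGD loop on a toy linear-regression problem, using only NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: targets come from a sign-structured linear map plus noise.
X = rng.normal(size=(256, 8))
true_w = rng.choice([-1.0, 1.0], size=8) * rng.uniform(0.5, 1.5, size=8)
y = X @ true_w + 0.1 * rng.normal(size=256)

def qubo_from_least_squares(X, y):
    """Build Q for min_x x^T Q x, x in {0,1}^n, encoding weight signs w_i = 2*x_i - 1."""
    # ||X(2x - 1) - y||^2 = ||A x - b||^2 with A = 2X and b = y + X @ 1.
    A = 2.0 * X
    b = y + X.sum(axis=1)
    Q = A.T @ A
    # Since x_i^2 = x_i for binary x, fold the linear term -2 b^T A x into the diagonal.
    Q[np.diag_indices_from(Q)] -= 2.0 * (A.T @ b)
    return Q

def simulated_annealing(Q, steps=5000, T0=5.0):
    """Classical bit-flip annealer for min x^T Q x (stand-in for a quantum annealer)."""
    n = Q.shape[0]
    x = rng.integers(0, 2, size=n).astype(float)
    energy = x @ Q @ x
    for t in range(steps):
        i = rng.integers(n)
        x_new = x.copy()
        x_new[i] = 1.0 - x_new[i]
        e_new = x_new @ Q @ x_new
        T = T0 * (1.0 - t / steps) + 1e-9
        if e_new < energy or rng.random() < np.exp((energy - e_new) / T):
            x, energy = x_new, e_new
    return x

# Stage 1: the QUBO solver picks the binary sign pattern of the weights.
signs = 2.0 * simulated_annealing(qubo_from_least_squares(X, y)) - 1.0

# Stage 2: ordinary mini-batch SGD refines the weight magnitudes given those signs.
mag = np.ones(8)
lr = 1e-3
for _ in range(2000):
    idx = rng.integers(0, X.shape[0], size=32)
    w = signs * mag
    grad_w = 2.0 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
    mag -= lr * (grad_w * signs)  # chain rule: dL/dmag = dL/dw * sign

print("recovered signs:", signs)
print("true signs:     ", np.sign(true_w))
```

The toy only shows the shape of such a hybrid loop (discrete search handled by a QUBO solver, continuous refinement handled by SGD); it does not reproduce the paper's CNN architecture or its MNIST results.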
Comments: 11 pages, 22 figures; accepted at the IEEE COMPSAC 2025 Conference. Preprint before peer review.
Subjects: Machine Learning (cs.LG); Emerging Technologies (cs.ET)
Cite as: arXiv:2506.00247 [cs.LG]
  (or arXiv:2506.00247v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2506.00247
arXiv-issued DOI via DataCite

Submission history

From: Aasish Kumar Sharma
[v1] Fri, 30 May 2025 21:25:31 UTC (973 KB)