Quantum Physics

arXiv:2407.02553v1 (quant-ph)
[Submitted on 2 Jul 2024]

Title: Large-scale quantum reservoir learning with an analog quantum computer

Authors:Milan Kornjača, Hong-Ye Hu, Chen Zhao, Jonathan Wurtz, Phillip Weinberg, Majd Hamdan, Andrii Zhdanov, Sergio H. Cantu, Hengyun Zhou, Rodrigo Araiza Bravo, Kevin Bagnall, James I. Basham, Joseph Campo, Adam Choukri, Robert DeAngelo, Paige Frederick, David Haines, Julian Hammett, Ning Hsu, Ming-Guang Hu, Florian Huber, Paul Niklas Jepsen, Ningyuan Jia, Thomas Karolyshyn, Minho Kwon, John Long, Jonathan Lopatin, Alexander Lukin, Tommaso Macrì, Ognjen Marković, Luis A. Martínez-Martínez, Xianmei Meng, Evgeny Ostroumov, David Paquette, John Robinson, Pedro Sales Rodriguez, Anshuman Singh, Nandan Sinha, Henry Thoreen, Noel Wan, Daniel Waxman-Lenz, Tak Wong, Kai-Hsin Wu, Pedro L. S. Lopes, Yuval Boger, Nathan Gemelke, Takuya Kitagawa, Alexander Keesling, Xun Gao, Alexei Bylinskii, Susanne F. Yelin, Fangli Liu, Sheng-Tao Wang
Abstract: Quantum machine learning has gained considerable attention as quantum technology advances, presenting a promising approach for efficiently learning complex data patterns. Despite this promise, most contemporary quantum methods require significant resources for variational parameter optimization and face issues with vanishing gradients, leading to experiments that are either limited in scale or lack potential for quantum advantage. To address this, we develop a general-purpose, gradient-free, and scalable quantum reservoir learning algorithm that harnesses the quantum dynamics of neutral-atom analog quantum computers to process data. We experimentally implement the algorithm, achieving competitive performance across various categories of machine learning tasks, including binary and multi-class classification, as well as time-series prediction. Effective and improving learning is observed with increasing system sizes of up to 108 qubits, in what is the largest quantum machine learning experiment to date. We further observe comparative quantum kernel advantage in learning tasks by constructing synthetic datasets based on the geometric differences between generated quantum and classical data kernels. Our findings demonstrate the potential of utilizing classically intractable quantum correlations for effective machine learning. We expect these results to stimulate further extensions to different quantum hardware and machine learning paradigms, including early fault-tolerant hardware and generative machine learning tasks.
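
The pipeline described above, in which classical data are embedded into the parameters of analog quantum dynamics, local observables are measured as reservoir features, and only a simple linear readout is trained, can be illustrated with a small classical simulation. The sketch below is an assumption-laden illustration, not the paper's protocol: the Ising-type Hamiltonian, the detuning-style data encoding, the evolution time, and all names are placeholders standing in for the neutral-atom dynamics run on hardware.

    # A minimal, classically simulated sketch of a gradient-free quantum
    # reservoir pipeline. The Hamiltonian, data encoding, evolution time,
    # and every name below are illustrative assumptions, not the paper's
    # actual protocol, which runs analog dynamics on neutral-atom hardware.
    import numpy as np
    from sklearn.linear_model import RidgeClassifier

    rng = np.random.default_rng(0)
    N = 6  # simulated qubits; kept tiny so exact simulation stays tractable

    X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X
    Z = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli Z
    I2 = np.eye(2, dtype=complex)

    def embed(op, site):
        """Embed a single-qubit operator at `site` in the N-qubit space."""
        out = np.array([[1.0 + 0j]])
        for j in range(N):
            out = np.kron(out, op if j == site else I2)
        return out

    def reservoir_features(x, t=2.0):
        """Encode x in local Z fields, evolve under a fixed transverse drive
        plus ZZ couplings, and return the <Z_i> expectation values."""
        H = sum(embed(X, i) for i in range(N))                    # global drive
        H += sum(x[i % len(x)] * embed(Z, i) for i in range(N))   # data encoding
        H += sum(0.5 * embed(Z, i) @ embed(Z, i + 1) for i in range(N - 1))
        vals, vecs = np.linalg.eigh(H)                            # exact evolution
        psi0 = np.zeros(2 ** N, dtype=complex)
        psi0[0] = 1.0
        psi = vecs @ (np.exp(-1j * vals * t) * (vecs.conj().T @ psi0))
        return np.array([(psi.conj() @ embed(Z, i) @ psi).real for i in range(N)])

    # Toy two-class dataset; the only trained component is the linear readout,
    # so the scheme is gradient-free with respect to the quantum dynamics.
    data = np.vstack([rng.normal(-1, 0.5, (40, 3)), rng.normal(1, 0.5, (40, 3))])
    labels = np.array([0] * 40 + [1] * 40)
    features = np.array([reservoir_features(x) for x in data])
    clf = RidgeClassifier().fit(features, labels)
    print("training accuracy:", clf.score(features, labels))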
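
The synthetic-dataset construction mentioned in the abstract builds on the geometric difference between classical and quantum kernel matrices. A standard definition from the quantum-kernel literature (e.g., Huang et al., Nat. Commun. 2021) is g(K_C || K_Q) = sqrt(||K_Q^{1/2} K_C^{-1} K_Q^{1/2}||), using the spectral norm; the snippet below implements that assumed definition generically, since the paper's exact construction is not reproduced on this page.

    # Geometric difference between a classical kernel K_C and a quantum
    # kernel K_Q, following the standard definition from the quantum-kernel
    # literature; whether the paper uses exactly this form is an assumption.
    import numpy as np
    from scipy.linalg import sqrtm

    def geometric_difference(K_c, K_q, reg=1e-8):
        """g(K_C || K_Q): large values flag datasets on which the quantum
        kernel could separate data that the classical kernel cannot."""
        n = K_c.shape[0]
        sq = np.real(sqrtm(K_q))                     # PSD square root of K_Q
        inv = np.linalg.inv(K_c + reg * np.eye(n))   # regularized inverse of K_C
        m = sq @ inv @ sq
        return float(np.sqrt(np.linalg.norm(m, 2)))  # spectral norm, then sqrt
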
Comments: 10 + 14 pages, 4 + 7 figures
Subjects: Quantum Physics (quant-ph); Disordered Systems and Neural Networks (cond-mat.dis-nn); Atomic Physics (physics.atom-ph)
Cite as: arXiv:2407.02553 [quant-ph]
  (or arXiv:2407.02553v1 [quant-ph] for this version)
  https://doi.org/10.48550/arXiv.2407.02553
arXiv-issued DOI via DataCite

Submission history

From: Milan Kornjača
[v1] Tue, 2 Jul 2024 18:00:00 UTC (3,439 KB)
Full-text links:

Access Paper:
  • View Chinese PDF
  • View PDF
  • HTML (experimental)
  • TeX Source