Emerging Technologies
Showing new listings for Monday, 21 July 2025
- [1] arXiv:2507.13601 (cross-listed from cs.DC) [pdf, html, other]
Title: Leveraging Multi-Instance GPUs through moldable task scheduling
Journal-ref: Journal of Parallel and Distributed Computing, Vol. 204, 105128 (2025)
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Emerging Technologies (cs.ET); Performance (cs.PF)
NVIDIA MIG (Multi-Instance GPU) allows partitioning a physical GPU into multiple logical instances with fully-isolated resources, which can be dynamically reconfigured. This work highlights the untapped potential of MIG through moldable task scheduling with dynamic reconfigurations. Specifically, we propose a makespan minimization problem for multi-task execution under MIG constraints. Our profiling shows that assuming monotonicity in task work with respect to resources is not viable, as is usual in multicore scheduling. Relying on a state-of-the-art proposal that does not require such an assumption, we present FAR, a 3-phase algorithm to solve the problem. Phase 1 of FAR builds on a classical task moldability method, phase 2 combines Longest Processing Time First and List Scheduling with a novel repartitioning tree heuristic tailored to MIG constraints, and phase 3 employs local search via task moves and swaps. FAR schedules tasks in batches offline, concatenating their schedules on the fly in an improved way that favors resource reuse. Excluding reconfiguration costs, the List Scheduling proof shows an approximation factor of 7/4 on the NVIDIA A30 model. We adapt the technique to the particular constraints of an NVIDIA A100/H100 to obtain an approximation factor of 2. Including the reconfiguration cost, our real-world experiments reveal a makespan with respect to the optimum no worse than 1.22x for a well-known suite of benchmarks, and 1.10x for synthetic inputs inspired by real kernels. We obtain good experimental results for each batch of tasks, but also in the concatenation of batches, with large improvements over the state-of-the-art and proposals without GPU reconfiguration. Beyond the algorithm, the paper demonstrates the research potential of the MIG technology and suggests useful metrics, workload characterizations and evaluation techniques for future work in this field.
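For intuition, here is a minimal Python sketch of the LPT + list-scheduling idea on a GPU split into four slices. It treats tasks as rigid (fixed slice count) and ignores MIG partition-tree constraints, moldability, and reconfiguration costs, so it illustrates the general heuristic rather than the FAR algorithm itself; the task data are made up.

```python
# Toy LPT + list-scheduling sketch for rigid GPU-slice tasks.
# Simplification for illustration: ignores MIG partition-tree (contiguity)
# constraints, moldability, and reconfiguration costs handled by FAR.

def lpt_list_schedule(tasks, num_slices=4):
    """tasks: list of (name, slices_needed, runtime). Returns (makespan, schedule)."""
    free_at = [0.0] * num_slices              # next free time of each slice
    schedule = []
    # Longest Processing Time first
    for name, k, runtime in sorted(tasks, key=lambda t: t[2], reverse=True):
        # pick the k slices that become free earliest
        chosen = sorted(range(num_slices), key=lambda s: free_at[s])[:k]
        start = max(free_at[s] for s in chosen)
        for s in chosen:
            free_at[s] = start + runtime
        schedule.append((name, chosen, start, start + runtime))
    return max(free_at), schedule

if __name__ == "__main__":
    demo = [("A", 2, 8.0), ("B", 1, 5.0), ("C", 4, 3.0), ("D", 1, 6.0)]
    makespan, plan = lpt_list_schedule(demo)
    for name, slices, start, end in plan:
        print(f"{name}: slices {slices}, {start:.1f} -> {end:.1f}")
    print("makespan:", makespan)
```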
- [2] arXiv:2507.13616 (cross-listed from cs.HC) [pdf, html, other]
Title: From Firms to Computation: AI Governance and the Evolution of Institutions
Comments: 44 pages
Subjects: Human-Computer Interaction (cs.HC); Computers and Society (cs.CY); Emerging Technologies (cs.ET); Information Theory (cs.IT); Multiagent Systems (cs.MA)
The integration of agential artificial intelligence into socioeconomic systems requires us to reexamine the evolutionary processes that describe changes in our economic institutions. This article synthesizes three frameworks: multi-level selection theory, Aoki's view of firms as computational processes, and Ostrom's design principles for robust institutions. We develop a framework where selection operates concurrently across organizational levels, firms implement distributed inference via game-theoretic architectures, and Ostrom-style rules evolve as alignment mechanisms that address AI-related risks. This synthesis yields a multi-level Price equation expressed over nested games, providing quantitative metrics for how selection and governance co-determine economic outcomes. We examine connections to Acemoglu's work on inclusive institutions, analyze how institutional structures shape AI deployment, and demonstrate the framework's explanatory power via case studies. We conclude by proposing a set of design principles that operationalize alignment between humans and AI across institutional layers, enabling scalable, adaptive, and inclusive governance of agential AI systems, and we close with practical policy recommendations and directions for further research to carry these principles into real-world implementation.
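As a point of reference, the standard two-level Price equation separates between-group selection from within-group change; the paper's multi-level version is expressed over nested games, so the notation below is only an illustrative baseline, not the authors' exact formulation.

```latex
% Two-level Price equation (illustrative baseline; not the paper's nested-games form).
% k indexes groups (e.g., firms), w_k is group fitness, \bar{z}_k the mean trait in group k.
\Delta \bar{z} \;=\;
  \underbrace{\frac{\operatorname{Cov}_k\!\left(w_k,\, \bar{z}_k\right)}{\bar{w}}}_{\text{selection between groups}}
  \;+\;
  \underbrace{\frac{\operatorname{E}_k\!\left[\, w_k\, \Delta \bar{z}_k \,\right]}{\bar{w}}}_{\text{change within groups}}
```

Applying the same decomposition recursively to the within-group term is what yields a multi-level, nested expression across organizational layers.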
- [3] arXiv:2507.13720 (cross-listed from cs.CR) [pdf, other]
Title: Quantum Blockchain Survey: Foundations, Trends, and Gaps
Comments: 12 pages, 4 figures
Subjects: Cryptography and Security (cs.CR); Distributed, Parallel, and Cluster Computing (cs.DC); Emerging Technologies (cs.ET); Networking and Internet Architecture (cs.NI)
Quantum computing poses fundamental risks to classical blockchain systems by undermining widely used cryptographic primitives. In response, two major research directions have emerged: post-quantum blockchains, which integrate quantum-resistant algorithms, and quantum blockchains, which leverage quantum properties such as entanglement and quantum key distribution. This survey reviews key developments in both areas, analyzing their cryptographic foundations, architectural designs, and implementation challenges. This work provides a comparative overview of technical proposals, highlights trade-offs in security, scalability, and deployment, and identifies open research problems across hardware, consensus, and network design. The goal is to offer a structured and comprehensive reference for advancing secure blockchain systems in the quantum era.
- [4] arXiv:2507.13775 (cross-listed from physics.optics) [pdf, html, other]
Title: Nonlinear Distortion Equalization in Multi-Span Optical Links Via a Feed-Forward Photonic Neural Network
Comments: 21 pages, 14 figures, 2 tables
Subjects: Optics (physics.optics); Emerging Technologies (cs.ET); Signal Processing (eess.SP)
Linear and nonlinear distortions in optical communication signals are equalized using an integrated feed-forward Photonic Neural Network (PNN). The PNN is based on a linear stage made of an 8-tap Finite Impulse Response (FIR) filter, featuring tunable amplitude and phase weights at each tap, and a nonlinear stage achieved through the square modulus operation at the end-of-line photodetector. Within an Intensity Modulation/Direct Detection (IMDD) system, the PNN is applied to 2-level Pulse Amplitude Modulated (PAM2) optical signals undergoing multi-span propagation. Each 50 km segment includes fiber transmission, optical power restoration, and optional chromatic dispersion compensation via a Tunable Dispersion Compensator. Positioned at the receiver, the PNN enables fully optical signal processing with minimal latency and power consumption. Experimental validation is conducted using a Silicon-On-Insulator device operating on 10 Gbps signals. It demonstrates chromatic dispersion equalization over distances up to 200 km and equalization of self-phase modulation (with dispersion removed) up to 450 km. Simulations explore PNN adaptation for 100 Gbps modulations and its potential for cross-phase modulation equalization.
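A rough NumPy sketch of the signal chain described above: an 8-tap complex FIR stage (per-tap amplitude and phase) followed by the square modulus of the photodetector. The tap values, the toy PAM2 signal, and the absence of any fiber/channel model are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch of the PNN structure from the abstract: 8-tap complex
# FIR filter (tunable amplitude and phase per tap) followed by a
# square-modulus nonlinearity (photodetection). Values below are made up.

def pnn_equalize(field, amps, phases):
    """field: complex baseband optical field samples; amps/phases: 8 tap weights."""
    taps = amps * np.exp(1j * phases)                    # complex FIR weights
    linear_out = np.convolve(field, taps, mode="same")   # linear (optical) stage
    return np.abs(linear_out) ** 2                       # square modulus at the detector

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 64)                            # PAM2 symbols
field = bits.astype(float) + 0j                          # toy optical field (no channel model)
amps = np.ones(8) / 8
phases = np.zeros(8)
power = pnn_equalize(field, amps, phases)
print(power[:8])
```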
- [5] arXiv:2507.14007 (cross-listed from cs.CR) [pdf, html, other]
Title: The CryptoNeo Threat Modelling Framework (CNTMF): Securing Neobanks and Fintech in Integrated Blockchain Ecosystems
Subjects: Cryptography and Security (cs.CR); Emerging Technologies (cs.ET)
The rapid integration of blockchain, cryptocurrency, and Web3 technologies into digital banks and fintech operations has created an integrated environment blending traditional financial systems with decentralised elements. This paper introduces the CryptoNeo Threat Modelling Framework (CNTMF), a proposed framework designed to address the risks in these ecosystems, such as oracle manipulation and cross-chain exploits. CNTMF represents a proposed extension of established methodologies like STRIDE, OWASP Top 10, NIST frameworks, LINDDUN, and PASTA, while incorporating tailored components including Hybrid Layer Analysis, the CRYPTOQ mnemonic for cryptocurrency-specific risks, and an AI-Augmented Feedback Loop. Drawing on real-world data from 2025 incidents, CNTMF supports data-driven mitigation to reduce losses, which totalled approximately $2.47 billion in the first half of 2025 across 344 security events (CertiK via GlobeNewswire, 2025; Infosecurity Magazine, 2025). Its phases guide asset mapping, risk profiling, prioritisation, mitigation, and iterative feedback. This supports security against evolving risks like state-sponsored attacks.
- [6] arXiv:2507.14031 (cross-listed from cs.CV) [pdf, html, other]
Title: QuantEIT: Ultra-Lightweight Quantum-Assisted Inference for Chest Electrical Impedance Tomography
Comments: 10 pages, 12 figures
Subjects: Computer Vision and Pattern Recognition (cs.CV); Emerging Technologies (cs.ET); Machine Learning (cs.LG)
Electrical Impedance Tomography (EIT) is a non-invasive, low-cost imaging modality with high temporal resolution, making it suitable for bedside monitoring. However, its inherently ill-posed inverse problem poses significant challenges for accurate image reconstruction. Deep learning (DL)-based approaches have shown promise but often rely on complex network architectures with a large number of parameters, limiting efficiency and scalability. Here, we propose an Ultra-Lightweight Quantum-Assisted Inference (QuantEIT) framework for EIT image reconstruction. QuantEIT leverages a Quantum-Assisted Network (QA-Net), combining parallel 2-qubit quantum circuits to generate expressive latent representations that serve as implicit nonlinear priors, followed by a single linear layer for conductivity reconstruction. This design drastically reduces model complexity and parameter count. Uniquely, QuantEIT operates in an unsupervised, training-data-free manner and represents the first integration of quantum circuits into EIT image reconstruction. Extensive experiments on simulated and real-world 2D and 3D EIT lung imaging data demonstrate that QuantEIT outperforms conventional methods, achieving comparable or superior reconstruction accuracy using only 0.2% of the parameters, with enhanced robustness to noise.
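The following sketch mimics the described structure, parallel 2-qubit parameterized circuits whose expectation values form a latent vector fed to a single linear layer, using a plain NumPy statevector simulation. The gate layout (RY-CNOT-RY), the choice of ⟨Z⟩ features, and all sizes are hypothetical, not the authors' QA-Net.

```python
import numpy as np

# Illustrative sketch of the QuantEIT idea: parallel 2-qubit circuits give a
# latent vector; one linear layer maps it to conductivity values.
# Gate layout, features, and sizes are assumptions, not the authors' design.

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def two_qubit_features(theta):
    """theta: 4 angles -> expectation values <Z0>, <Z1> of a small circuit."""
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0                                            # |00>
    state = np.kron(ry(theta[0]), ry(theta[1])) @ state       # RY layer
    state = CNOT @ state                                       # entangle
    state = np.kron(ry(theta[2]), ry(theta[3])) @ state       # RY layer
    probs = np.abs(state) ** 2
    z0 = probs[0] + probs[1] - probs[2] - probs[3]             # <Z> on qubit 0
    z1 = probs[0] - probs[1] + probs[2] - probs[3]             # <Z> on qubit 1
    return np.array([z0, z1])

def qa_net(thetas, weight, bias):
    """thetas: (n_circuits, 4) angles; weight: (n_pixels, 2 * n_circuits)."""
    latent = np.concatenate([two_qubit_features(t) for t in thetas])
    return weight @ latent + bias                              # single linear layer

rng = np.random.default_rng(1)
thetas = rng.uniform(0, np.pi, size=(4, 4))                    # 4 parallel circuits
weight = rng.normal(size=(16, 8)) * 0.1                        # toy 16-pixel image
conductivity = qa_net(thetas, weight, np.zeros(16))
print(conductivity.shape)
```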
- [7] arXiv:2507.14069 (cross-listed from cs.DC) [pdf, html, other]
Title: Edge Intelligence with Spiking Neural Networks
Authors: Shuiguang Deng, Di Yu, Changze Lv, Xin Du, Linshan Jiang, Xiaofan Zhao, Wentao Tong, Xiaoqing Zheng, Weijia Fang, Peng Zhao, Gang Pan, Schahram Dustdar, Albert Y. Zomaya
Comments: This work has been submitted to the IEEE conference proceedings for possible publication
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Artificial Intelligence (cs.AI); Emerging Technologies (cs.ET); Neural and Evolutionary Computing (cs.NE)
The convergence of artificial intelligence and edge computing has spurred growing interest in enabling intelligent services directly on resource-constrained devices. While traditional deep learning models require significant computational resources and centralized data management, the resulting latency, bandwidth consumption, and privacy concerns have exposed critical limitations in cloud-centric paradigms. Brain-inspired computing, particularly Spiking Neural Networks (SNNs), offers a promising alternative by emulating biological neuronal dynamics to achieve low-power, event-driven computation. This survey provides a comprehensive overview of Edge Intelligence based on SNNs (EdgeSNNs), examining their potential to address the challenges of on-device learning, inference, and security in edge scenarios. We present a systematic taxonomy of EdgeSNN foundations, encompassing neuron models, learning algorithms, and supporting hardware platforms. Three representative practical considerations of EdgeSNNs are discussed in depth: on-device inference using lightweight SNN models, resource-aware training and updating under non-stationary data conditions, and security and privacy issues. Furthermore, we highlight the limitations of evaluating EdgeSNNs on conventional hardware and introduce a dual-track benchmarking strategy to support fair comparisons and hardware-aware optimization. Through this study, we aim to bridge the gap between brain-inspired learning and practical edge deployment, offering insights into current advancements, open challenges, and future research directions. To the best of our knowledge, this is the first dedicated and comprehensive survey on EdgeSNNs, providing an essential reference for researchers and practitioners working at the intersection of neuromorphic computing and edge intelligence.
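For readers new to SNNs, a minimal leaky integrate-and-fire (LIF) neuron, the kind of neuron model such a survey covers, can be simulated in a few lines; the constants below are arbitrary and purely illustrative.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron; constants are illustrative only.

def lif_simulate(input_current, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Euler integration of dV/dt = (-V + I) / tau; emits a spike when V >= v_th."""
    v, spikes, trace = 0.0, [], []
    for i_t in input_current:
        v += dt * (-v + i_t) / tau            # leaky integration
        if v >= v_th:                          # threshold crossing -> spike
            spikes.append(1)
            v = v_reset                        # reset membrane potential
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(spikes), np.array(trace)

spikes, trace = lif_simulate(np.full(200, 1.5))
print("spike count:", spikes.sum())
```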
- [8] arXiv:2507.14116 (cross-listed from quant-ph) [pdf, html, other]
Title: Quantum Boltzmann Machines using Parallel Annealing for Medical Image Classification
Authors: Daniëlle Schuman, Mark V. Seebode, Tobias Rohe, Maximilian Balthasar Mansky, Michael Schroedl-Baumann, Jonas Stein, Claudia Linnhoff-Popien, Florian Krellner
Comments: 12 pages, 5 figures (10 including subfigures), 2 tables. To be published in the proceedings of the 2025 IEEE International Conference on Quantum Computing and Engineering (QCE)
Subjects: Quantum Physics (quant-ph); Emerging Technologies (cs.ET); Machine Learning (cs.LG)
Exploiting the fact that samples drawn from a quantum annealer inherently follow a Boltzmann-like distribution, annealing-based Quantum Boltzmann Machines (QBMs) have gained increasing popularity in the quantum research community. While they hold great promise for quantum speed-up, their use currently remains a costly endeavor, as large amounts of QPU time are required to train them. This limits their applicability in the NISQ era. Following the idea of Noè et al. (2024), who tried to alleviate this cost by incorporating parallel quantum annealing into their unsupervised training of QBMs, this paper presents an improved version of parallel quantum annealing that we employ to train QBMs in a supervised setting. Saving qubits to encode the inputs, the latter setting allows us to test our approach on medical images from the MedMNIST data set (Yang et al., 2023), thereby moving closer to real-world applicability of the technology. Our experiments show that QBMs using our approach already achieve reasonable results, comparable to those of similarly-sized Convolutional Neural Networks (CNNs), with markedly fewer epochs than these classical models. Our parallel annealing technique leads to a speed-up of almost 70% compared to regular annealing-based BM executions.
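To see why Boltzmann-like samples are useful, the toy sketch below enumerates an exact Boltzmann distribution over a 3-spin Ising energy, which is the role annealer samples play in (Q)BM training; it does not model the parallel-annealing scheme or the supervised setup of the paper, and all parameter values are made up.

```python
import numpy as np
from itertools import product

# Toy illustration: Boltzmann distribution exp(-E)/Z over a tiny Ising energy,
# enumerated exactly instead of sampled from a quantum annealer.

def ising_energy(spins, h, J):
    """E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j for spins in {-1, +1}."""
    s = np.asarray(spins, dtype=float)
    return float(h @ s + s @ np.triu(J, 1) @ s)

def boltzmann_distribution(h, J, beta=1.0):
    states = list(product([-1, 1], repeat=len(h)))
    energies = np.array([ising_energy(s, h, J) for s in states])
    weights = np.exp(-beta * energies)
    return states, weights / weights.sum()

h = np.array([0.1, -0.2, 0.05])
J = np.array([[0.0, -0.5, 0.3],
              [0.0, 0.0, -0.1],
              [0.0, 0.0, 0.0]])
states, probs = boltzmann_distribution(h, J)
for s, p in zip(states, probs):
    print(s, round(p, 3))
```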
Cross submissions (showing 8 of 8 entries)
- [9] arXiv:2507.09067 (replaced) [pdf, html, other]
Title: Quantum-Resilient Privacy Ledger (QRPL): A Sovereign Digital Currency for the Post-Quantum Era
Subjects: Emerging Technologies (cs.ET); Cryptography and Security (cs.CR)
The emergence of quantum computing presents profound challenges to existing cryptographic infrastructures, whilst the development of central bank digital currencies (CBDCs) has raised concerns regarding privacy preservation and excessive centralisation in digital payment systems. This paper proposes the Quantum-Resilient Privacy Ledger (QRPL) as an innovative token-based digital currency architecture that incorporates National Institute of Standards and Technology (NIST)-standardised post-quantum cryptography (PQC) with hash-based zero-knowledge proofs to ensure user sovereignty, scalability, and transaction confidentiality. Key contributions include adaptations of ephemeral proof chains for unlinkable transactions, a privacy-weighted Proof-of-Stake (PoS) consensus to promote equitable participation, and a novel zero-knowledge proof-based mechanism for privacy-preserving selective disclosure. QRPL aims to address critical shortcomings in prevailing CBDC designs, including risks of pervasive surveillance, with a 10-20 second block time to balance security and throughput in future monetary systems. While the present work is conceptual, future work includes prototype development to validate these models empirically.
- [10] arXiv:2506.17135 (replaced) [pdf, html, other]
Title: No Scratch Quantum Computing by Reducing Qubit Overhead for Efficient Arithmetics
Subjects: Quantum Physics (quant-ph); Emerging Technologies (cs.ET); Logic in Computer Science (cs.LO)
Quantum arithmetic computation requires a substantial number of scratch qubits to stay reversible. These operations necessitate qubit and gate resources equivalent to those needed for the larger of the input or output registers due to state encoding. Quantum Hamiltonian Computing (QHC) introduces a novel approach by encoding input for logic operations within a single rotating quantum gate. This innovation reduces the required qubit register $ N $ to the size of the output states $ O $, where $ N = \log_2 O $. Leveraging QHC principles, we present reversible half-adder and full-adder circuits that compress the standard Toffoli + CNOT layout [Vedral et al., PRA, 54, 11, (1996)] (the three-qubit and four-qubit formats of the quantum half-adder circuit) and the five sequential Fredkin gates on five qubits [Moutinho et al., PRX Energy 2, 033002 (2023)] of the full-adder circuit into a two-qubit, 4$\times$4 Hilbert space. The scheme presented here is optimized for classical logic evaluated on quantum hardware, which due to unitary evolution can bypass classical CMOS energy limitations to a certain degree. Although we avoid superposition of input and output states in this manuscript, this remains feasible in principle. We see the best application for QHC in finding the minimal qubit and gate resources needed to evaluate any truth table, advancing FPGA capabilities using integrated quantum circuits or photonics.
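The qubit-count claim $N = \log_2 O$ can be checked on the adder truth tables directly: the register only needs to distinguish the distinct output states. The helper below is an illustrative check, not a model of QHC's encoding.

```python
from math import ceil, log2

# Toy check of N = log2(O): count distinct outputs of a truth table and take
# the minimal number of qubits needed to distinguish them.

def min_output_qubits(truth_table):
    """truth_table: dict mapping input tuples to output tuples."""
    distinct_outputs = len(set(truth_table.values()))
    return max(1, ceil(log2(distinct_outputs)))

half_adder = {(a, b): ((a ^ b), (a & b)) for a in (0, 1) for b in (0, 1)}
full_adder = {(a, b, c): ((a ^ b ^ c), (a & b) | (b & c) | (a & c))
              for a in (0, 1) for b in (0, 1) for c in (0, 1)}

print("half adder:", min_output_qubits(half_adder), "qubit(s)")  # 3 distinct outputs -> 2
print("full adder:", min_output_qubits(full_adder), "qubit(s)")  # 4 distinct outputs -> 2
```

Both adders have at most four distinct (sum, carry) outputs, which is consistent with the two-qubit, 4$\times$4 Hilbert space stated in the abstract.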