Computer Science > Neural and Evolutionary Computing

arXiv:2506.22122 (cs)
[Submitted on 27 Jun 2025]

Title: In situ fine-tuning of in silico trained Optical Neural Networks

Authors: Gianluca Kosmella, Ripalta Stabile, Jaron Sanders
Abstract: Optical Neural Networks (ONNs) promise significant advantages over traditional electronic neural networks, including ultrafast computation, high bandwidth, and low energy consumption, by leveraging the intrinsic capabilities of photonics. However, training ONNs poses unique challenges, notably the reliance on simplified in silico models whose trained parameters must subsequently be mapped to physical hardware. This process often introduces inaccuracies due to discrepancies between the idealized digital model and the physical ONN implementation, particularly stemming from noise and fabrication imperfections. In this paper, we analyze how noise misspecification during in silico training impacts ONN performance, and we introduce Gradient-Informed Fine-Tuning (GIFT), a lightweight algorithm designed to mitigate this performance degradation. GIFT uses gradient information derived from the noise structure of the ONN to adapt pretrained parameters directly in situ, without requiring expensive retraining or complex experimental setups. GIFT comes with formal conditions under which it improves ONN performance. We also demonstrate the effectiveness of GIFT via simulation on a five-layer feed-forward ONN trained on the MNIST digit classification task. GIFT achieves up to $28\%$ relative accuracy improvement compared to the baseline performance under noise misspecification, without resorting to costly retraining. Overall, GIFT provides a practical solution for bridging the gap between simplified digital models and real-world ONN implementations.
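The abstract describes GIFT only at a high level: gradient information is used to adapt in-silico-trained parameters directly on hardware whose noise differs from what training assumed. The following NumPy toy sketch illustrates that general idea only — a single linear layer with additive weight noise, fine-tuned by stochastic gradient steps computed from noisy forward passes. All names, dimensions, and the noise model are invented for illustration and do not reproduce the authors' GIFT algorithm or its formal conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a linear "optical" layer y = (W + noise) @ x.
# The in-silico weights are imperfect, and the deployed hardware adds
# weight noise at a level (sigma_real) the digital model did not anticipate.
d_in, d_out, n = 8, 4, 256
X = rng.normal(size=(n, d_in))
W_true = rng.normal(size=(d_out, d_in))
Y = X @ W_true.T                                        # noise-free targets

sigma_real = 0.3                                        # actual hardware noise (illustrative)
W = W_true + rng.normal(scale=0.5, size=W_true.shape)   # imperfectly mapped in-silico weights

def noisy_forward(W, X, sigma):
    """One 'hardware' forward pass: fresh additive weight noise each call."""
    W_noisy = W + rng.normal(scale=sigma, size=W.shape)
    return X @ W_noisy.T

def mse(W, X, Y, sigma, trials=32):
    """Average squared error over repeated noisy forward passes."""
    return np.mean([np.mean((noisy_forward(W, X, sigma) - Y) ** 2)
                    for _ in range(trials)])

loss_before = mse(W, X, Y, sigma_real)

# In-situ fine-tuning: gradient steps computed from the noisy outputs,
# adapting the pretrained weights without retraining from scratch.
# Because the noise is zero-mean, each stochastic gradient is an unbiased
# estimate of the noise-free MSE gradient.
lr = 1e-3
for _ in range(500):
    Y_hat = noisy_forward(W, X, sigma_real)
    grad = 2.0 / n * (Y_hat - Y).T @ X                  # d(MSE)/dW
    W -= lr * grad

loss_after = mse(W, X, Y, sigma_real)
print(loss_before, loss_after)
```

In this toy version the fine-tuned loss drops toward the irreducible floor set by the hardware noise itself; the paper's contribution is the formal characterization of when such in-situ updates provably help, which this sketch does not attempt to capture.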
Subjects: Neural and Evolutionary Computing (cs.NE); Emerging Technologies (cs.ET); Signal Processing (eess.SP)
Cite as: arXiv:2506.22122 [cs.NE]
  (or arXiv:2506.22122v1 [cs.NE] for this version)
  https://doi.org/10.48550/arXiv.2506.22122
arXiv-issued DOI via DataCite

Submission history

From: Jaron Sanders [view email]
[v1] Fri, 27 Jun 2025 11:00:36 UTC (3,671 KB)