Quantitative Biology > Neurons and Cognition

arXiv:2503.02642v1 (q-bio)
[Submitted on 4 Mar 2025]

Title: Weight transport through spike timing for robust local gradients

Authors: Timo Gierlich, Andreas Baumbach, Akos F. Kungl, Kevin Max, Mihai A. Petrovici
Abstract: In both machine learning and computational neuroscience, plasticity in functional neural networks is frequently expressed as gradient descent on a cost. Often, this imposes symmetry constraints that are difficult to reconcile with local computation, as is required for biological networks or neuromorphic hardware. For example, wake-sleep learning in networks characterized by Boltzmann distributions builds on the assumption of symmetric connectivity. Similarly, the error backpropagation algorithm is notoriously plagued by the weight transport problem between the representation and the error stream. Existing solutions such as feedback alignment tend to circumvent the problem by deferring to the robustness of these algorithms to weight asymmetry. However, they are known to scale poorly with network size and depth. We introduce spike-based alignment learning (SAL), a complementary learning rule for spiking neural networks, which uses spike timing statistics to extract and correct the asymmetry between effective reciprocal connections. Apart from being spike-based and fully local, our proposed mechanism takes advantage of noise. Based on an interplay between Hebbian and anti-Hebbian plasticity, synapses can thereby recover the true local gradient. This also alleviates discrepancies that arise from neuron and synapse variability -- an omnipresent property of physical neuronal networks. We demonstrate the efficacy of our mechanism using different spiking network models. First, we show how SAL can significantly improve convergence to the target distribution in probabilistic spiking networks as compared to Hebbian plasticity alone. Second, in neuronal hierarchies based on cortical microcircuits, we show how our proposed mechanism effectively enables the alignment of feedback weights to the forward pathway, thus allowing the backpropagation of correct feedback errors.
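
The abstract describes SAL only at a conceptual level. As a rough illustration of the core idea -- that coincidence statistics estimated from noisy spike trains can drive a Hebbian/anti-Hebbian update which aligns a backward weight to its forward counterpart -- the following minimal Python sketch may help. The neuron model, the coincidence statistic, and all parameter values are illustrative assumptions made here, not the paper's actual SAL rule; see the full text for the precise plasticity dynamics.

import numpy as np

rng = np.random.default_rng(42)

# Effective reciprocal weights between a neuron pair i <-> j.
w_fwd = 1.5     # forward weight i -> j (held fixed here)
w_bwd = -0.5    # backward weight j -> i, initially misaligned
eta = 0.05      # learning rate (illustrative)

dt = 1e-3                  # time step [s]
n_bins = int(100.0 / dt)   # 100 s of activity per statistics estimate
rate = 20.0                # background Poisson rate [Hz]; this is the
                           # noise the alignment mechanism exploits

def spike_trains(w):
    """Poisson presynaptic spikes; the postsynaptic spike probability is
    multiplicatively modulated by the weighted presynaptic input."""
    pre = rng.random(n_bins) < rate * dt
    p_post = np.clip(rate * dt * np.exp(w * pre), 0.0, 1.0)
    post = rng.random(n_bins) < p_post
    return pre, post

def coincidence(pre, post):
    """Zero-lag coincidence rate in excess of chance, normalized by the
    chance level -- a crude stand-in for cross-correlogram statistics."""
    chance = pre.mean() * post.mean()
    return ((pre & post).mean() - chance) / (chance + 1e-12)

for epoch in range(60):
    c_fwd = coincidence(*spike_trains(w_fwd))  # Hebbian term: forward path
    c_bwd = coincidence(*spike_trains(w_bwd))  # anti-Hebbian term: own path
    # The update vanishes exactly when both effective connections produce
    # identical timing statistics, i.e. when w_bwd has aligned to w_fwd.
    w_bwd += eta * (c_fwd - c_bwd)
    if epoch % 10 == 0:
        print(f"epoch {epoch:2d}: w_bwd = {w_bwd:+.3f} (target {w_fwd:+.3f})")

Run as-is, the printed trace shows w_bwd drifting from its misaligned initial value toward w_fwd, with small residual fluctuations that reflect the finite sampling of the spike statistics.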
Comments: 19 pages, 9 figures
Subjects: Neurons and Cognition (q-bio.NC); Emerging Technologies (cs.ET); Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)
Cite as: arXiv:2503.02642 [q-bio.NC]
  (or arXiv:2503.02642v1 [q-bio.NC] for this version)
  https://doi.org/10.48550/arXiv.2503.02642
arXiv-issued DOI via DataCite

Submission history

From: Timo Gierlich
[v1] Tue, 4 Mar 2025 14:05:39 UTC (2,375 KB)
Full-text links:

Access Paper:
  • View PDF
  • TeX Source