
Computer Science > Emerging Technologies

arXiv:2407.19566 (cs)
[Submitted on 28 Jul 2024 (v1), last revised 13 Apr 2025 (this version, v3)]

Title: To Spike or Not to Spike, that is the Question


Authors: Sanaz Mahmoodi Takaghaj, Jack Sampson
Abstract: Neuromorphic computing has recently gained momentum with the emergence of various neuromorphic processors. As the field advances, there is an increasing focus on developing training methods that can effectively leverage the unique properties of spiking neural networks (SNNs). SNNs emulate the temporal dynamics of biological neurons, making them particularly well-suited for real-time, event-driven processing. To fully harness the potential of SNNs across different neuromorphic platforms, effective training methodologies are essential. In SNNs, learning rules are based on neurons' spiking behavior: a neuron spikes if and when its membrane potential exceeds its spiking threshold, and this spike timing encodes vital information. However, the threshold is generally treated as a hyperparameter, and an incorrect choice can leave neurons silent for large portions of the training process, hindering the effective rate of learning. This work focuses on the significance of learning neuron thresholds alongside weights in SNNs. Our results suggest that promoting the threshold from a hyperparameter to a trainable parameter effectively addresses the issue of dead neurons during training. This yields a more robust training algorithm with improved convergence, increased test accuracy, and a substantial reduction in the number of training epochs required to achieve viable accuracy on spatiotemporal datasets such as NMNIST, DVS128, and Spiking Heidelberg Digits (SHD), with up to a 30% training speed-up and up to 2% higher accuracy on these datasets.
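The abstract's central idea, promoting each neuron's firing threshold from a fixed hyperparameter to a trainable parameter, can be illustrated with a short sketch. The following PyTorch example is not the authors' implementation; the leak factor, surrogate-gradient window, soft-reset rule, and all constants are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of a leaky integrate-and-fire (LIF)
# layer in PyTorch where the spiking threshold theta is a trainable
# nn.Parameter. A surrogate gradient lets loss gradients flow into both
# weights and thresholds despite the non-differentiable spike.
import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient (assumed shape)."""

    @staticmethod
    def forward(ctx, v_minus_theta):
        ctx.save_for_backward(v_minus_theta)
        return (v_minus_theta > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v_minus_theta,) = ctx.saved_tensors
        # Pass gradients only near the threshold crossing.
        surrogate = (v_minus_theta.abs() < 0.5).float()
        return grad_out * surrogate


class LIFLayer(nn.Module):
    def __init__(self, in_features, out_features, beta=0.9, theta_init=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.beta = beta  # membrane leak factor (assumed value)
        # Threshold promoted from hyperparameter to trainable parameter:
        # gradient descent can now raise or lower each neuron's firing point.
        self.theta = nn.Parameter(torch.full((out_features,), theta_init))

    def forward(self, x_seq):
        # x_seq: (time, batch, in_features) input spike trains
        v = torch.zeros(x_seq.shape[1], self.fc.out_features,
                        device=x_seq.device)
        spikes = []
        for x_t in x_seq:
            v = self.beta * v + self.fc(x_t)      # leaky integration
            s = SpikeFn.apply(v - self.theta)     # spike if v exceeds theta
            v = v - s * self.theta                # soft reset by threshold
            spikes.append(s)
        return torch.stack(spikes)
```

Under this scheme, a neuron whose threshold was set too high and never fires ("dead" during training) still receives threshold gradients through the surrogate function, so optimization can pull it back into the spiking regime rather than leaving it permanently silent.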
Comments: Accepted to Artificial Intelligence Circuits and Systems (AICAS), 2025
Subjects: Emerging Technologies (cs.ET); Neural and Evolutionary Computing (cs.NE)
Cite as: arXiv:2407.19566 [cs.ET]
  (or arXiv:2407.19566v3 [cs.ET] for this version)
  https://doi.org/10.48550/arXiv.2407.19566
arXiv-issued DOI via DataCite

Submission history

From: Sanaz Mahmoodi Takaghaj [view email]
[v1] Sun, 28 Jul 2024 19:23:09 UTC (1,263 KB)
[v2] Thu, 31 Oct 2024 19:45:37 UTC (2,086 KB)
[v3] Sun, 13 Apr 2025 01:39:25 UTC (2,086 KB)
Full-text links:

Access Paper:

  • View PDF
  • HTML (experimental)
  • TeX Source
View license