
Condensed Matter > Disordered Systems and Neural Networks

arXiv:1911.12689 (cond-mat)
[Submitted on 28 Nov 2019]

Title: Neural networks with redundant representation: detecting the undetectable


Authors: Elena Agliari, Francesco Alemanno, Adriano Barra, Martino Centonze, Alberto Fachechi
Abstract: We consider a three-layer Sejnowski machine and show that features learnt via contrastive divergence have a dual representation as patterns in a dense associative memory of order P=4. The latter is known to be able to Hebbian-store an amount of patterns scaling as N^{P-1}, where N denotes the number of constituting binary neurons interacting P-wisely. We also prove that, by keeping the dense associative network far from the saturation regime (namely, allowing for a number of patterns scaling only linearly with N, while P>2), such a system is able to perform pattern recognition far below the standard signal-to-noise threshold. In particular, a network with P=4 is able to retrieve information whose intensity is O(1) even in the presence of a noise O(\sqrt{N}) in the large N limit. This striking skill stems from a redundant representation of patterns -- which is afforded by the (relatively) low-load information storage -- and it helps to explain the impressive abilities in pattern recognition exhibited by new-generation neural networks. The whole theory is developed rigorously, at the replica symmetric level of approximation, and corroborated by signal-to-noise analysis and Monte Carlo simulations.
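The low-load retrieval regime described in the abstract can be illustrated with a minimal sketch of a dense (P-spin) Hopfield network: K Hebbian-stored patterns with P-wise interactions, updated via the local field h_i = Σ_μ ξ_i^μ m_μ^{P-1}, where m_μ is the Mattis overlap. This is not the authors' code; the parameters (N=200, K=3, P=4, 20% bit-flip noise) and the synchronous one-step dynamics are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K, P = 200, 3, 4                      # neurons, stored patterns (low load), interaction order
xi = rng.choice([-1, 1], size=(K, N))    # K random binary patterns

def update(sigma, xi, P):
    """One synchronous update of the dense (P-spin) Hopfield network.

    Local field on neuron i:  h_i = sum_mu xi_i^mu * m_mu^{P-1},
    with Mattis overlap  m_mu = (1/N) sum_j xi_j^mu sigma_j.
    """
    m = xi @ sigma / len(sigma)          # overlaps with each stored pattern, shape (K,)
    h = (m ** (P - 1)) @ xi              # P-wise Hebbian field, shape (N,)
    return np.where(h >= 0, 1, -1)

# Cue: pattern 0 corrupted by flipping 20% of its bits
sigma = xi[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
sigma[flip] *= -1
print("initial overlap:", xi[0] @ sigma / N)

for _ in range(5):
    sigma = update(sigma, xi, P)
print("final overlap:  ", xi[0] @ sigma / N)
```

Because the load is only K = O(1) ≪ N^{P-1}, the signal term m^{P-1} dominates the crosstalk from the other patterns, and the noisy cue is cleaned up; raising K toward the saturation regime degrades this.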
Subjects: Disordered Systems and Neural Networks (cond-mat.dis-nn); Machine Learning (stat.ML)
Cite as: arXiv:1911.12689 [cond-mat.dis-nn]
  (or arXiv:1911.12689v1 [cond-mat.dis-nn] for this version)
  https://doi.org/10.48550/arXiv.1911.12689
arXiv-issued DOI via DataCite
Journal reference: Roma01.Math
Related DOI: https://doi.org/10.1103/PhysRevLett.124.028301
DOI(s) linking to related resources

Submission history

From: Adriano Barra Dr. [view email]
[v1] Thu, 28 Nov 2019 13:00:54 UTC (593 KB)
Full-text links:

Access Paper:

  • View PDF
  • TeX Source


