
Quantitative Biology > Neurons and Cognition

arXiv:2502.14337 (q-bio)
[Submitted on 20 Feb 2025]

Title: Latent computing by biological neural networks: A dynamical systems framework


Authors: Fatih Dinc, Marta Blanco-Pozo, David Klindt, Francisco Acosta, Yiqi Jiang, Sadegh Ebrahimi, Adam Shai, Hidenori Tanaka, Peng Yuan, Mark J. Schnitzer, Nina Miolane
Abstract: Although individual neurons and neural populations exhibit the phenomenon of representational drift, perceptual and behavioral outputs of many neural circuits can remain stable across time scales over which representational drift is substantial. These observations motivate a dynamical systems framework for neural network activity that focuses on the concept of \emph{latent processing units}, core elements for robust coding and computation embedded in collective neural dynamics. Our theoretical treatment of these latent processing units yields five key attributes of computing through neural network dynamics. First, neural computations that are low-dimensional can nevertheless generate high-dimensional neural dynamics. Second, the manifolds defined by neural dynamical trajectories exhibit an inherent coding redundancy as a direct consequence of the universal computing capabilities of the underlying dynamical system. Third, linear readouts or decoders of neural population activity can suffice to optimally subserve downstream circuits controlling behavioral outputs. Fourth, whereas recordings from thousands of neurons may suffice for near optimal decoding from instantaneous neural activity patterns, experimental access to millions of neurons may be necessary to predict neural ensemble dynamical trajectories across timescales of seconds. Fifth, despite the variable activity of single cells, neural networks can maintain stable representations of the variables computed by the latent processing units, thereby making computations robust to representational drift. Overall, our framework for latent computation provides an analytic description and empirically testable predictions regarding how large systems of neurons perform robust computations via their collective dynamics.
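Two of the abstract's claims — that low-dimensional latent dynamics can drive much higher-dimensional neural activity, and that a linear readout of the population can suffice to recover the latent variables — can be sketched in a minimal toy simulation. This is an illustration under simple assumptions (a 2-D rotational latent dynamic, a random linear embedding, Gaussian per-neuron noise), not the authors' actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-dimensional latent dynamics: a 2-D rotation (a stable oscillation),
# standing in for the trajectory of a latent processing unit.
d, N, T = 2, 100, 400          # latent dim, number of "neurons", timesteps
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

z = np.zeros((T, d))
z[0] = [1.0, 0.0]
for t in range(1, T):
    z[t] = A @ z[t - 1]

# Embed the latent trajectory in high-dimensional neural activity via a
# random linear projection plus per-neuron noise (single-cell variability).
C = rng.normal(size=(N, d)) / np.sqrt(d)
x = z @ C.T + 0.1 * rng.normal(size=(T, N))

# A linear readout, fit by least squares on the first half of the data,
# recovers the latent variables on the held-out second half.
W, *_ = np.linalg.lstsq(x[: T // 2], z[: T // 2], rcond=None)
z_hat = x[T // 2 :] @ W
resid = np.sum((z_hat - z[T // 2 :]) ** 2)
total = np.sum((z[T // 2 :] - z[T // 2 :].mean(axis=0)) ** 2)
r2 = 1 - resid / total
print(f"neural dim: {N}, latent dim: {d}, linear-decoder R^2: {r2:.3f}")
```

Here the 100-dimensional population activity is entirely driven by a 2-D latent system, and the linear decoder's held-out R² is close to 1 — the noise contributed by individual neurons averages out across the population.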
Subjects: Neurons and Cognition (q-bio.NC)
Cite as: arXiv:2502.14337 [q-bio.NC]
  (or arXiv:2502.14337v1 [q-bio.NC] for this version)
  https://doi.org/10.48550/arXiv.2502.14337
arXiv-issued DOI via DataCite

Submission history

From: Fatih Dinc [view email]
[v1] Thu, 20 Feb 2025 07:45:23 UTC (20,313 KB)