
Computer Science > Machine Learning

arXiv:2407.10315 (cs)
[Submitted on 14 Jul 2024 (v1), last revised 26 Jan 2025 (this version, v2)]

Title: Order parameters and phase transitions of continual learning in deep neural networks

Authors: Haozhe Shan, Qianyi Li, Haim Sompolinsky
Abstract: Continual learning (CL) enables animals to learn new tasks without erasing prior knowledge. CL in artificial neural networks (NNs) is challenging due to catastrophic forgetting, where new learning degrades performance on older tasks. While various techniques exist to mitigate forgetting, theoretical insights into when and why CL fails in NNs are lacking. Here, we present a statistical-mechanics theory of CL in deep, wide NNs, which characterizes the network's input-output mapping as it learns a sequence of tasks. It gives rise to order parameters (OPs) that capture how task relations and network architecture influence forgetting and anterograde interference, as verified by numerical evaluations. For networks with a shared readout for all tasks (single-head CL), the relevant-feature and rule similarity between tasks, respectively measured by two OPs, are sufficient to predict a wide range of CL behaviors. In addition, the theory predicts that increasing the network depth can effectively reduce interference between tasks, thereby lowering forgetting. For networks with task-specific readouts (multi-head CL), the theory identifies a phase transition where CL performance shifts dramatically as tasks become less similar, as measured by another task-similarity OP. While forgetting is relatively mild compared to single-head CL across all tasks, sufficiently low similarity leads to catastrophic anterograde interference, where the network retains old tasks perfectly but completely fails to generalize new learning. Our results delineate important factors affecting CL performance and suggest strategies for mitigating forgetting.
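
As a rough illustration of the single-head vs. multi-head setups the abstract contrasts, the Python/PyTorch sketch below trains a shared trunk sequentially on two synthetic regression tasks whose linear teachers overlap by a tunable similarity, then reports how much the task-1 error grows after task 2 is learned. This is a toy stand-in, not the authors' statistical-mechanics theory or experimental code; the task construction and all names (make_tasks, run, similarity, multi_head) are assumptions introduced here for illustration.

import torch
import torch.nn as nn

torch.manual_seed(0)
D, N, P = 20, 256, 200  # input dimension, hidden width, samples per task

def make_tasks(similarity):
    # Two synthetic regression tasks whose linear teachers have expected
    # overlap `similarity` (1.0 = identical rule, 0.0 = unrelated rules).
    X1, X2 = torch.randn(P, D), torch.randn(P, D)
    w1 = torch.randn(D, 1) / D**0.5
    w2 = similarity * w1 + (1 - similarity**2) ** 0.5 * torch.randn(D, 1) / D**0.5
    return (X1, X1 @ w1), (X2, X2 @ w2)

def run(multi_head, similarity):
    # Shared trunk; either one shared readout (single-head) or one readout per task (multi-head).
    trunk = nn.Sequential(nn.Linear(D, N), nn.ReLU(), nn.Linear(N, N), nn.ReLU())
    heads = [nn.Linear(N, 1) for _ in range(2 if multi_head else 1)]
    tasks = make_tasks(similarity)
    loss_fn = nn.MSELoss()

    def head_for(t):
        return heads[t if multi_head else 0]

    def train(t):
        params = list(trunk.parameters()) + list(head_for(t).parameters())
        opt = torch.optim.Adam(params, lr=1e-3)
        X, y = tasks[t]
        for _ in range(2000):
            opt.zero_grad()
            loss_fn(head_for(t)(trunk(X)), y).backward()
            opt.step()

    def error(t):
        X, y = tasks[t]
        with torch.no_grad():
            return loss_fn(head_for(t)(trunk(X)), y).item()

    train(0)
    before = error(0)
    train(1)
    after = error(0)  # task-1 error after learning task 2: a crude forgetting measure
    return before, after

for multi in (False, True):
    before, after = run(multi_head=multi, similarity=0.2)
    print(f"multi_head={multi}: task-1 error before={before:.4f}, after task 2={after:.4f}")

With high similarity both settings retain task 1 well; with low similarity the single-head network typically forgets more. This loosely mirrors the role the abstract assigns to the task-similarity order parameters, but the sketch does not reproduce the paper's infinite-width analysis or the multi-head phase transition.
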
Comments: 54 pages, 7 figures, 8 SI figures
Subjects: Machine Learning (cs.LG); Applied Physics (physics.app-ph); Neurons and Cognition (q-bio.NC)
Cite as: arXiv:2407.10315 [cs.LG]
  (or arXiv:2407.10315v2 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2407.10315
arXiv-issued DOI via DataCite

Submission history

From: Qianyi Li
[v1] Sun, 14 Jul 2024 20:22:36 UTC (34,284 KB)
[v2] Sun, 26 Jan 2025 04:27:17 UTC (17,273 KB)