
Statistics > Machine Learning

arXiv:2506.01143 (stat)
[Submitted on 1 Jun 2025 (v1), last revised 28 Oct 2025 (this version, v2)]

Title: Linear regression with overparameterized linear neural networks: Tight upper and lower bounds for implicit $\ell^1$-regularization

Authors: Hannes Matt, Dominik Stöger
Abstract: Modern machine learning models are often trained in a setting where the number of parameters exceeds the number of training samples. To understand the implicit bias of gradient descent in such overparameterized models, prior work has studied diagonal linear neural networks in the regression setting. These studies have shown that, when initialized with small weights, gradient descent tends to favor solutions with minimal $\ell^1$-norm - an effect known as implicit regularization. In this paper, we investigate implicit regularization in diagonal linear neural networks of depth $D\ge 2$ for overparameterized linear regression problems. We focus on analyzing the approximation error between the limit point of gradient flow trajectories and the solution to the $\ell^1$-minimization problem. By deriving tight upper and lower bounds on the approximation error, we precisely characterize how the approximation error depends on the scale of initialization $\alpha$. Our results reveal a qualitative difference between depths: for $D \ge 3$, the error decreases linearly with $\alpha$, whereas for $D=2$, it decreases at rate $\alpha^{1-\varrho}$, where the parameter $\varrho \in [0,1)$ can be explicitly characterized. Interestingly, this parameter is closely linked to so-called null space property constants studied in the sparse recovery literature. We demonstrate the asymptotic tightness of our bounds through explicit examples. Numerical experiments corroborate our theoretical findings and suggest that deeper networks, i.e., $D \ge 3$, may lead to better generalization, particularly for realistic initialization scales.
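As a rough illustration of the setting described in the abstract, the following minimal sketch (not the authors' code) trains a depth-$D$ diagonal linear network by plain gradient descent from a small initialization $\alpha$ and compares the result to the minimum-$\ell^1$-norm interpolator. It assumes the standard diagonal parameterization $\beta = u^{\odot D} - v^{\odot D}$ (elementwise powers), uses gradient descent as a stand-in for gradient flow, and obtains the $\ell^1$ minimizer from SciPy's linprog; the problem sizes, learning rate, and value of $\alpha$ are illustrative choices, not taken from the paper.

# Minimal sketch of implicit l1-regularization in a depth-D diagonal linear
# network. Hyperparameters and problem sizes are illustrative; exact numbers
# depend on the random seed.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d, D, alpha = 20, 50, 3, 1e-2   # samples, features, depth, init scale

# Sparse ground truth and noiseless measurements; overparameterized since d > n.
beta_star = np.zeros(d)
beta_star[:3] = [1.0, -2.0, 0.5]
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = X @ beta_star

# Depth-D diagonal linear network: beta(u, v) = u**D - v**D (elementwise),
# squared loss 0.5 * ||X beta - y||^2, trained by gradient descent.
u = alpha * np.ones(d)
v = alpha * np.ones(d)
lr = 1e-2
for _ in range(200_000):
    beta = u**D - v**D
    g = X.T @ (X @ beta - y)          # gradient of the loss w.r.t. beta
    u -= lr * D * u**(D - 1) * g      # chain rule through u**D
    v += lr * D * v**(D - 1) * g      # chain rule through -v**D
beta_gd = u**D - v**D

# Minimum-l1-norm interpolator via a linear program:
# minimize 1^T t  subject to  X beta = y,  -t <= beta <= t,  t >= 0.
c = np.concatenate([np.zeros(d), np.ones(d)])
A_eq = np.hstack([X, np.zeros((n, d))])
A_ub = np.block([[np.eye(d), -np.eye(d)],
                 [-np.eye(d), -np.eye(d)]])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * d), A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * d + [(0, None)] * d)
beta_l1 = res.x[:d]

print("distance to l1 minimizer:", np.linalg.norm(beta_gd - beta_l1))
print("l1 norms (GD vs. LP):    ", np.abs(beta_gd).sum(), np.abs(beta_l1).sum())

On such a toy instance the two solutions should be close, and rerunning the sketch with smaller $\alpha$ should shrink the gap roughly linearly for $D = 3$, in line with the qualitative behavior stated in the abstract; for $D = 2$ the decay is slower, at rate $\alpha^{1-\varrho}$.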
Subjects: Machine Learning (stat.ML); Information Theory (cs.IT); Machine Learning (cs.LG); Optimization and Control (math.OC)
Cite as: arXiv:2506.01143 [stat.ML]
  (or arXiv:2506.01143v2 [stat.ML] for this version)
  https://doi.org/10.48550/arXiv.2506.01143
arXiv-issued DOI via DataCite

Submission history

From: Hannes Matt
[v1] Sun, 1 Jun 2025 19:55:31 UTC (440 KB)
[v2] Tue, 28 Oct 2025 12:49:08 UTC (442 KB)