Computer Science > Machine Learning

arXiv:1911.12927 (cs)
[Submitted on 29 Nov 2019]

Title: Richer priors for infinitely wide multi-layer perceptrons

Authors: Russell Tsuchida, Fred Roosta, Marcus Gallagher
Abstract: It is well-known that the distribution over functions induced through a zero-mean iid prior distribution over the parameters of a multi-layer perceptron (MLP) converges to a Gaussian process (GP), under mild conditions. We extend this result firstly to independent priors with general zero or non-zero means, and secondly to a family of partially exchangeable priors which generalise iid priors. We discuss how the second prior arises naturally when considering an equivalence class of functions in an MLP and through training processes such as stochastic gradient descent. The model resulting from partially exchangeable priors is a GP, with an additional level of inference in the sense that the prior and posterior predictive distributions require marginalisation over hyperparameters. We derive the kernels of the limiting GP in deep MLPs, and show empirically that these kernels avoid certain pathologies present in previously studied priors. We empirically evaluate our claims of convergence by measuring the maximum mean discrepancy between finite width models and limiting models. We compare the performance of our new limiting model to some previously discussed models on synthetic regression problems. We observe increasing ill-conditioning of the marginal likelihood and hyper-posterior as the depth of the model increases, drawing parallels with finite width networks which require notoriously involved optimisation tricks.
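For context, here is a minimal sketch (not the authors' code) of the classical zero-mean iid limit that the abstract takes as its starting point: the layer-wise recursion for the limiting GP kernel of a deep ReLU MLP. The ReLU nonlinearity, the hyperparameter names sigma_w and sigma_b, and the NumPy implementation are illustrative assumptions; the paper's richer priors (non-zero means, partial exchangeability) give different limiting kernels and an extra level of hyperparameter marginalisation not shown here.

```python
# Sketch of the standard zero-mean iid NNGP kernel recursion for a deep
# ReLU MLP (the baseline limit the paper extends); hyperparameter names
# sigma_w, sigma_b and the ReLU choice are illustrative assumptions.
import numpy as np

def nngp_kernel(X1, X2, depth=3, sigma_w=1.0, sigma_b=0.1):
    """Limiting GP covariance between outputs at X1 and X2 for an MLP
    with `depth` ReLU hidden layers and zero-mean iid Gaussian priors."""
    d = X1.shape[1]
    # Layer-0 (input) covariances.
    K12 = sigma_b**2 + sigma_w**2 * (X1 @ X2.T) / d
    K11 = sigma_b**2 + sigma_w**2 * np.sum(X1**2, axis=1) / d  # variances at X1
    K22 = sigma_b**2 + sigma_w**2 * np.sum(X2**2, axis=1) / d  # variances at X2

    for _ in range(depth):
        # E[relu(u) relu(v)] for jointly Gaussian (u, v): arc-cosine kernel.
        norms = np.sqrt(np.outer(K11, K22))
        cos_t = np.clip(K12 / norms, -1.0, 1.0)
        theta = np.arccos(cos_t)
        E = norms * (np.sin(theta) + (np.pi - theta) * cos_t) / (2 * np.pi)
        K12 = sigma_b**2 + sigma_w**2 * E
        # Diagonal recursion: E[relu(u)^2] = k/2 for u ~ N(0, k).
        K11 = sigma_b**2 + sigma_w**2 * K11 / 2
        K22 = sigma_b**2 + sigma_w**2 * K22 / 2
    return K12
```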
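The empirical convergence check mentioned in the abstract can be illustrated, under the same assumptions, by an unbiased MMD^2 estimate between prior function draws of a finite-width MLP and draws from the limiting GP at a fixed set of test inputs. The one-hidden-layer architecture, Gaussian kernel bandwidth, width, and sample counts below are placeholders, not values from the paper.

```python
# Hedged illustration of a finite-width vs. limiting-GP convergence check
# via unbiased MMD^2 with a Gaussian kernel; architecture, width, bandwidth
# and sample sizes are placeholders, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)

def mlp_prior_sample(X, width, sigma_w=1.0, sigma_b=0.1):
    """One function drawn from a 1-hidden-layer ReLU MLP with zero-mean
    iid Gaussian weights and biases, evaluated at the rows of X."""
    d = X.shape[1]
    W1 = rng.normal(0, sigma_w / np.sqrt(d), size=(d, width))
    b1 = rng.normal(0, sigma_b, size=width)
    W2 = rng.normal(0, sigma_w / np.sqrt(width), size=(width, 1))
    b2 = rng.normal(0, sigma_b)
    return (np.maximum(X @ W1 + b1, 0) @ W2 + b2).ravel()

def mmd2_unbiased(A, B, bandwidth=1.0):
    """Unbiased MMD^2 between sample sets A (m, p) and B (n, p),
    Gaussian kernel with the given bandwidth."""
    def gram(U, V):
        sq = np.sum(U**2, 1)[:, None] + np.sum(V**2, 1)[None, :] - 2 * U @ V.T
        return np.exp(-sq / (2 * bandwidth**2))
    Kaa, Kbb, Kab = gram(A, A), gram(B, B), gram(A, B)
    m, n = len(A), len(B)
    return ((Kaa.sum() - np.trace(Kaa)) / (m * (m - 1))
            + (Kbb.sum() - np.trace(Kbb)) / (n * (n - 1))
            - 2 * Kab.mean())

# Compare finite-width draws against draws from the limiting GP
# (nngp_kernel from the sketch above, depth=1 to match the MLP).
X_test = rng.normal(size=(5, 2))
finite = np.stack([mlp_prior_sample(X_test, width=512) for _ in range(200)])
K = nngp_kernel(X_test, X_test, depth=1) + 1e-9 * np.eye(5)  # jitter for PSD
gp = rng.multivariate_normal(np.zeros(5), K, size=200)
print("MMD^2 estimate:", mmd2_unbiased(finite, gp))
```

As the hidden width grows, the estimate should drift toward zero, which is the qualitative behaviour the paper quantifies when comparing finite-width models with its limiting models.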
Comments: Pre-print
Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML)
Cite as: arXiv:1911.12927 [cs.LG]
  (or arXiv:1911.12927v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.1911.12927
arXiv-issued DOI via DataCite

Submission history

From: Russell Tsuchida B.E.
[v1] Fri, 29 Nov 2019 02:34:35 UTC (9,424 KB)