
Condensed Matter > Materials Science

arXiv:2309.00195v1 (cond-mat)
[Submitted on 1 Sep 2023]

Title: On the Uncertainty Estimates of Equivariant-Neural-Network-Ensembles Interatomic Potentials


Authors: Shuaihua Lu, Luca M. Ghiringhelli, Christian Carbogno, Jinlan Wang, Matthias Scheffler
Abstract: Machine-learning (ML) interatomic potentials (IPs) trained on first-principles datasets are becoming increasingly popular, since they promise to treat larger system sizes and longer time scales than the ab initio techniques that produce the training data. Estimating the accuracy of MLIPs and reliably detecting when their predictions become inaccurate is key to enabling their dependable use. In this paper, we explore this aspect for a specific class of MLIPs, the equivariant-neural-network (ENN) IPs, using the ensemble technique to quantify their prediction uncertainties. We critically examine the robustness of these uncertainties when the ENN ensemble IP (ENNE-IP) is applied to the realistic and physically relevant scenario of predicting local-minimum structures in configurational space. The ENNE-IP is trained on data for liquid silicon, created by density-functional theory (DFT) with the generalized gradient approximation (GGA) for the exchange-correlation functional. The ensemble-derived uncertainties are then compared with the actual errors (the deviations of the ENNE-IP results from those of the underlying DFT-GGA theory) for various test sets, including liquid silicon at different temperatures and out-of-training-domain data such as solid phases with and without point defects as well as surfaces. Our study reveals that the predicted uncertainties are generally overconfident and hold little quantitative predictive power for the actual errors.
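The ensemble approach to uncertainty quantification referred to in the abstract can be sketched in a few lines: several independently initialized ENN models are trained on the same DFT data, the spread of their predictions is taken as the uncertainty, and this uncertainty is then compared with the actual error against the DFT reference. The sketch below is a minimal, hypothetical illustration under these assumptions; the committee size, the energy values, and the use of per-structure energies are invented for illustration and are not taken from the paper.

import numpy as np

def ensemble_uncertainty(predictions: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Given predictions of shape (n_models, n_structures), return the
    ensemble mean and the ensemble standard deviation (the uncertainty)."""
    mean = predictions.mean(axis=0)
    std = predictions.std(axis=0, ddof=1)  # spread across committee members
    return mean, std

# Hypothetical example: energies (eV/atom) from 5 ENN committee members for
# 4 test structures, together with hypothetical DFT-GGA reference values.
ensemble_energies = np.array([
    [-5.421, -5.310, -4.980, -5.102],
    [-5.418, -5.305, -4.992, -5.110],
    [-5.425, -5.312, -4.975, -5.095],
    [-5.419, -5.308, -4.988, -5.108],
    [-5.423, -5.311, -4.983, -5.101],
])
dft_reference = np.array([-5.430, -5.300, -4.930, -5.140])

mean, uncertainty = ensemble_uncertainty(ensemble_energies)
actual_error = np.abs(mean - dft_reference)

# An uncertainty estimate is "overconfident" when the actual error routinely
# exceeds the predicted uncertainty, i.e. actual_error / uncertainty >> 1.
for i, (u, e) in enumerate(zip(uncertainty, actual_error)):
    print(f"structure {i}: uncertainty = {u:.3f} eV/atom, actual error = {e:.3f} eV/atom")

In this toy example, as in the paper's finding, the committee spread can be much smaller than the deviation from the DFT reference, which is the signature of an overconfident uncertainty estimate.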
Subjects: Materials Science (cond-mat.mtrl-sci)
Cite as: arXiv:2309.00195 [cond-mat.mtrl-sci]
  (or arXiv:2309.00195v1 [cond-mat.mtrl-sci] for this version)
  https://doi.org/10.48550/arXiv.2309.00195
arXiv-issued DOI via DataCite

Submission history

From: Luca Ghiringhelli
[v1] Fri, 1 Sep 2023 01:18:18 UTC (7,174 KB)