
Computer Science > Machine Learning

arXiv:2304.03805 (cs)
[Submitted on 7 Apr 2023]

Title: Correcting Model Misspecification via Generative Adversarial Networks


Authors: Pronoma Banerjee, Manasi V Gude, Rajvi J Sampat, Sharvari M Hedaoo, Soma Dhavala, Snehanshu Saha
Abstract: Machine learning models are often misspecified in the likelihood, which leads to a lack of robustness in the predictions. In this paper, we introduce a framework for correcting likelihood misspecification in several paradigm-agnostic noisy prior models and test the model's ability to remove the misspecification. The proposed "ABC-GAN" framework is a novel generative modeling paradigm that combines Generative Adversarial Networks (GANs) and Approximate Bayesian Computation (ABC). This new paradigm augments existing GANs by incorporating, via ABC, any subjective knowledge available about the modeling process as a regularizer, resulting in a partially interpretable model that operates well in low-data regimes. At the same time, unlike a conventional Bayesian analysis, the explicit knowledge need not be perfect, since the generator in the GAN can be made arbitrarily complex. ABC-GAN eliminates the need for summary statistics and distance metrics, as the discriminator implicitly learns them, and enables simultaneous specification of multiple generative models. In our experiments, model misspecification is simulated by introducing noise of various biases and variances. The correction term is learned via an ABC-GAN with skip connections, referred to as skipGAN. The strength of the skip connection indicates the amount of correction needed, i.e., how misspecified the prior model is. Based on a simple experimental setup, we show that the ABC-GAN models not only correct the misspecification of the prior, but also perform as well as or better than the respective priors under noisier conditions. In this proposal, we show that ABC-GANs get the best of both worlds.
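The skip-connection correction described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the linear `true_process`, the biased `misspecified_prior`, the bias value 0.5, and the hand-set correction term are all assumptions chosen only to show how a skip-weighted correction can cancel a prior model's misspecification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins (not from the paper): a true linear process and a
# prior model misspecified by biased, over-dispersed additive noise.
def true_process(z):
    return 2.0 * z + 1.0

def misspecified_prior(z):
    return true_process(z) + rng.normal(loc=0.5, scale=1.0, size=z.shape)

# Skip-connection generator: the prior's sample plus a correction term
# weighted by a skip strength alpha.  In the paper's terms, a large
# |alpha| signals a badly misspecified prior model.
def skip_generator(z, alpha, correction):
    return misspecified_prior(z) + alpha * correction(z)

z = rng.normal(size=10_000)
# If adversarial training recovered the bias, the correction would cancel
# it; here the known bias is plugged in by hand for illustration.
corrected = skip_generator(z, alpha=1.0, correction=lambda z: -0.5 * np.ones_like(z))
residual_bias = np.mean(corrected - true_process(z))
print(f"residual bias after correction: {residual_bias:.3f}")
```

In the actual framework, the correction term and skip strength would be learned adversarially against the discriminator, with ABC supplying the prior model as a regularizer rather than a fixed function.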
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2304.03805 [cs.LG]
  (or arXiv:2304.03805v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2304.03805
arXiv-issued DOI via DataCite

Submission history

From: Snehanshu Saha [view email]
[v1] Fri, 7 Apr 2023 18:20:38 UTC (256 KB)
