Statistics > Methodology

arXiv:2506.12771 (stat)
[Submitted on 15 Jun 2025]

Title: A Residual Prediction Test for the Well-Specification of Linear Instrumental Variable Models


Authors: Cyrill Scheidegger, Malte Londschien, Peter Bühlmann
Abstract: The linear instrumental variable (IV) model is widely applied in observational studies. The corresponding assumptions are critical for valid causal inference, and hence, it is important to have tools to assess the model's well-specification. The classical Sargan-Hansen J-test is limited to the overidentified setting, where the number of instruments is larger than the number of endogenous variables. Here, we propose a novel and simple test for the well-specification of the linear IV model under the assumption that the structural error is mean independent of the instruments. Importantly, assuming mean independence allows the construction of such a test even in the just-identified setting. We use the idea of residual prediction tests: if the residuals from two-stage least squares can be predicted from the instruments better than randomly, this signals misspecification. We construct a test statistic based on sample splitting and a user-chosen machine learning method. We show asymptotic type I error control. Furthermore, by relying on machine learning tools, our test has good power for detecting alternatives from a broad class of scenarios. We also address heteroskedasticity- and cluster-robust inference. The test is implemented in the R package RPIV and in the ivmodels software package for Python.
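The core idea can be illustrated with a small, self-contained sketch (a heuristic illustration only, not the paper's exact statistic and not the RPIV or ivmodels API): under the linear IV model y = Xβ + ε with the mean-independence assumption E[ε | Z] = 0, compute two-stage least squares residuals, then check on held-out data whether a user-chosen machine learning method can predict those residuals from the instruments better than chance. The function name, the random-forest learner, and the normal-approximation p-value below are illustrative choices.

```python
# Hedged sketch of a residual prediction test for a linear IV model.
# Not the authors' implementation; uses numpy, scikit-learn and scipy.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy import stats

def residual_prediction_test(Z, X, y, seed=0):
    """Illustrative test of well-specification: H0 is that the structural
    error is mean independent of the instruments Z (hypothetical helper)."""
    n = len(y)

    # Two-stage least squares: project X onto the instruments, then
    # regress y on the fitted values.
    Pi, *_ = np.linalg.lstsq(Z, X, rcond=None)
    X_hat = Z @ Pi
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    resid = y - X @ beta  # structural residuals

    # Sample splitting: learn a predictor of the residuals from Z on one half.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    train, test = idx[: n // 2], idx[n // 2 :]
    ml = RandomForestRegressor(random_state=seed).fit(Z[train], resid[train])
    pred = ml.predict(Z[test])

    # If the learned predictions track the held-out residuals, this signals
    # misspecification. Heuristic check: normal approximation for the mean
    # of (centered prediction) * residual on the test half.
    prod = (pred - pred.mean()) * resid[test]
    tstat = np.sqrt(len(prod)) * prod.mean() / prod.std(ddof=1)
    pval = stats.norm.sf(tstat)  # one-sided: only positive association counts
    return tstat, pval
```

For real applications, the abstract points to the R package RPIV and the Python ivmodels package, which implement the actual test with asymptotic type I error control and heteroskedasticity- and cluster-robust variants.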
Subjects: Methodology (stat.ME)
Cite as: arXiv:2506.12771 [stat.ME]
  (or arXiv:2506.12771v1 [stat.ME] for this version)
  https://doi.org/10.48550/arXiv.2506.12771
arXiv-issued DOI via DataCite

Submission history

From: Cyrill Scheidegger
[v1] Sun, 15 Jun 2025 08:42:48 UTC (71 KB)