
Statistics > Methodology

arXiv:2407.01883 (stat)
[Submitted on 2 Jul 2024 (v1), last revised 27 Mar 2025 (this version, v2)]

Title: Robust Linear Mixed Models using Hierarchical Gamma-Divergence


Authors: Shonosuke Sugasawa, Francis K. C. Hui, Alan H. Welsh
Abstract: Linear mixed models (LMMs) are a popular class of methods for analyzing longitudinal and clustered data. However, such models can be sensitive to outliers, and this can lead to biased inference on model parameters and inaccurate prediction of random effects if the data are contaminated. We propose a new approach to robust estimation and inference for LMMs using a hierarchical gamma-divergence, which offers an automated, data-driven approach to downweight the effects of outliers occurring in both the error and the random effects, using normalized powered density weights. For estimation and inference, we develop a computationally scalable minorization-maximization algorithm for the resulting objective function, along with a clustered bootstrap method for uncertainty quantification and a Hyvärinen score criterion for selecting a tuning parameter controlling the degree of robustness. Under suitable regularity conditions, we show that the resulting robust estimates can be asymptotically controlled even under a heavy level of (covariate-dependent) contamination. Simulation studies demonstrate that hierarchical gamma-divergence consistently outperforms several currently available methods for robustifying LMMs. We also illustrate the proposed method using data from a multi-center AIDS cohort study.
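The "normalized powered density weights" mentioned in the abstract can be illustrated with a minimal sketch (this is not the authors' implementation; the univariate normal working density, the data values, and the choice of gamma below are all illustrative assumptions). Each observation receives a weight proportional to its model density raised to the power gamma, normalized to sum to one, so observations in low-density regions (outliers) are automatically downweighted:

```python
import math

def normal_pdf(y, mu, sigma):
    """Density of N(mu, sigma^2) at y."""
    z = (y - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def gamma_weights(ys, mu, sigma, gamma):
    """Normalized powered density weights: w_i proportional to f(y_i)^gamma."""
    powered = [normal_pdf(y, mu, sigma) ** gamma for y in ys]
    total = sum(powered)
    return [p / total for p in powered]

# Four inliers near 0 and one gross outlier at 8.0.
ys = [-0.3, 0.1, 0.2, -0.1, 8.0]
w = gamma_weights(ys, mu=0.0, sigma=1.0, gamma=0.5)
# The outlier's weight is driven toward zero; inliers share the mass.
```

Larger values of gamma downweight outliers more aggressively (gamma -> 0 recovers ordinary maximum likelihood weighting), which is why the paper pairs the divergence with a data-driven criterion for choosing this tuning parameter.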
Comments: 36 pages (main) + 14 pages (supplement)
Subjects: Methodology (stat.ME)
Cite as: arXiv:2407.01883 [stat.ME]
  (or arXiv:2407.01883v2 [stat.ME] for this version)
  https://doi.org/10.48550/arXiv.2407.01883
arXiv-issued DOI via DataCite

Submission history

From: Shonosuke Sugasawa
[v1] Tue, 2 Jul 2024 02:05:33 UTC (282 KB)
[v2] Thu, 27 Mar 2025 08:32:43 UTC (1,858 KB)