
Computer Science > Artificial Intelligence

arXiv:2506.17442 (cs)
[Submitted on 20 Jun 2025]

Title: Keeping Medical AI Healthy: A Review of Detection and Correction Methods for System Degradation


Authors: Hao Guan, David Bates, Li Zhou
Abstract: Artificial intelligence (AI) is increasingly integrated into modern healthcare, offering powerful support for clinical decision-making. However, in real-world settings, AI systems may experience performance degradation over time due to factors such as shifting data distributions, changes in patient characteristics, evolving clinical protocols, and variations in data quality. These factors can compromise model reliability, raising safety concerns and increasing the likelihood of inaccurate predictions or adverse outcomes. This review presents a forward-looking perspective on monitoring and maintaining the "health" of AI systems in healthcare. We highlight the urgent need for continuous performance monitoring, early degradation detection, and effective self-correction mechanisms. The paper begins by reviewing common causes of performance degradation at both the data and model levels. We then summarize key techniques for detecting data and model drift, followed by an in-depth look at root cause analysis. Correction strategies are reviewed next, ranging from model retraining to test-time adaptation. Our survey spans both traditional machine learning models and state-of-the-art large language models (LLMs), offering insights into their strengths and limitations. Finally, we discuss ongoing technical challenges and propose future research directions. This work aims to guide the development of reliable, robust medical AI systems capable of sustaining safe, long-term deployment in dynamic clinical settings.
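The abstract's mention of data-drift detection can be made concrete with a small example. The sketch below is not a method from the paper; it illustrates one standard technique in this space, a per-feature two-sample Kolmogorov–Smirnov test that compares a reference (training-time) sample against a window of incoming production data. The feature names, window size, significance level, and synthetic data are all illustrative assumptions.

```python
# Minimal drift-detection sketch (assumed setup, not the paper's method):
# flag a feature as drifted when a two-sample Kolmogorov-Smirnov test
# rejects the hypothesis that reference and incoming samples share a
# common distribution.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference, incoming, feature_names, alpha=0.01):
    """Return {feature_name: drifted?} by comparing matching columns
    of the reference and incoming samples at significance level alpha."""
    flags = {}
    for j, name in enumerate(feature_names):
        # Null hypothesis: both columns come from the same distribution.
        _stat, p_value = ks_2samp(reference[:, j], incoming[:, j])
        flags[name] = p_value < alpha
    return flags

# Illustrative synthetic data: "lab_value" drifts (mean shift), "age" does not.
rng = np.random.default_rng(0)
reference = np.column_stack([rng.normal(60, 15, 5000),    # age
                             rng.normal(1.0, 0.2, 5000)])  # lab_value
incoming = np.column_stack([rng.normal(60, 15, 500),
                            rng.normal(1.3, 0.2, 500)])    # shifted lab_value
print(detect_drift(reference, incoming, ["age", "lab_value"]))
# Expected output: {'age': False, 'lab_value': True}
```

In a deployed monitor, such a test would typically run on a sliding window of recent inputs with multiple-testing correction across features; persistent flags would then trigger the correction strategies the review surveys, such as retraining or test-time adaptation.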
Comments: 15 pages, 5 figures
Subjects: Artificial Intelligence (cs.AI); Emerging Technologies (cs.ET); Machine Learning (cs.LG)
Cite as: arXiv:2506.17442 [cs.AI]
  (or arXiv:2506.17442v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2506.17442
arXiv-issued DOI via DataCite

Submission history

From: Hao Guan
[v1] Fri, 20 Jun 2025 19:22:07 UTC (564 KB)