Computer Science > Databases

arXiv:2407.02994 (cs)
[Submitted on 3 Jul 2024 (v1), last revised 17 Jul 2025 (this version, v5)]

Title: MedPix 2.0: A Comprehensive Multimodal Biomedical Data set for Advanced AI Applications with Retrieval Augmented Generation and Knowledge Graphs

Authors:Irene Siragusa, Salvatore Contino, Massimo La Ciura, Rosario Alicata, Roberto Pirrone
Abstract: The growing interest in developing Artificial Intelligence applications in the medical domain suffers from the lack of high-quality data sets, mainly due to privacy-related issues. In addition, the recent rise of Vision Language Models (VLMs) creates the need for multimodal medical data sets, in which clinical reports and findings are attached to the corresponding medical scans. This paper illustrates the entire workflow for building the MedPix 2.0 data set. Starting from the well-known multimodal data set MedPix®, used mainly by physicians, nurses, and healthcare students for Continuing Medical Education, a semi-automatic pipeline was developed to extract visual and textual data, followed by a manual curation procedure in which noisy samples were removed, thus creating a MongoDB database. Along with the data set, we developed a Graphical User Interface for navigating the MongoDB instance efficiently and obtaining the raw data that can be readily used for training and/or fine-tuning VLMs. To reinforce this point, we first recall DR-Minerva, a Retrieval Augmented Generation-based VLM trained on MedPix 2.0, which predicts the body part and the imaging modality of its input image. We also propose extending DR-Minerva with a Knowledge Graph that uses Llama 3.1 Instruct 8B and leverages MedPix 2.0. The resulting architecture can be queried in an end-to-end manner as a medical decision support system. MedPix 2.0 is available on GitHub.
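As an illustration of how the curated data set might be consumed, the sketch below queries a local MongoDB instance holding MedPix 2.0 cases and collects image/report pairs for VLM fine-tuning. This is a minimal, hypothetical example: the database, collection, and field names (medpix, cases, image_path, report, modality, location) are illustrative assumptions and are not taken from the data set's documented schema.

from pymongo import MongoClient  # assumes a locally running MongoDB instance

client = MongoClient("mongodb://localhost:27017")
cases = client["medpix"]["cases"]  # hypothetical database/collection names

# Hypothetical fields: restrict to MRI scans of the head and pair each image
# with its clinical report, yielding raw multimodal samples for fine-tuning.
samples = [
    (doc["image_path"], doc["report"])
    for doc in cases.find({"modality": "MRI", "location": "Head"})
]
print(f"Collected {len(samples)} image/report pairs")

According to the abstract, the released Graphical User Interface serves the same purpose interactively, letting users navigate the MongoDB instance and export the raw data for training or fine-tuning VLMs.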
Subjects: Databases (cs.DB); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2407.02994 [cs.DB]
  (or arXiv:2407.02994v5 [cs.DB] for this version)
  https://doi.org/10.48550/arXiv.2407.02994
arXiv-issued DOI via DataCite
Journal reference: ITADATA/2024/09
Related DOI: https://doi.org/10.1007/s41019-025-00297-8
DOI(s) linking to related resources

Submission history

From: Salvatore Contino
[v1] Wed, 3 Jul 2024 10:49:21 UTC (8,363 KB)
[v2] Wed, 8 Jan 2025 13:35:45 UTC (15,594 KB)
[v3] Wed, 9 Apr 2025 16:57:40 UTC (8,168 KB)
[v4] Wed, 30 Apr 2025 11:41:49 UTC (10,503 KB)
[v5] Thu, 17 Jul 2025 12:30:16 UTC (8,169 KB)