Computer Science > Social and Information Networks

arXiv:2510.00024v1 (cs)
[Submitted on 24 Sep 2025]

Title: EpidemIQs: Prompt-to-Paper LLM Agents for Epidemic Modeling and Analysis


Authors:Mohammad Hossein Samaei, Faryad Darabi Sahneh, Lee W. Cohnstaedt, Caterina Scoglio
Abstract: Large Language Models (LLMs) offer new opportunities to automate complex interdisciplinary research domains. Epidemic modeling, characterized by its complexity and its reliance on network science, dynamical systems, epidemiology, and stochastic simulations, is a prime candidate for LLM-driven automation. We introduce EpidemIQs, a novel multi-agent LLM framework that integrates user inputs and autonomously conducts literature review, analytical derivation, network modeling, mechanistic modeling, stochastic simulation, data visualization and analysis, and, finally, documentation of findings in a structured manuscript. We introduce two types of agents: a scientist agent, responsible for planning, coordination, reflection, and generation of the final results, and task-expert agents, each focused exclusively on one specific duty and serving as a tool for the scientist agent. The framework consistently generated complete reports in scientific-article format. Specifically, using GPT-4.1 and GPT-4.1 mini as backbone LLMs for the scientist and task-expert agents, respectively, the autonomous process completed with an average total token usage of 870K, at a cost of about $1.57 per study, and achieved a 100% completion success rate across our experiments. We evaluate EpidemIQs across different epidemic scenarios, measuring computational cost, completion success rate, and AI and human expert reviews of the generated reports. We compare EpidemIQs to a single-agent LLM with the same system prompts and tools, which iteratively plans, invokes tools, and revises outputs until task completion. The comparison shows consistently higher performance for the proposed framework across five different scenarios. EpidemIQs represents a step toward accelerating scientific research by significantly reducing the cost and turnaround time of discovery processes and by enhancing access to advanced modeling tools.
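The abstract describes a two-tier architecture: a scientist agent that plans and coordinates, and single-duty task-expert agents exposed to it as tools. A minimal sketch of that delegation pattern is below; all class names, duty labels, and the fixed pipeline are illustrative assumptions, with simple stubs standing in for the paper's LLM-backed agents.

```python
# Sketch of the scientist / task-expert pattern (hypothetical names;
# stub functions replace the LLM backends used in the paper).
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class TaskExpert:
    """A task-expert agent: focuses exclusively on one duty."""
    name: str
    run: Callable[[str], str]


@dataclass
class ScientistAgent:
    """Plans the study, invokes experts in order, assembles the report."""
    experts: Dict[str, TaskExpert]
    log: List[str] = field(default_factory=list)

    def conduct_study(self, topic: str, plan: List[str]) -> str:
        sections = []
        for duty in plan:
            expert = self.experts[duty]          # each duty maps to one expert
            sections.append(expert.run(topic))   # expert acts as a tool
            self.log.append(f"{expert.name}: done")
        # Reflection/assembly step: join expert outputs into one document.
        return "\n".join(sections)


# Stub experts standing in for LLM-backed workers.
experts = {
    duty: TaskExpert(duty, lambda topic, d=duty: f"[{d}] findings on {topic}")
    for duty in ("literature_review", "network_modeling", "simulation", "writeup")
}

scientist = ScientistAgent(experts)
report = scientist.conduct_study(
    "SIR epidemic on a contact network",
    ["literature_review", "network_modeling", "simulation", "writeup"],
)
```

The key design choice this illustrates is that experts hold no global state: the scientist agent owns the plan and the accumulated results, which is what allows a cheaper model to serve each narrow duty.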
Subjects: Social and Information Networks (cs.SI); Artificial Intelligence (cs.AI)
Cite as: arXiv:2510.00024 [cs.SI]
  (or arXiv:2510.00024v1 [cs.SI] for this version)
  https://doi.org/10.48550/arXiv.2510.00024
arXiv-issued DOI via DataCite

Submission history

From: Mohammad Hossein Samaei
[v1] Wed, 24 Sep 2025 18:54:56 UTC (29,023 KB)