Computer Science > Computers and Society

arXiv:2506.06355 (cs)
[Submitted on 2 Jun 2025]

Title: LLMs as World Models: Data-Driven and Human-Centered Pre-Event Simulation for Disaster Impact Assessment

Authors: Lingyao Li, Dawei Li, Zhenhui Ou, Xiaoran Xu, Jingxiao Liu, Zihui Ma, Runlong Yu, Min Deng
Abstract: Efficient simulation is essential for enhancing proactive preparedness for sudden-onset disasters such as earthquakes. Recent advances in large language models (LLMs) as world models show promise in simulating complex scenarios. This study examines multiple LLMs for proactively estimating perceived earthquake impacts. Leveraging multimodal datasets including geospatial, socioeconomic, building, and street-level imagery data, our framework generates Modified Mercalli Intensity (MMI) predictions at zip code and county scales. Evaluations on the 2014 Napa and 2019 Ridgecrest earthquakes using USGS "Did You Feel It?" (DYFI) reports demonstrate strong alignment with real reports at the zip code level, with a correlation of 0.88 and an RMSE of 0.77. Techniques such as retrieval-augmented generation (RAG) and in-context learning (ICL) can improve simulation performance, while visual inputs notably enhance accuracy compared with structured numerical data alone. These findings show the promise of LLMs in simulating disaster impacts, which can help strengthen pre-event planning.
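The abstract evaluates alignment between simulated and observed shaking intensity using Pearson correlation and RMSE at the zip code level. The snippet below is a minimal sketch, not the authors' code, of how such a comparison could be computed; the input file and column names (zip, mmi_dyfi, mmi_llm) are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): compare LLM-simulated MMI
# against USGS "Did You Feel It?" (DYFI) reports aggregated by zip code.
# The CSV file and column names below are illustrative assumptions.
import numpy as np
import pandas as pd


def evaluate_mmi(df: pd.DataFrame) -> dict:
    """Compute Pearson correlation and RMSE between observed and simulated MMI.

    Expects one row per zip code with columns 'mmi_dyfi' (DYFI-reported)
    and 'mmi_llm' (LLM-simulated).
    """
    obs = df["mmi_dyfi"].to_numpy(dtype=float)
    sim = df["mmi_llm"].to_numpy(dtype=float)
    corr = float(np.corrcoef(obs, sim)[0, 1])         # Pearson r
    rmse = float(np.sqrt(np.mean((sim - obs) ** 2)))  # root-mean-square error
    return {"pearson_r": corr, "rmse": rmse}


if __name__ == "__main__":
    # Hypothetical per-zip-code table for one event (e.g., the 2014 Napa earthquake).
    df = pd.read_csv("napa_2014_zip_mmi.csv")
    print(evaluate_mmi(df))
```

On the MMI scale (I to XII), a correlation near 0.88 together with an RMSE near 0.77, as reported in the abstract, would indicate close agreement between simulated and reported intensities.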
Subjects: Computers and Society (cs.CY); Computational Engineering, Finance, and Science (cs.CE); Computation and Language (cs.CL); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2506.06355 [cs.CY]
  (or arXiv:2506.06355v1 [cs.CY] for this version)
  https://doi.org/10.48550/arXiv.2506.06355
arXiv-issued DOI via DataCite

Submission history

From: Lingyao Li
[v1] Mon, 2 Jun 2025 22:07:53 UTC (2,257 KB)