Computer Science > Artificial Intelligence

arXiv:2402.07877v1 (cs)
[Submitted on 12 Feb 2024 (this version), latest version 23 Apr 2025 (v4)]

Title: WildfireGPT: Tailored Large Language Model for Wildfire Analysis

Authors: Yangxinyu Xie, Tanwi Mallick, Joshua David Bergerson, John K. Hutchison, Duane R. Verner, Jordan Branham, M. Ross Alexander, Robert B. Ross, Yan Feng, Leslie-Anne Levy, Weijie Su
Abstract: The recent advancement of large language models (LLMs) represents a transformational capability at the frontier of artificial intelligence (AI) and machine learning (ML). However, LLMs are generalized models, trained on extensive text corpora, and often struggle to provide context-specific information, particularly in areas requiring specialized knowledge such as wildfire details within the broader context of climate change. For decision-makers and policymakers focused on wildfire resilience and adaptation, it is crucial to obtain responses that are not only precise but also domain-specific, rather than generic. To that end, we developed WildfireGPT, a prototype LLM agent designed to transform user queries into actionable insights on wildfire risks. We enrich WildfireGPT by providing additional context such as climate projections and scientific literature to ensure its information is current, relevant, and scientifically accurate. This enables WildfireGPT to be an effective tool for delivering detailed, user-specific insights on wildfire risks to support a diverse set of end users, including researchers, engineers, urban planners, emergency managers, and infrastructure operators.
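The abstract describes WildfireGPT as an LLM agent that enriches user queries with domain context such as climate projections and scientific literature before generating a response. The snippet below is a minimal, hypothetical Python sketch of that general retrieval-augmented pattern only; the toy corpus, the keyword-based retrieve() function, and the stubbed generate() call are illustrative assumptions and are not taken from the paper.

    # Hypothetical sketch (not the authors' code): retrieval-augmented prompting
    # in the spirit described by the abstract, runnable offline with a toy corpus.
    from dataclasses import dataclass

    @dataclass
    class Document:
        source: str
        text: str

    # Toy stand-ins for climate projections and scientific-literature excerpts.
    CORPUS = [
        Document("climate projection", "Mid-century projections indicate longer fire seasons and drier fuels across much of the western United States."),
        Document("scientific literature", "Fuel treatments near the wildland-urban interface are reported to reduce expected structure loss."),
    ]

    def retrieve(query, corpus, k=2):
        # Naive keyword-overlap ranking; a placeholder for a real retriever or index.
        q_terms = set(query.lower().split())
        return sorted(corpus, key=lambda d: -len(q_terms & set(d.text.lower().split())))[:k]

    def build_prompt(query, context):
        # Assemble a context-enriched prompt so the answer stays grounded in domain sources.
        context_block = "\n".join(f"[{d.source}] {d.text}" for d in context)
        return ("You are a wildfire-risk assistant. Answer using only the context below.\n"
                f"Context:\n{context_block}\n\nQuestion: {query}\nAnswer:")

    def generate(prompt):
        # Placeholder for any LLM completion call; echoes metadata so the sketch runs without an API key.
        return f"(an LLM would respond here; prompt length = {len(prompt)} characters)"

    def answer(query):
        return generate(build_prompt(query, retrieve(query, CORPUS)))

    if __name__ == "__main__":
        print(answer("How will fire seasons change by mid-century near the wildland-urban interface?"))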
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2402.07877 [cs.AI]
  (or arXiv:2402.07877v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2402.07877
arXiv-issued DOI via DataCite

Submission history

From: Yangxinyu Xie
[v1] Mon, 12 Feb 2024 18:41:55 UTC (1,415 KB)
[v2] Wed, 28 Aug 2024 19:01:23 UTC (6,256 KB)
[v3] Fri, 28 Mar 2025 17:14:39 UTC (11,571 KB)
[v4] Wed, 23 Apr 2025 03:30:33 UTC (6,256 KB)