Computer Science > Artificial Intelligence

arXiv:2402.07877 (cs)
[Submitted on 12 Feb 2024 (v1), last revised 23 Apr 2025 (this version, v4)]

Title: WildfireGPT: Tailored Large Language Model for Wildfire Analysis


Authors: Yangxinyu Xie, Bowen Jiang, Tanwi Mallick, Joshua David Bergerson, John K. Hutchison, Duane R. Verner, Jordan Branham, M. Ross Alexander, Robert B. Ross, Yan Feng, Leslie-Anne Levy, Weijie Su, Camillo J. Taylor
Abstract: Recent advances in large language models (LLMs) represent a transformational capability at the frontier of artificial intelligence. However, LLMs are generalized models, trained on extensive text corpora, and often struggle to provide context-specific information, particularly in areas requiring specialized knowledge, such as wildfire details within the broader context of climate change. For decision-makers focused on wildfire resilience and adaptation, it is crucial to obtain responses that are not only precise but also domain-specific. To that end, we developed WildfireGPT, a prototype LLM agent designed to transform user queries into actionable insights on wildfire risks. We enrich WildfireGPT by providing additional context, such as climate projections and scientific literature, to ensure its information is current, relevant, and scientifically accurate. This enables WildfireGPT to serve as an effective tool for delivering detailed, user-specific insights on wildfire risks to a diverse set of end users, including but not limited to researchers and engineers, in support of informed decision-making and positive impact.
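The abstract describes grounding the LLM agent's answers in retrieved domain context such as climate projections and scientific literature. As a rough illustration of that retrieval-augmented pattern only, here is a minimal, self-contained Python sketch; the class and function names, the toy keyword-overlap retrieval, and the placeholder data are assumptions made for illustration and do not reflect the authors' actual implementation.

# Hypothetical sketch of retrieval-augmented prompt assembly, loosely
# inspired by the approach described in the abstract. All names and
# data below are illustrative placeholders, not the authors' code.
from dataclasses import dataclass

@dataclass
class ContextSnippet:
    source: str   # e.g. a climate-projection dataset or a literature citation
    text: str     # the retrieved passage itself

def retrieve_context(query: str, corpus: list, k: int = 2) -> list:
    """Toy keyword-overlap retrieval; a real system would use embeddings or a vector index."""
    query_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda s: len(query_terms & set(s.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, snippets: list) -> str:
    """Assemble a domain-grounded prompt for the LLM from the retrieved snippets."""
    context_block = "\n".join(f"[{s.source}] {s.text}" for s in snippets)
    return (
        "You are a wildfire-risk analyst. Answer using only the context below.\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {query}\n"
    )

if __name__ == "__main__":
    corpus = [
        ContextSnippet("Climate projection (placeholder)",
                       "Projected increase in fire-weather days for the region by mid-century."),
        ContextSnippet("Literature excerpt (placeholder)",
                       "Fuel treatments near infrastructure reduce wildfire damage to power lines."),
    ]
    query = "How should we plan wildfire resilience for power infrastructure?"
    print(build_prompt(query, retrieve_context(query, corpus)))

In this sketch, the retrieved snippets are simply concatenated into the prompt so the model's answer can be traced back to named sources, which is one common way to keep responses current and domain-specific.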
Comments: restoring content for arXiv:2402.07877v2 which was replaced in error
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2402.07877 [cs.AI]
  (or arXiv:2402.07877v4 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2402.07877
arXiv-issued DOI via DataCite

Submission history

From: Yangxinyu Xie
[v1] Mon, 12 Feb 2024 18:41:55 UTC (1,415 KB)
[v2] Wed, 28 Aug 2024 19:01:23 UTC (6,256 KB)
[v3] Fri, 28 Mar 2025 17:14:39 UTC (11,571 KB)
[v4] Wed, 23 Apr 2025 03:30:33 UTC (6,256 KB)