
Computer Science > Machine Learning

arXiv:2505.21569 (cs)
[Submitted on 27 May 2025 (v1), last revised 18 Jun 2025 (this version, v2)]

Title: ChemHAS: Hierarchical Agent Stacking for Enhancing Chemistry Tools


Authors: Zhucong Li, Bowei Zhang, Jin Xiao, Zhijian Zhou, Fenglei Cao, Jiaqing Liang, Yuan Qi
Abstract: Large Language Model (LLM)-based agents have demonstrated the ability to improve performance in chemistry-related tasks by selecting appropriate tools. However, their effectiveness remains limited by the inherent prediction errors of chemistry tools. In this paper, we take a step further by exploring how LLM-based agents can, in turn, be leveraged to reduce the prediction errors of the tools. To this end, we propose ChemHAS (Chemical Hierarchical Agent Stacking), a simple yet effective method that enhances chemistry tools by optimizing agent-stacking structures from limited data. ChemHAS achieves state-of-the-art performance across four fundamental chemistry tasks, demonstrating that our method can effectively compensate for the prediction errors of the tools. Furthermore, we identify and characterize four distinct agent-stacking behaviors, potentially improving interpretability and revealing new possibilities for AI agent applications in scientific research. Our code and dataset are publicly available at https://anonymous.4open.science/r/ChemHAS-01E4/README.md.
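To make the core idea concrete, here is a minimal, hypothetical sketch of hierarchical agent stacking: agents wrap a base "tool" (here a toy predictor with a systematic bias), each stacking layer may revise the layer below, and a search over stacks keeps only depths that reduce error on a small validation set. All names and the correction logic are illustrative assumptions, not the paper's implementation.

```python
def base_tool(x):
    """Toy 'chemistry tool': true function is 2*x, but the tool has a +1 bias."""
    return 2 * x + 1

def make_agent(correction):
    """An 'agent' that post-processes the prediction of the tool/stack below it."""
    def agent(predict):
        def stacked(x):
            return predict(x) - correction  # revise the lower layer's output
        return stacked
    return agent

def search_stack(agents, data, max_depth=3):
    """Greedily grow the stack, keeping a new layer only if it lowers error
    on the limited validation data."""
    predict = base_tool
    best_err = sum(abs(predict(x) - y) for x, y in data)
    for agent in agents[:max_depth]:
        candidate = agent(predict)
        err = sum(abs(candidate(x) - y) for x, y in data)
        if err < best_err:
            predict, best_err = candidate, err
    return predict, best_err

# Limited data drawn from the true function y = 2*x
data = [(0, 0), (1, 2), (2, 4)]
stack, err = search_stack([make_agent(1.0), make_agent(0.5)], data)
print(err)  # the first agent cancels the tool's bias, so error drops to 0.0
```

The sketch shows only the structural idea (stack, then select by validation error); in ChemHAS the stacked layers are LLM agents rather than fixed numeric corrections.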
Comments: 9 pages
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Cite as: arXiv:2505.21569 [cs.LG]
  (or arXiv:2505.21569v2 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2505.21569
arXiv-issued DOI via DataCite

Submission history

From: Bowei Zhang
[v1] Tue, 27 May 2025 06:22:57 UTC (1,172 KB)
[v2] Wed, 18 Jun 2025 03:05:54 UTC (1,172 KB)