Computer Science > Software Engineering

arXiv:2510.12702 (cs)
[Submitted on 14 Oct 2025]

Title: Beyond Postconditions: Can Large Language Models infer Formal Contracts for Automatic Software Verification?

Authors: Cedric Richter, Heike Wehrheim
Abstract: Automatic software verifiers have become increasingly effective at checking software against (formal) specifications. Yet their adoption in practice has been hampered by the lack of such specifications in real-world code. Large Language Models (LLMs) have shown promise in inferring formal postconditions from natural language hints embedded in code, such as function names, comments, or documentation. Using the generated postconditions as specifications in a subsequent verification, however, often leads verifiers to suggest invalid inputs, hinting at potential issues that ultimately turn out to be false alarms. To address this, we revisit the problem of specification inference from natural language in the context of automatic software verification. In the process, we introduce NL2Contract, the task of employing LLMs to translate informal natural language into formal functional contracts, consisting of postconditions as well as preconditions. We introduce metrics to validate and compare different NL2Contract approaches, using soundness, the bug-discriminative power of the generated contracts, and their usability in automatic software verification as key criteria. We evaluate NL2Contract with different LLMs and compare it to the postcondition generation task nl2postcond. Our evaluation shows that (1) LLMs are generally effective at generating functional contracts that are sound for all possible inputs, (2) the generated contracts are sufficiently expressive to discriminate buggy from correct behavior, and (3) verifiers supplied with LLM-inferred functional contracts produce fewer false alarms than when provided with postconditions alone. Further investigation shows that LLM-inferred preconditions generally align well with developers' intentions, which allows us to use automatic software verifiers to catch real-world bugs.
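
For readers unfamiliar with functional contracts, the following is a minimal, hypothetical sketch (not taken from the paper) of the kind of precondition/postcondition pair an LLM might infer for a C function, written in the ACSL annotation style consumed by deductive verifiers such as Frama-C. It illustrates the false-alarm problem the abstract describes: checking the postcondition alone lets a verifier pick extreme inputs the function was never meant to handle.

    #include <limits.h>

    /* Hypothetical LLM-inferred contract for a midpoint function.
       Without the precondition, a verifier checking only the
       postcondition may report an integer overflow for inputs such as
       lo = hi = INT_MAX -- a false alarm if callers, per the
       function's documentation, only ever pass small, ordered bounds. */
    /*@ requires 0 <= lo <= hi <= INT_MAX / 2;
      @ ensures  lo <= \result <= hi;
      @*/
    int midpoint(int lo, int hi) {
        return (lo + hi) / 2; /* safe under the stated precondition */
    }

In this setting, the precondition is what lets an automatic verifier separate genuine bugs (a postcondition violation on an admissible input) from false alarms (violations that occur only on inputs the precondition excludes).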
Comments: under submission
Subjects: Software Engineering (cs.SE); Artificial Intelligence (cs.AI); Programming Languages (cs.PL)
Cite as: arXiv:2510.12702 [cs.SE]
  (or arXiv:2510.12702v1 [cs.SE] for this version)
  https://doi.org/10.48550/arXiv.2510.12702
arXiv-issued DOI via DataCite

Submission history

From: Cedric Richter
[v1] Tue, 14 Oct 2025 16:37:39 UTC (106 KB)