Computer Science > Software Engineering
[Submitted on 14 Oct 2025]
          Title: Beyond Postconditions: Can Large Language Models infer Formal Contracts for Automatic Software Verification?
Abstract: Automatic software verifiers have become increasingly effective at checking software against (formal) specifications. Yet their adoption in practice has been hampered by the lack of such specifications in real-world code. Large Language Models (LLMs) have shown promise in inferring formal postconditions from natural language hints embedded in code, such as function names, comments, or documentation. Using the generated postconditions as specifications in a subsequent verification run, however, often leads verifiers to report invalid inputs as potential issues that ultimately turn out to be false alarms. To address this, we revisit the problem of specification inference from natural language in the context of automatic software verification. In the process, we introduce NL2Contract, the task of employing LLMs to translate informal natural language into formal functional contracts, consisting of preconditions as well as postconditions. We introduce metrics to validate and compare different NL2Contract approaches: the soundness of the generated contracts, their bug-discriminative power, and their usability for automatic software verification. We evaluate NL2Contract with different LLMs and compare it to the task of postcondition generation (nl2postcond). Our evaluation shows that (1) LLMs are generally effective at generating functional contracts that are sound for all possible inputs, (2) the generated contracts are sufficiently expressive to discriminate buggy from correct behavior, and (3) verifiers supplied with LLM-inferred functional contracts produce fewer false alarms than verifiers provided with postconditions alone. Further investigations show that LLM-inferred preconditions generally align well with developers' intentions, which allows us to use automatic software verifiers to catch real-world bugs.
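For illustration, the following is a minimal sketch of what such a functional contract can look like. It is not taken from the paper: it uses ACSL-style annotations (the contract notation consumed by verifiers such as Frama-C) as one possible concrete format, and the function array_max is a hypothetical example. The natural language hints (function name and doc comment) suggest both a precondition and a postcondition; with the postcondition alone, a verifier could report a spurious counterexample for len == 0, whereas the inferred precondition rules that input out and thus avoids the false alarm.

    /* Returns the largest element of the array. */
    /*@ requires len > 0;
      @ requires \valid_read(a + (0 .. len - 1));
      @ ensures \forall integer i; 0 <= i < len ==> \result >= a[i];
      @ ensures \exists integer i; 0 <= i < len && \result == a[i];
      @*/
    int array_max(const int *a, int len) {
        /* Precondition guarantees at least one element, so a[0] is safe. */
        int max = a[0];
        for (int i = 1; i < len; i++) {
            if (a[i] > max) {
                max = a[i];
            }
        }
        return max;
    }

In this sketch the preconditions make explicit the assumptions (a non-empty, readable array) that the documentation leaves informal, which is exactly the information the NL2Contract task asks the LLM to infer in addition to the postconditions.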