
Computer Science > Artificial Intelligence

arXiv:2510.05115v1 (cs)
[Submitted on 28 Sep 2025]

Title: Optimization Modeling via Semantic Anchored Alignment


Authors: Yansen Zhang, Qingcan Kang, Yujie Chen, Yufei Wang, Xiongwei Han, Tao Zhong, Mingxuan Yuan, Chen Ma
Abstract: Large language models (LLMs) have opened new paradigms in optimization modeling by enabling the generation of executable solver code from natural language descriptions. Despite this promise, existing approaches typically remain solver-driven: they rely on single-pass forward generation and apply limited post-hoc fixes based on solver error messages, leaving semantic errors undetected and silently producing syntactically correct but logically flawed models. To address this challenge, we propose SAC-Opt, a backward-guided correction framework that grounds optimization modeling in problem semantics rather than solver feedback. At each step, SAC-Opt aligns the original semantic anchors with those reconstructed from the generated code and selectively corrects only the mismatched components, driving convergence toward a semantically faithful model. This anchor-driven correction enables fine-grained refinement of constraint and objective logic, enhancing both fidelity and robustness without requiring additional training or supervision. Empirical results on seven public datasets demonstrate that SAC-Opt improves average modeling accuracy by 7.8%, with gains of up to 21.9% on the ComplexLP dataset. These findings highlight the importance of semantic-anchored correction in LLM-based optimization workflows to ensure faithful translation from problem intent to solver-executable code.
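To make the correction loop concrete, the sketch below shows one plausible reading of the procedure the abstract describes, in Python. It is a minimal illustration, not the paper's implementation: the helper callables (extract_anchors, reconstruct_anchors, correct_component) and the dict-of-anchors representation are assumptions introduced here for clarity.

```python
from typing import Callable, Dict

def sac_opt(
    problem: str,
    extract_anchors: Callable[[str], Dict[str, str]],     # NL problem -> semantic anchors
    reconstruct_anchors: Callable[[str], Dict[str, str]], # solver code -> recovered anchors
    correct_component: Callable[[str, str, str], str],    # (code, anchor, spec) -> fixed code
    initial_code: str,
    max_iters: int = 5,
) -> str:
    """Iteratively align generated solver code with the problem's semantics."""
    # Ground the loop in problem semantics: extract anchors (objective,
    # constraints, variables) from the natural-language description.
    target = extract_anchors(problem)
    # Start from the single-pass forward-generation result.
    code = initial_code

    for _ in range(max_iters):
        # Backward step: reconstruct the anchors implied by the current code.
        recovered = reconstruct_anchors(code)
        # Compare against the original anchors component by component.
        mismatched = {
            name: spec
            for name, spec in target.items()
            if recovered.get(name) != spec
        }
        if not mismatched:
            break  # every anchor aligns: the model is semantically faithful
        # Selectively repair only the mismatched constraint/objective
        # components, leaving already-aligned ones untouched.
        for name, spec in mismatched.items():
            code = correct_component(code, name, spec)
    return code
```

Passing the LLM-backed steps in as callables keeps the sketch self-contained while signaling that, in the paper's setting, each step would presumably be realized by prompting a model rather than by symbolic comparison.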
Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Programming Languages (cs.PL)
Cite as: arXiv:2510.05115 [cs.AI]
  (or arXiv:2510.05115v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2510.05115
arXiv-issued DOI via DataCite

Submission history

From: Yansen Zhang
[v1] Sun, 28 Sep 2025 12:25:31 UTC (255 KB)