
Computer Science > Computation and Language

arXiv:2510.04081 (cs)
[Submitted on 5 Oct 2025]

Title: Scaling Code-Assisted Chain-of-Thoughts and Instructions for Model Reasoning

Authors: Honglin Lin, Qizhi Pei, Xin Gao, Zhuoshi Pan, Yu Li, Juntao Li, Conghui He, Lijun Wu
Abstract: Reasoning capability is pivotal for Large Language Models (LLMs) to solve complex tasks, yet achieving reliable and scalable reasoning remains challenging. While Chain-of-Thought (CoT) prompting has become a mainstream approach, existing methods often suffer from uncontrolled generation, insufficient quality, and limited diversity in reasoning paths. Recent efforts leverage code to enhance CoT by grounding reasoning in executable steps, but such methods are typically constrained to predefined mathematical problems, hindering scalability and generalizability. In this work, we propose Caco (Code-Assisted Chain-of-ThOught), a novel framework that automates the synthesis of high-quality, verifiable, and diverse instruction-CoT reasoning data through code-driven augmentation. Unlike prior work, Caco first fine-tunes a code-based CoT generator on existing math and programming solutions in a unified code format, then scales the data generation to a large number of diverse reasoning traces. Crucially, we introduce automated validation via code execution and rule-based filtering to ensure logical correctness and structural diversity, followed by reverse-engineering the filtered outputs into natural language instructions and language CoTs to enrich task adaptability. This closed-loop process enables fully automated, scalable synthesis of reasoning data with guaranteed executability. Experiments on our created Caco-1.3M dataset demonstrate that Caco-trained models achieve highly competitive performance on mathematical reasoning benchmarks, outperforming existing strong baselines. Further analysis reveals that Caco's code-anchored verification and instruction diversity contribute to superior generalization across unseen tasks. Our work establishes a paradigm for building self-sustaining, trustworthy reasoning systems without human intervention.
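The execution-based validation step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, interface, and answer-matching rule are assumptions for the sketch. The idea is that a candidate code-form CoT is kept only if it executes cleanly within a time budget and its printed result matches the reference answer.

```python
# Hypothetical sketch of execution-based CoT filtering: run a candidate
# code-form reasoning trace in a subprocess and keep it only if it runs
# without error and prints the reference answer. Names are illustrative.
import subprocess
import sys


def validate_code_cot(code: str, reference_answer: str, timeout: float = 5.0) -> bool:
    """Return True if the code CoT executes cleanly and prints the reference answer."""
    try:
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return False  # non-terminating traces are filtered out
    if result.returncode != 0:
        return False  # traces that raise an exception are filtered out
    return result.stdout.strip() == reference_answer.strip()


# Example: a toy code CoT for "What is 17 * 24?"
cot = "x = 17 * 24\nprint(x)"
print(validate_code_cot(cot, "408"))  # → True
```

A rule-based filter for structural diversity (e.g., deduplicating traces by normalized form) would run alongside this executability check before the surviving traces are reverse-engineered into natural-language instructions.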
Comments: Accepted at NeurIPS 2025
Subjects: Computation and Language (cs.CL); Programming Languages (cs.PL)
Cite as: arXiv:2510.04081 [cs.CL]
  (or arXiv:2510.04081v1 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2510.04081
arXiv-issued DOI via DataCite

Submission history

From: Honglin Lin [view email]
[v1] Sun, 5 Oct 2025 07:59:24 UTC (1,325 KB)