Computer Science > Computation and Language

arXiv:2510.00508v1 (cs)
[Submitted on 1 Oct 2025]

Title: Copy-Paste to Mitigate Large Language Model Hallucinations


Authors: Yongchao Long, Xian Wu, Yingying Zhang, Xianbin Wen, Yuxi Zhou, Shenda Hong
Abstract: While Retrieval-Augmented Generation (RAG) enables large language models (LLMs) to generate contextually grounded responses, contextual faithfulness remains challenging, as LLMs may not consistently trust the provided context, leading to hallucinations that undermine reliability. We observe an inverse correlation between response copying degree and context-unfaithful hallucinations on RAGTruth, suggesting that higher copying degrees reduce hallucinations by fostering genuine contextual belief. We propose CopyPasteLLM, obtained through two-stage high-copying response preference training. We design three prompting methods to enhance copying degree, demonstrating that high-copying responses achieve superior contextual faithfulness and hallucination control. These approaches enable a fully automated pipeline that transforms generated responses into high-copying preference data for training CopyPasteLLM. On FaithEval, ConFiQA, and PubMedQA, CopyPasteLLM achieves the best performance in both counterfactual and original contexts, notably with 12.2% to 24.5% accuracy improvements on FaithEval over the best baseline, while requiring only 365 training samples -- 1/50th of the baseline data. To elucidate CopyPasteLLM's effectiveness, we propose the Context-Parameter Copying Capturing algorithm. Interestingly, this reveals that CopyPasteLLM recalibrates reliance on internal parametric knowledge rather than external knowledge during generation. All code is available at https://github.com/longyongchao/CopyPasteLLM
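The central quantity in the abstract is a response's "copying degree" with respect to the retrieved context. The page does not define how it is measured, so the sketch below is an illustration only: it uses verbatim token n-gram overlap as one plausible proxy. The function name, the tokenization, and the n-gram formulation are assumptions, not the authors' actual metric.

```python
def copying_degree(context: str, response: str, n: int = 4) -> float:
    """Fraction of the response's token n-grams that appear verbatim in the context.

    A hypothetical proxy for 'copying degree': 1.0 means every n-gram of the
    response is copied from the context, 0.0 means none are.
    """
    ctx_tokens = context.lower().split()
    resp_tokens = response.lower().split()
    if not resp_tokens:
        return 0.0
    if len(resp_tokens) < n:
        # Too short for n-grams: fall back to per-token containment.
        return sum(t in ctx_tokens for t in resp_tokens) / len(resp_tokens)
    ctx_ngrams = {tuple(ctx_tokens[i:i + n])
                  for i in range(len(ctx_tokens) - n + 1)}
    resp_ngrams = [tuple(resp_tokens[i:i + n])
                   for i in range(len(resp_tokens) - n + 1)]
    matched = sum(1 for g in resp_ngrams if g in ctx_ngrams)
    return matched / len(resp_ngrams)
```

Under this reading, a response lifted verbatim from the context scores near 1.0, while a freely paraphrased or parametrically generated answer scores near 0.0 -- the quantity the abstract reports as inversely correlated with context-unfaithful hallucinations.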
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2510.00508 [cs.CL]
  (or arXiv:2510.00508v1 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2510.00508
arXiv-issued DOI via DataCite

Submission history

From: Yongchao Long
[v1] Wed, 1 Oct 2025 04:40:04 UTC (2,341 KB)