Skip to main content
CenXiv.org
This website is in trial operation, support us!
We gratefully acknowledge support from all contributors.
Contribute
Donate
cenxiv logo > cs > arXiv:2411.00459v6

Help | Advanced Search

Computer Science > Cryptography and Security

arXiv:2411.00459v6 (cs)
[Submitted on 1 Nov 2024 (v1) , last revised 2 Aug 2025 (this version, v6)]

Title: Defense Against Prompt Injection Attack by Leveraging Attack Techniques

Title: 利用攻击技术防御提示注入攻击

Authors:Yulin Chen, Haoran Li, Zihao Zheng, Yangqiu Song, Dekai Wu, Bryan Hooi
Abstract: With the advancement of technology, large language models (LLMs) have achieved remarkable performance across various natural language processing (NLP) tasks, powering LLM-integrated applications like Microsoft Copilot. However, as LLMs continue to evolve, new vulnerabilities, especially prompt injection attacks arise. These attacks trick LLMs into deviating from the original input instructions and executing the attacker's instructions injected in data content, such as retrieved results. Recent attack methods leverage LLMs' instruction-following abilities and their inabilities to distinguish instructions injected in the data content, and achieve a high attack success rate (ASR). When comparing the attack and defense methods, we interestingly find that they share similar design goals, of inducing the model to ignore unwanted instructions and instead to execute wanted instructions. Therefore, we raise an intuitive question: Could these attack techniques be utilized for defensive purposes? In this paper, we invert the intention of prompt injection methods to develop novel defense methods based on previous training-free attack methods, by repeating the attack process but with the original input instruction rather than the injected instruction. Our comprehensive experiments demonstrate that our defense techniques outperform existing training-free defense approaches, achieving state-of-the-art results.
Abstract: 随着技术的进步,大型语言模型(LLMs)在各种自然语言处理(NLP)任务中取得了显著的性能,推动了像Microsoft Copilot这样的LLM集成应用的发展。 然而,随着LLMs的不断发展,新的漏洞出现,特别是提示注入攻击。 这些攻击欺骗LLMs偏离原始输入指令,并执行数据内容中注入的攻击者指令,例如检索结果。 最近的攻击方法利用LLMs的指令遵循能力以及它们无法区分数据内容中注入的指令的能力,实现了高攻击成功率(ASR)。 在比较攻击和防御方法时,我们发现它们具有相似的设计目标,即诱导模型忽略不需要的指令并执行需要的指令。 因此,我们提出一个直观的问题:这些攻击技术能否用于防御目的? 在本文中,我们反转提示注入方法的意图,通过重复攻击过程但使用原始输入指令而非注入指令,基于之前的无训练攻击方法开发新的防御方法。 我们的全面实验表明,我们的防御技术优于现有的无训练防御方法,达到了最先进的结果。
Comments: ACL 2025 Main
Subjects: Cryptography and Security (cs.CR)
Cite as: arXiv:2411.00459 [cs.CR]
  (or arXiv:2411.00459v6 [cs.CR] for this version)
  https://doi.org/10.48550/arXiv.2411.00459
arXiv-issued DOI via DataCite

Submission history

From: Yulin Chen [view email]
[v1] Fri, 1 Nov 2024 09:14:21 UTC (697 KB)
[v2] Mon, 23 Dec 2024 08:25:54 UTC (689 KB)
[v3] Tue, 25 Feb 2025 16:17:31 UTC (691 KB)
[v4] Fri, 18 Jul 2025 05:44:32 UTC (581 KB)
[v5] Tue, 22 Jul 2025 08:54:36 UTC (581 KB)
[v6] Sat, 2 Aug 2025 13:44:03 UTC (581 KB)
Full-text links:

Access Paper:

    View a PDF of the paper titled
  • View Chinese PDF
  • View PDF
  • HTML (experimental)
  • TeX Source
  • Other Formats
view license
Current browse context:
cs.CR
< prev   |   next >
new | recent | 2024-11
Change to browse by:
cs

References & Citations

  • NASA ADS
  • Google Scholar
  • Semantic Scholar
a export BibTeX citation Loading...

BibTeX formatted citation

×
Data provided by:

Bookmark

BibSonomy logo Reddit logo

Bibliographic and Citation Tools

Bibliographic Explorer (What is the Explorer?)
Connected Papers (What is Connected Papers?)
Litmaps (What is Litmaps?)
scite Smart Citations (What are Smart Citations?)

Code, Data and Media Associated with this Article

alphaXiv (What is alphaXiv?)
CatalyzeX Code Finder for Papers (What is CatalyzeX?)
DagsHub (What is DagsHub?)
Gotit.pub (What is GotitPub?)
Hugging Face (What is Huggingface?)
Papers with Code (What is Papers with Code?)
ScienceCast (What is ScienceCast?)

Demos

Replicate (What is Replicate?)
Hugging Face Spaces (What is Spaces?)
TXYZ.AI (What is TXYZ.AI?)

Recommenders and Search Tools

Influence Flower (What are Influence Flowers?)
CORE Recommender (What is CORE?)
IArxiv Recommender (What is IArxiv?)
  • Author
  • Venue
  • Institution
  • Topic

arXivLabs: experimental projects with community collaborators

arXivLabs is a framework that allows collaborators to develop and share new arXiv features directly on our website.

Both individuals and organizations that work with arXivLabs have embraced and accepted our values of openness, community, excellence, and user data privacy. arXiv is committed to these values and only works with partners that adhere to them.

Have an idea for a project that will add value for arXiv's community? Learn more about arXivLabs.

Which authors of this paper are endorsers? | Disable MathJax (What is MathJax?)
  • About
  • Help
  • contact arXivClick here to contact arXiv Contact
  • subscribe to arXiv mailingsClick here to subscribe Subscribe
  • Copyright
  • Privacy Policy
  • Web Accessibility Assistance
  • arXiv Operational Status
    Get status notifications via email or slack

京ICP备2025123034号