
Computer Science > Computer Vision and Pattern Recognition

arXiv:2509.12724 (cs)
[Submitted on 16 Sep 2025]

Title: Defense-to-Attack: Bypassing Weak Defenses Enables Stronger Jailbreaks in Vision-Language Models


Authors:Yunhan Zhao, Xiang Zheng, Xingjun Ma
Abstract: Despite their superb capabilities, Vision-Language Models (VLMs) have been shown to be vulnerable to jailbreak attacks. While recent jailbreaks have achieved notable progress, their effectiveness and efficiency can still be improved. In this work, we reveal an interesting phenomenon: incorporating a weak defense into the attack pipeline can significantly enhance both the effectiveness and the efficiency of jailbreaks on VLMs. Building on this insight, we propose Defense2Attack, a novel jailbreak method that bypasses the safety guardrails of VLMs by leveraging defensive patterns to guide jailbreak prompt design. Specifically, Defense2Attack consists of three key components: (1) a visual optimizer that embeds universal adversarial perturbations with affirmative and encouraging semantics; (2) a textual optimizer that refines the input using a defense-styled prompt; and (3) a red-team suffix generator that enhances the jailbreak through reinforcement fine-tuning. We empirically evaluate our method on four VLMs and four safety benchmarks. The results demonstrate that Defense2Attack achieves superior jailbreak performance in a single attempt, outperforming state-of-the-art attack methods that often require multiple attempts. Our work offers a new perspective on jailbreaking VLMs.
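The abstract describes a three-stage pipeline: a visual optimizer that adds a universal adversarial perturbation to the image, a textual optimizer that wraps the query in a defense-styled prompt, and a red-team suffix generator. The sketch below illustrates only how such stages could compose into a single attack input; every name here (`apply_visual_perturbation`, `DEFENSE_TEMPLATE`, the suffix argument) is an illustrative assumption, not the authors' actual implementation, and the real method optimizes the perturbation and suffix rather than taking them as given.

```python
# Illustrative composition of the three Defense2Attack stages described in
# the abstract. Names and the template text are hypothetical placeholders.

DEFENSE_TEMPLATE = (
    "You are a helpful and responsible assistant. "
    "Carefully consider the request below and respond constructively:\n{query}"
)

def apply_visual_perturbation(pixels, perturbation):
    """Stage 1 (visual optimizer): add a universal adversarial
    perturbation to the image and clip to the valid pixel range."""
    return [max(0, min(255, p + d)) for p, d in zip(pixels, perturbation)]

def defense_styled_prompt(query):
    """Stage 2 (textual optimizer): wrap the query in a defense-styled
    template so the input mimics a safety-aligned instruction."""
    return DEFENSE_TEMPLATE.format(query=query)

def append_red_team_suffix(prompt, suffix):
    """Stage 3 (suffix generator): append a suffix that, in the paper,
    would come from a reinforcement-fine-tuned generator."""
    return prompt + " " + suffix

def build_attack_input(pixels, perturbation, query, suffix):
    """Compose all three stages into one (image, text) attack pair."""
    image = apply_visual_perturbation(pixels, perturbation)
    text = append_red_team_suffix(defense_styled_prompt(query), suffix)
    return image, text
```

The key point the abstract makes is that stage 2, a pattern normally used for defense, is repurposed here to make the attack more effective in a single attempt.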
Comments: This work has been submitted to the IEEE for possible publication
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
Cite as: arXiv:2509.12724 [cs.CV]
  (or arXiv:2509.12724v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2509.12724
arXiv-issued DOI via DataCite

Submission history

From: Yunhan Zhao [view email]
[v1] Tue, 16 Sep 2025 06:25:58 UTC (673 KB)