Computer Science > Cryptography and Security

arXiv:2404.02138 (cs)
[Submitted on 2 Apr 2024 (v1), last revised 7 Feb 2025 (this version, v4)]

Title: Topic-Based Watermarks for Large Language Models


Authors: Alexander Nemecek, Yuzhou Jiang, Erman Ayday
Abstract: The indistinguishability of Large Language Model (LLM) output from human-authored content poses significant challenges, raising concerns about potential misuse of AI-generated text and its influence on future AI model training. Watermarking algorithms offer a viable solution by embedding detectable signatures into generated text. However, existing watermarking methods often entail trade-offs among attack robustness, generation quality, and additional overhead such as specialized frameworks or complex integrations. We propose a lightweight, topic-guided watermarking scheme for LLMs that partitions the vocabulary into topic-aligned token subsets. Given an input prompt, the scheme selects a relevant topic-specific token list, effectively "green-listing" semantically aligned tokens to embed robust marks while preserving the text's fluency and coherence. Experimental results across multiple LLMs and state-of-the-art benchmarks demonstrate that our method achieves comparable perplexity to industry-leading systems, including Google's SynthID-Text, yet enhances watermark robustness against paraphrasing and lexical perturbation attacks while introducing minimal performance overhead. Our approach avoids reliance on additional mechanisms beyond standard text generation pipelines, facilitating straightforward adoption, suggesting a practical path toward globally consistent watermarking of AI-generated content.
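The green-list mechanism the abstract describes can be sketched in a few lines. In the toy below, the topic lists, the bias value `delta`, the keyword-overlap topic selector, and the detection threshold convention are all illustrative placeholders, not the authors' implementation (the paper applies this idea inside a full LLM decoding loop over a real tokenizer vocabulary):

```python
import math

# Toy topic-aligned vocabulary partitions (placeholders, not the paper's).
TOPIC_GREENLISTS = {
    "sports": {"game", "team", "score", "coach", "season"},
    "finance": {"market", "stock", "rate", "bank", "fund"},
}

def pick_topic(prompt: str) -> str:
    # Naive topic selection: pick the topic whose green list overlaps the
    # prompt the most. The paper derives the topic from the input prompt
    # with a more principled mechanism; this stands in for that step.
    words = set(prompt.lower().split())
    return max(TOPIC_GREENLISTS, key=lambda t: len(TOPIC_GREENLISTS[t] & words))

def bias_logits(logits: dict, topic: str, delta: float = 2.0) -> dict:
    # "Green-list" the topic's tokens by adding delta to their logits,
    # nudging generation toward semantically aligned tokens.
    green = TOPIC_GREENLISTS[topic]
    return {tok: lg + delta if tok in green else lg for tok, lg in logits.items()}

def detect(tokens: list, topic: str, gamma: float = 0.5) -> float:
    # Standard green-list detection statistic: z-score of the observed
    # green-token count against the expected fraction gamma.
    green = TOPIC_GREENLISTS[topic]
    n = len(tokens)
    hits = sum(1 for t in tokens if t in green)
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))
```

At generation time the biased logits steer sampling toward the topic's tokens; at detection time a high z-score indicates watermarked text. Because the green list is topic-aligned rather than random, the bias favors tokens the model would plausibly use anyway, which is how the scheme preserves fluency while remaining detectable.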
Comments: Algorithms and new evaluations, 8 pages
Subjects: Cryptography and Security (cs.CR); Computation and Language (cs.CL); Machine Learning (cs.LG)
Cite as: arXiv:2404.02138 [cs.CR]
  (or arXiv:2404.02138v4 [cs.CR] for this version)
  https://doi.org/10.48550/arXiv.2404.02138
arXiv-issued DOI via DataCite

Submission history

From: Alexander Nemecek [view email]
[v1] Tue, 2 Apr 2024 17:49:40 UTC (1,661 KB)
[v2] Tue, 16 Apr 2024 07:28:05 UTC (1,661 KB)
[v3] Mon, 19 Aug 2024 17:16:08 UTC (2,171 KB)
[v4] Fri, 7 Feb 2025 22:45:20 UTC (162 KB)