
Computer Science > Cryptography and Security

arXiv:2404.02138v2 (cs)
[Submitted on 2 Apr 2024 (v1), revised 16 Apr 2024 (this version, v2), latest version 7 Feb 2025 (v4)]

Title: Topic-based Watermarks for LLM-Generated Text


Authors: Alexander Nemecek, Yuzhou Jiang, Erman Ayday
Abstract: Recent advancements in large language models (LLMs) have resulted in text outputs that are nearly indistinguishable from human-generated text. Watermarking algorithms offer a way to differentiate between LLM- and human-generated text by embedding detectable signatures within LLM-generated output. However, current watermarking schemes lack robustness against known attacks on watermarking algorithms. They are also impractical: an LLM generates tens of thousands of text outputs per day, and the watermarking algorithm would need to memorize each output it generates for detection to work. In this work, focusing on these limitations of current watermarking schemes, we propose the concept of a "topic-based watermarking algorithm" for LLMs. The proposed algorithm determines how to generate tokens for the watermarked LLM output based on topics extracted from an input prompt or from the output of a non-watermarked LLM. Inspired by previous work, we propose using a pair of lists, generated from the specified extracted topic(s), that specify which tokens to include or exclude while generating the watermarked output of the LLM. Using the proposed watermarking algorithm, we show the practicality of a watermark detection algorithm. Furthermore, we discuss a wide range of attacks that can emerge against watermarking algorithms for LLMs, and the benefit of the proposed scheme for feasibly modeling a potential attacker in terms of its benefit versus loss.
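The pair-of-lists mechanism described in the abstract resembles a green/red-list (soft) watermark, with the vocabulary partition seeded by the extracted topic rather than by preceding tokens. Below is a minimal illustrative sketch in Python; the SHA-256 topic seeding, the 50/50 vocabulary split, the logit bias delta, and the z-score detector are all assumptions for illustration, not the authors' exact construction.

```python
import hashlib
import math
import random

def topic_seed(topic: str) -> int:
    """Derive a deterministic seed from an extracted topic string.
    (Hypothetical choice: SHA-256 of the topic; the paper does not fix this.)"""
    return int.from_bytes(hashlib.sha256(topic.encode()).digest()[:8], "big")

def split_vocabulary(vocab: list[str], topic: str, green_fraction: float = 0.5):
    """Partition the vocabulary into a 'green' (include) and 'red' (exclude)
    list via a topic-seeded shuffle, so the lists can be regenerated from
    the topic alone at detection time."""
    rng = random.Random(topic_seed(topic))
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * green_fraction)
    return set(shuffled[:cut]), set(shuffled[cut:])

def bias_logits(logits: dict[str, float], green: set[str], delta: float = 2.0):
    """Add an assumed bias delta to green-list token logits before sampling
    (the standard soft-watermark trick)."""
    return {tok: (lg + delta if tok in green else lg) for tok, lg in logits.items()}

def detect(tokens: list[str], green: set[str], green_fraction: float = 0.5) -> float:
    """One-proportion z-test: how far the observed green-token count deviates
    from what unwatermarked text would produce. Large z => likely watermarked."""
    n = len(tokens)
    hits = sum(tok in green for tok in tokens)
    expected = green_fraction * n
    std = math.sqrt(n * green_fraction * (1 - green_fraction))
    return (hits - expected) / std

# Usage: regenerate the same lists from the prompt's topic at detection time.
vocab = [f"tok{i}" for i in range(1000)]
green, red = split_vocabulary(vocab, topic="cryptography")
sample = random.Random(0).choices(sorted(green), k=180) \
       + random.Random(1).choices(sorted(red), k=20)
print(f"z = {detect(sample, green):.1f}")  # well above a typical threshold of ~4
```

Because the lists are recoverable from the topic alone, detection requires no record of individual generated outputs, which addresses the practicality concern the abstract raises about memorizing every output.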
Comments: 11 pages
Subjects: Cryptography and Security (cs.CR); Computation and Language (cs.CL); Machine Learning (cs.LG)
Cite as: arXiv:2404.02138 [cs.CR]
  (or arXiv:2404.02138v2 [cs.CR] for this version)
  https://doi.org/10.48550/arXiv.2404.02138
arXiv-issued DOI via DataCite

Submission history

From: Alexander Nemecek
[v1] Tue, 2 Apr 2024 17:49:40 UTC (1,661 KB)
[v2] Tue, 16 Apr 2024 07:28:05 UTC (1,661 KB)
[v3] Mon, 19 Aug 2024 17:16:08 UTC (2,171 KB)
[v4] Fri, 7 Feb 2025 22:45:20 UTC (162 KB)