Computer Science > Cryptography and Security

arXiv:2501.00786 (cs)
[Submitted on 1 Jan 2025]

Title: Shifting-Merging: Secure, High-Capacity and Efficient Steganography via Large Language Models

Authors: Minhao Bai, Jinshuai Yang, Kaiyi Pang, Yongfeng Huang, Yue Gao
Abstract: In the face of escalating surveillance and censorship in cyberspace, the sanctity of personal privacy has come under siege, necessitating the development of steganography, which offers a way to securely hide messages within innocent-looking texts. Previous methods alter the texts to hide private messages, which is not secure. Large Language Models (LLMs) provide high-quality, explicit distributions, which serve as a mathematical tool for secure steganography. However, existing attempts fail to achieve high capacity, time efficiency, and correctness simultaneously, and their strongly coupled designs leave little room for refinement toward better performance. To provide a secure, high-capacity, and efficient steganography method, we introduce ShiMer. Specifically, ShiMer pseudorandomly shifts the probability interval of the LLM's distribution to obtain a private distribution, and samples a token according to the private bits. The steganographic texts produced by ShiMer are indistinguishable in quality from normal texts directly generated by the language model. To further enhance the capacity of ShiMer, we design a reordering algorithm that minimizes the occurrence of interval splitting during the decoding phase. Experimental results indicate that our method achieves the highest capacity and efficiency among existing secure steganography techniques.
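To make the abstract's core idea concrete: the sender and receiver share a key, the sender pseudorandomly shifts the cumulative-probability interval of the model's next-token distribution, and the secret bits pick a point in that shifted interval, which determines the emitted token. The Python sketch below illustrates only this interval-shifting embedding step under stated assumptions; the names prf_shift and embed_token, the SHA-256-based pseudorandom offset, and the fixed bit-chunk encoding are illustrative choices, not the authors' exact ShiMer construction, and the decoding procedure and reordering algorithm from the paper are not reproduced here.

import hashlib

def prf_shift(key: bytes, step: int) -> float:
    # Keyed pseudorandom offset in [0, 1) for a given generation step.
    # A stand-in for whatever PRF the sender and receiver actually share.
    digest = hashlib.sha256(key + step.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def embed_token(probs, secret_bits, key, step, precision=8):
    # probs: list of (token, probability) pairs from the language model.
    # Interpret `precision` secret bits as a fraction in [0, 1), undo the
    # pseudorandom shift, and return the token whose cumulative-probability
    # interval covers the resulting point. Embedding direction only; the
    # receiver would recover the bits by re-running the model with the key.
    value = int("".join(map(str, secret_bits[:precision])), 2) / 2**precision
    target = (value - prf_shift(key, step)) % 1.0
    cumulative = 0.0
    for token, p in probs:
        cumulative += p
        if target < cumulative:
            return token
    return probs[-1][0]  # guard against floating-point underflow

# Toy next-token distribution and one 8-bit chunk of the secret message.
dist = [("the", 0.5), ("a", 0.3), ("an", 0.2)]
print(embed_token(dist, [1, 0, 1, 1, 0, 0, 1, 0], key=b"shared-key", step=0))

Intuitively, because the offset looks uniformly random to anyone without the key, the distribution over emitted tokens matches the model's own distribution, which is what underlies the indistinguishability claim in the abstract.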
Subjects: Cryptography and Security (cs.CR)
Cite as: arXiv:2501.00786 [cs.CR]
  (or arXiv:2501.00786v1 [cs.CR] for this version)
  https://doi.org/10.48550/arXiv.2501.00786
arXiv-issued DOI via DataCite

Submission history

From: Minhao Bai [view email]
[v1] Wed, 1 Jan 2025 09:51:15 UTC (382 KB)