Computer Science > Sound

arXiv:2509.14579v1 (cs)
[Submitted on 18 Sep 2025 (this version), latest version 20 Sep 2025 (v2)]

Title: Cross-Lingual F5-TTS: Towards Language-Agnostic Voice Cloning and Speech Synthesis


Authors: Qingyu Liu, Yushen Chen, Zhikang Niu, Chunhui Wang, Yunting Yang, Bowen Zhang, Jian Zhao, Pengcheng Zhu, Kai Yu, Xie Chen
Abstract: Flow-matching-based text-to-speech (TTS) models have demonstrated high-quality speech synthesis. However, most current flow-matching-based TTS models still rely on reference transcripts corresponding to the audio prompt for synthesis. This dependency prevents cross-lingual voice cloning when audio prompt transcripts are unavailable, particularly for unseen languages. The key challenges in removing the audio prompt transcript from flow-matching-based TTS are identifying word boundaries during training and determining an appropriate duration during inference. In this paper, we introduce Cross-Lingual F5-TTS, a framework that enables cross-lingual voice cloning without audio prompt transcripts. Our method preprocesses audio prompts by forced alignment to obtain word boundaries, enabling direct synthesis from audio prompts while excluding transcripts during training. To address the duration modeling challenge, we train speaking rate predictors at different linguistic granularities to derive duration from speaker pace. Experiments show that our approach matches the performance of F5-TTS while enabling cross-lingual voice cloning.
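The abstract's preprocessing step, obtaining word boundaries from forced alignment, amounts to converting per-word timestamps into frame indices on the model's mel-spectrogram grid. A minimal sketch of that conversion follows; the aligner output format, sample rate, and hop size below are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch: turning forced-alignment word timestamps into mel-frame
# boundary ranges. A forced aligner (e.g., a Montreal Forced Aligner-style
# tool) typically emits (word, start_sec, end_sec) intervals; the values
# below are hypothetical.

SAMPLE_RATE = 24000   # assumed audio sample rate (Hz)
HOP_LENGTH = 256      # assumed mel-spectrogram hop size (samples)

def word_boundaries_to_frames(alignment, sample_rate=SAMPLE_RATE,
                              hop_length=HOP_LENGTH):
    """Map (word, start_sec, end_sec) tuples to mel-frame index ranges."""
    frames_per_sec = sample_rate / hop_length
    return [
        (word, int(start * frames_per_sec), int(end * frames_per_sec))
        for word, start, end in alignment
    ]

# Hypothetical aligner output for a short prompt (times in seconds).
alignment = [("hello", 0.10, 0.45), ("world", 0.50, 0.95)]
print(word_boundaries_to_frames(alignment))
# [('hello', 9, 42), ('world', 46, 89)]
```

With boundaries expressed on the frame grid, training can condition directly on the prompt audio's word segmentation instead of on its transcript.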
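The duration-modeling idea, deriving target length from the speaker's pace, rests on a simple relationship: estimate units per second from the prompt, then scale the target text's unit count by that rate. The paper trains learned speaking-rate predictors at several linguistic granularities; the word-level heuristic below only illustrates the underlying arithmetic.

```python
# Sketch of speaking-rate-based duration derivation. The paper uses
# trained predictors; this word-level back-of-the-envelope version shows
# the relationship duration = target_units / rate.

def speaking_rate(num_units, audio_seconds):
    """Units (e.g., words or syllables) uttered per second in the prompt."""
    return num_units / audio_seconds

def predict_duration(target_units, rate):
    """Derive an expected speech duration from the speaker's pace."""
    return target_units / rate

# Prompt: 12 words spoken in 4.0 s -> 3.0 words/s.
rate = speaking_rate(12, 4.0)
# Target text of 18 words -> an expected 6.0 s of synthesized speech.
print(predict_duration(18, rate))  # 6.0
```

Finer granularities (syllables or phones rather than words) would make the same arithmetic more robust across languages, which is presumably why the paper predicts rate at multiple levels.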
Comments: 5 pages, 2 figures
Subjects: Sound (cs.SD)
Cite as: arXiv:2509.14579 [cs.SD]
  (or arXiv:2509.14579v1 [cs.SD] for this version)
  https://doi.org/10.48550/arXiv.2509.14579
arXiv-issued DOI via DataCite

Submission history

From: Qingyu Liu
[v1] Thu, 18 Sep 2025 03:27:35 UTC (1,121 KB)
[v2] Sat, 20 Sep 2025 07:03:49 UTC (1,121 KB)
