kNN Retrieval for Simple and Effective Zero-Shot Multi-speaker Text-to-Speech

Karl El Hajal, Ajinkya Kulkarni, Enno Hermann, Mathew Magimai.-Doss
Abstract

While recent zero-shot multi-speaker text-to-speech (TTS) models achieve impressive results, they typically rely on extensive transcribed speech datasets from numerous speakers and intricate training pipelines. Meanwhile, self-supervised learning (SSL) speech features have emerged as effective intermediate representations for TTS. Furthermore, SSL features from different speakers that are linearly close share phonetic information while preserving individual speaker identity. In this study, we introduce kNN-TTS, a simple and effective framework for zero-shot multi-speaker TTS that uses retrieval methods to leverage these linear relationships between SSL features. Objective and subjective evaluations show that our models, trained on transcribed speech from a single speaker only, achieve performance comparable to that of state-of-the-art models trained on significantly larger datasets. The low training data requirements make kNN-TTS well suited for developing multi-speaker TTS systems for low-resource domains and languages. We also introduce an interpolation parameter λ that enables fine-grained voice morphing by blending source and target voice features.
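
To make the retrieval idea concrete, here is a minimal sketch of kNN feature conversion with the blending weight λ, assuming pre-extracted SSL features (e.g., WavLM frames) for both the TTS output and the target speaker. The function name, the cosine distance, and the default k are illustrative assumptions, not the released implementation.

    import torch
    import torch.nn.functional as F

    def knn_convert(source_feats, target_feats, k=4, lam=1.0):
        """Replace each source SSL frame with the mean of its k nearest
        target-speaker frames, then blend source and retrieved features.

        source_feats: (T, D) SSL features produced by the TTS model
        target_feats: (N, D) SSL features extracted from target speaker audio
        lam: blending weight (0 = source voice, 1 = full conversion)
        """
        # Cosine distances between every source frame and every target frame.
        src = F.normalize(source_feats, dim=-1)
        tgt = F.normalize(target_feats, dim=-1)
        dists = 1 - src @ tgt.T                                 # (T, N)

        # For each source frame, average its k nearest target frames.
        knn_idx = dists.topk(k, dim=-1, largest=False).indices  # (T, k)
        retrieved = target_feats[knn_idx].mean(dim=1)           # (T, D)

        # Linear interpolation; lam > 1 extrapolates beyond the target.
        return (1 - lam) * source_feats + lam * retrieved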

Zero-shot Multi-speaker Examples from LibriSpeech test-clean:

[Audio samples for LibriSpeech speakers 7127, 7729, 6829, and 8555: ground truth alongside syntheses from GlowkNN-TTS, GradkNN-TTS, HierSpeech++, XTTS, and YourTTS.]

Voice Morphing Examples (varying λ to blend source and target voices; see the usage sketch after the table):

[GlowkNN-TTS morphing samples for speakers 7127, 7729, 6829, and 8555 at λ = 0, 0.25, 0.50, 0.75, 1, 1.25, 1.50, 1.75, and 2.]
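
The λ sweep above corresponds to calling the sketch from the abstract with different blend weights. Variable names here are illustrative, and the vocoder step is omitted:

    # lam = 0 keeps the source voice, lam = 1 fully converts to the target,
    # and lam > 1 extrapolates further toward the target's characteristics.
    # 'tts_feats' / 'target_feats' are assumed (T, D) and (N, D) SSL tensors.
    for lam in (0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0):
        morphed = knn_convert(tts_feats, target_feats, k=4, lam=lam)
        # ...vocode 'morphed' to obtain the corresponding audio sample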

Bonus Material

[Morphing samples at the same λ values (0 to 2 in steps of 0.25): whispered and angry speech from the Thorsten Emotional dataset (GlowkNN-TTS), and AniSpeech speakers 0, 14, 21, and 23 (GradkNN-TTS) and 154 (GlowkNN-TTS).]