Speaker embeddings achieve promising results on many speaker verification tasks. Phonetic information, although an important component of speech, is rarely considered in the extraction of speaker embeddings. In this paper, we introduce phonetic information into speaker embedding extraction based on the x-vector architecture. Two methods are proposed, using phonetic vectors and multi-task learning respectively. On the Fisher dataset, our best system outperforms the original x-vector approach by 20% in EER and by 15% in both minDCF08 and minDCF10. Experiments on NIST SRE10 further demonstrate the effectiveness of the proposed methods.
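The multi-task method mentioned in the abstract can be understood as training the embedding network with a speaker-classification loss plus a weighted auxiliary phonetic loss. A minimal sketch of such a combined objective is shown below; the helper names (`softmax_cross_entropy`, `multitask_loss`) and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import math

def softmax_cross_entropy(logits, target):
    """Cross-entropy of one example from raw logits (log-sum-exp for stability)."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[target]

def multitask_loss(speaker_logits, speaker_id, phone_logits, phone_id, lam=0.3):
    """Combined multi-task objective: L = L_speaker + lam * L_phonetic.

    `lam` (an assumed hyperparameter) balances the auxiliary phonetic
    task against the primary speaker-classification task.
    """
    return (softmax_cross_entropy(speaker_logits, speaker_id)
            + lam * softmax_cross_entropy(phone_logits, phone_id))

# Toy example: 3 speaker classes, 2 phone classes for a single frame/segment.
loss = multitask_loss([2.0, 0.5, -1.0], 0, [0.1, 1.2], 1, lam=0.3)
```

With `lam=0` the objective reduces to the plain speaker-classification loss, recovering standard x-vector training.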
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
State: Published - 2018
Event: 19th Annual Conference of the International Speech Communication Association, INTERSPEECH 2018 - Hyderabad, India
Duration: Sep 2 2018 → Sep 6 2018
Bibliographical note
Funding Information:
This work was supported by the National Natural Science Foundation of China under Grants No. 61370034, No. 61403224, and No. 61273268.
© 2018 International Speech Communication Association. All rights reserved.
- Multi-task learning
- Phonetic information
- Phonetic vectors
- Speaker embedding
- Speaker verification
ASJC Scopus subject areas
- Language and Linguistics
- Human-Computer Interaction
- Signal Processing
- Modeling and Simulation