Abstract
Speaker embeddings achieve promising results on many speaker verification tasks. However, phonetic information, although an important component of speech, is rarely considered when extracting speaker embeddings. In this paper, we introduce phonetic information into speaker embedding extraction based on the x-vector architecture. Two methods are proposed: one using phonetic vectors and one using multi-task learning. On the Fisher dataset, our best system outperforms the original x-vector approach by 20% in EER and by 15% in both minDCF08 and minDCF10. Experiments on NIST SRE10 further demonstrate the effectiveness of the proposed methods.
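As a rough illustration of the multi-task idea described above (a minimal sketch, not the paper's actual architecture: the layer sizes, weight names, and the loss weight `alpha` here are all hypothetical), a shared embedding network can feed two heads, a speaker classifier and a phone classifier, whose cross-entropy losses are combined into one training objective:

```python
import math
import random

random.seed(0)

# Hypothetical sizes; the real x-vector TDNN is much larger.
FEAT_DIM, EMB_DIM, N_SPEAKERS, N_PHONES = 8, 4, 5, 10

def rand_matrix(rows, cols):
    """Small random weight matrix (stand-in for trained parameters)."""
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

W_shared = rand_matrix(FEAT_DIM, EMB_DIM)   # shared embedding layer
W_spk = rand_matrix(EMB_DIM, N_SPEAKERS)    # speaker-classification head
W_pho = rand_matrix(EMB_DIM, N_PHONES)      # phone-classification head

def matvec(x, W):
    """y[j] = sum_i x[i] * W[i][j]."""
    return [sum(x[i] * W[i][j] for i in range(len(x)))
            for j in range(len(W[0]))]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def cross_entropy(probs, label):
    return -math.log(probs[label] + 1e-12)

def multitask_loss(frame, spk_label, phone_label, alpha=0.5):
    """Speaker loss plus alpha-weighted phonetic auxiliary loss."""
    emb = [math.tanh(v) for v in matvec(frame, W_shared)]  # shared embedding
    spk_probs = softmax(matvec(emb, W_spk))                # speaker posterior
    pho_probs = softmax(matvec(emb, W_pho))                # phone posterior
    return (cross_entropy(spk_probs, spk_label)
            + alpha * cross_entropy(pho_probs, phone_label))

frame = [random.gauss(0, 1) for _ in range(FEAT_DIM)]
loss = multitask_loss(frame, spk_label=2, phone_label=7)
print(loss)
```

Because the phone head shares the embedding layer with the speaker head, its gradients push the embedding to retain phonetic structure; at test time only the shared embedding (and speaker head, if needed) is used.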
| Original language | English |
| --- | --- |
| Pages (from-to) | 2247-2251 |
| Number of pages | 5 |
| Journal | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH |
| Volume | 2018-September |
| DOIs | |
| State | Published - 2018 |
| Event | 19th Annual Conference of the International Speech Communication Association, INTERSPEECH 2018, Hyderabad, India. Duration: Sep 2 2018 → Sep 6 2018 |
Bibliographical note
Publisher Copyright: © 2018 International Speech Communication Association. All rights reserved.
Funding
This work is supported by the National Natural Science Foundation of China under Grants No. 61370034, No. 61403224, and No. 61273268.
| Funders | Funder number |
| --- | --- |
| National Natural Science Foundation of China (NSFC) | 61370034, 61403224, 61273268 |
Keywords
- Multi-task learning
- Phonetic information
- Phonetic vectors
- Speaker embedding
- Speaker verification
ASJC Scopus subject areas
- Language and Linguistics
- Human-Computer Interaction
- Signal Processing
- Software
- Modeling and Simulation