Speaker embedding extraction with phonetic information

Yi Liu, Liang He, Jia Liu, Michael T. Johnson

Research output: Contribution to journal › Conference article › peer-review

25 Scopus citations


Speaker embeddings achieve promising results on many speaker verification tasks. Phonetic information, although an important component of speech, is rarely considered in the extraction of speaker embeddings. In this paper, we introduce phonetic information into speaker embedding extraction based on the x-vector architecture. Two methods are proposed, using phonetic vectors and multi-task learning. On the Fisher dataset, our best system outperforms the original x-vector approach by 20% in EER, and by 15% in both minDCF08 and minDCF10. Experiments conducted on NIST SRE10 further demonstrate the effectiveness of the proposed methods.
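The multi-task idea described above can be sketched as a shared frame-level network with two heads: one classifying the speaker from a pooled utterance-level embedding, the other classifying phones per frame. This is a minimal NumPy illustration only; the layer count, dimensions, and pooling below are assumptions for the sketch, not the paper's actual x-vector configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical dimensions chosen for illustration (not from the paper).
FEAT_DIM, EMB_DIM, N_SPK, N_PHONE = 24, 64, 100, 40

# One shared frame-level layer feeding two task heads.
W_shared = rng.standard_normal((FEAT_DIM, EMB_DIM)) * 0.1
W_spk = rng.standard_normal((EMB_DIM, N_SPK)) * 0.1
W_phone = rng.standard_normal((EMB_DIM, N_PHONE)) * 0.1

def forward(frames):
    """frames: (T, FEAT_DIM) acoustic features for one utterance."""
    hidden = relu(frames @ W_shared)      # shared representation, (T, EMB_DIM)
    embedding = hidden.mean(axis=0)       # utterance-level pooling -> speaker embedding
    spk_logits = embedding @ W_spk        # speaker-classification head (utterance level)
    phone_logits = hidden @ W_phone       # phone-classification head (per frame)
    return embedding, spk_logits, phone_logits
```

Training both heads jointly encourages the shared layers to encode phonetic structure alongside speaker identity; at test time only the pooled embedding would be kept for verification scoring.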

Original language: English
Pages (from-to): 2247-2251
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
State: Published - 2018
Event: 19th Annual Conference of the International Speech Communication Association, INTERSPEECH 2018 - Hyderabad, India
Duration: Sep 2, 2018 - Sep 6, 2018

Bibliographical note

Funding Information:
The work is supported by National Natural Science Foundation of China under Grant No. 61370034, No. 61403224 and No. 61273268.

Publisher Copyright:
© 2018 International Speech Communication Association. All rights reserved.


Keywords

  • Multi-task learning
  • Phonetic information
  • Phonetic vectors
  • Speaker embedding
  • Speaker verification

ASJC Scopus subject areas

  • Language and Linguistics
  • Human-Computer Interaction
  • Signal Processing
  • Software
  • Modeling and Simulation

