Automatic lip-synchronized video-self-modeling intervention for voice disorders

Ju Shen, Changpeng Ti, Sen Ching S. Cheung, Rita R. Patel

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Scopus citations

Abstract

Video self-modeling (VSM) is a behavioral intervention technique in which a learner models a target behavior by watching a video of him- or herself. In speech-language pathology, VSM has been used successfully for language treatment in children with autism and for treating the fluency disorder of stuttering. Technical challenges remain in creating VSM content that depicts previously unseen behaviors. In this paper, we propose a novel system that synthesizes new video sequences for VSM treatment of patients with voice disorders. Starting with a video recording of a voice-disorder patient, the proposed system replaces the hoarse speech with clean, healthier speech that bears resemblance to the patient's original voice. The replacement speech is synthesized either with a text-to-speech engine or by selecting from a database of clean speech samples based on a voice similarity metric. To realign the replacement speech with the original video, a novel audiovisual algorithm that combines audio segmentation with lip-state detection is proposed to identify corresponding time markers in the audio and video tracks. Lip synchronization is then accomplished with an adaptive video re-sampling scheme that minimizes motion jitter and preserves spatial sharpness. Experimental evaluations on a dataset with 31 subjects demonstrate the effectiveness of the proposed techniques.
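The alignment step described in the abstract can be pictured with a small sketch: given matched time markers (frame indices in the original video, seconds in the replacement audio), each video segment between consecutive markers is resampled so that its duration matches the corresponding audio segment. The Python sketch below uses simple piecewise nearest-frame interpolation for illustration only; the function names, the fps parameter, and the uniform index mapping are assumptions of this sketch and do not reproduce the paper's adaptive re-sampling, jitter minimization, or sharpness preservation.

# Hypothetical sketch of piecewise video re-timing between matched markers.
# This is NOT the paper's adaptive re-sampling algorithm; it only illustrates
# how video segments can be stretched or compressed to fit new audio durations.

from typing import List, Sequence

def resample_segment(frame_indices: Sequence[int], target_count: int) -> List[int]:
    """Pick target_count frames from a segment by uniform index interpolation."""
    if target_count <= 0 or not frame_indices:
        return []
    if len(frame_indices) == 1:
        return [frame_indices[0]] * target_count
    step = (len(frame_indices) - 1) / max(target_count - 1, 1)
    return [frame_indices[round(i * step)] for i in range(target_count)]

def align_video_to_audio(video_markers: Sequence[int],
                         audio_markers: Sequence[float],
                         fps: float = 30.0) -> List[int]:
    """Re-time video frames so that each video segment (between consecutive
    markers, given as frame indices) spans the duration of the matching
    replacement-audio segment (between consecutive markers, in seconds)."""
    assert len(video_markers) == len(audio_markers) >= 2
    output: List[int] = []
    for k in range(len(video_markers) - 1):
        segment = list(range(video_markers[k], video_markers[k + 1] + 1))
        duration = audio_markers[k + 1] - audio_markers[k]
        output.extend(resample_segment(segment, max(1, round(duration * fps))))
    return output

# Example: three markers; the first segment is stretched to cover a longer
# replacement-speech segment, the second is compressed.
print(align_video_to_audio([0, 30, 60], [0.0, 1.5, 2.0], fps=30.0))

In a full system, the marker lists would come from the audio segmentation and lip-state detection stages, and the selected frames would be blended or interpolated rather than duplicated outright to avoid visible motion jitter.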

Original language: English
Title of host publication: 2012 IEEE 14th International Conference on e-Health Networking, Applications and Services, Healthcom 2012
Pages: 244-249
Number of pages: 6
DOIs
State: Published - 2012
Event: 2012 IEEE 14th International Conference on e-Health Networking, Applications and Services, Healthcom 2012 - Beijing, China
Duration: Oct 10, 2012 - Oct 13, 2012

Publication series

Name: 2012 IEEE 14th International Conference on e-Health Networking, Applications and Services, Healthcom 2012

Conference

Conference: 2012 IEEE 14th International Conference on e-Health Networking, Applications and Services, Healthcom 2012
Country/Territory: China
City: Beijing
Period: 10/10/12 - 10/13/12

Keywords

  • audio-visual lip synchronization
  • video self modeling
  • voice disorders

ASJC Scopus subject areas

  • Biomedical Engineering
  • Health Informatics
  • Health Information Management
