Abstract
Studies of speech sensorimotor learning often manipulate auditory feedback by modifying isolated acoustic parameters, such as formant frequency or fundamental frequency, using near real-time resynthesis of a participant's speech. An alternative approach is to engage a participant in a total remapping of the sensorimotor working space using a virtual vocal tract. To support this approach for studying speech sensorimotor learning, we have developed a system that controls an articulatory synthesizer using electromagnetic articulography data. Articulator movement data from the NDI Wave System are streamed to a Maeda articulatory synthesizer, and the resulting synthesized speech provides auditory feedback to the participant. This approach allows the experimenter to generate novel articulatory-acoustic mappings. Moreover, the acoustic output of the synthesizer can be perturbed using acoustic resynthesis methods. Because no robust speech-acoustic signal is required from the participant, this system allows the study of sensorimotor learning in any individual, even those with severe speech disorders. In the current work we present preliminary results demonstrating that typically functioning participants can use a virtual vocal tract to produce diphthongs within a novel articulatory-acoustic workspace. Once sufficient baseline performance is established, perturbations to auditory feedback (formant shifting) can elicit compensatory and adaptive articulatory responses.
Original language | English |
---|---|
Article number | 060099 |
Journal | Proceedings of Meetings on Acoustics |
Volume | 19 |
DOIs | |
State | Published - 2013 |
Event | 21st International Congress on Acoustics, ICA 2013 - 165th Meeting of the Acoustical Society of America, Montreal, QC, Canada. Duration: Jun 2 2013 → Jun 7 2013 |
ASJC Scopus subject areas
- Acoustics and Ultrasonics