Generation of CG Animations Based on Articulatory Features for Pronunciation Training

Abstract

We describe a pronunciation training system that dynamically generates CG animations from speech to visualize pronunciation on the basis of articulatory features. The system displays the results of phoneme recognition together with CG animations of the articulatory movements of both the learner and a teacher, estimated from their speech. Learners can thus notice their incorrect articulatory movements and discover the correct way to pronounce by comparing their own movements with the teacher's in the animations. In an experiment evaluating the effectiveness of the animated pronunciations, the proposed system achieved 93% accuracy for articulatory features, showing that the CG animations adequately visualized the articulatory movements of both the teacher and the learners. Furthermore, the improvement in pronunciation score with the proposed system was double that with the existing system. These results verify that the new system is an effective training system.
