- Iribe Yurie (Toyohashi University of Technology)
- Mori Takuro (Toyohashi University of Technology)
- Katsurada Kouichi (Toyohashi University of Technology)
- Nitta Tsuneo (Toyohashi University of Technology / Waseda University)
Abstract
We describe a pronunciation-training system that dynamically generates CG animations to visualize pronunciation from speech on the basis of articulatory features. The system displays the results of phoneme recognition together with CG animations of the articulatory movements of both a learner and a teacher, estimated from their speech. Learners can thus notice their incorrect articulatory movements and find the correct method of pronunciation by comparing their own movements with the teacher's in the animations. In an experiment to evaluate the effectiveness of the animated pronunciations, the proposed system achieved 93% correctness for articulatory features, confirming that the CG animations adequately visualize both the teacher's and the learners' articulatory movements. Furthermore, the improvement in pronunciation scores with the proposed system was double that obtained with the existing system. These results verify that the new system is an effective training system.
Journal
- The Journal of Information and Systems in Education, 11(1), 1-13, 2012
- Japanese Society for Information and Systems in Education
Details
- CRID: 1390282680228907776
- NII Article ID: 130003377565
- ISSN: 2186-3679, 1348-236X
- Text Lang: en
- Data Source: JaLC, Crossref, CiNii Articles, KAKEN