2P1-G03 Segmenting Sound Signals and Articulatory Movement using Recurrent Neural Network toward Phoneme Acquisition
Abstract
This paper proposes a computational model of phoneme acquisition by infants. Infants perceive speech not as discrete phoneme sequences but as continuous acoustic signals, so one of the critical problems in phoneme acquisition is how to segment this continuous speech. The key idea for solving this problem is that articulatory mechanisms such as the vocal tract help human beings perceive sound units corresponding to phonemes. To segment acoustic signals together with articulatory movement, our system was implemented using a physical vocal tract model, the Maeda model, and a segmentation method based on a Recurrent Neural Network with Parametric Bias (RNNPB). This method determines segmentation boundaries in a sequence from the prediction error of the RNNPB model, and the PB values it obtains can be encoded as a kind of phoneme. Experimental results demonstrated that our system could self-organize the same phonemes across different continuous sounds, which suggests that our model reflects the process of phoneme acquisition.
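The core segmentation idea in the abstract — placing boundaries where the model's prediction error spikes — can be illustrated with a minimal sketch. This is not the authors' RNNPB implementation: the predictor below is a trivial moving-average stand-in, and the function name, window size, and threshold are hypothetical choices for illustration only.

```python
import numpy as np

def segment_by_prediction_error(signal, window=5, threshold=2.0):
    """Mark segmentation boundaries where a simple predictor's error
    spikes -- a stand-in for the RNNPB prediction error in the paper.
    All parameters here are illustrative, not from the source."""
    errors = []
    for t in range(window, len(signal)):
        # Predict the next sample as the mean of the previous window;
        # the paper instead uses a trained recurrent network.
        pred = np.mean(signal[t - window:t])
        errors.append(abs(signal[t] - pred))
    errors = np.array(errors)
    # Boundaries: time steps whose error exceeds a multiple of the mean
    return [t + window for t, e in enumerate(errors)
            if e > threshold * errors.mean()]

# Example: two constant "phoneme-like" segments joined at t = 50;
# the detector flags a cluster of boundaries around the transition.
signal = np.concatenate([np.zeros(50), np.ones(50)])
print(segment_by_prediction_error(signal))
```

A trained RNNPB would additionally yield the PB values for each segment, which the paper uses as the phoneme-like encoding; this sketch only covers the boundary-detection step.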
Published in
- ロボティクス・メカトロニクス講演会講演概要集 (Proceedings of the JSME Conference on Robotics and Mechatronics), 2008 (0), _2P1-G03_1-_2P1-G03_4, 2008
- The Japan Society of Mechanical Engineers
Details
- CRID: 1390001205933987456
- NII Article ID: 110008696556
- ISSN: 2424-3124
- Text language code: en
- Data sources: JaLC, Crossref, CiNii Articles
- Abstract license flag: not available for reuse