A Classification Method of Spoken Words in Continuous Speech for Many Speakers
The speech wave is converted into a time series of short-time spectra by a 20-channel filter bank and segmented into four groups: silence, unvoiced-non-fricative, unvoiced-non-plosive, and voiced. The unvoiced groups are classified into phoneme units by heuristic algorithms, and the voiced group by Bayes rule. To normalize the variation of reference patterns among speakers, vowel patterns are learned by an unsupervised learning method. The optimum matching between the recognized phoneme string and the phoneme string of each word in the word dictionary is performed using a phoneme similarity matrix and dynamic programming. In tests on 1,500 samples of isolated digits spoken by 20 male speakers, about 97% were correctly recognized; when the system was adapted to each speaker, 98% were correctly recognized.
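The word-matching step described above can be sketched as a standard dynamic-programming alignment between phoneme strings, where the substitution cost is drawn from a phoneme similarity matrix. This is an illustrative reconstruction, not the paper's code: the similarity values, gap penalty, and dictionary entries below are hypothetical.

```python
# Sketch of DP matching of a recognized phoneme string against dictionary
# words using a phoneme similarity matrix. All values are hypothetical.

SIM = {
    ("a", "a"): 1.0, ("i", "i"): 1.0, ("n", "n"): 1.0, ("s", "s"): 1.0,
    ("a", "i"): 0.3, ("i", "a"): 0.3, ("n", "m"): 0.6, ("m", "n"): 0.6,
    ("m", "m"): 1.0,
}
GAP = -0.5  # penalty for inserting or deleting a phoneme


def sim(p, q):
    """Similarity between two phonemes; unlisted pairs score 0."""
    return SIM.get((p, q), 0.0)


def dp_score(recognized, word):
    """Alignment score between two phoneme strings (Needleman-Wunsch style)."""
    n, m = len(recognized), len(word)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + GAP  # delete all leading phonemes
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + GAP  # insert all leading phonemes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = max(
                D[i - 1][j - 1] + sim(recognized[i - 1], word[j - 1]),  # substitute
                D[i - 1][j] + GAP,  # deletion
                D[i][j - 1] + GAP,  # insertion
            )
    return D[n][m]


def best_word(recognized, dictionary):
    """Return the dictionary word whose phoneme string aligns best."""
    return max(dictionary, key=lambda w: dp_score(recognized, dictionary[w]))


# Hypothetical digit entries: a misrecognized final "m" still matches "san"
# because the similarity matrix rates "m" close to "n".
dictionary = {"san": ["s", "a", "n"], "ni": ["n", "i"]}
print(best_word(["s", "a", "m"], dictionary))  # → san
```

The similarity matrix makes the matcher tolerant of phoneme-level recognition errors: a substitution between acoustically close phonemes costs little, while unrelated substitutions and insertions/deletions are penalized.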
Information Processing in Japan, 17, 6-13, 1977
Information Processing Society of Japan (IPSJ)