A Classification Method of Spoken Words in Continuous Speech for Many Speakers

Abstract

The speech wave is converted into a time series of short-time spectra by a 20-channel filter bank and segmented into four groups: silence, unvoiced non-fricative, unvoiced non-plosive, and voiced. The unvoiced groups are classified into phoneme units by heuristic algorithms, and the voiced group by Bayes' rule. To normalize the variation of reference patterns among speakers, vowel patterns are learned by a non-supervised learning method. Optimum matching between the recognized phoneme string and the phoneme string of each word in the word dictionary is performed using a phoneme similarity matrix and dynamic programming. In tests on 1,500 samples of isolated digits spoken by 20 male speakers, about 97% were recognized correctly, rising to 98% when the system was adapted to each speaker.
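The word-classification step can be illustrated with a short sketch. The code below is not the authors' implementation; it assumes a hypothetical phoneme similarity table, an illustrative gap penalty, and a toy word dictionary, and shows only the general idea of scoring a recognized phoneme string against each dictionary word with a standard dynamic-programming alignment.

```python
# Hypothetical sketch of DP matching between a recognized phoneme string
# and dictionary entries, scored with a phoneme similarity matrix.
# Names (SIM, GAP, DICTIONARY) are illustrative, not taken from the paper.

GAP = -1.0  # assumed penalty for an inserted or deleted phoneme

# Assumed similarity matrix: high values for identical or easily confused
# phonemes (the paper derives its matrix from phoneme recognition behavior).
SIM = {
    ("a", "a"): 2.0, ("a", "o"): 0.5,
    ("o", "o"): 2.0, ("o", "a"): 0.5,
    ("n", "n"): 2.0, ("s", "s"): 2.0,
    ("i", "i"): 2.0, ("t", "t"): 2.0,
}

def sim(p, q):
    """Similarity of two phonemes; unknown pairs get a small default."""
    return SIM.get((p, q), 2.0 if p == q else -0.5)

def dp_score(recognized, reference):
    """Optimal alignment score of two phoneme strings via dynamic programming."""
    n, m = len(recognized), len(reference)
    # score[i][j] = best score aligning recognized[:i] with reference[:j]
    score = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * GAP
    for j in range(1, m + 1):
        score[0][j] = j * GAP
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score[i][j] = max(
                score[i - 1][j - 1] + sim(recognized[i - 1], reference[j - 1]),
                score[i - 1][j] + GAP,   # spurious phoneme in the recognition
                score[i][j - 1] + GAP,   # missing phoneme in the recognition
            )
    return score[n][m]

def classify(recognized, dictionary):
    """Return the dictionary word whose phoneme string matches best."""
    return max(dictionary, key=lambda word: dp_score(recognized, dictionary[word]))

# Toy dictionary of digit words as phoneme lists (illustrative only).
DICTIONARY = {
    "ichi": ["i", "t", "i"],
    "san":  ["s", "a", "n"],
}

print(classify(["s", "a", "a", "n"], DICTIONARY))  # -> "san"
```

The similarity matrix lets a word still win the match when individual phonemes were misrecognized as acoustically similar ones, which is the point of matching whole strings rather than trusting each phoneme decision in isolation.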

Journal

Information processing in Japan, Vol. 17, pp. 6-13, 1977

Information Processing Society of Japan (IPSJ)

Codes

  • NII Article ID (NAID): 110002672341
  • NII NACSIS-CAT ID (NCID): AA00674393
  • Text Lang: ENG
  • Databases: NII-ELS