Speaker interpolation for HMM-based speech synthesis system



Abstract

This paper describes an approach to voice characteristics conversion for an HMM-based text-to-speech synthesis system using speaker interpolation. Although most text-to-speech synthesis systems that synthesize speech by concatenating speech units can produce speech of acceptable quality, they still cannot synthesize speech with various voice characteristics such as speaker individualities and emotions; to control speaker individualities and emotions, they therefore need a large database that records speech units with various voice characteristics for the synthesis phase. In contrast, our system synthesizes speech with an untrained speaker's voice quality by interpolating HMM parameters among the HMM sets of several representative speakers, and can therefore synthesize speech with various voice characteristics without a large database in the synthesis phase. The HMM interpolation technique is derived from a probabilistic similarity measure for HMMs. The results of subjective experiments show that the voice quality of synthesized speech can be changed gradually from one speaker's to another's by changing the interpolation ratio.
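
The abstract describes interpolating HMM parameters among several representative speakers' HMM sets according to interpolation ratios. As a rough illustration only, the sketch below forms a simple ratio-weighted combination of the mean vectors and covariance matrices of corresponding state output Gaussians; the function name and the particular weighting are assumptions for this example and are not necessarily the estimator the paper derives from its probabilistic similarity measure.

```python
import numpy as np

def interpolate_gaussians(means, covs, ratios):
    """Blend corresponding HMM state output Gaussians from K speakers.

    means  : list of K mean vectors, each of shape (D,)
    covs   : list of K covariance matrices, each of shape (D, D)
    ratios : K interpolation ratios, assumed non-negative and summing to 1

    Hypothetical sketch: a simple weighted combination of state statistics,
    not the exact formulation in the paper.
    """
    ratios = np.asarray(ratios, dtype=float)
    assert np.isclose(ratios.sum(), 1.0), "interpolation ratios should sum to 1"

    mean = sum(a * m for a, m in zip(ratios, means))
    cov = sum(a * c for a, c in zip(ratios, covs))
    return mean, cov

# Toy usage: blend two speakers' 2-dimensional state Gaussians 50/50.
mu_a, cov_a = np.array([0.0, 1.0]), np.eye(2)
mu_b, cov_b = np.array([2.0, 3.0]), 2.0 * np.eye(2)
mu, cov = interpolate_gaussians([mu_a, mu_b], [cov_a, cov_b], [0.5, 0.5])
print(mu)   # [1. 2.]
print(cov)  # [[1.5 0. ] [0.  1.5]]
```

Varying the ratios (e.g. from [1.0, 0.0] toward [0.0, 1.0]) corresponds to gradually shifting the synthesized voice quality from one speaker toward the other, which is the effect the subjective experiments evaluate.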

Journal

  • Journal of the Acoustical Society of Japan (E) 21(4), 199-206, 2000-07

    The Acoustical Society of Japan (ASJ)

References:  18

Cited by:  15

Codes

  • NII Article ID (NAID)
    110003106260
  • NII NACSIS-CAT ID (NCID)
    AA00256597
  • Text Lang
    ENG
  • Article Type
    Journal Article
  • ISSN
    03882861
  • NDL Article ID
    5446106
  • NDL Source Classification
    ZM35 (Science and Technology -- Physics)
  • NDL Call No.
    Z53-X48
  • Data Source
    CJP  CJPref  NDL  NII-ELS 