Speaker interpolation for an HMM-based speech synthesis system
This paper describes an approach to voice characteristics conversion for an HMM-based text-to-speech synthesis system using speaker interpolation. Although most text-to-speech systems that synthesize speech by concatenating speech units can achieve acceptable quality, they still cannot produce speech with varied voice qualities, such as different speaker individualities and emotions. To control speaker individuality and emotion, such systems therefore require a large database that stores speech units with various voice characteristics at synthesis time. In contrast, our system synthesizes speech with an untrained speaker's voice quality by interpolating HMM parameters among the HMM sets of several representative speakers, and can thus produce speech with various voice qualities without a large database at synthesis time. The HMM interpolation technique is derived from a probabilistic similarity measure between HMMs. The results of subjective experiments show that the voice quality of synthesized speech can be changed gradually from one speaker's to another's by changing the interpolation ratio.
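The core idea, interpolating HMM parameters among representative speakers' model sets according to an interpolation ratio, can be sketched as a weighted average of the Gaussian output-distribution means. Note that the paper derives its interpolation from a probabilistic similarity measure between HMMs, so this simple linear version, along with the function name and the toy model shapes below, is an illustrative assumption rather than the authors' exact method.

```python
import numpy as np

def interpolate_hmm_means(speaker_means, weights):
    """Linearly interpolate Gaussian mean vectors across speaker HMM sets.

    speaker_means : list of (num_states, dim) arrays, one per speaker
    weights       : interpolation ratios, one per speaker, summing to 1
    """
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "interpolation ratios must sum to 1"
    stacked = np.stack(speaker_means)  # shape: (num_speakers, num_states, dim)
    # Weighted sum over the speaker axis yields the interpolated means.
    return np.einsum("s,sij->ij", weights, stacked)

# Two hypothetical 3-state, 2-dimensional speaker models:
speaker_a = np.zeros((3, 2))
speaker_b = np.ones((3, 2))

# Equal ratios place the synthetic voice midway between the two speakers.
mid = interpolate_hmm_means([speaker_a, speaker_b], [0.5, 0.5])
```

Sliding the ratio from `[1.0, 0.0]` toward `[0.0, 1.0]` corresponds to the gradual voice-quality change the subjective experiments report.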