Simple Designing Methods of Corpus-Based Visual Speech Synthesis

Abstract

This paper describes simple design methods for corpus-based visual speech synthesis. Our approach requires only a database of synchronized real images and speech. Visual speech is synthesized by concatenating real image segments and speech segments selected from the database. So that all processes, such as feature extraction, segment selection, and segment concatenation, can be performed automatically, we design two simple types of visual speech synthesis. The first synthesizes visual speech from synchronized real image and speech segments selected using speech information alone. The second selects speech segments and image segments separately, using features extracted from the database without any manual processing. We conducted objective and subjective experiments to evaluate these design methods. The results show that the second method synthesizes visual speech more naturally than the first.
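The segment-selection step common to both designs can be sketched as a cost-minimizing search over candidate segments. The sketch below is a generic illustration of corpus-based concatenative selection, not the paper's implementation: each candidate is scored by a target cost (how well its features match the requested unit) plus a concatenation cost (how smoothly it joins the previous segment), and a Viterbi-style dynamic program picks the cheapest sequence. The function name, weights, and toy feature vectors are all assumptions for illustration.

```python
# Hypothetical sketch of corpus-based segment selection: choose one candidate
# segment per target unit so that the summed target cost (match to requested
# features) and concatenation cost (smoothness at segment joins) is minimal.
# Names, weights, and features are illustrative, not from the paper.

def select_segments(targets, candidates, w_concat=1.0):
    """Viterbi search over candidate segments.

    targets:    list of target feature vectors (one per unit to synthesize)
    candidates: per-target lists of candidate feature vectors from the corpus
    Returns the chosen candidate index for each target position.
    """
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # best[i][j] = minimal total cost ending with candidate j at position i
    best = [[dist(c, targets[0]) for c in candidates[0]]]
    back = [[None] * len(candidates[0])]
    for i in range(1, len(targets)):
        row, ptr = [], []
        for cand in candidates[i]:
            t_cost = dist(cand, targets[i])
            # Pick the predecessor minimizing accumulated cost + join cost.
            prev = [best[i - 1][k] + w_concat * dist(candidates[i - 1][k], cand)
                    for k in range(len(candidates[i - 1]))]
            k_best = min(range(len(prev)), key=prev.__getitem__)
            row.append(prev[k_best] + t_cost)
            ptr.append(k_best)
        best.append(row)
        back.append(ptr)

    # Backtrack from the cheapest final candidate.
    j = min(range(len(best[-1])), key=best[-1].__getitem__)
    path = [j]
    for i in range(len(targets) - 1, 0, -1):
        j = back[i][j]
        path.append(j)
    return path[::-1]
```

In the paper's first design, one such search over synchronized image-and-speech units would suffice; the second design would run selection for speech segments and image segments separately, each with its own automatically extracted features.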
