Abstract
This paper describes simple design methods for corpus-based visual speech synthesis. Our approach requires only a database of synchronized real images and speech. Visual speech is synthesized by concatenating real image segments and speech segments selected from the database. To perform all processes automatically, e.g. feature extraction, segment selection, and segment concatenation, we design two types of visual speech synthesis. The first synthesizes visual speech from synchronized real image and speech segments selected using speech information only. The second selects speech segments and image segments separately, using features extracted from the database without any manual processing. We conducted objective and subjective experiments to evaluate these design methods. The results show that the latter method synthesizes visual speech more naturally than the former.
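The abstract's core operation, selecting database segments to concatenate, is commonly implemented as a Viterbi search over a target cost (how well a segment matches the desired features) plus a join cost (how smoothly adjacent segments concatenate). The sketch below illustrates that general technique; the Euclidean costs, function names, and feature representation are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of segment selection for concatenative synthesis.
# Each database segment is represented by a feature vector; the cost
# functions below are illustrative assumptions, not the paper's method.
import math

def target_cost(seg_feat, target_feat):
    # How well a candidate segment matches the desired target features.
    return math.dist(seg_feat, target_feat)

def join_cost(prev_feat, seg_feat):
    # Penalize feature discontinuity at the concatenation point.
    return math.dist(prev_feat, seg_feat)

def select_segments(database, targets):
    """Viterbi search: pick one database segment per target position,
    minimizing accumulated target cost plus join cost."""
    n, m = len(targets), len(database)
    cost = [[0.0] * m for _ in range(n)]   # best cost ending at segment j
    back = [[0] * m for _ in range(n)]     # backpointers for the trace-back
    for j, seg in enumerate(database):
        cost[0][j] = target_cost(seg, targets[0])
    for i in range(1, n):
        for j, seg in enumerate(database):
            best_k = min(
                range(m),
                key=lambda k: cost[i - 1][k] + join_cost(database[k], seg),
            )
            cost[i][j] = (cost[i - 1][best_k]
                          + join_cost(database[best_k], seg)
                          + target_cost(seg, targets[i]))
            back[i][j] = best_k
    # Trace back the cheapest segment sequence.
    j = min(range(m), key=lambda k: cost[n - 1][k])
    path = [j]
    for i in range(n - 1, 0, -1):
        j = back[i][j]
        path.append(j)
    return path[::-1]
```

In this framing, the paper's first method would run one such search driven by speech features alone (images follow the selected speech segments), while the second would also score image features when selecting image segments.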
Details
- CRID: 1050295834376481920
- NII Article ID: 120006659203
- ISSN: 10184074
- HANDLE: 10061/7964
- Text Lang: en
- Article Type: conference paper
- Data Source: IRDB, CiNii Articles