Combining Multiple Acoustic Models in GMM Spaces for Robust Speech Recognition

Author(s)

    • KANG Byung Ok (SW Content Research Laboratory, ETRI; School of Electronics Engineering, Chungbuk National University)
    • KWON Oh-Wook (School of Electronics Engineering, Chungbuk National University)

Abstract

We propose a new method for combining multiple acoustic models in Gaussian mixture model (GMM) spaces for robust speech recognition. Although large vocabulary continuous speech recognition (LVCSR) systems have recently become widespread, they often make egregious recognition errors caused by unavoidable mismatches in speaking style or environment between the training and real-world conditions. The conventional remedy is multi-style training, which trains one large acoustic model on a large speech database covering various speaking styles and environmental noises. In this work, we instead combine multiple sub-models, each trained for a different speaking style or noise environment, into a single large acoustic model by maximizing the log-likelihood over sub-model states that share the same phonetic context and position. The combined acoustic model is then used in a new target system that is robust to variation in speaking style and to diverse environmental noise. Experimental results show that the proposed method significantly outperforms conventional methods on two tasks: non-native English speech recognition for second-language learning systems, and noise-robust point-of-interest (POI) recognition for car navigation systems.
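To make the idea of combining sub-models in a GMM space concrete, the sketch below pools the Gaussian components of two single-state GMMs (e.g., one trained on clean read speech, one on noisy speech) into one larger mixture with renormalized weights. This is only a minimal illustration under simplifying assumptions (1-D observations, diagonal covariances, a hypothetical per-sub-model prior weight); the paper's actual criterion of maximizing the log-likelihood over states sharing the same phonetic context and position is not reproduced here.

```python
import numpy as np

def combine_gmms(gmms, priors):
    """Pool components of several single-state GMMs into one larger GMM.

    Each GMM is a tuple (weights, means, variances) of 1-D numpy arrays.
    `priors` scales each sub-model's contribution; equal priors are a
    hypothetical choice, not the paper's likelihood-based criterion.
    """
    w = np.concatenate([p * g[0] for g, p in zip(gmms, priors)])
    mu = np.concatenate([g[1] for g in gmms])
    var = np.concatenate([g[2] for g in gmms])
    return w / w.sum(), mu, var  # renormalize mixture weights

def gmm_loglik(x, gmm):
    """Total log-likelihood of scalar observations x under a 1-D GMM."""
    w, mu, var = gmm
    # Per-component Gaussian densities, shape (len(x), n_components).
    d = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return float(np.log(d @ w).sum())
```

A combined state model built this way can then score observations from either condition, which is the intuition behind merging style- or noise-specific sub-models into one robust acoustic model.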

Journal

  • IEICE Transactions on Information and Systems, E99.D(3), 724-730, 2016

    The Institute of Electronics, Information and Communication Engineers
