Model Shrinkage for Discriminative Language Models


Authors

Abstract

This paper describes a technique for addressing the model shrinkage problem in automatic speech recognition (ASR), allowing application developers and users to control model size with little degradation in accuracy. Models for ASR systems have recently tended to be large, and this can be a bottleneck for developers and users without specialized ASR knowledge who wish to introduce ASR functionality. In particular, discriminative language models (DLMs), which have gained increasing attention as an approach for improving recognition accuracy, are usually designed in a high-dimensional parameter space. Our proposed method applies to linear models, including DLMs, in which the score of an input sample is given by the inner product of its feature vector and the model parameters. The method shrinks such models with a simple computation based on easily collected statistics, namely the sums of squared feature values over a data set. Our experimental results show that the method can shrink a DLM with little loss of accuracy and performs properly whether or not the data used to obtain the statistics are the same as the data used to train the model.
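The scoring and shrinkage setup the abstract describes can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's actual criterion: it assumes the statistic named in the abstract (the per-dimension sum of squared feature values over a data set) is used to rank parameters, here by scaling each squared weight by that statistic and keeping the top-k. The function names (feature_square_sums, shrink_linear_model) and the exact ranking rule are hypothetical.

    import numpy as np

    def feature_square_sums(X):
        """Per-dimension sum of squared feature values over the data set X
        (shape: n_samples x n_features) -- the statistic the abstract names."""
        return np.sum(X ** 2, axis=0)

    def shrink_linear_model(w, sq_sums, target_size):
        """Zero out all but the `target_size` highest-scoring parameters.

        Importance of parameter i is approximated here as w[i]^2 * sq_sums[i],
        i.e. the squared weight scaled by how strongly feature i appears in
        the data (an illustrative assumption, not the paper's criterion).
        """
        importance = (w ** 2) * sq_sums
        keep = np.argsort(importance)[-target_size:]  # indices of top parameters
        w_shrunk = np.zeros_like(w)
        w_shrunk[keep] = w[keep]
        return w_shrunk

    def score(w, x):
        """Linear-model score: inner product of features and parameters."""
        return np.dot(w, x)

    # Usage: shrink a 10,000-parameter model to 1,000 parameters. The data X
    # is used only to collect the statistics, so it need not be the training set.
    rng = np.random.default_rng(0)
    X = rng.random((500, 10_000))    # data set for collecting statistics
    w = rng.normal(size=10_000)      # trained linear-model parameters
    w_small = shrink_linear_model(w, feature_square_sums(X), target_size=1_000)
    print(np.count_nonzero(w_small))  # -> 1000

Because the statistics are simple sums over the data, they can be computed in one pass, which matches the abstract's claim that the shrinkage requires only an easy computation.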

Journal

  • IEICE Transactions on Information and Systems, 95(5), 1465-1474, 2012-05-01

    The Institute of Electronics, Information and Communication Engineers


Identifiers

  • NII Article ID (NAID)
    10030943201
  • NII Bibliographic ID (NCID)
    AA10826272
  • Text Language Code
    ENG
  • Material Type
    ART
  • ISSN
    0916-8532
  • Data Source
    CJP Bibliography, J-STAGE