A DNN Learning Method for Small Training Data via Transfer Learning and Knowledge Distillation

Bibliographic Information

Alternate Titles
  • The DNN Learning Method for Few Training Data via Knowledge Transfer
  • テンイ ガクシュウ ト チシキ ジョウリュウ ニ ヨル ショウスウ ガクシュウ データ ノ タメ ノ DNN ガクシュウホウ

Abstract

Deep Neural Network (DNN) models have a very large number of parameters. This allows DNNs to achieve good performance, but it also causes problems. The first is that training so many parameters requires an enormous amount of training data. The second is that high-spec devices are required because training so many parameters is computationally expensive. These problems hinder the deployment of DNNs in real-world tasks. To solve them, we propose a new DNN learning method that combines transfer learning and knowledge distillation. The characteristic point of our proposed method is that we learn the DNN parameters by applying the two techniques simultaneously: we transfer the feature map of a teacher DNN to a student DNN that is smaller than the teacher.
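The abstract's idea of transferring a teacher's feature map to a smaller student can be sketched as a combined training objective: a task loss on the few labeled examples plus a feature-matching term. The formulation below is a minimal illustrative sketch, not the paper's actual method; the function name, the L2 feature-matching choice, and the `alpha` weighting are assumptions for illustration.

```python
import numpy as np

def distillation_loss(student_feat, teacher_feat, student_logits, labels, alpha=0.5):
    """Hypothetical combined loss for feature-map distillation.

    alpha weights the student's own cross-entropy task loss against an
    L2 term that pulls the student's feature map toward the teacher's.
    """
    # Numerically stable softmax cross-entropy (the student's task loss).
    shifted = student_logits - student_logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
    # Mean squared distance between teacher and student feature maps
    # (assumes both are projected to the same shape).
    feat_loss = ((student_feat - teacher_feat) ** 2).mean()
    return alpha * ce + (1 - alpha) * feat_loss
```

In practice the student would minimize this loss by gradient descent while the teacher's weights stay frozen, so the few labeled examples supervise the task loss and the teacher's feature map supervises the rest of the student's representation.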

Published In

References (7) *Note
