Improved Meta-learning by Parameter Adjustment via Latent Variables and Probabilistic Inference

Bibliographic Information

Other Title
  • 潜在変数を導入したパラメータ調整に基づくメタ学習法の改良法 (An Improved Meta-Learning Method Based on Parameter Adjustment with Latent Variables)

Abstract

Standard deep neural networks require large amounts of training data and fail to achieve good performance in the small-data regime. To overcome this limitation, meta-learning approaches have recently been explored. The goal of meta-learning is to enable models to automatically acquire across-task knowledge, usually referred to as meta-knowledge, so that task-specific knowledge for new tasks can be obtained from only a few examples. Among these methods, Model-Agnostic Meta-Learning (MAML) is one of the most successful, showing strong performance in many settings. However, MAML does not account for how effective the meta-knowledge is for each individual task, because its inner-loop learning rate is kept constant across tasks. In this paper, we propose a model that adjusts the learning rate for each task by introducing latent variables and applying probabilistic inference. We demonstrate that this approach improves the performance of MAML on a few-shot image classification benchmark dataset, and confirm that the learning rate is adaptively adjusted by visualizing the latent variables.
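
The abstract contrasts MAML's fixed inner-loop learning rate with the proposed per-task adjustment. Below is a minimal sketch (PyTorch, not the authors' code) of the idea: a hypothetical latent variable z, inferred from a task's support set, rescales the inner-loop step size, and with adaptation disabled the update reduces to standard MAML. The names infer_latent, inner_update, base_lr, and the sigmoid rescaling are illustrative assumptions rather than details taken from the paper.

```python
# Sketch of a MAML-style inner-loop step with a per-task learning rate
# modulated by a latent variable (illustrative only, not the paper's code).
import torch
import torch.nn as nn

model = nn.Linear(4, 2)   # toy task-specific model
base_lr = 0.4             # MAML's fixed inner-loop step size

def infer_latent(support_x, support_y):
    """Hypothetical amortized inference: summarize the support set into a
    scalar latent z. The paper uses probabilistic inference; this is a
    stand-in for illustration."""
    return torch.tanh(support_x.mean() + support_y.float().mean())

def inner_update(support_x, support_y, adapt_lr=True):
    """One inner-loop adaptation step. With adapt_lr=False this reduces to
    standard MAML, i.e. a constant learning rate across tasks."""
    loss = nn.functional.cross_entropy(model(support_x), support_y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    z = infer_latent(support_x, support_y) if adapt_lr else torch.tensor(0.0)
    task_lr = base_lr * 2 * torch.sigmoid(z)   # per-task step size (assumption)
    # Return adapted "fast" weights; the meta-parameters themselves are untouched.
    return [p - task_lr * g for p, g in zip(model.parameters(), grads)]

# Toy 2-way, 5-shot support set.
sx, sy = torch.randn(5, 4), torch.randint(0, 2, (5,))
fast_weights = inner_update(sx, sy)
print([w.shape for w in fast_weights])
```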

Journal

Details

  • CRID
    1390566775143095296
  • NII Article ID
    130007857364
  • DOI
    10.11517/pjsai.jsai2020.0_4i3gs202
  • Text Lang
    ja
  • Data Source
    • JaLC
    • CiNii Articles
  • Abstract License Flag
    Disallowed
