Improved Meta-learning by Parameter Adjustment via Latent Variables and Probabilistic Inference
- SHIMIZU Eiki (Waseda University)
- AOKI Shogo (Waseda University)
- MIKAWA Kenta (Shonan Institute of Technology)
- GOTO Masayuki (Waseda University)
Bibliographic Information
- Other Title
- 潜在変数を導入したパラメータ調整に基づくメタ学習法の改良法 (An Improved Meta-Learning Method Based on Parameter Adjustment with Latent Variables)
Abstract
Standard deep neural networks require large amounts of training data and fail to achieve good performance in the small-data regime. To overcome this limitation, meta-learning approaches have recently been explored. The goal of meta-learning methods is to empower models to automatically learn across-task knowledge, usually referred to as meta-knowledge, so that task-specific knowledge for new tasks can be obtained from only a few examples. Among these approaches, Model-Agnostic Meta-Learning (MAML) is one of the most effective, showing high performance in many settings. However, MAML does not account for the varying effectiveness of meta-knowledge across tasks, since its learning rate is held constant for all tasks. In this paper, we propose a model that adjusts the learning rate for each task by introducing latent variables and applying probabilistic inference. We demonstrate that this approach improves the performance of MAML on a few-shot image classification benchmark dataset, and confirm that the learning rate is adaptively adjusted by visualizing the latent variables.
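The abstract describes rescaling MAML's inner-loop (adaptation) learning rate per task via a latent variable. The paper's actual inference procedure is not reproduced here; the following is a minimal sketch of the core idea only, assuming a toy linear-regression task and a hypothetical scalar latent `z_task` squashed through a sigmoid to modulate a shared base learning rate.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def inner_update(theta, x, y, base_lr, z_task):
    """One MAML-style inner-loop step on a linear-regression task.

    `z_task` is a hypothetical per-task latent variable: it rescales the
    shared base learning rate, so tasks where the meta-initialization
    transfers poorly can take larger (or smaller) adaptation steps.
    """
    pred = x @ theta
    grad = 2 * x.T @ (pred - y) / len(y)   # gradient of the MSE loss
    task_lr = base_lr * sigmoid(z_task)    # latent-adjusted learning rate
    return theta - task_lr * grad

# Toy demonstration: the same task and meta-initialization, adapted with
# two different latent values.
rng = np.random.default_rng(0)
theta0 = np.zeros(3)                        # shared meta-initialization
x = rng.normal(size=(20, 3))
y = x @ np.array([1.0, -2.0, 0.5])

adapted_fast = inner_update(theta0, x, y, base_lr=0.1, z_task=2.0)
adapted_slow = inner_update(theta0, x, y, base_lr=0.1, z_task=-2.0)
```

In a full implementation, the latent variables would be inferred probabilistically per task and the meta-initialization trained through the adapted parameters, as in standard MAML; this sketch only shows how a latent variable can make the adaptation step size task-dependent.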
Journal
- Proceedings of the Annual Conference of JSAI, JSAI2020 (0), 4I3GS202-4I3GS202, 2020
- The Japanese Society for Artificial Intelligence
Details
- CRID: 1390566775143095296
- NII Article ID: 130007857364
- Text Lang: ja
- Data Source: JaLC, CiNii Articles
- Abstract License Flag: Disallowed