Modeling of autonomous problem solving process by dynamic construction of task models in multiple tasks environment


Authors

    • OHIGASHI Yu
    • Graduate School of Information Science, Hokkaido University
    • OMORI Takashi
    • Tamagawa University Research Institute, Tamagawa University

Abstract

Traditional reinforcement learning (RL) assumes a single, possibly complex, task to be solved. When an RL agent faces a task similar to one it has already learned, it must relearn the task from the beginning because it does not reuse past learning results. This raises the problem of quick action learning, which is the foundation of decision making in the real world. In this paper, we consider agents that solve a set of mutually similar tasks in a multiple-tasks environment, where various problems are encountered one after another, and propose an action-learning technique that solves similar tasks quickly by reusing previously learned knowledge. In our method, a model-based RL uses a task model constructed by combining primitive local predictors that predict task and environmental dynamics. To evaluate the proposed method, we performed a computer simulation using a simple ping-pong game with variations.
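The core idea in the abstract, a task model assembled from primitive local predictors whose fit to the current dynamics decides which is reused, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the linear predictors, the error-decay rule, and all names here are hypothetical assumptions.

```python
class LocalPredictor:
    """A primitive local predictor (hypothetical linear dynamics s' = a*s + b)."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def predict(self, s):
        return self.a * s + self.b


class TaskModel:
    """Combines local predictors; when a new but similar task is encountered,
    the predictor with the lowest recent prediction error is reused instead of
    learning the dynamics from scratch (illustrative sketch only)."""
    def __init__(self, predictors):
        self.predictors = predictors
        self.errors = [0.0] * len(predictors)

    def observe(self, s, s_next, decay=0.9):
        # Track an exponentially decayed prediction error per predictor.
        for i, p in enumerate(self.predictors):
            e = abs(p.predict(s) - s_next)
            self.errors[i] = decay * self.errors[i] + (1 - decay) * e

    def predict(self, s):
        # Predict with the predictor best matching the current task dynamics.
        best = min(range(len(self.predictors)), key=lambda i: self.errors[i])
        return self.predictors[best].predict(s)


# Example: two previously learned local dynamics; the true one is s' = 2s.
model = TaskModel([LocalPredictor(2.0, 0.0), LocalPredictor(1.0, 3.0)])
for s in [1.0, 2.0, 3.0]:
    model.observe(s, 2.0 * s)
print(model.predict(4.0))  # matching predictor selected → 8.0
```

In a model-based RL loop, such a task model would supply one-step predictions for planning, so switching tasks reduces to re-weighting existing predictors rather than relearning.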

Published in

  • Neural Networks: The Official Journal of the International Neural Network Society, 19(8), 1169-1180, 2006-10-01

References: 15

