Learning Finite State Machines With Self-Clustering Recurrent Networks

  • Zheng Zeng
    Department of Electrical Engineering, 116-81, California Institute of Technology, Pasadena, CA 91125 USA
  • Rodney M. Goodman
    Department of Electrical Engineering, 116-81, California Institute of Technology, Pasadena, CA 91125 USA
  • Padhraic Smyth
    Jet Propulsion Laboratory, 238-420, California Institute of Technology, Pasadena, CA 91109 USA

Abstract

Recent work has shown that recurrent neural networks have the ability to learn finite state automata from examples. In particular, networks using second-order units have been successful at this task. In studying the performance and learning behavior of such networks we have found that the second-order network model attempts to form clusters in activation space as its internal representation of states. However, these learned states become unstable as longer and longer test input strings are presented to the network. In essence, the network “forgets” where the individual states are in activation space. In this paper we propose a new method to force such a network to learn stable states by introducing discretization into the network and using a pseudo-gradient learning rule to perform training. The essence of the learning rule is that in doing gradient descent, it makes use of the gradient of a sigmoid function as a heuristic hint in place of that of the hard-limiting function, while still using the discretized value in the feedback update path. The new structure uses isolated points in activation space instead of vague clusters as its internal representation of states. It is shown to have similar capabilities in learning finite state automata as the original network, but without the instability problem. The proposed pseudo-gradient learning rule may also be used as a basis for training other types of networks that have hard-limiting threshold activation functions.
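The sketch below is a minimal illustration (not the authors' code) of the pseudo-gradient idea for a single hard-limiting threshold unit: the forward and feedback paths use the discretized output, while the weight update substitutes the sigmoid's derivative for the hard limiter's derivative (which is zero almost everywhere) as a heuristic hint for the descent direction. The function names, learning rate, and the toy AND task are assumptions made purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hard_limit(z):
    # Discretized activation actually used in the forward and feedback paths.
    return (z > 0.0).astype(float)

def pseudo_gradient_step(w, x, target, lr=0.5):
    """One weight update for a single hard-threshold unit (illustrative only).

    The error is measured on the discretized output, but the backward pass
    borrows the sigmoid's derivative as a stand-in for the hard limiter's,
    which is the heuristic at the core of the pseudo-gradient rule.
    """
    z = np.dot(w, x)
    y = hard_limit(z)                              # discretized value
    err = y - target
    surrogate = sigmoid(z) * (1.0 - sigmoid(z))    # sigmoid gradient as hint
    grad_w = err * surrogate * x                   # pseudo-gradient w.r.t. weights
    return w - lr * grad_w

# Toy usage: learn AND of two binary inputs (third input is a bias term).
rng = np.random.default_rng(0)
w = rng.normal(size=3)
data = [([0, 0, 1], 0), ([0, 1, 1], 0), ([1, 0, 1], 0), ([1, 1, 1], 1)]
for _ in range(200):
    for x, t in data:
        w = pseudo_gradient_step(w, np.array(x, dtype=float), t)
print([hard_limit(np.dot(w, np.array(x, dtype=float))) for x, _ in data])
```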
