Function Approximation Using LVQ

  • KYU Shon MIN
    Department of Electrical and Electronic Systems Engineering, Graduate School of Information Science and Electrical Engineering, Kyushu University
  • MURATA Junichi
    Department of Electrical and Electronic Systems Engineering, Graduate School of Information Science and Electrical Engineering, Kyushu University
  • HIRASAWA Kotaro
    Department of Electrical and Electronic Systems Engineering, Graduate School of Information Science and Electrical Engineering, Kyushu University


Abstract

Neural networks with local activation functions, for example RBFNs (Radial Basis Function Networks), have the merit of excellent generalization ability. When this type of network is used for function approximation, it is very important to divide the input space properly into local regions, to each of which a local activation function is assigned. In RBFNs, this amounts to determining the locations and number of the RBFs, which is generally done based on the distribution of the input data. In function approximation, however, the output information (the values of the function to be approximated) must also be considered when determining the local regions. A new method is proposed that uses an LVQ network to approximate functions based on the output information. It divides the input space into regions, with a prototype vector at the center of each region. The ordinary LVQ, however, outputs only discrete values and therefore cannot deal with continuous functions. In this paper, a technique is proposed to solve this problem. Examples are provided to show the effectiveness of the proposed method.
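To make the basic idea concrete, here is a minimal sketch (not the authors' method) of the prototype-based setup the abstract describes: prototype vectors partition the input space into nearest-neighbor regions, each prototype carries an output value, and prediction returns the value of the winning prototype. The target function, prototype count, and learning rates below are illustrative assumptions; the result is piecewise constant, which is exactly the discrete-output limitation of ordinary LVQ that the paper addresses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D target function (for illustration only).
def f(x):
    return np.sin(2.0 * np.pi * x)

n_proto = 20
protos = rng.uniform(0.0, 1.0, n_proto)   # prototype positions in input space
values = np.zeros(n_proto)                # output value attached to each region

alpha, beta = 0.05, 0.2                   # assumed learning rates
for _ in range(5000):
    x = rng.uniform(0.0, 1.0)
    k = int(np.argmin(np.abs(protos - x)))    # winner = nearest prototype
    protos[k] += alpha * (x - protos[k])      # competitive update of the center
    values[k] += beta * (f(x) - values[k])    # running estimate of the local output

def predict(x):
    """Piecewise-constant output: the value of the nearest prototype."""
    return values[int(np.argmin(np.abs(protos - x)))]

xs = np.linspace(0.0, 1.0, 101)
approx = np.array([predict(x) for x in xs])
mae = float(np.mean(np.abs(approx - f(xs))))
```

Because each region outputs a single constant, the approximation error is bounded by how much the function varies within a region; using the output error to guide where prototypes are placed, as the abstract proposes, concentrates regions where the function changes rapidly.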
