Learning in neural networks based on a generalized fluctuation theorem

Information maximization has been investigated as a possible mechanism of learning governing the self-organization that occurs within the neural systems of animals. In the general context of models of neural systems bidirectionally interacting with their environments, however, the role of information maximization remains to be elucidated. For bidirectionally interacting physical systems, universal laws describing the fluctuations they exhibit and the information they possess have recently been discovered. These laws are termed fluctuation theorems. In the present study, we formulate a theory of learning in neural networks bidirectionally interacting with environments based on the principle of information maximization. Our formulation begins with the introduction of a generalized fluctuation theorem, interpreted in a manner suited to the present application rather than in its original thermodynamic sense. We demonstrate analytically and numerically that the learning mechanism presented in our theory allows neural networks to efficiently explore their environments and optimally encode information about them.
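To make the information-maximization principle mentioned above concrete, the following is a minimal sketch of a classical infomax learning rule (Bell–Sejnowski-style, with a logistic nonlinearity and a natural-gradient update). It is an illustrative stand-in, not the paper's fluctuation-theorem-based rule; the function names, learning rate, and toy data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def infomax_train(x, steps=2000, lr=0.01):
    """Adapt a square weight matrix W to maximize the entropy of y = sigmoid(W x).

    Illustrative infomax sketch: the natural-gradient update
    dW ∝ (I + (1 - 2y) u^T) W ascends the output-entropy objective.
    """
    n = x.shape[0]
    w = np.eye(n)
    for _ in range(steps):
        u = w @ x                         # linear responses
        y = 1.0 / (1.0 + np.exp(-u))      # logistic outputs
        grad = np.eye(n) + (1.0 - 2.0 * y) @ u.T / x.shape[1]
        w += lr * grad @ w                # natural-gradient ascent step
    return w

def entropy_objective(w, x):
    """Output entropy of y = sigmoid(W x) up to an additive constant:
    log|det W| plus the mean log-derivative of the nonlinearity."""
    u = w @ x
    y = 1.0 / (1.0 + np.exp(-u))
    return (np.linalg.slogdet(w)[1]
            + np.mean(np.sum(np.log(y * (1.0 - y) + 1e-12), axis=0)))

# Toy data: a linear mixture of two independent heavy-tailed sources.
s = rng.laplace(size=(2, 5000))
a = np.array([[1.0, 0.6], [0.4, 1.0]])
x = a @ s

w = infomax_train(x)
```

On this toy mixture, maximizing the output entropy drives the network toward an encoding that removes statistical dependencies in its inputs, which is the sense in which infomax yields an "optimal" code in simple feedforward settings.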


  • Physical Review E - Statistical, Nonlinear, and Soft Matter Physics 92(5), 2015-11-09

    American Physical Society


  • Article Type
    journal article