Learning Bayesian networks using minimum free energy principle (自由エネルギー最小原理を用いたベイジアンネットワークの学習)


Authors

    • 磯崎, 隆司 (Isozaki, Takashi)

Bibliographic Information

Title

Learning Bayesian networks using minimum free energy principle

Alternative Title

自由エネルギー最小原理を用いたベイジアンネットワークの学習

Author

磯崎, 隆司

Alternative Author Name

Isozaki, Takashi (イソザキ, タカシ)

Degree-granting University

The University of Electro-Communications (電気通信大学)

Degree

Doctor of Engineering (博士 (工学))

Degree Conferral Number

甲第588号 (Kō No. 588)

Date of Degree Conferral

2010-03-24

Notes and Abstract

Doctoral dissertation

2009

Bayesian networks (BNs) are representative causal models expressed as directed acyclic graphs (DAGs), in which random variables and their dependencies are associated, respectively, with nodes and directed edges. Qualitative relations are expressed by the structure and quantitative relations by the parameters; learning BNs therefore requires two steps, parameter learning and structure learning. BN learning algorithms are anticipated as tools for mining causal relations from data.

Although the maximum likelihood (ML) principle is widely used for learning, the available data are often insufficient, because BNs model combined multivariate systems that require large samples, and ML estimation tends to overfit small data sets. The maximum entropy (ME) principle, in contrast, states that in the absence of information a probability distribution should maximize its entropy. For realistic data sizes, a mixture of these two principles should be realized. Bayesian methods, which involve prior distributions, are effective for avoiding overfitting; their hyperparameters can be interpreted as imaginary prior instances, so they can realize the ML and ME principles according to the data size. However, the learning performance of BNs is known to be highly sensitive to the hyperparameter values, and choosing optimal values is difficult.

This thesis specifically examines Helmholtz free energies and the principle of minimizing them as a metaphor for the tradeoff between the ML and ME principles, and uses them in an alternative approach to learning. The minimum free energy (MFE) principle originates from thermodynamics, where systems balance minimum internal energy against maximum entropy at a given temperature. Accordingly, the author proposes a thermodynamically motivated approach to learning BNs that is effective even for insufficient data. The “Data Temperature” assumption is central: it gives temperature a meaning when free energies are used in statistical sciences. Internal energies, entropies, and temperature are defined and applied to learning both the parameters and the structures of BNs. The approach thus treats the ML and ME principles in a unified manner, via the MFE principle, as the data size varies.

In parameter-learning experiments with real-world datasets, the proposed approach outperforms the Bayesian method with hyperparameter values recommended in recent studies, and it is insensitive to the choice of the hyperparameters it involves. In simulations and experiments with real-world datasets for structure learning, the proposed method notably improves the performance of the PC algorithm, a representative structure-learning algorithm, in terms of both the existence and the direction of edges when data are insufficient.
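The tradeoff described above mirrors minimizing the Helmholtz free energy F = U - TS: at high temperature T the entropy term dominates and the estimate follows the ME principle, while as T falls the internal-energy term dominates and the estimate approaches ML. As a hypothetical illustration only (the function names, the uniform ME target, and the decaying temperature schedule below are assumptions made for this sketch, not the thesis's actual definitions), here is a minimal Python example of a temperature-weighted blend between an ML and an ME estimate for one discrete variable:

```python
import numpy as np

def mfe_estimate(counts, temperature):
    """Interpolate between the maximum-likelihood (ML) distribution and
    the maximum-entropy (ME) uniform distribution, using a temperature-like
    mixing weight (illustrative; not the thesis's actual estimator)."""
    counts = np.asarray(counts, dtype=float)
    ml = counts / counts.sum()             # ML: empirical frequencies
    me = np.full_like(ml, 1.0 / ml.size)   # ME: uniform distribution
    return (1.0 - temperature) * ml + temperature * me

def data_temperature(n, scale=10.0):
    """Hypothetical 'data temperature' schedule: near 1 for tiny samples
    (favoring ME), decaying toward 0 as n grows (favoring ML)."""
    return scale / (scale + n)

counts = [8, 1, 1]                  # small sample where raw ML overfits
t = data_temperature(sum(counts))   # t = 0.5 for n = 10
print(mfe_estimate(counts, t))      # pulled toward uniform: ~[0.57 0.22 0.22]
```

With a schedule of this kind the estimate converges to the empirical frequencies as data accumulate, which matches the abstract's claim that the MFE principle treats the ML and ME principles in a unified manner as the data size varies.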


Identifiers

  • NII Article ID (NAID)
    500000523864
  • NII Author ID (NRID)
    8000000525710
  • Text Language Code
    eng
  • NDL Bibliographic ID
    000011004047
  • Data Source
    • Institutional Repository (機関リポジトリ)
    • NDL ONLINE