Twofold Structure of Duality in Bayesian Model Averaging

Abstract

Two Bayesian prediction problems in the context of model averaging are investigated by adopting the dual Kullback-Leibler divergence losses, namely the e-divergence and the m-divergence losses. We show that the optimal predictors under the two losses satisfy interesting saddlepoint-type equalities: the optimal predictor under the e-divergence loss balances the log-likelihood ratio against the loss, while the optimal predictor under the m-divergence loss balances the Shannon entropy difference against the loss. These equalities also hold for the predictors that maximize the log-likelihood and the Shannon entropy under the e-divergence loss and the m-divergence loss, respectively, which shows that moderately increasing the log-likelihood or the Shannon entropy leads to the optimal predictors. For each divergence loss we derive a robust predictor, in the sense that its posterior risk is constant, by minimizing a certain convex function. The Legendre transformation induced by this convex function implies that an inherent duality exists in each Bayesian prediction problem.
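
As a point of reference, the dual losses mentioned above can be sketched in the information-geometric convention commonly used in this literature. The notation p(y|θ) for the sampling density, π(θ|x) for the posterior, and the assignment of the labels e and m to the two directions of the Kullback-Leibler divergence are assumptions here, since the abstract itself does not fix notation:

\[
L_m(\theta,\hat p) = D\bigl(p(\cdot\mid\theta)\,\big\|\,\hat p\bigr)
  = \int p(y\mid\theta)\,\log\frac{p(y\mid\theta)}{\hat p(y)}\,dy,
\qquad
L_e(\theta,\hat p) = D\bigl(\hat p\,\big\|\,p(\cdot\mid\theta)\bigr)
  = \int \hat p(y)\,\log\frac{\hat p(y)}{p(y\mid\theta)}\,dy.
\]

Under this sketch the Bayes optimal predictors, i.e. the minimizers of the posterior expected losses, are typically the m-mixture and the normalized e-mixture of the sampling densities:

\[
\hat p_m(y) = \int p(y\mid\theta)\,\pi(\theta\mid x)\,d\theta,
\qquad
\hat p_e(y) \propto \exp\!\left\{ \int \log p(y\mid\theta)\,\pi(\theta\mid x)\,d\theta \right\}.
\]

In the model-averaging setting addressed by the paper, the integral over θ would be replaced by a sum over the candidate models weighted by their posterior probabilities.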
