Tools for statistical inference : observed data and data augmentation methods
Author
Bibliographic Details
Tools for statistical inference : observed data and data augmentation methods
(Lecture notes in statistics, 67)
Springer-Verlag, c1991
- : us
- : gw
Held by 50 university libraries
Note
Includes bibliographical references and index
Description and Table of Contents
Volume
: us ISBN 9780387975252
Description
From the reviews: The purpose of the book under review is to give a survey of methods for the Bayesian or likelihood-based analysis of data. The author distinguishes between two types of methods: the observed data methods and the data augmentation ones. The observed data methods are applied directly to the likelihood or posterior density of the observed data. The data augmentation methods make use of the special "missing" data structure of the problem. They rely on an augmentation of the data which simplifies the likelihood or posterior density. (Zentralblatt für Mathematik)
Table of Contents
- I. Introduction: A. Problems; B. Techniques; References.
- II. Observed Data Techniques, Normal Approximation: A. Likelihood/Posterior Density; B. Maximum Likelihood; C. Normal Based Inference; D. The Delta Method; E. Significance Levels; References.
- III. Observed Data Techniques: A. Numerical Integration; B. Laplace Expansion (1. Moments; 2. Marginalization); C. Monte Carlo Methods (1. Monte Carlo; 2. Composition; 3. Importance Sampling); References.
- IV. The EM Algorithm: A. Introduction; B. Theory; C. EM in the Exponential Family; D. Standard Errors (1. Direct Computation; 2. Missing Information Principle; 3. Louis' Method; 4. Simulation; 5. Using EM Iterates); E. Monte Carlo Implementation of the E-Step; F. Acceleration of EM; References.
- V. Data Augmentation: A. Introduction; B. Predictive Distribution; C. HPD Region Computations (1. Calculating the Content; 2. Calculating the Boundary); D. Implementation; E. Theory; F. Poor Man's Data Augmentation (1. PMDA #1; 2. PMDA Exact; 3. PMDA #2); G. SIR; H. General Imputation Methods (1. Introduction; 2. Hot Deck; 3. Simple Residual; 4. Normal and Adjusted Normal; 5. Nonignorable Nonresponse: a. Mixture Model I; b. Mixture Model II; c. Selection Model I; d. Selection Model II); I. Data Augmentation via Importance Sampling (1. General Comments; 2. Censored Regression); J. Sampling in the Context of Multinomial Data (1. Dirichlet Sampling; 2. Latent Class Analysis); References.
- VI. The Gibbs Sampler: A. Introduction (1. Chained Data Augmentation; 2. The Gibbs Sampler; 3. Historical Comments); B. Examples (1. Rat Growth Data; 2. Poisson Process; 3. Generalized Linear Models); C. The Griddy Gibbs Sampler (1. Example; 2. Adaptive Grid); References.
Volume
: gw ISBN 9783540975250
Description
The goal of this book is to provide a unified presentation of a variety of algorithms for likelihood and Bayesian inference. Two types of methods are considered: observed data methods and data augmentation methods. The observed data methods, which are applied directly to the likelihood or posterior density, include maximum likelihood, the Laplace expansion, Monte Carlo and importance sampling. The data augmentation methods rely on an augmentation of the data which simplifies the likelihood or posterior density. These include EM, Louis' modification of EM, poor man's data augmentation, SIR and the Gibbs sampler.
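To illustrate the data-augmentation idea the description refers to, here is a minimal EM sketch, not taken from the book: the unknown component labels of a two-component normal mixture are treated as the augmented "missing" data, which makes the M-step a closed-form average. The model (a mixture of N(2, 1) and N(-2, 1) with unknown weight w), the toy data, and all function names are assumptions chosen for this example.

```python
import math

def phi(x, mu):
    """Density of N(mu, 1) at x."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def em_mixing_weight(data, w=0.3, iters=50):
    """EM for the weight w in x ~ w*N(2,1) + (1-w)*N(-2,1).

    The component label of each observation is the missing datum;
    augmenting the data with it makes both EM steps trivial.
    """
    for _ in range(iters):
        # E-step: posterior probability each point came from N(2, 1)
        resp = [w * phi(x, 2.0) / (w * phi(x, 2.0) + (1 - w) * phi(x, -2.0))
                for x in data]
        # M-step: complete-data MLE of w is the mean responsibility
        w = sum(resp) / len(resp)
    return w

data = [-2.1, -1.9, -2.0, 2.0, 1.8, 2.2]
print(em_mixing_weight(data))  # close to 0.5 for this balanced sample
```

With three points near each component mean, the responsibilities are essentially 0 or 1 and the iteration settles near w = 0.5; the same E-step/M-step alternation underlies the more elaborate augmentation schemes the book surveys.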
Table of Contents
- Observed data techniques: normal approximation
- Observed data techniques
- The EM algorithm
- Data augmentation
- The Gibbs sampler
From "Nielsen BookData"