Tools for statistical inference : observed data and data augmentation methods

Bibliographic Information

Tools for statistical inference : observed data and data augmentation methods

Martin A. Tanner

(Lecture notes in statistics, 67)

Springer-Verlag, c1991


Available at 50 libraries


Note

Includes bibliographical references and index

Description and Table of Contents

Volume

us: ISBN 9780387975252

Description

From the reviews: The purpose of the book under review is to give a survey of methods for the Bayesian or likelihood-based analysis of data. The author distinguishes between two types of methods: the observed data methods and the data augmentation ones. The observed data methods are applied directly to the likelihood or posterior density of the observed data. The data augmentation methods make use of the special "missing" data structure of the problem. They rely on an augmentation of the data which simplifies the likelihood or posterior density. (Zentralblatt für Mathematik)
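As a rough illustration of the observed-data approach described above (not code from the book), the sketch below approximates a posterior mean by importance sampling, working directly with the observed-data posterior. The toy model, variable names, and proposal choice are assumptions made for the example only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy model: observations y_i ~ N(theta, 1) with a N(0, 1) prior on theta.
# The posterior is available in closed form here, which lets us check the answer.
y = rng.normal(loc=1.5, scale=1.0, size=20)

def log_posterior(theta):
    # Unnormalised log posterior: log prior plus log likelihood of the observed data.
    return -0.5 * theta**2 - 0.5 * np.sum((y[:, None] - theta) ** 2, axis=0)

# Importance sampling: draw from a normal proposal centred at the sample mean,
# then reweight each draw by posterior / proposal.
proposal_mean, proposal_sd = y.mean(), 1.0
draws = rng.normal(proposal_mean, proposal_sd, size=10_000)
log_w = log_posterior(draws) - (
    -0.5 * ((draws - proposal_mean) / proposal_sd) ** 2 - np.log(proposal_sd)
)
w = np.exp(log_w - log_w.max())   # stabilise before normalising
w /= w.sum()

posterior_mean = np.sum(w * draws)
# Exact posterior mean for this conjugate toy model is sum(y) / (n + 1).
print(posterior_mean, y.sum() / (len(y) + 1))
```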

Table of Contents

I. Introduction
   A. Problems
   B. Techniques
   References
II. Observed Data Techniques - Normal Approximation
   A. Likelihood/Posterior Density
   B. Maximum Likelihood
   C. Normal Based Inference
   D. The Delta Method
   E. Significance Levels
   References
III. Observed Data Techniques
   A. Numerical Integration
   B. Laplace Expansion
      1. Moments
      2. Marginalization
   C. Monte Carlo Methods
      1. Monte Carlo
      2. Composition
      3. Importance Sampling
   References
IV. The EM Algorithm
   A. Introduction
   B. Theory
   C. EM in the Exponential Family
   D. Standard Errors
      1. Direct Computation
      2. Missing Information Principle
      3. Louis' Method
      4. Simulation
      5. Using EM Iterates
   E. Monte Carlo Implementation of the E-Step
   F. Acceleration of EM
   References
V. Data Augmentation
   A. Introduction
   B. Predictive Distribution
   C. HPD Region Computations
      1. Calculating the Content
      2. Calculating the Boundary
   D. Implementation
   E. Theory
   F. Poor Man's Data Augmentation
      1. PMDA #1
      2. PMDA Exact
      3. PMDA #2
   G. SIR
   H. General Imputation Methods
      1. Introduction
      2. Hot Deck
      3. Simple Residual
      4. Normal and Adjusted Normal
      5. Nonignorable Nonresponse
         a. Mixture Model-I
         b. Mixture Model-II
         c. Selection Model-I
         d. Selection Model-II
   I. Data Augmentation via Importance Sampling
      1. General Comments
      2. Censored Regression
   J. Sampling in the Context of Multinomial Data
      1. Dirichlet Sampling
      2. Latent Class Analysis
   References
VI. The Gibbs Sampler
   A. Introduction
      1. Chained Data Augmentation
      2. The Gibbs Sampler
      3. Historical Comments
   B. Examples
      1. Rat Growth Data
      2. Poisson Process
      3. Generalized Linear Models
   C. The Griddy Gibbs Sampler
      1. Example
      2. Adaptive Grid
   References

Volume

gw: ISBN 9783540975250

Description

The goal of this book is to provide a unified presentation of a variety of algorithms for likelihood and Bayesian inference. Two types of methods are considered: observed data and data augmentation methods. The observed data methods, which are applied directly to the likelihood or posterior density of the observed data, include maximum likelihood, Laplace expansion, Monte Carlo, and importance sampling. The data augmentation methods rely on an augmentation of the data which simplifies the likelihood or posterior density. These include the EM algorithm, Louis' modification of EM, poor man's data augmentation, SIR, and the Gibbs sampler.
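As a hedged sketch of the data-augmentation idea (again, not code from the book), the fragment below runs EM on a two-component normal mixture: the unobserved component labels play the role of the missing data, and replacing them with their expectations (the E-step) makes the complete-data likelihood easy to maximize (the M-step). The model, variable names, and fixed unit variances are assumptions made to keep the example short.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simulated observed data from a two-component normal mixture (unit variances assumed).
z_true = rng.random(500) < 0.4
x = np.where(z_true, rng.normal(-2.0, 1.0, 500), rng.normal(2.0, 1.0, 500))

# Parameters: mixing weight pi_ for component 0 and the two component means.
pi_, mu = 0.5, np.array([-1.0, 1.0])

for _ in range(100):
    # E-step: expected component labels given current parameters
    # (the "augmented" data that simplify the likelihood).
    d0 = pi_ * norm.pdf(x, mu[0], 1.0)
    d1 = (1 - pi_) * norm.pdf(x, mu[1], 1.0)
    r = d0 / (d0 + d1)                     # responsibility of component 0
    # M-step: complete-data maximum likelihood with the labels
    # replaced by their expectations.
    pi_ = r.mean()
    mu = np.array([np.sum(r * x) / np.sum(r),
                   np.sum((1 - r) * x) / np.sum(1 - r)])

print(pi_, mu)   # should be close to 0.4 and (-2, 2)
```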

Table of Contents

  • Observed data techniques - normal approximation
  • Observed data techniques
  • The EM algorithm
  • Data augmentation
  • The Gibbs sampler

by "Nielsen BookData"
