Discretization and MCMC convergence assessment
Author(s)
Christian P. Robert (ed.)
Bibliographic Information
Discretization and MCMC convergence assessment
(Lecture notes in statistics, 135)
Springer, c1998
Available at 47 libraries
Note
"References": p. [175]-183
Includes indexes
Description and Table of Contents
Description
The exponential increase in the use of MCMC methods and the corresponding applications in domains of ever higher complexity have caused a growing concern about the available convergence assessment methods and the realization that some of these methods were not reliable enough for all-purpose analyses. Some researchers have mainly focussed on convergence to stationarity and the estimation of rates of convergence, in relation with the eigenvalues of the transition kernel. This monograph adopts a different perspective by developing (supposedly) practical devices to assess the mixing behaviour of the chain under study and, more particularly, it proposes methods based on finite (state space) Markov chains which are obtained either through a discretization of the original Markov chain or through a duality principle relating a continuous state space Markov chain to another finite Markov chain, as in missing data or latent variable models. The motivation for the choice of finite state spaces is that, although the resulting control is cruder, in the sense that it can often monitor convergence for the discretized version alone, it is also much stricter than alternative methods, since the tools available for finite Markov chains are universal and the resulting transition matrix can be estimated more accurately. Moreover, while some setups impose a fixed finite state space, others allow for possible refinements in the discretization level and for consecutive improvements in the convergence monitoring.
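The core idea of the description, replacing a continuous state space chain by a finite companion chain whose estimated transition matrix can be monitored, can be illustrated with a toy sketch. This is not code from the book: the sampler (`rw_metropolis`), the choice of quantile bins, and the use of the second eigenvalue modulus as a crude mixing indicator are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rw_metropolis(n, scale=1.0):
    """Toy random-walk Metropolis chain targeting a standard normal."""
    x = np.empty(n)
    x[0] = 0.0
    for t in range(1, n):
        prop = x[t - 1] + scale * rng.normal()
        # log acceptance ratio for the N(0,1) target
        if np.log(rng.uniform()) < 0.5 * (x[t - 1] ** 2 - prop ** 2):
            x[t] = prop
        else:
            x[t] = x[t - 1]
    return x

chain = rw_metropolis(20_000)

# Discretize the continuous chain into k equal-probability cells
# (empirical quantile bins), yielding a finite-state companion chain.
k = 5
edges = np.quantile(chain, np.linspace(0, 1, k + 1)[1:-1])
states = np.digitize(chain, edges)          # values in 0..k-1

# Estimate the k x k transition matrix of the discretized chain
# from observed one-step transition counts.
counts = np.zeros((k, k))
np.add.at(counts, (states[:-1], states[1:]), 1)
P = counts / counts.sum(axis=1, keepdims=True)

# For a finite ergodic chain, mixing speed is governed by the
# second-largest eigenvalue modulus of P: the closer to 1, the
# slower the chain forgets its starting point.
eig = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
print("second eigenvalue modulus:", eig[1])
```

A finer discretization (larger k) gives a stricter check at the cost of noisier transition-count estimates, which mirrors the trade-off between crudeness and refinement mentioned in the description.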
Table of Contents
1 Markov Chain Monte Carlo Methods
  1.1 Motivations
  1.2 Metropolis-Hastings algorithms
  1.3 The Gibbs sampler
  1.4 Perfect sampling
  1.5 Convergence results from a Duality Principle
2 Convergence Control of MCMC Algorithms
  2.1 Introduction
  2.2 Convergence assessments for single chains
    2.2.1 Graphical evaluations
    2.2.2 Binary approximation
  2.3 Convergence assessments based on parallel chains
    2.3.1 Introduction
    2.3.2 Between-within variance criterion
    2.3.3 Distance to the stationary distribution
  2.4 Coupling techniques
    2.4.1 Coupling theory
    2.4.2 Coupling diagnoses
3 Linking Discrete and Continuous Chains
  3.1 Introduction
  3.2 Rao-Blackwellization
  3.3 Riemann sum control variates
    3.3.1 Merging numerical and Monte Carlo methods
    3.3.2 Rao-Blackwellized Riemann sums
    3.3.3 Control variates
  3.4 A mixture example
4 Valid Discretization via Renewal Theory
  4.1 Introduction
  4.2 Renewal theory and small sets
    4.2.1 Definitions
    4.2.2 Renewal for Metropolis-Hastings algorithms
    4.2.3 Splitting the kernel
    4.2.4 Splitting in practice
  4.3 Discretization of a continuous Markov chain
  4.4 Convergence assessment through the divergence criterion
    4.4.1 The divergence criterion
    4.4.2 A finite example
    4.4.3 Stopping rules
    4.4.4 Extension to continuous state spaces
    4.4.5 From divergence estimation to convergence control
  4.5 Illustration for the benchmark examples
    4.5.1 Pump Benchmark
    4.5.2 Cauchy Benchmark
    4.5.3 Multinomial Benchmark
    4.5.4 Comments
  4.6 Renewal theory for variance estimation
    4.6.1 Estimation of the asymptotic variance
    4.6.2 Illustration for the Cauchy Benchmark
    4.6.3 Finite Markov chains
5 Control by the Central Limit Theorem
  5.1 Introduction
  5.2 CLT and Renewal Theory
    5.2.1 Renewal times
    5.2.2 CLT for finite Markov chains
    5.2.3 More general CLTs
  5.3 Two control methods with parallel chains
    5.3.1 CLT and Berry-Esseen bounds for finite chains
    5.3.2 Convergence assessment by normality monitoring
    5.3.3 Convergence assessment by variance comparison
    5.3.4 A finite example
  5.4 Extension to continuous state chains
    5.4.1 Automated normality control for continuous chains
    5.4.2 Variance comparison
    5.4.3 A continuous state space example
  5.5 Illustration for the benchmark examples
    5.5.1 Cauchy Benchmark
    5.5.2 Multinomial Benchmark
  5.6 Testing normality on the latent variables
6 Convergence Assessment in Latent Variable Models: DNA Applications
  6.1 Introduction
  6.2 Hidden Markov model and associated Gibbs sampler
    6.2.1 M1-M0 hidden Markov model
    6.2.2 MCMC implementation
  6.3 Analysis of the bIL67 bacteriophage genome: first convergence diagnostics
    6.3.1 Estimation results
    6.3.2 Assessing convergence with CODA
  6.4 Coupling from the past for the M1-M0 model
    6.4.1 The CFTP method
    6.4.2 The monotone CFTP method for the M1-M0 DNA model
    6.4.3 Application to the bIL67 bacteriophage
  6.5 Control by the Central Limit Theorem
    6.5.1 Normality control for the parameters with parallel chains
    6.5.2 Testing normality of the hidden state chain
7 Convergence Assessment in Latent Variable Models: Application to the Longitudinal Modelling of a Marker of HIV Progression
  7.1 Introduction
  7.2 Hierarchical Model
    7.2.1 Longitudinal disease process
    7.2.2 Model of marker variability
    7.2.3 Implementation
  7.3 Analysis of the San Francisco Men's Health Study
    7.3.1 Data description
    7.3.2 Results
  7.4 Convergence assessment
8 Estimation of Exponential Mixtures
  8.1 Exponential mixtures
    8.1.1 Motivations
    8.1.2 Reparameterization of an exponential mixture
    8.1.3 Unknown number of components
    8.1.4 MCMC implementation
  8.2 Convergence evaluation
    8.2.1 Graphical assessment
    8.2.2 Normality check
    8.2.3 Riemann control variates
References
Author Index
by "Nielsen BookData"