Bibliographic Information

Learning in graphical models

edited by Michael I. Jordan

(NATO ASI series, Series D, Behavioural and social sciences ; no. 89)

Kluwer Academic Publishers, c1998

  • hbk. : alk. paper

Held by 12 university libraries

Notes

"Published in cooperation with NATO Scientific Affairs Division."

Includes bibliographical references and index

Description and Table of Contents

Description

In the past decade, a number of different research communities within the computational sciences have studied learning in networks, starting from different points of view. There has been substantial progress in these communities, and a surprising convergence has developed between their formalisms. The awareness of this convergence and the growing interest of researchers in understanding the essential unity of the subject underlie the current volume. Two research communities which have used graphical or network formalisms to particular advantage are the belief network community and the neural network community. Belief networks arose within computer science and statistics and were developed with an emphasis on prior knowledge and exact probabilistic calculations. Neural networks arose within electrical engineering, physics and neuroscience and have emphasised pattern recognition and systems modelling problems. This volume draws together researchers from these two communities and presents both kinds of networks as instances of a general unified graphical formalism. The book focuses on probabilistic methods for learning and inference in graphical models, algorithm analysis and design, theory and applications. Exact methods, sampling methods and variational methods are discussed in detail. Audience: a wide cross-section of computationally oriented researchers, including computer scientists, statisticians, electrical engineers, physicists and neuroscientists.

Table of Contents

  • Preface / M.I. Jordan
  • Part I: Inference
  • Introduction to Inference for Bayesian Networks / R. Cowell
  • Advanced Inference in Bayesian Networks / R. Cowell
  • Inference in Bayesian Networks Using Nested Junction Trees / U. Kjaerulff
  • Bucket Elimination: A Unifying Framework for Probabilistic Inference / R. Dechter
  • An Introduction to Variational Methods for Graphical Models / M.I. Jordan, et al.
  • Improving the Mean Field Approximation via the Use of Mixture Distributions / T.S. Jaakkola, M.I. Jordan
  • Introduction to Monte Carlo Methods / D.J.C. MacKay
  • Suppressing Random Walks in Markov Chain Monte Carlo Using Ordered Overrelaxation / R.M. Neal
  • Part II: Independence
  • Chain Graphs and Symmetric Associations / T.S. Richardson
  • The Multiinformation Function as a Tool for Measuring Stochastic Dependence / M. Studeny, J. Vejnarova
  • Part III: Foundations for Learning
  • A Tutorial on Learning with Bayesian Networks / D. Heckerman
  • A View of the EM Algorithm that Justifies Incremental, Sparse, and Other Variants / R.M. Neal, G.E. Hinton
  • Part IV: Learning from Data
  • Latent Variable Models / C.M. Bishop
  • Stochastic Algorithms for Exploratory Data Analysis: Data Clustering and Data Visualization / J.M. Buhmann
  • Learning Bayesian Networks with Local Structure / N. Friedman, M. Goldszmidt
  • Asymptotic Model Selection for Directed Networks with Hidden Variables / D. Geiger, et al.
  • A Hierarchical Community of Experts / G.E. Hinton, et al.
  • An Information-Theoretic Analysis of Hard and Soft Assignment Methods for Clustering / M.J. Kearns, et al.
  • Learning Hybrid Bayesian Networks from Data / S. Monti, G.F. Cooper
  • A Mean Field Learning Algorithm for Unsupervised Neural Networks / L. Saul, M.I. Jordan
  • Edge Exclusion Tests for Graphical Gaussian Models / P.W.F. Smith, J. Whittaker
  • Hepatitis B: A Case Study in MCMC / D.J. Spiegelhalter, et al.
  • Prediction with Gaussian Processes: From Linear Regression to Linear Prediction and Beyond / C.K.I. Williams
  • Subject Index

From "Nielsen BookData"
