Learning from data : artificial intelligence and statistics V
Author(s)
Bibliographic Information
Learning from data : artificial intelligence and statistics V
(Lecture notes in statistics, 112)
Springer-Verlag, c1996
Held by 58 university libraries
Notes
"Fifth International Workshop on Artificial Intelligence and Statistics, which was held at Ft. Lauderdale, Florida in January 1995"--Pref
Includes index
Description and Table of Contents
Description
Ten years ago Bill Gale of AT&T Bell Laboratories was the primary organizer of the first Workshop on Artificial Intelligence and Statistics. In the early days of the workshop series it seemed clear that researchers in AI and statistics had common interests, though with different emphases, goals, and vocabularies. In learning and model selection, for example, AI's historical goal of building autonomous agents probably contributed to a focus on parameter-free learning systems that relied little on an external analyst's assumptions about the data. This seemed at odds with statistical strategy, which stemmed from the view that model selection methods are tools to augment, not replace, the abilities of a human analyst. Thus, statisticians have traditionally spent considerably more time exploiting prior information about the environment to model data, and on exploratory data analysis methods tailored to their assumptions. In statistics, special emphasis is placed on model checking, making extensive use of residual analysis, because all models are 'wrong', but some are better than others. It is increasingly recognized that AI researchers and AI programs can exploit the same kinds of statistical strategies to good effect. Often AI researchers and statisticians emphasized different aspects of what, in retrospect, we might now regard as the same overriding tasks.
Table of Contents
I Causality
1 Two Algorithms for Inducing Structural Equation Models from Data
2 Using Causal Knowledge to Learn More Useful Decision Rules from Data
3 A Causal Calculus for Statistical Research
4 Likelihood-based Causal Inference
II Inference and Decision Making
5 Ploxoma: Testbed for Uncertain Inference
6 Solving Influence Diagrams Using Gibbs Sampling
7 Modeling and Monitoring Dynamic Systems by Chain Graphs
8 Propagation of Gaussian Belief Functions
9 On Test Selection Strategies for Belief Networks
10 Representing and Solving Asymmetric Decision Problems Using Valuation Networks
11 A Hill-Climbing Approach for Optimizing Classification Trees
III Search Control in Model Hunting
12 Learning Bayesian Networks is NP-Complete
13 Heuristic Search for Model Structure: The Benefits of Restraining Greed
14 Learning Possibilistic Networks from Data
15 Detecting Imperfect Patterns in Event Streams Using Local Search
16 Structure Learning of Bayesian Networks by Hybrid Genetic Algorithms
17 An Axiomatization of Loglinear Models with an Application to the Model-Search Problem
18 Detecting Complex Dependencies in Categorical Data
IV Classification
19 A Comparative Evaluation of Sequential Feature Selection Algorithms
20 Classification Using Bayes Averaging of Multiple, Relational Rule-Based Models
21 Picking the Best Expert from a Sequence
22 Hierarchical Clustering of Composite Objects with a Variable Number of Components
23 Searching for Dependencies in Bayesian Classifiers
V General Learning Issues
24 Statistical Analysis of Complex Systems in Biomedicine
25 Learning in Hybrid Noise Environments Using Statistical Queries
26 On the Statistical Comparison of Inductive Learning Methods
27 Dynamical Selection of Learning Algorithms
28 Learning Bayesian Networks Using Feature Selection
29 Data Representations in Learning
VI EDA: Tools and Methods
30 Rule Induction as Exploratory Data Analysis
31 Non-Linear Dimensionality Reduction: A Comparative Performance Analysis
32 Omega-Stat: An Environment for Implementing Intelligent Modeling Strategies
33 Framework for a Generic Knowledge Discovery Toolkit
34 Control Representation in an EDA Assistant
VII Decision and Regression Tree Induction
35 A Further Comparison of Simplification Methods for Decision-Tree Induction
36 Robust Linear Discriminant Trees
37 Tree Structured Interpretable Regression
38 An Exact Probability Metric for Decision Tree Splitting
VIII Natural Language Processing
39 Two Applications of Statistical Modelling to Natural Language Processing
40 A Model for Part-of-Speech Prediction
41 Viewpoint-Based Measurement of Semantic Similarity Between Words
42 Part-of-Speech Tagging from "Small" Data Sets
From "Nielsen BookData"