Model selection and error estimation in a nutshell
Authors
Bibliographic Information
Model selection and error estimation in a nutshell
(Modeling and optimization in science and technologies / series editors Srikanta Patnaik, Ishwar K. Sethi, Xiaolong Li, v. 15)
Springer, c2020
University Library Holdings: 1 library
Notes
Includes bibliographical references
"This Springer imprint is published by the registered company Springer Nature Switzerland AG ... Cham, Switzerland"--T.p. verso
Description and Table of Contents
Description
How can we select the best performing data-driven model? How can we rigorously estimate its generalization error? Statistical learning theory answers these questions by deriving non-asymptotic bounds on the generalization error of a model or, in other words, by upper bounding the true error of the learned model based only on quantities computed from the available data. However, for a long time statistical learning theory was considered only an abstract theoretical framework, useful for inspiring new learning approaches but with limited applicability to practical problems. The purpose of this book is to give an intelligible overview of the problems of model selection and error estimation, focusing on the ideas behind the different statistical learning theory approaches and simplifying most of the technical aspects in order to make them more accessible and usable in practice. The book starts from the seminal works of the 1980s and includes the most recent results. It discusses open problems and outlines future directions for research.
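To make concrete what such a non-asymptotic bound looks like, consider a standard Hoeffding-type hold-out bound (given here only as an illustration; the notation is ours and the example is not quoted from the book). For a fixed model h with loss bounded in [0, 1], evaluated on n i.i.d. held-out samples, the true risk L(h) exceeds the empirical risk by more than a margin depending only on n and the confidence level δ with probability at most δ:

\[
\Pr\!\left[\, L(h) \le \hat{L}_n(h) + \sqrt{\frac{\ln(1/\delta)}{2n}} \,\right] \ge 1 - \delta .
\]

The resampling, complexity-based, compression, stability, PAC-Bayes, and differential-privacy approaches listed in the table of contents can be read as ways of obtaining bounds of this form that remain valid when the model is itself selected using the same data.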
Table of Contents
Introduction.- The "Five W" of MS & EE.- Preliminaries.- Resampling Methods.- Complexity-Based Methods.- Compression Bound.- Algorithmic Stability Theory.- PAC-Bayes Theory.- Differential Privacy Theory.- Conclusions & Further Readings.
From "Nielsen BookData"