Support vector machines
Author(s)
Bibliographic details
Support vector machines
(Information science and statistics / series editors M. Jordan ... [et al.])
Springer, c2008
Held at 16 university libraries
Description and Table of Contents
Description
"Every mathematical discipline goes through three periods of development: the naive, the formal, and the critical." (David Hilbert)

The goal of this book is to explain the principles that made support vector machines (SVMs) a successful modeling and prediction tool for a variety of applications. We try to achieve this by presenting the basic ideas of SVMs together with the latest developments and current research questions in a unified style. In a nutshell, we identify at least three reasons for the success of SVMs: their ability to learn well with only a very small number of free parameters, their robustness against several types of model violations and outliers, and last but not least their computational efficiency compared with several other methods. Although there are several roots and precursors of SVMs, these methods gained particular momentum during the last 15 years since Vapnik (1995, 1998) published his well-known textbooks on statistical learning theory with a special emphasis on support vector machines. Since then, the field of machine learning has witnessed intense activity in the study of SVMs, which has spread more and more to other disciplines such as statistics and mathematics. Thus it seems fair to say that several communities are currently working on support vector machines and on related kernel-based methods. Although there are many interactions between these communities, we think that there is still room for additional fruitful interaction and would be glad if this textbook were found helpful in stimulating further research. Many of the results presented in this book have previously been scattered in the journal literature or are still under review. As a consequence, these results have been accessible only to a relatively small number of specialists, sometimes probably only to people from one community but not the others.
Table of Contents
Preface
Introduction
Loss functions and their risks
Surrogate loss functions
Kernels and reproducing kernel Hilbert spaces
Infinite sample versions of support vector machines
Basic statistical analysis of SVMs
Advanced statistical analysis of SVMs
Support vector machines for classification
Support vector machines for regression
Robustness
Computational aspects
Data mining
Appendix
Notation and symbols
Abbreviations
Author index
Subject index
References
From "Nielsen BookData"