Neural network learning : theoretical foundations
Authors
Martin Anthony, Peter L. Bartlett
Bibliographic Information
Neural network learning : theoretical foundations
Cambridge University Press, 1999
Held by 41 university libraries
Notes
Includes bibliographical references (p. 365-378) and indexes
Description and Table of Contents
Description
This book describes theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Research on pattern classification with binary-output networks is surveyed, including a discussion of the relevance of the Vapnik-Chervonenkis dimension and estimates of this dimension for several neural network models. A model of classification by real-output networks is developed, and the usefulness of classification with a 'large margin' is demonstrated. The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification and in real-valued prediction. They also discuss the computational complexity of neural network learning, describing a variety of hardness results and outlining two efficient constructive learning algorithms. The book is self-contained and is intended to be accessible to researchers and graduate students in computer science, engineering, and mathematics.
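As a brief gloss for readers meeting these terms here for the first time (this is a standard definition, not part of the catalogue record, and the notation is common usage rather than the book's exact typography): for a class $H$ of $\{0,1\}$-valued functions on a domain $X$, the growth function and the VC-dimension discussed in Part I are defined by

```latex
% Standard definitions of the growth function and VC-dimension;
% the notation (H, \Pi_H, \mathrm{VCdim}) is conventional, not quoted from the book.
\Pi_H(m) = \max_{x_1,\dots,x_m \in X}
           \bigl|\{\, (h(x_1),\dots,h(x_m)) : h \in H \,\}\bigr|,
\qquad
\mathrm{VCdim}(H) = \max\{\, m : \Pi_H(m) = 2^m \,\}.
```

The scale-sensitive variants treated in Parts II and III (the pseudo-dimension and the fat-shattering dimension) relax exact shattering of a sample to shattering at a given width, which is what underlies the large-margin results the description mentions.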
Table of Contents
- 1. Introduction
- Part I. Pattern Recognition with Binary-output Neural Networks: 2. The pattern recognition problem
- 3. The growth function and VC-dimension
- 4. General upper bounds on sample complexity
- 5. General lower bounds
- 6. The VC-dimension of linear threshold networks
- 7. Bounding the VC-dimension using geometric techniques
- 8. VC-dimension bounds for neural networks
- Part II. Pattern Recognition with Real-output Neural Networks: 9. Classification with real values
- 10. Covering numbers and uniform convergence
- 11. The pseudo-dimension and fat-shattering dimension
- 12. Bounding covering numbers with dimensions
- 13. The sample complexity of classification learning
- 14. The dimensions of neural networks
- 15. Model selection
- Part III. Learning Real-Valued Functions: 16. Learning classes of real functions
- 17. Uniform convergence results for real function classes
- 18. Bounding covering numbers
- 19. The sample complexity of learning function classes
- 20. Convex classes
- 21. Other learning problems
- Part IV. Algorithmics: 22. Efficient learning
- 23. Learning as optimisation
- 24. The Boolean perceptron
- 25. Hardness results for feed-forward networks
- 26. Constructive learning algorithms for two-layered networks.
From "Nielsen BookData"