Neural network learning : theoretical foundations
Author(s)
Martin Anthony, Peter L. Bartlett
Bibliographic Information
Cambridge University Press, 2009, c1999
- hbk
- pbk
Available at 7 libraries
Note
"This digitally printed version 2009"--T.p. verso
Includes bibliographical references (p. 365-378) and indexes
Description and Table of Contents
Description
This book describes theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Research on pattern classification with binary-output networks is surveyed, including a discussion of the relevance of the Vapnik-Chervonenkis dimension and estimates of this dimension for several neural network models. A model of classification by real-output networks is developed, and the usefulness of classification with a 'large margin' is demonstrated. The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large-margin classification and in real-valued prediction. They also discuss the computational complexity of neural network learning, describing a variety of hardness results and outlining two efficient constructive learning algorithms. The book is self-contained and is intended to be accessible to researchers and graduate students in computer science, engineering, and mathematics.
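The Vapnik-Chervonenkis dimension mentioned above is the combinatorial quantity that drives the sample complexity bounds of Parts I and II. For readers new to the topic, the following is a standard statement of the definition and of a typical uniform convergence bound; it paraphrases well-known results rather than quoting the book's own formulations, and constants are suppressed.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Standard definitions, not quoted from the book's text.
A class $H$ of functions $h\colon X \to \{0,1\}$ \emph{shatters} a set
$S = \{x_1,\dots,x_d\} \subseteq X$ if for every labelling
$(b_1,\dots,b_d) \in \{0,1\}^d$ there is some $h \in H$ with
$h(x_i) = b_i$ for $i = 1,\dots,d$. The VC-dimension of $H$ is
\[
  \mathrm{VCdim}(H) = \max \{\, |S| : S \subseteq X,\ H \text{ shatters } S \,\}.
\]
If $\mathrm{VCdim}(H) = d < \infty$, then a sample of size
\[
  m(\epsilon, \delta) = O\!\left( \frac{1}{\epsilon^{2}}
    \left( d + \ln \frac{1}{\delta} \right) \right)
\]
suffices so that, with probability at least $1 - \delta$, every
$h \in H$ has empirical error within $\epsilon$ of its true error.
\end{document}

Per the table of contents below, chapters 4 and 5 develop general upper and lower bounds of essentially this form.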
Table of Contents
- 1. Introduction
- Part I. Pattern Recognition with Binary-output Neural Networks: 2. The pattern recognition problem
- 3. The growth function and VC-dimension
- 4. General upper bounds on sample complexity
- 5. General lower bounds
- 6. The VC-dimension of linear threshold networks
- 7. Bounding the VC-dimension using geometric techniques
- 8. VC-dimension bounds for neural networks
- Part II. Pattern Recognition with Real-output Neural Networks: 9. Classification with real values
- 10. Covering numbers and uniform convergence
- 11. The pseudo-dimension and fat-shattering dimension
- 12. Bounding covering numbers with dimensions
- 13. The sample complexity of classification learning
- 14. The dimensions of neural networks
- 15. Model selection
- Part III. Learning Real-Valued Functions: 16. Learning classes of real functions
- 17. Uniform convergence results for real function classes
- 18. Bounding covering numbers
- 19. The sample complexity of learning function classes
- 20. Convex classes
- 21. Other learning problems
- Part IV. Algorithmics: 22. Efficient learning
- 23. Learning as optimisation
- 24. The Boolean perceptron
- 25. Hardness results for feed-forward networks
- 26. Constructive learning algorithms for two-layered networks.
by "Nielsen BookData"