Artificial neural networks : approximation and learning theory
Bibliographic Information
Artificial neural networks : approximation and learning theory
Blackwell, 1992
Held by 36 university libraries
Notes
Includes bibliographical references and index
Description and Table of Contents
Description
The recent re-emergence of network-based approaches to artificial intelligence has been accompanied by a virtual explosion of research. This research spans a range of disciplines (cognitive science, computer science, biology, neuroscience, electrical engineering, psychology, econometrics, philosophy, and others) that is, perhaps, wider than that of any other contemporary endeavour. Of all the contributing disciplines, the relatively universal language of mathematics provides some of the most powerful tools for answering fundamental questions about the capabilities and limitations of these 'artificial neural networks'. In this collection, Halbert White and his colleagues present a rigorous mathematical analysis of the approximation and learning capabilities of the leading class of single hidden layer feedforward networks. Drawing together work previously scattered in space and time, the book gives a unified view of network learning not available in any other single location, and forges fundamental links between network learning and modern mathematical statistics.
Table of Contents
- Part 1 Approximation theory: there exists a neural network that does not make avoidable mistakes, A.R. Gallant and H. White
- multilayer feedforward networks are universal approximators, K. Hornik, et al
- universal approximation using feedforward networks with non-sigmoid hidden layer activation functions, M. Stinchcombe and H. White
- approximating and learning unknown mappings using multilayer feedforward networks with bounded weights, M. Stinchcombe and H. White
- universal approximation of an unknown mapping and its derivatives, K. Hornik, et al. Part 2 Learning and statistics: neural network learning and statistics, H. White
- learning in artificial neural networks, H. White
- some asymptotic results for learning in single hidden layer feedforward networks, H. White
- connectionist nonparametric regression, H. White
- nonparametric estimation of conditional quantiles using neural networks
- on learning the derivatives of an unknown mapping with multilayer feedforward networks, A.R. Gallant and H. White
- consequences and detection of misspecified nonlinear regression models, H. White
- maximum likelihood estimation of misspecified models, H. White
- some results for sieve estimation with dependent observations, H. White and J. Wooldridge.
From "Nielsen BookData"