Analogue imprecision in MLP training
Author
Bibliographic Information
Analogue imprecision in MLP training
(Progress in neural processing, 4)
World Scientific, c1996
Notes
Includes bibliographical references and index
Description and Table of Contents
Description
Hardware inaccuracy and imprecision are important considerations when implementing neural algorithms. This book presents a study of synaptic weight noise as a typical fault model for analogue VLSI realisations of MLP neural networks and examines the implications for learning and network performance. The aim of the book is to show how including an imprecision model in a learning scheme, as a "fault tolerance hint", can aid understanding of the accuracy and precision requirements of a particular implementation. In addition, the study shows how such a scheme can give rise to significant performance enhancement.
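The central idea the description refers to, injecting an imprecision model into training so the network learns to tolerate it, can be illustrated with a short sketch. The following is a minimal illustration and not the authors' code: a small numpy MLP in which zero-mean Gaussian noise, scaled by each weight's magnitude, perturbs the synaptic weights on every training forward pass, while the gradient updates are applied to the clean weights. The multiplicative noise model, network sizes, and all names here are illustrative assumptions.

```python
# Minimal sketch (not the book's implementation) of MLP training with
# synaptic weight noise injected as a "fault tolerance hint".
# Assumed noise model: multiplicative zero-mean Gaussian perturbation
# of each weight on every training forward pass.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class NoisyMLP:
    def __init__(self, n_in, n_hid, n_out, noise_level=0.1):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
        self.W2 = rng.normal(0.0, 0.5, (n_hid, n_out))
        self.noise_level = noise_level  # relative std of weight noise

    def _noisy(self, W):
        # Perturb weights in proportion to their magnitude, a crude
        # stand-in for analogue weight-storage imprecision.
        return W * (1.0 + self.noise_level * rng.standard_normal(W.shape))

    def forward(self, x, train=True):
        # During training the network sees noisy weights; at evaluation
        # time the clean (stored) weights are used.
        W1 = self._noisy(self.W1) if train else self.W1
        W2 = self._noisy(self.W2) if train else self.W2
        h = sigmoid(x @ W1)
        y = sigmoid(h @ W2)
        return h, y, W1, W2

    def train_step(self, x, t, lr=0.5):
        h, y, W1, W2 = self.forward(x, train=True)
        # Backpropagate through the *noisy* weights, but apply the
        # updates to the clean weights.
        delta_out = (y - t) * y * (1.0 - y)
        delta_hid = (delta_out @ W2.T) * h * (1.0 - h)
        self.W2 -= lr * h.T @ delta_out
        self.W1 -= lr * x.T @ delta_hid
        return float(np.mean((y - t) ** 2))

# Toy usage: XOR, trained with 10% relative weight noise.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
net = NoisyMLP(2, 8, 1, noise_level=0.1)
for epoch in range(10000):
    net.train_step(X, T)
print(net.forward(X, train=False)[1].round(2))
```

Training with the noise active and evaluating with it switched off mirrors the "hint" setting the description mentions; re-enabling the noise at evaluation time gives a rough estimate of the trained network's robustness to analogue weight imprecision.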
Table of Contents
- Neural network performance metrics
- Noise in neural implementations
- Simulation requirements and environment
- Fault tolerance
- Generalisation ability
- Learning trajectory and speed
From "Nielsen BookData"