Bibliographic Information

Analogue imprecision in MLP training

Peter J. Edwards, Alan F. Murray

(Progress in neural processing, 4)

World Scientific, c1996

Note

Includes bibliographical references and index

Description and Table of Contents

Description

Hardware inaccuracy and imprecision are important considerations when implementing neural algorithms. This book presents a study of synaptic weight noise as a typical fault model for analogue VLSI realisations of MLP neural networks and examines the implications for learning and network performance. The aim of the book is to show how including an imprecision model in a learning scheme as a “fault tolerance hint” can aid understanding of the accuracy and precision requirements of a particular implementation. The study also shows how such a scheme can give rise to significant performance enhancement.
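The core idea, injecting a model of analogue imprecision into the learning loop so that the trained weights tolerate it, can be sketched in a few lines. The following is a minimal illustration, not the authors' algorithm: a one-hidden-layer MLP trained on XOR with multiplicative Gaussian weight noise applied during each forward pass. The noise level, network size, and all names are illustrative assumptions.

```
# Minimal sketch (assumed, not the book's code): MLP training with
# multiplicative synaptic weight noise as a "fault tolerance hint".
import numpy as np

rng = np.random.default_rng(0)

def noisy(w, level=0.1):
    """Perturb weights multiplicatively, modelling analogue imprecision.
    The 10% noise level is an illustrative assumption."""
    return w * (1.0 + level * rng.standard_normal(w.shape))

# Toy task: XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.standard_normal((2, 8)) * 0.5   # input -> hidden
W2 = rng.standard_normal((8, 1)) * 0.5   # hidden -> output
lr = 0.5

for epoch in range(5000):
    # Forward pass through noise-perturbed copies of the weights.
    W1n, W2n = noisy(W1), noisy(W2)
    h = np.tanh(X @ W1n)
    out = 1.0 / (1.0 + np.exp(-(h @ W2n)))      # sigmoid output

    # Backpropagate through the perturbed weights; update the clean ones,
    # so the stored weights learn to perform well under perturbation.
    d_out = out - y                              # sigmoid + cross-entropy
    d_h = (d_out @ W2n.T) * (1.0 - h ** 2)       # tanh derivative
    W2 -= lr * h.T @ d_out / len(X)
    W1 -= lr * X.T @ d_h / len(X)

print(np.round(out.ravel(), 3))  # should approach [0, 1, 1, 0]
```

Because every forward pass sees a different weight perturbation, gradient descent is pushed toward regions of weight space where small analogue errors barely change the output, which is the fault tolerance and generalisation effect the book analyses.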

Table of Contents

  • Neural network performance metrics
  • Noise in neural implementations
  • Simulation requirements and environment
  • Fault tolerance
  • Generalisation ability
  • Learning trajectory and speed

by "Nielsen BookData"
