New soft computing techniques for system modelling, pattern classification and image processing

Author

    • Rutkowski, Leszek

Bibliographic Details

New soft computing techniques for system modelling, pattern classification and image processing

Leszek Rutkowski

(Studies in fuzziness and soft computing, v. 143)

Springer, c2004

Held by 9 university libraries

Notes

Includes bibliographical references (p. [343]-373)

Description and Table of Contents

Description

Science has made great progress in the twentieth century, with the establishment of proper disciplines in the fields of physics, computer science, molecular biology, and many others. At the same time, there have also emerged many engineering ideas that are interdisciplinary in nature, beyond the realm of such orthodox disciplines. These include, for example, artificial intelligence, fuzzy logic, artificial neural networks, evolutionary computation, data mining, and so on. In order to generate new technology that is truly human-friendly in the twenty-first century, the integration of various methods beyond specific disciplines is required. Soft computing is a key concept for the creation of such human-friendly technology in our modern information society. Professor Rutkowski is a pioneer in this field, having devoted himself for many years to publishing a large variety of original work. The present volume, based mostly on his own work, is a milestone in the development of soft computing, integrating various disciplines from the fields of information science and engineering. The book consists of three parts, the first of which is devoted to probabilistic neural networks. Neural excitation is stochastic, so it is natural to investigate the Bayesian properties of connectionist structures developed by Professor Rutkowski. This new approach has proven to be particularly useful for handling regression and classification problems in time-varying environments. Throughout this book, major themes are selected from theoretical subjects that are tightly connected with challenging applications.
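The first part centres on probabilistic neural networks built from Parzen kernels. In its simplest form, the general regression neural network (GRNN) discussed there reduces to a kernel-weighted average of the observed outputs. The following is a minimal Python sketch under stated assumptions (Gaussian Parzen kernel, fixed bandwidth, one-dimensional inputs; all names and parameter values are illustrative, not the book's notation):

    import numpy as np

    def grnn_predict(x_train, y_train, x_query, bandwidth=0.5):
        """Nadaraya-Watson regression with a Gaussian Parzen kernel.

        Illustrative sketch only; the names and the fixed bandwidth are
        assumptions, not taken from the book.
        """
        # Squared distance between every query point and every training point
        d2 = (x_query[:, None] - x_train[None, :]) ** 2
        # Gaussian kernel weights
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        # Kernel-weighted average of the observed targets
        return (w @ y_train) / w.sum(axis=1)

    # Usage: recover a noisy sine from 200 samples
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 2.0 * np.pi, 200)
    y = np.sin(x) + 0.1 * rng.standard_normal(200)
    print(grnn_predict(x, y, np.linspace(0.0, 2.0 * np.pi, 5)))

When the bandwidth shrinks at a suitable rate as the sample grows, estimators of this type are known to converge to the regression function; the book's concern is securing such guarantees when the underlying system itself drifts over time.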

Table of Contents

1 Introduction

I Probabilistic Neural Networks in a Non-stationary Environment

2 Kernel Functions for Construction of Probabilistic Neural Networks
    2.1 Introduction
    2.2 Application of the Parzen kernel
    2.3 Application of the orthogonal series
    2.4 Concluding remarks

3 Introduction to Probabilistic Neural Networks
    3.1 Introduction
    3.2 Probabilistic neural networks for density estimation
    3.3 General regression neural networks in a stationary environment
    3.4 Probabilistic neural networks for pattern classification in a stationary environment
    3.5 Concluding remarks

4 General Learning Procedure in a Time-Varying Environment
    4.1 Introduction
    4.2 Problem description
    4.3 Presentation of the general learning procedure
    4.4 Convergence of general learning procedure
        4.4.1 Local properties
        4.4.2 Global properties
        4.4.3 Speed of convergence
    4.5 Quasi-stationary environment
    4.6 Problem of prediction
    4.7 Concluding remarks

5 Generalized Regression Neural Networks in a Time-Varying Environment
    5.1 Introduction
    5.2 Problem description and presentation of the GRNN
    5.3 Convergence of the GRNN in a time-varying environment
        5.3.1 The GRNN based on Parzen kernels
        5.3.2 The GRNN based on the orthogonal series
    5.4 Speed of convergence
    5.5 Modelling of systems with multiplicative non-stationarity
    5.6 Modelling of systems with additive non-stationarity
    5.7 Modelling of systems with non-stationarity of the "scale change" and "movable argument" type
    5.8 Modelling of systems with a diminishing non-stationarity
    5.9 Concluding remarks

6 Probabilistic Neural Networks for Pattern Classification in a Time-Varying Environment
    6.1 Introduction
    6.2 Problem description and presentation of classification rules
    6.3 Asymptotic optimality of classification rules
    6.4 Speed of convergence of classification rules
    6.5 Classification procedures based on the Parzen kernels
    6.6 Classification procedures based on the orthogonal series
    6.7 Non-stationarity of the "movable argument" type
    6.8 Classification in the case of a quasi-stationary environment
    6.9 Simulation results
        6.9.1 PNN for estimation of a time-varying probability density
        6.9.2 PNN for classification in a time-varying environment
    6.10 Concluding remarks

II Soft Computing Techniques for Image Compression

7 Vector Quantization for Image Compression
    7.1 Introduction
    7.2 Preprocessing
    7.3 Problem description
    7.4 VQ algorithm based on neural network
    7.5 Concluding remarks

8 The DPCM Technique
    8.1 Introduction
    8.2 Scalar case
    8.3 Vector case
    8.4 Application of neural network
    8.5 Concluding remarks

9 The PVQ Scheme
    9.1 Introduction
    9.2 Description of the PVQ scheme
    9.3 Concluding remarks

10 Design of the Predictor
    10.1 Introduction
    10.2 Optimal vector linear predictor
    10.3 Linear predictor design from empirical data
    10.4 Predictor based on neural networks
    10.5 Concluding remarks

11 Design of the Code-book
    11.1 Introduction
    11.2 Competitive algorithms
    11.3 Preprocessing
    11.4 Selection of initial code-book
    11.5 Concluding remarks

12 Design of the PVQ Schemes
    12.1 Introduction
    12.2 Open-loop design
    12.3 Closed-loop design
    12.4 Modified closed-loop design
    12.5 Neural PVQ design
    12.6 Concluding remarks

III Recursive Least Squares Methods for Neural Network Learning and their Systolic Implementations

13 A Family of the RLS Learning Algorithms
    13.1 Introduction
    13.2 Notation
    13.3 Problem description
    13.4 RLS learning algorithms
        13.4.1 Single layer neural network
        13.4.2 Multi-layer neural networks
    13.5 QQ-RLS learning algorithms
        13.5.1 Single layer
        13.5.2 Multi-layer neural network
    13.6 UD-RLS learning algorithms
        13.6.1 Single layer
        13.6.2 Multi-layer neural networks
    13.7 Simulation results
        13.7.1 Performance evaluation
    13.8 Concluding remarks

14 Systolic Implementations of the RLS Learning Algorithms
    14.1 Introduction
    14.2 Systolic architecture for the recall phase
    14.3 Systolic architectures for the ETB RLS learning algorithms
    14.4 Systolic architectures for the RLS learning algorithms
    14.5 Performance evaluation of systolic architectures
        14.5.1 The recall phase
        14.5.2 The learning phase: the ETB RLS algorithm
        14.5.3 The learning phase: the RLS algorithm
    14.6 Concluding remarks

References

From "Nielsen BookData"
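Part III of the contents revolves around recursive least squares (RLS) learning of network weights. For a single linear layer, the classical RLS recursion with a forgetting factor proceeds roughly as in the Python sketch below; this is a generic textbook formulation, not the book's QQ-RLS or UD-RLS variants, and every name and parameter value is an illustrative assumption:

    import numpy as np

    def rls_train(X, y, forgetting=0.99, delta=100.0):
        """Recursive least squares fit of linear weights w with y ~ X @ w.

        Generic textbook RLS; 'forgetting' and 'delta' are assumed
        hyperparameters, not the book's notation.
        """
        n = X.shape[1]
        w = np.zeros(n)
        P = delta * np.eye(n)  # initial inverse-correlation matrix
        for x_t, y_t in zip(X, y):
            k = P @ x_t / (forgetting + x_t @ P @ x_t)   # gain vector
            e = y_t - w @ x_t                            # a priori error
            w = w + k * e                                # weight update
            P = (P - np.outer(k, x_t @ P)) / forgetting  # P update
        return w

    # Usage: recover weights [2, -1] from noisy linear observations
    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 2))
    y = X @ np.array([2.0, -1.0]) + 0.01 * rng.standard_normal(500)
    print(rls_train(X, y))

A forgetting factor below one discounts old samples exponentially, which is what lets the estimate track drifting parameters; factorised variants such as UD-RLS are typically motivated by the numerical robustness of the P update.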
