Neural networks and intellect : using model-based concepts
Bibliographic Information
Oxford University Press, 2001
Held by 16 university libraries
Notes
Includes bibliographic references (p. 447-459) and index
Description and Table of Contents
Description
This work describes a new mathematical concept of modeling field theory and its applications to a variety of problems, while offering a view of the relationships among mathematics, computational concepts in neural networks, semiotics, and concepts of mind in psychology and philosophy. The book is directed towards a diverse audience of students, teachers, researchers, and engineers working in the areas of neural networks, artificial intelligence, cognitive science, fuzzy systems, pattern recognition and machine/computer vision, data mining, robotics, target tracking, sensor fusion, spectrum analysis, time series analysis, and financial market forecasting. Mathematically inclined philosophers, semioticians, and psychologists will also find many areas of interest.
Modeling field neural networks utilize internal "world" models. The concept of internal models of the mind originated in artificial intelligence and cognitive psychology, but its roots date back to Plato and Aristotle. Rule-based intelligent systems utilize models in their final conceptual form of rules; like the Eide (Ideas) of Plato, such rules lack adaptivity. In modeling field theory, the adaptive models are similar to the Forms of Aristotle and serve as the basis for learning. By combining a priori knowledge with learning, the most perplexing problems in the field of neural networks and intelligent systems are addressed: fast learning and robust generalization. The new mathematics describes a basic instinct for learning and the related affective signals in the learning process. An ability to perceive beauty is shown to be an essential property of an adaptive system related to the instinct for learning. The combination of intuition with mathematics provides the foundation of a physical theory of mind.
The book reviews most of the mathematical concepts and engineering approaches to the development of intelligent systems discussed since the 1940s. The origin of the Aristotelian mathematics of mind is traced in Grossberg's ART neural network, and its essential component turns out to be fuzzy logic. Among the topics discussed are hierarchical and heterarchical organization of intelligent systems, statistical learning theory, genetic algorithms, complex adaptive systems, mathematical semiotics, the dynamical nature of symbols, Godel theorems and intelligence, emotions and thinking, mathematics of emotional intellect, and consciousness. The author's striking conclusion is that philosophers of the past have been closer to the computational concepts emerging today than pattern recognition and AI experts of just a few years ago.
Table of Contents
Chapters 1-7, 9, and 10 end with Notes, Bibliographical Notes, and Problems
Chapter 8 ends with Bibliographical Notes and Problems
Chapters 11 and 12 end with Notes and Bibliographical Notes
Preface
PART ONE: OVERVIEW: 2300 YEARS OF PHILOSOPHY, 100 YEARS OF MATHEMATICAL LOGIC, AND 50 YEARS OF COMPUTATIONAL INTELLIGENCE
1. Introduction: Concepts of Intelligence
1.1: Concepts of Intelligence in Mathematics, Psychology, and Philosophy
1.2: Probability, Hypothesis Choice, Pattern Recognition, and Complexity
1.3: Prediction, Tracking, and Dynamic Models
1.4: Preview: Intelligence, Internal Model, Symbol, Emotions, and Consciousness
2. Mathematical Concepts of Mind
2.1: Complexity, Aristotle, and Fuzzy Logic
2.2: Nearest Neighbors and Degenerate Geometries
2.3: Gradient Learning, Back Propagation, and Feedforward Neural Networks
2.4: Rule-Based Artificial Intelligence
2.5: Concept of Internal Model
2.6: Abductive Reasoning
2.7: Statistical Learning Theory and Support Vector Machines
2.8: AI Debates Past and Future
2.9: Society of Mind
2.10: Sensor Fusion and JDL Model
2.11: Hierarchical Organization
2.12: Semiotics
2.13: Evolutionary Computation, Genetic Algorithms, and CAS
2.14: Neural Field Theories
2.15: Intelligence, Learning, and Computability
3. Mathematical versus Metaphysical Concepts of Mind
3.1: Prolegomenon: Plato, Antisthenes, and Artificial Intelligence
3.2: Learning from Aristotle to Maimonides
3.3: Heresy of Occam and Scientific Method
3.4: Mathematics vs. Physics
3.5: Kant: Pure Spirit and Psychology
3.6: Freud vs. Jung: Psychology of Philosophy
3.7: Whither We Go From Here?
PART TWO: MODELING FIELD THEORY: NEW MATHEMATICAL THEORY OF INTELLIGENCE WITH EXAMPLES OF ENGINEERING APPLICATIONS
4. Modeling Field Theory
4.1: Internal Models, Uncertainties, and Similarities
4.2: Modeling Field Theory Dynamics
4.3: Bayesian MFT
4.4: Shannon-Einsteinian MFT
4.5: Modeling Field Theory Neural Architecture
4.6: Convergence
4.7: Learning of Structures, AIC, and SLT
4.8: Instinct of World Modeling: Knowledge Instinct
5. MLANS: Maximum Likelihood Adaptive Neural System for Grouping and Recognition
5.1: Grouping, Classification, and Models
5.2: Gaussian Mixture Model: Unsupervised Learning or Grouping
5.3: Combined Supervised and Unsupervised Learning
5.4: Structure Estimation
5.5: Wishart and Rician Mixture Models for Radar Image Classification
5.6: Convergence
5.7: MLANS, Physics, Biology, and Other Neural Networks
6. Einsteinian Neural Network
6.1: Images, Signals, and Spectra
6.2: Spectral Models
6.3: Neural Dynamics of ENN
6.4: Applications to Acoustic Transient Signals and Speech Recognition
6.5: Applications to Electromagnetic Wave Propagation in the Ionosphere
6.6: Summary
6.7: Appendix
7. Prediction, Tracking, and Dynamic Models
7.1: Prediction, Association, and Nonlinear Regression
7.2: Association and Tracking Using Bayesian MFT
7.3: Association and Tracking Using Shannon-Einsteinian MFT (SE-CAT)
7.4: Sensor Fusion MFT
7.5: Attention
8. Quantum Modeling Field Theory (QMFT)
8.1: Quantum Computing and Quantum Physics Notations
8.2: Gibbs Quantum Modeling Field System
8.3: Hamiltonian Quantum Modeling Field System
9. Fundamental Limitations on Learning
9.1: The Cramer-Rao Bound on Speed of Learning
9.2: Overlap Between Classes
9.3: CRB for MLANS
9.4: CRB for Concurrent Association and Tracking (CAT)
9.5: Summary: CRB for Intellect and Evolution?
9.6: Appendix: CRB Rule of Thumb for Tracking
10. Intelligent Systems Organization: MFT, Genetic Algorithms, and Kant
10.1: Kant, MFT, and Intelligent Systems
10.2: Emotional Machine (Toward Mathematics of Beauty)
10.3: Learning: Genetic Algorithms, MFT, and Semiosis
PART THREE: FUTURISTIC DIRECTIONS: FUN STUFF: MIND--PHYSICS + MATHEMATICS + CONJECTURES
11. Godel's Theorems, Mind, and Machine
11.1: Penrose and Computability of Mathematical Understanding
11.2: Logic and Mind
11.3: Godel, Turing, Penrose, and Putnam
11.4: Godel Theorem vs. Physics of Mind
12. Toward Physics of Consciousness
12.1: Phenomenology of Consciousness
12.2: Physics of Spiritual Substance: Future Directions
12.3: Epilogue
List of Symbols
Definitions
Bibliography
Index
From "Nielsen BookData"