Simulation-based optimization : parametric optimization techniques and reinforcement learning
Author(s)
Abhijit Gosavi
Bibliographic Information
Simulation-based optimization : parametric optimization techniques and reinforcement learning
(Operations research/computer science interface series, ORCS 25)
Kluwer Academic Publishers, c2010
pbk (paperback)
Available at 2 libraries
Note
Includes bibliographical references and index
Description and Table of Contents
Description
Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of simulation-based optimization.
The book's objective is twofold: (1) It examines the governing mathematical principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques. (2) It outlines the computational technology underlying these methods. Taken together, these two aspects demonstrate that the mathematical and computational methods discussed in this book do work.
Broadly speaking, the book has two parts: (1) parametric (static) optimization and (2) control (dynamic) optimization. Some of the book's special features are:
*An accessible introduction to reinforcement learning and parametric-optimization techniques.
*A step-by-step description of several algorithms of simulation-based optimization.
*A clear and simple introduction to the methodology of neural networks.
*A gentle introduction to convergence analysis of some of the methods enumerated above.
*Computer programs for many algorithms of simulation-based optimization (one such algorithm is sketched illustratively after this list).
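
To give a flavor of the kind of algorithm the book treats, below is a minimal, hypothetical Python sketch of simulation-based Q-learning on a toy two-state Markov decision problem. The sketch is not taken from the book: the simulate function, its transition probabilities and rewards, and the parameter values (alpha, gamma, epsilon) are illustrative assumptions. It shows the essential idea of reinforcement learning as the book's table of contents presents it: estimating Q-values from simulated transitions alone, without ever constructing the transition probability matrix.

    import random

    # Toy two-state, two-action MDP, purely illustrative (not from the book).
    # simulate(state, action) samples the next state and reward -- the
    # "simulation" that replaces an explicit transition probability matrix.
    def simulate(state, action):
        p_stay = 0.7 if action == 0 else 0.4       # assumed transition probabilities
        next_state = state if random.random() < p_stay else 1 - state
        reward = 1.0 if next_state == 0 else -0.5  # assumed rewards
        return next_state, reward

    Q = [[0.0, 0.0], [0.0, 0.0]]            # Q[state][action]
    alpha, gamma, epsilon = 0.1, 0.95, 0.1  # assumed learning parameters

    state = 0
    for step in range(10000):
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] >= Q[state][1] else 1
        next_state, reward = simulate(state, action)
        # Q-learning update toward the sampled Bellman target.
        target = reward + gamma * max(Q[next_state])
        Q[state][action] += alpha * (target - Q[state][action])
        state = next_state

    print("Learned Q-values:", Q)

Running the loop longer, or averaging several independent runs, gives more stable Q-value estimates; the greedy policy is then read off as the action maximizing Q in each state.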
Table of Contents
List of Figures. List of Tables. Acknowledgements. Preface.
1. Background. 1.1. Why this book was written. 1.2. Simulation-based optimization and modern times. 1.3. How this book is organized.
2. Notation. 2.1. Chapter overview. 2.2. Some basic conventions. 2.3. Vector notation. 2.4. Notation for matrices. 2.5. Notation for n-tuples. 2.6. Notation for sets. 2.7. Notation for sequences. 2.8. Notation for transformations. 2.9. Max, min and arg max. 2.10. Acronyms and abbreviations.
3. Probability theory: a refresher. 3.1. Overview of this chapter. 3.2. Laws of probability. 3.3. Probability distributions. 3.4. Expected value of a random variable. 3.5. Standard deviation of a random variable. 3.6. Limit theorems. 3.7. Review questions.
4. Basic concepts underlying simulation. 4.1. Chapter overview. 4.2. Introduction. 4.3. Models. 4.4. Simulation modeling of random systems. 4.5. Concluding remarks. 4.6. Historical remarks. 4.7. Review questions.
5. Simulation optimization: an overview. 5.1. Chapter overview. 5.2. Stochastic parametric optimization. 5.3. Stochastic control optimization. 5.4. Historical remarks. 5.5. Review questions.
6. Response surfaces and neural nets. 6.1. Chapter overview. 6.2. RSM: an overview. 6.3. RSM: details. 6.4. Neuro-response surface methods. 6.5. Concluding remarks. 6.6. Bibliographic remarks. 6.7. Review questions.
7. Parametric optimization. 7.1. Chapter overview. 7.2. Continuous optimization. 7.3. Discrete optimization. 7.4. Hybrid solution spaces. 7.5. Concluding remarks. 7.6. Bibliographic remarks. 7.7. Review questions.
8. Dynamic programming. 8.1. Chapter overview. 8.2. Stochastic processes. 8.3. Markov processes, Markov chains and semi-Markov processes. 8.4. Markov decision problems. 8.5. How to solve an MDP using exhaustive enumeration. 8.6. Dynamic programming for average reward. 8.7. Dynamic programming and discounted reward. 8.8. The Bellman equation: an intuitive perspective. 8.9. Semi-Markov decision problems. 8.10. Modified policy iteration. 8.11. Miscellaneous topics related to MDPs and SMDPs. 8.12. Conclusions. 8.13. Bibliographic remarks. 8.14. Review questions.
9. Reinforcement learning. 9.1. Chapter overview. 9.2. The need for reinforcement learning. 9.3. Generating the TPM through straightforward counting. 9.4. Reinforcement learning: fundamentals. 9.5. Discounted reward reinforcement learning. 9.6. Average reward reinforcement learning. 9.7. Semi-Markov decision problems and RL. 9.8. RL algorithms and their DP counterparts. 9.9. Act
by "Nielsen BookData"