Control systems and reinforcement learning
Author(s)
Bibliographic Information
Control systems and reinforcement learning
Cambridge University Press, 2022
Hardback
Available at 1 library
Note
Bibliography: p. 415-430
Includes index
Description and Table of Contents
Description
A high school student can create deep Q-learning code to control her robot, without any understanding of the meaning of 'deep' or 'Q', or why the code sometimes fails. This book is designed to explain the science behind reinforcement learning and optimal control in a way that is accessible to students with a background in calculus and matrix algebra. A unique focus is algorithm design to obtain the fastest possible speed of convergence for learning algorithms, along with insight into why reinforcement learning sometimes fails. Advanced stochastic process theory is avoided at the start by substituting random exploration with more intuitive deterministic probing for learning. Once these ideas are understood, it is not difficult to master techniques rooted in stochastic control. These topics are covered in the second part of the book, starting with Markov chain theory and ending with a fresh look at actor-critic methods for reinforcement learning.
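To make the "Q" in Q-learning concrete: the tabular form of the algorithm the description alludes to maintains a table Q[state, action] and nudges each entry toward the observed reward plus the discounted value of the best next action. The sketch below is illustrative only and not taken from the book; the toy two-state environment, step sizes, and seed are all assumptions for demonstration.

```python
import numpy as np

# Toy 2-state, 2-action MDP (hypothetical, for illustration only):
# taking action 1 in state 0 yields reward 1 and moves to state 1;
# every other (state, action) pair yields reward 0 and stays put.
def step(state, action):
    if state == 0 and action == 1:
        return 1, 1.0  # (next state, reward)
    return state, 0.0

rng = np.random.default_rng(0)
Q = np.zeros((2, 2))           # Q[state, action] value estimates
alpha, gamma = 0.5, 0.9        # step size and discount factor

for _ in range(200):
    s = int(rng.integers(2))   # purely random exploration over states
    a = int(rng.integers(2))   # and actions
    s_next, r = step(s, a)
    # Q-learning temporal-difference update:
    # move Q[s, a] toward r + gamma * max_a' Q[s_next, a']
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
```

After training, Q[0, 1] dominates Q[0, 0], so the greedy policy correctly takes action 1 in state 0. Convergence-rate questions for exactly this kind of recursion are a central theme of the book.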
Table of Contents
- 1. Introduction
- Part I. Fundamentals Without Noise: 2. Control crash course
- 3. Optimal control
- 4. ODE methods for algorithm design
- 5. Value function approximations
- Part II. Reinforcement Learning and Stochastic Control: 6. Markov chains
- 7. Stochastic control
- 8. Stochastic approximation
- 9. Temporal difference methods
- 10. Setting the stage, return of the actors
- A. Mathematical background
- B. Markov decision processes
- C. Partial observations and belief states
- References
- Glossary of Symbols and Acronyms
- Index.
by "Nielsen BookData"