Optimal control
Author(s)
Frank L. Lewis, Draguna L. Vrabie, Vassilis L. Syrmos
Bibliographic Information
Optimal control
Wiley, c2012
3rd ed.
Available at 5 libraries
Note
Includes bibliographical references and index
Description and Table of Contents
Description
A NEW EDITION OF THE CLASSIC TEXT ON OPTIMAL CONTROL THEORY
As a superb introductory text and an indispensable reference, this new edition of Optimal Control will serve the needs of both the professional engineer and the advanced student in mechanical, electrical, and aerospace engineering. Its coverage encompasses all the fundamental topics as well as the major changes that have occurred in recent years. An abundance of computer simulations using MATLAB and relevant Toolboxes is included to give the reader the actual experience of applying the theory to real-world situations. Major topics covered include:
Static Optimization
Optimal Control of Discrete-Time Systems
Optimal Control of Continuous-Time Systems
The Tracking Problem and Other LQR Extensions
Final-Time-Free and Constrained Input Control
Dynamic Programming
Optimal Control for Polynomial Systems
Output Feedback and Structured Control
Robustness and Multivariable Frequency-Domain Techniques
Differential Games
Reinforcement Learning and Optimal Adaptive Control
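To give a flavor of the discrete-time LQR material listed above (covered in Chapter 2 and exercised in the book's MATLAB simulations), the following is a minimal illustrative sketch only, not code from the book: a finite-horizon discrete-time LQR via the backward Riccati recursion, written in Python and assuming NumPy and a hypothetical double-integrator plant.

```python
import numpy as np

# Hypothetical plant x_{k+1} = A x_k + B u_k (double integrator, dt = 0.1)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])

# Cost: sum_k (x_k' Q x_k + u_k' R u_k) + x_N' S_N x_N (weights chosen arbitrarily)
Q = np.eye(2)
R = np.array([[1.0]])
S = np.eye(2)          # terminal weight S_N
N = 50                 # horizon length

# Backward Riccati recursion producing the time-varying feedback gains K_k
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)   # K_k = (R + B'SB)^{-1} B'SA
    S = A.T @ S @ (A - B @ K) + Q                        # S_k = A'S_{k+1}(A - BK_k) + Q
    gains.append(K)
gains.reverse()        # gains[k] now applies at time step k

# Simulate the closed loop u_k = -K_k x_k from an initial state
x = np.array([[1.0], [0.0]])
for k in range(N):
    u = -gains[k] @ x
    x = A @ x + B @ u
print("final state:", x.ravel())
```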
Table of Contents
PREFACE xi
1 STATIC OPTIMIZATION 1
1.1 Optimization without Constraints / 1
1.2 Optimization with Equality Constraints / 4
1.3 Numerical Solution Methods / 15
Problems / 15
2 OPTIMAL CONTROL OF DISCRETE-TIME SYSTEMS 19
2.1 Solution of the General Discrete-Time Optimization Problem / 19
2.2 Discrete-Time Linear Quadratic Regulator / 32
2.3 Digital Control of Continuous-Time Systems / 53
2.4 Steady-State Closed-Loop Control and Suboptimal Feedback / 65
2.5 Frequency-Domain Results / 96
Problems / 102
3 OPTIMAL CONTROL OF CONTINUOUS-TIME SYSTEMS 110
3.1 The Calculus of Variations / 110
3.2 Solution of the General Continuous-Time Optimization Problem / 112
3.3 Continuous-Time Linear Quadratic Regulator / 135
3.4 Steady-State Closed-Loop Control and Suboptimal Feedback / 154
3.5 Frequency-Domain Results / 164
Problems / 167
4 THE TRACKING PROBLEM AND OTHER LQR EXTENSIONS 177
4.1 The Tracking Problem / 177
4.2 Regulator with Function of Final State Fixed / 183
4.3 Second-Order Variations in the Performance Index / 185
4.4 The Discrete-Time Tracking Problem / 190
4.5 Discrete Regulator with Function of Final State Fixed / 199
4.6 Discrete Second-Order Variations in the Performance Index / 206
Problems / 211
5 FINAL-TIME-FREE AND CONSTRAINED INPUT CONTROL 213
5.1 Final-Time-Free Problems / 213
5.2 Constrained Input Problems / 232
Problems / 257
6 DYNAMIC PROGRAMMING 260
6.1 Bellman's Principle of Optimality / 260
6.2 Discrete-Time Systems / 263
6.3 Continuous-Time Systems / 271
Problems / 283
7 OPTIMAL CONTROL FOR POLYNOMIAL SYSTEMS 287
7.1 Discrete Linear Quadratic Regulator / 287
7.2 Digital Control of Continuous-Time Systems / 292
Problems / 295
8 OUTPUT FEEDBACK AND STRUCTURED CONTROL 297
8.1 Linear Quadratic Regulator with Output Feedback / 297
8.2 Tracking a Reference Input / 313
8.3 Tracking by Regulator Redesign / 327
8.4 Command-Generator Tracker / 331
8.5 Explicit Model-Following Design / 338
8.6 Output Feedback in Game Theory and Decentralized Control / 343
Problems / 351
9 ROBUSTNESS AND MULTIVARIABLE FREQUENCY-DOMAIN TECHNIQUES 355
9.1 Introduction / 355
9.2 Multivariable Frequency-Domain Analysis / 357
9.3 Robust Output-Feedback Design / 380
9.4 Observers and the Kalman Filter / 383
9.5 LQG/Loop-Transfer Recovery / 408
9.6 H∞ Design / 430
Problems / 435
10 DIFFERENTIAL GAMES 438
10.1 Optimal Control Derived Using Pontryagin's Minimum Principle and the Bellman Equation / 439
10.2 Two-player Zero-sum Games / 444
10.3 Application of Zero-sum Games to H∞ Control / 450
10.4 Multiplayer Non-zero-sum Games / 453
11 REINFORCEMENT LEARNING AND OPTIMAL ADAPTIVE CONTROL 461
11.1 Reinforcement Learning / 462
11.2 Markov Decision Processes / 464
11.3 Policy Evaluation and Policy Improvement / 474
11.4 Temporal Difference Learning and Optimal Adaptive Control / 489
11.5 Optimal Adaptive Control for Discrete-time Systems / 490
11.6 Integral Reinforcement Learning for Optimal Adaptive Control of Continuous-time Systems / 503
11.7 Synchronous Optimal Adaptive Control for Continuous-time Systems / 513
APPENDIX A REVIEW OF MATRIX ALGEBRA 518
A.1 Basic Definitions and Facts / 518
A.2 Partitioned Matrices / 519
A.3 Quadratic Forms and Definiteness / 521
A.4 Matrix Calculus / 523
A.5 The Generalized Eigenvalue Problem / 525
REFERENCES 527
INDEX 535
by "Nielsen BookData"