- Covers the main topics of optimal control theory, including the Pontryagin principle, Bellman’s dynamic programming method, and theorems about the existence of optimal controls
- Presents diverse examples from flight mechanics, chemical and electrical engineering, production planning models, and the classical calculus of variations
- Describes control system models with and without time delays
- Presents proofs of the Pontryagin maximum principle
- Includes an introduction to Hamilton-Jacobi theory, tying together the viscosity solution and Subbotin approaches

**Nonlinear Optimal Control Theory** presents a deep, wide-ranging introduction to the mathematical theory of the optimal control of processes governed by ordinary differential equations and certain types of differential equations with memory. Many examples illustrate the mathematical issues that need to be addressed when using optimal control techniques in diverse areas.

Drawing on classroom-tested material from Purdue University and North Carolina State University, the book gives a unified account of bounded state problems governed by ordinary, integrodifferential, and delay systems. It also discusses Hamilton-Jacobi theory. By providing a thorough and rigorous treatment of finite dimensional control problems, the book equips readers with the foundation to deal with other types of control problems, such as those governed by stochastic differential equations, partial differential equations, and differential games.

**Examples of Control Problems**

Introduction

A Problem of Production Planning

Chemical Engineering

Flight Mechanics

Electrical Engineering

The Brachistochrone Problem

An Optimal Harvesting Problem

Vibration of a Nonlinear Beam

**Formulation of Control Problems**

Introduction

Formulation of Problems Governed by Ordinary Differential Equations

Mathematical Formulation

Equivalent Formulations

Isoperimetric Problems and Parameter Optimization

Relationship with the Calculus of Variations

Hereditary Problems

**Relaxed Controls**

Introduction

The Relaxed Problem; Compact Constraints

Weak Compactness of Relaxed Controls

Filippov’s Lemma

The Relaxed Problem; Non-Compact Constraints

The Chattering Lemma; Approximation to Relaxed Controls

**Existence Theorems; Compact Constraints**

Introduction

Non-Existence and Non-Uniqueness of Optimal Controls

Existence of Relaxed Optimal Controls

Existence of Ordinary Optimal Controls

Classes of Ordinary Problems Having Solutions

Inertial Controllers

Systems Linear in the State Variable

**Existence Theorems; Non-Compact Constraints**

Introduction

Properties of Set Valued Maps

Facts from Analysis

Existence via the Cesari Property

Existence without the Cesari Property

Compact Constraints Revisited

**The Maximum Principle and Some of Its Applications**

Introduction

A Dynamic Programming Derivation of the Maximum Principle

Statement of Maximum Principle

An Example

Relationship with the Calculus of Variations

Systems Linear in the State Variable

Linear Systems

The Linear Time Optimal Problem

Linear Plant-Quadratic Criterion Problem

**Proof of the Maximum Principle**

Introduction

Penalty Proof of Necessary Conditions in Finite Dimensions

The Norm of a Relaxed Control; Compact Constraints

Necessary Conditions for an Unconstrained Problem

The ε-Problem

The ε-Maximum Principle

The Maximum Principle; Compact Constraints

Proof of Theorem 6.3.9

Proof of Theorem 6.3.12

Proof of Theorem 6.3.17 and Corollary 6.3.19

Proof of Theorem 6.3.22

**Examples**

Introduction

The Rocket Car

A Non-Linear Quadratic Example

A Linear Problem with Non-Convex Constraints

A Relaxed Problem

The Brachistochrone Problem

Flight Mechanics

An Optimal Harvesting Problem

Rotating Antenna Example

**Systems Governed by Integrodifferential Systems**

Introduction

Problem Statement

Systems Linear in the State Variable

Linear Systems/The Bang-Bang Principle

Systems Governed by Integrodifferential Systems

Linear Plant Quadratic Cost Criterion

A Minimum Principle

**Hereditary Systems**

Introduction

Problem Statement and Assumptions

Minimum Principle

Some Linear Systems

Linear Plant-Quadratic Cost

Infinite Dimensional Setting

**Bounded State Problems**

Introduction

Statement of the Problem

ε-Optimality Conditions

Limiting Operations

The Bounded State Problem for Integrodifferential Systems

The Bounded State Problem for Ordinary Differential Systems

Further Discussion of the Bounded State Problem

Sufficiency Conditions

Nonlinear Beam Problem

**Hamilton-Jacobi Theory**

Introduction

Problem Formulation and Assumptions

Continuity of the Value Function

The Lower Dini Derivate Necessary Condition

The Value as Viscosity Solution

Uniqueness

The Value Function as Verification Function

Optimal Synthesis

The Maximum Principle

**Bibliography**

**Index**

This book provides a thorough introduction to optimal control theory for nonlinear systems. … The book is enhanced by the inclusion of many examples, which are analyzed in detail using Pontryagin’s principle. … An important feature of the book is its systematic use of a relaxed control formulation of optimal control problems. …

—From the Foreword by Wendell Fleming

… more than a very useful research account and a handy reference to users of the theory; they also make it a pleasant and helpful study opportunity for students and other newcomers to the theory of optimal control.

—Zvi Artstein, in *Mathematical Reviews*