A Numerical Approach to Stochastic Optimal Control via Dynamic Programming
Abstract
This paper presents a strategy for finding optimal controls of nonlinear systems subject to random excitations. The method can generate global control solutions in the presence of state and control constraints; the solution is global in the sense that controls are obtained for all initial conditions in a region of the state space. The approach is based on Bellman's Principle of Optimality, the cumulant-neglect closure method, and the short-time Gaussian approximation. Nonlinear problems with non-smooth terms and bounded controls are considered as examples. The uncontrolled and controlled system responses are evaluated by constructing a Markov chain with a control-dependent transition probability matrix via the Generalized Cell Mapping method.
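The pipeline outlined above can be sketched in miniature: once a cell mapping has discretized the state space into cells and produced a one-step transition probability matrix for each admissible control value, Bellman's principle reduces to a backward recursion over the resulting controlled Markov chain. The following is a hypothetical toy sketch, not the paper's implementation: the cell count, control set, transition probabilities, and stage cost are all illustrative placeholders standing in for quantities that the Generalized Cell Mapping and short-time Gaussian approximation would supply.

```python
# Hypothetical sketch: backward dynamic programming (Bellman's principle)
# over a cell-mapped Markov chain whose one-step transition probabilities
# depend on the chosen control. All numerical values are illustrative.

N_CELLS = 3          # coarse cell discretization of the state space
CONTROLS = [0, 1]    # admissible (bounded) control values

# P[u][i][j]: probability of moving from cell i to cell j under control u,
# playing the role of the control-dependent transition matrix that a
# Generalized Cell Mapping would estimate from short-time dynamics.
P = [
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.3, 0.7]],  # control u = 0
    [[0.6, 0.3, 0.1], [0.3, 0.6, 0.1], [0.1, 0.4, 0.5]],  # control u = 1
]

def stage_cost(i, u):
    """Illustrative running cost: penalize distance from cell 0 plus effort."""
    return float(i) + 0.5 * u

def bellman_backward(horizon):
    """Return the optimal cost-to-go V and a feedback policy for every cell,
    i.e. a 'global' solution covering all initial conditions at once."""
    V = [0.0] * N_CELLS                            # terminal cost
    policy = [[0] * N_CELLS for _ in range(horizon)]
    for k in reversed(range(horizon)):
        V_next = V[:]                              # cost-to-go at stage k + 1
        for i in range(N_CELLS):
            # Minimize expected cost over the admissible control set.
            costs = [
                stage_cost(i, u)
                + sum(P[u][i][j] * V_next[j] for j in range(N_CELLS))
                for u in CONTROLS
            ]
            best = min(range(len(CONTROLS)), key=costs.__getitem__)
            policy[k][i] = CONTROLS[best]
            V[i] = costs[best]
    return V, policy

V, policy = bellman_backward(horizon=10)
```

Because the recursion minimizes over the discrete control set cell by cell, control bounds and non-smooth cost terms enter without any modification to the algorithm, which is one reason the cell-mapping formulation suits the constrained problems the paper targets.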