Learning-Based Model Predictive Control for Markov Decision Processes
Authors: Rudy Negenborn (Delft University of Technology, Netherlands); Bart De Schutter (Delft University of Technology, Netherlands); Marco Wiering (Utrecht University, Netherlands); Hans Hellendoorn (Delft University of Technology, Netherlands)
Topic: 1.2 Adaptive and Learning Systems
Session: Optimal and Adaptive Control
Keywords: Markov decision processes, predictive control, learning
Abstract
We propose the use of Model Predictive Control (MPC) for controlling systems described by Markov decision processes. First, we consider a straightforward MPC algorithm for Markov decision processes. Then, we propose the use of value functions as a means to deal with issues arising in conventional MPC, such as high computational requirements and sub-optimality of actions. We use reinforcement learning to let an MPC agent learn a value function incrementally, so that the agent incorporates experience from its interaction with the system into its decision making. Our approach initially relies on pure MPC. Over time, as experience accumulates, the learned value function is taken more and more into account. This speeds up the decision making, allows decisions to be made over an infinite instead of a finite horizon, and provides adequate control actions, even if the system and the desired performance slowly vary over time.
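
To illustrate the idea described in the abstract, the following is a minimal sketch, not the paper's actual algorithm: a receding-horizon controller that enumerates action sequences over a small Markov decision process, evaluates them with the model, and blends in a terminal value function learned by TD(0) from interaction. The toy MDP, the exhaustive enumeration of action sequences, and the linear blending schedule for the learned value term are illustrative assumptions, not taken from the paper.

```python
import itertools
import random
from collections import defaultdict

# Hypothetical toy MDP (assumption, not from the paper):
# P[s][a] = list of (next_state, probability), R[s][a] = immediate reward.
STATES = range(4)
ACTIONS = range(2)
random.seed(0)
P = {s: {a: [((s + a) % 4, 0.8), ((s + a + 1) % 4, 0.2)] for a in ACTIONS}
     for s in STATES}
R = {s: {a: 1.0 if (s + a) % 4 == 3 else 0.0 for a in ACTIONS} for s in STATES}

GAMMA = 0.95            # discount factor
HORIZON = 3             # MPC prediction horizon
ALPHA = 0.1             # TD learning rate
V = defaultdict(float)  # learned value function V(s)


def expected_return(state, action_seq, weight):
    """Expected discounted return of an open-loop action sequence under the
    model, plus a terminal value term weighted by `weight` (0 = pure MPC)."""
    dist = {state: 1.0}  # distribution over states, propagated through the model
    total = 0.0
    for k, a in enumerate(action_seq):
        total += GAMMA ** k * sum(p * R[s][a] for s, p in dist.items())
        new_dist = defaultdict(float)
        for s, p in dist.items():
            for s_next, p_trans in P[s][a]:
                new_dist[s_next] += p * p_trans
        dist = new_dist
    # The learned value function approximates the return beyond the horizon.
    total += weight * GAMMA ** len(action_seq) * sum(p * V[s] for s, p in dist.items())
    return total


def mpc_action(state, weight):
    """Enumerate all action sequences of length HORIZON and apply the first
    action of the best one (receding-horizon control)."""
    best_seq = max(itertools.product(ACTIONS, repeat=HORIZON),
                   key=lambda seq: expected_return(state, seq, weight))
    return best_seq[0]


def sample_transition(state, action):
    """Simulate the true system (here identical to the model)."""
    r, acc = random.random(), 0.0
    for s_next, p in P[state][action]:
        acc += p
        if r <= acc:
            return s_next, R[state][action]
    return P[state][action][-1][0], R[state][action]


# Control loop: start with pure MPC (weight 0) and gradually trust the
# learned value function more as experience accumulates (assumed schedule).
state = 0
for t in range(200):
    weight = min(1.0, t / 100.0)
    action = mpc_action(state, weight)
    next_state, reward = sample_transition(state, action)
    # TD(0) update of the value function from the observed transition.
    V[state] += ALPHA * (reward + GAMMA * V[next_state] - V[state])
    state = next_state

print({s: round(V[s], 3) for s in STATES})
```

As the blending weight grows, the terminal value term lets the controller account for rewards beyond the finite prediction horizon, which is the mechanism the abstract refers to when it mentions moving from a finite to an effectively infinite decision horizon.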