What is constrained, optimal, model predictive control? MPC itself is a control scheme that uses past data, model physics, and future demands in order to better control a system. Optimal MPC refers to a controller that calculates the globally optimal inputs needed to meet an objective; note there are many flavors of MPC that are not necessarily optimal, such as Generalized Predictive Control (GPC). Constrained control is where the controller mathematically accounts for the limitations of the system ahead of time and optimizes around those constraints. For example, on/off systems cannot be driven past on; another term for this is input saturation. Consequently, this control scheme produces the best compromise toward the desired targets while taking into account the real-world limitations of the system. The following sections will demonstrate how to implement constrained optimal MPC as well as contrast its performance against other control schemes.
Transfer Function Example:
Consider a basic model such as a second order transfer function:
$$ H(s) = \frac{a \cdot s + b}{s^2 + c \cdot s + d} \quad\quad (1) $$ where $ a,b,c,d $ represent optimally chosen model parameters. The optimal choice of these parameters is known as system identification. While this identification process is out of the scope of this blog post, note the Zallus Controller must perform it in order to implement MPC on an arbitrary system. The next step is to discretize the system, which can be done through various methods such as zero-order hold (ZOH), forward difference, or the bilinear transform, to name a few. I have a basic forward difference example in this blog post.
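For reference, the forward difference method simply replaces each $ s $ in equation $ (1) $ with its discrete approximation (here $ T_s $ denotes the sample period, a symbol introduced just for this illustration):

$$ s \approx \frac{z - 1}{T_s} \quad \Rightarrow \quad H(z) = H(s)\Big|_{s = \frac{z-1}{T_s}} $$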
$$ \frac{Y(k)}{U(k)} = \frac{A \cdot z^{-1} + B \cdot z^{-2}}{1 -C \cdot z^{-1} -D \cdot z^{-2}} \quad\quad (2) $$ $$ \hat{y}(k+1) = C \cdot y(k) + D \cdot y(k-1) + A \cdot u(k) + B \cdot u(k-1) \quad\quad (3) $$
Equation $ (2) $ shows the transfer function in the Z domain where $ A,B,C,D $ represent the discretized model parameters. Applying the time-shift property of the Z-transform and a bit of algebra, one can convert this into the recursive predictor form shown in equation $ (3) $. Now, with some structure in place, consider a basic example of a 2nd order transfer function with input constraints where the input can only be between 0 and 1:
$$ H(z) = \frac{0.6 \cdot z^{-1} + 0.2 \cdot z^{-2}}{1 -0.9 \cdot z^{-1} +0.05 \cdot z^{-2}} \ , \quad 0 < u(k) < 1 \quad\quad (4) $$ Next, if we formulate this discrete transfer function in the form of equation $ (3) $ and then solve for the present input, we get the following:
$$ u(k) = \frac{\hat{y}(k+1) -0.9 \cdot y(k) +0.05 \cdot y(k-1) -0.2 \cdot u(k-1)}{0.6} \ , \quad 0 < u(k) < 1 \quad\quad (5) $$
Equation $ (5) $ is known as one step ahead control. This control scheme can execute a trajectory perfectly, assuming the input can take any value. However, given the reality of input constraints, this control is no longer optimal, as seen in the plot below where the input is clamped at 0 and 1:
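To make this concrete, here is a minimal Python/NumPy sketch of one step ahead control for the model of equation $ (4) $ with the input clamped to the 0–1 range. The square-wave target used here is purely an assumption for illustration, not the trajectory from my plots.

```python
import numpy as np

# Discretized model parameters from equation (4); the predictor of equation (3) is
#   y(k+1) = C*y(k) + D*y(k-1) + A*u(k) + B*u(k-1)
A, B = 0.6, 0.2        # input coefficients
C, D = 0.9, -0.05      # output coefficients

N = 50
# Assumed square-wave target trajectory, purely for illustration
target = np.where((np.arange(N + 1) // 10) % 2 == 0, 0.2, 0.8)

y = np.zeros(N + 1)    # simulated plant output
u = np.zeros(N + 1)    # applied input

for k in range(1, N):
    # One step ahead control, equation (5): solve the predictor for u(k)
    # so that the predicted y(k+1) lands exactly on the target
    u_ideal = (target[k + 1] - C * y[k] - D * y[k - 1] - B * u[k - 1]) / A
    # Input saturation: the real actuator can only deliver 0..1
    u[k] = min(max(u_ideal, 0.0), 1.0)
    # Propagate the model one step using equation (3)
    y[k + 1] = C * y[k] + D * y[k - 1] + A * u[k] + B * u[k - 1]

print(np.round(y[:12], 3))  # with saturation active, the output lags the target on big steps
```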
As seen in the plot, with real world constraints, one step ahead control is no longer an optimal control. Consequently, to achieve an optimal control, the constraints must be factored in when solving for the inputs. In order to handle MPC as a large scale constrained optimization problem, the problem should be stated in matrix form rather than as a recursive algorithm. This matrix form must also handle initial conditions so that the MPC solution can be continuously updated as new information becomes available. An example will be shown for predicting 4 steps into the future. In this matrix form, the future outputs $ \hat{y}(k+1), \hat{y}(k+2), \ldots, \hat{y}(k+N) $ are written as a function of the future inputs $ u(k), u(k+1), \ldots, u(k+N-1) $ as well as the past data $ \tilde{y}(k-1), \tilde{u}(k-1) $, where $ \tilde{\ } $ denotes a constant / initial condition.
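$$ \hat{y} = J \cdot \hat{u} + C \quad\quad (6) $$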
Equation $ (6) $ shows the shorthand matrix prediction equation, where $ \hat{y} $ holds our predicted future outputs and $ \hat{u} $ holds our chosen/calculated future inputs. The future inputs are multiplied by $ J $, which is the Jacobian matrix of the system, and $ C $ is a vector of constants that depend only on the initial conditions (the free response of the system). Note that Jacobians are crucial for nearly all optimization algorithms, as is their 2nd order equivalent, the Hessian (often approximated as $ H \approx J^T \cdot J $). Also note that the Jacobian in MPC problems is a lower triangular matrix as well as a Toeplitz matrix. These features are key for validation as well as for an efficient implementation of inversion/solving.
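As a rough sketch of how these matrices can be built for the example model of equation $ (4) $ (the function and variable names below are my own for illustration, not from the Zallus implementation): the columns of $ J $ are shifted copies of the model's impulse response, and the constant vector is the free response from the initial conditions with all future inputs set to zero.

```python
import numpy as np

def build_prediction_matrices(A, B, C, D, N, y0, ym1, um1):
    """Build J (N x N) and the constant vector c (N,) so that y_hat = J @ u_hat + c.

    A, B, C, D : discretized model parameters as in equation (3)
    N          : prediction horizon
    y0, ym1    : y(k), y(k-1) initial conditions
    um1        : u(k-1) initial condition
    """
    # Impulse response h[i] = effect of u(k) on y(k+1+i); it follows the
    # same recursion as the model, driven by the input coefficients A and B
    h = np.zeros(N)
    h[0] = A
    if N > 1:
        h[1] = C * h[0] + B
    for i in range(2, N):
        h[i] = C * h[i - 1] + D * h[i - 2]

    # J is lower triangular and Toeplitz: column j is h shifted down by j rows
    J = np.zeros((N, N))
    for j in range(N):
        J[j:, j] = h[: N - j]

    # Free response c: what the output would do with all future inputs set to zero
    # (lowercase c to avoid clashing with the model parameter C)
    c = np.zeros(N)
    c[0] = C * y0 + D * ym1 + B * um1
    if N > 1:
        c[1] = C * c[0] + D * y0
    for i in range(2, N):
        c[i] = C * c[i - 1] + D * c[i - 2]
    return J, c

# Example: 4-step prediction for the model of equation (4), starting from rest
J, c = build_prediction_matrices(0.6, 0.2, 0.9, -0.05, N=4, y0=0.0, ym1=0.0, um1=0.0)
print(np.round(J, 4))   # lower triangular, Toeplitz
```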
Moving on, solve equation $ (6) $ for the optimal inputs given target output values as well as a set of initial conditions.
$$ \hat{u} = J^{-1} \cdot (\hat{y} - C) \quad\quad (8) $$ $$ \hat{u} = \underset{lb \ \le \ \hat{u} \ \le \ ub}{\arg\min} \ \| J \cdot \hat{u} + C - \hat{y} \|^2 \quad\quad (9) $$ Equation $ (8) $ solves for the optimal inputs given the target future outputs and the initial conditions. However, this solution is still an un-constrained solution. Equation $ (9) $ states the constrained case, where $ lb, ub $ are the lower and upper bounds respectively. Note this is not an easy problem to solve optimally, in my humble opinion. This problem has $ 2 \cdot N $ constraints and is considered a non-linear programming problem. Furthermore, with so many constraints, if not handled with care the computation time can become excessive. The easiest way to solve this problem would be with an active set method; however, this would be a bad idea since active set methods generally only identify 1-2 active constraints at a time, while for MPC a common case is that half to all of the constraints are simultaneously active. Consequently, I would recommend a more intelligent method such as an interior point method. Matlab/Octave of course have solvers for these types of problems, but relying on them is not ideal for a cross platform IoT system. Consequently, implementing an efficient constrained solver in javascript was probably the hardest challenge of getting this Zallus Controller working. Looking at Matlab's constrained nonlinear algorithms is a good place to start for algorithm ideas. Once successfully implemented, the MPC solution to the previous example should look like the following:
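I am not going to reproduce the JavaScript solver here, but as a sketch of the idea, SciPy's bounded least-squares routine `scipy.optimize.lsq_linear` solves exactly the form of equation $ (9) $. This reuses `build_prediction_matrices` from the sketch above, and the 4-step target below is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import lsq_linear

N = 4                                     # prediction horizon
target = np.array([0.0, 1.0, 1.0, 1.0])   # assumed future targets y(k+1)..y(k+4)

# Prediction matrices for the model of equation (4), starting from rest
# (build_prediction_matrices is defined in the previous sketch)
J, c = build_prediction_matrices(0.6, 0.2, 0.9, -0.05, N, y0=0.0, ym1=0.0, um1=0.0)

# Unconstrained solution, equation (8): may demand inputs outside 0..1
u_unconstrained = np.linalg.solve(J, target - c)

# Constrained solution, equation (9): minimize ||J*u + c - target||^2 with 0 <= u <= 1
res = lsq_linear(J, target - c, bounds=(0.0, 1.0))
u_constrained = res.x

print(np.round(u_unconstrained, 3))
print(np.round(u_constrained, 3))
```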
As expected, the constrained optimal MPC reacts well ahead of time to future demands and bisects the step changes in the output to minimize the error. Naturally, this produces lower error than the previously shown control methods. Interestingly enough, at $ k=27, k=36 $ there appears to be some ripple and seemingly sub-optimal control. Surprisingly, those points are indeed optimal: the controller is making use of the nature of the model to achieve a better overall curve. The following plot shows all of the discussed control techniques together for reference.
Finite Horizon:
While the previous examples demonstrate the concept of MPC, they are missing a key construct for practical implementation: a finite horizon. Without limiting the scope of the future predictions, the calculations involve optimization over $ N $ by $ N $ matrices, where $ N $ is the length of the target. In order to make the computation time feasible on the fly, it follows that the number of future predictions must be limited. The plots previously shown were produced with a so-called infinite horizon, i.e. a horizon covering all the steps, which was $ N = 50 $. However, a future horizon of only $ N = 4 $ is enough to achieve optimal control of the previous examples. The finite horizon required for optimal control depends on the model characteristics as well as the target trajectory. To illustrate the effect of non-infinite horizons, please see the following plot. Note that the accuracy is reduced and converges towards one step ahead control as the finite horizon is reduced. Also note that the model and target were modified to better illustrate the effect of the horizon.
As seen above, when the finite horizon is reduced, the control has less ability to anticipate and optimize for future demand. Consequently, the overall error increases, but the computation time per step decreases. Accordingly, optimal MPC is a balancing act between the computation time required and the control time step that is acceptable.
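Putting the pieces together, a receding finite-horizon controller re-solves the bounded problem at every time step and applies only the first input of each solution. Below is a rough sketch, again reusing `build_prediction_matrices` and `lsq_linear` from the earlier sketches, with an assumed square-wave target; shrink `horizon` to 1 and the behavior degrades toward one step ahead control.

```python
import numpy as np
from scipy.optimize import lsq_linear

A_, B_, C_, D_ = 0.6, 0.2, 0.9, -0.05   # model of equation (4)
horizon = 4                             # finite prediction horizon
N = 50
# Assumed square-wave target, padded so the horizon never runs past the end
target = np.where((np.arange(N + horizon + 1) // 10) % 2 == 0, 0.2, 0.8)

y = np.zeros(N + 1)
u = np.zeros(N + 1)

for k in range(1, N):
    # Rebuild J and c from the current initial conditions (see build_prediction_matrices above)
    J, c = build_prediction_matrices(A_, B_, C_, D_, horizon,
                                     y0=y[k], ym1=y[k - 1], um1=u[k - 1])
    # Solve the bounded problem of equation (9) over the next `horizon` steps only
    res = lsq_linear(J, target[k + 1 : k + 1 + horizon] - c, bounds=(0.0, 1.0))
    # Receding horizon: apply the first input, discard the rest, repeat next step
    u[k] = res.x[0]
    y[k + 1] = C_ * y[k] + D_ * y[k - 1] + A_ * u[k] + B_ * u[k - 1]

print(np.round(y[:15], 3))
```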
This concludes my intro to optimal constrained MPC. Happy controlling! I hope to see some other optimal MPC control products out there!