LQR#
- class LQR(model)#
Bases: IteratedVariables
Linear Quadratic Regulator.
New in version >v4.5.1: New interface to settings. The class has an attribute settings which is an instance of LQRSettings (please see this documentation for a list of available settings). Settings are now chosen as:
lqr.settings.t_step = .5
Previously, settings were passed to set_param(). This method is still available and wraps the new interface. The new method has important advantages:
- The lqr.settings attribute can be printed to see the current configuration.
- Context help is available in most IDEs (e.g. VS Code) to see the available settings, the type and a description.
Use this class to configure and run the LQR controller according to the previously configured do_mpc.model.LinearModel instance. Two types of LQR can be designed:
- Finite horizon LQR by choosing, e.g., n_horizon = 20.
- Infinite horizon LQR by choosing n_horizon = None.
The value for n_horizon is set using set_param(), as shown below.
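For example (a minimal sketch of both options):
lqr.set_param(n_horizon = 20)    # finite horizon LQR
lqr.set_param(n_horizon = None)  # infinite horizon LQR (default)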
Configuration and setup:
Configuring and setting up the LQR controller involves the following steps:
1. Configure the LQR controller with the LQRSettings class. The LQR instance has the attribute settings, which is an instance of LQRSettings.
2. Set the objective of the control problem with set_objective().
3. To finalize the class configuration, call setup().
The LQR can be used in two different modes (sketched below):
Standard mode:
- Set the set-point with set_setpoint() (default is 0).
- Set Q and R values with set_objective().
Input Rate Penalization mode:
- The set-point can also be set using set_setpoint() (default is 0).
- Reformulate the objective with set_rterm() to penalize the input rate by setting the value delR.
- Set Q and R values with set_objective().
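A minimal sketch of both modes (the matrix values below are placeholders chosen purely for illustration):
import numpy as np

# Standard mode:
lqr.set_setpoint(xss = np.array([[10],[15]]), uss = np.array([[2],[3]]))
lqr.set_objective(Q = np.identity(2), R = np.identity(2))

# Input Rate Penalization mode:
lqr.set_rterm(delR = np.identity(2))
lqr.set_objective(Q = np.identity(2), R = np.identity(2))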
Note
Using set_rterm() is not recommended if the model was converted from a DAE to an ODE system, because the converted model is already in the rated-input formulation.
Note
During runtime, call make_step() with the current state \(x\) to obtain the optimal control input \(u\), and call set_setpoint() with the set-points of the input \(u_{ss}\) and the states \(x_{ss}\) in order to update the respective set-points.
- Parameters
model (LinearModel) – Linear model
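For example (a sketch, assuming a configured and set-up do_mpc.model.LinearModel instance named model):
import do_mpc
lqr = do_mpc.controller.LQR(model)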
Methods#
discrete_gain#
- discrete_gain(self, A, B)#
Computes the discrete gain.
This method computes either the finite horizon or the infinite horizon discrete gain. The gain is computed from the solution of the discrete-time algebraic Riccati equation.
For the finite horizon LQR, the problem formulation is as follows:
\[\begin{split}\pi(N) &= P_f\\ K(k) & = -(B'\pi(k+1)B+R)^{-1}B'\pi(k+1)A\\ \pi(k) & = Q+A'\pi(k+1)A-A'\pi(k+1)B(B'\pi(k+1)B+R)^{-1}B'\pi(k+1)A\end{split}\]
For the infinite horizon LQR, the problem formulation is as follows:
\[\begin{split}K & = -(B'PB+R)^{-1}B'PA\\ P & = Q+A'PA-A'PB(R+B'PB)^{-1}B'PA\end{split}\]
For example:
K = lqr.discrete_gain(A,B)
- Parameters
A (ndarray) – State matrix (constant matrix with no variables)
B (ndarray) – Input matrix (constant matrix with no variables)
- Returns
ndarray – Gain matrix \(K\)
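A hypothetical numeric sketch (the matrices are made up for illustration; the LQR instance must be configured beforehand):
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # state matrix
B = np.array([[0.005],
              [0.1]])        # input matrix
K = lqr.discrete_gain(A, B)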
make_step#
- make_step(self, x0)#
Main method of the class during runtime. This method is called at each timestep and returns the control input for the current initial state.
- Parameters
x0 (ndarray) – Current state of the system.
- Returns
ndarray – u0, the current control input of the system.
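A minimal closed-loop sketch (assuming full state measurement and a configured do_mpc simulator; names are illustrative):
for k in range(100):
    u0 = lqr.make_step(x0)          # compute optimal input for current state
    x0 = simulator.make_step(u0)    # advance the plant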
reset_history#
- reset_history(self)#
Reset the history of the LQR.
- Return type
None
set_objective#
- set_objective(self, Q, R, P=None)#
Sets the cost matrix for the Optimal Control Problem.
This method sets the cost matrices for the inputs, states and algebraic states of the given problem.
Since the controller can be operated in two modes, the objective function differs between them as follows:
Finite Horizon:
For set-point tracking mode:
\[\begin{split}\begin{aligned} J &= \frac{1}{2}\sum_{k=0}^{N-1} (x_k - x_{ss})^T Q(x_k-x_{ss})+(u_k-u_{ss})^T R(u_k-u_{ss})\\ &+ (x_N-x_{ss})^T P(x_N-x_{ss}) \end{aligned}\end{split}\]
For Input Rate Penalization mode:
\[J = \frac{1}{2}\sum_{k=0}^{N-1} (\tilde{x}_k - \tilde{x}_{ss})^T \tilde{Q}(\tilde{x}_k-\tilde{x}_{ss})+\Delta u_k^T \Delta R \Delta u_k + (\tilde{x}_N-\tilde{x}_{ss})^T P(\tilde{x}_N-\tilde{x}_{ss})\]
Infinite Horizon:
For set-point tracking mode:
\[J = \frac{1}{2}\sum_{k=0}^{\infty} (x_k - x_{ss})^T Q(x_k-x_{ss})+(u_k-u_{ss})^T R(u_k-u_{ss})\]
For Input Rate Penalization mode:
\[J = \frac{1}{2}\sum_{k=0}^{\infty} (\tilde{x}_k - \tilde{x}_{ss})^T \tilde{Q}(\tilde{x}_k-\tilde{x}_{ss})+ \Delta u_k^T \Delta R \Delta u_k\]
where \(\tilde{x} = [x,u]^T\).
Note
For the problem to be solved in inputRatePenalization mode, Q, R and delR should be set. delR is set using set_rterm(). The P term is set according to the needs of the problem.
For example:
# Values used here only illustrate how to use this function.
# For ODE models:
lqr.set_objective(Q = np.identity(2), R = np.identity(2), P = np.identity(2))
Warning
If Q, R or P is not passed explicitly, it is chosen as a matrix of zeros. If P is not given explicitly, Q is used as P when calculating the finite horizon discrete gain.
- Raises
exception – Q matrix must be of type class numpy.ndarray
exception – R matrix must be of type class numpy.ndarray
exception – P matrix must be of type class numpy.ndarray
- Parameters
Q (ndarray) – State cost matrix
R (ndarray) – Input cost matrix
P (Optional[ndarray]) – Terminal cost matrix (optional)
- Return type
None
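For the inputRatePenalization mode, a minimal sketch (values are placeholders; delR must be set via set_rterm()):
lqr.set_rterm(delR = np.identity(2))
lqr.set_objective(Q = np.identity(2), R = np.identity(2), P = np.identity(2))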
set_param#
- set_param(self, **kwargs)#
Set the parameters of the LQR class. Parameters must be passed as pairs of valid keywords and respective arguments.
Two different kinds of LQR can be designed. In order to design a finite horizon LQR, set n_horizon; to design an infinite horizon LQR, n_horizon should be set to None (default value).
- Return type
None
Deprecated since version >v4.5.1: This function will be deprecated in the future
Warning
This method will be deprecated in a future version. Please set parameters via do_mpc.controller.LQRSettings.
Note
A comprehensive list of all available parameters can be found in do_mpc.controller.LQRSettings.
For example:
lqr.settings.n_horizon = 20
The old interface, as shown in the example below, can still be accessed until further notice.
For example:
lqr.set_param(n_horizon = 20)
Note
The only required parameter is n_horizon. All other parameters are optional. set_param() can be called multiple times; previously passed arguments are overwritten by successive calls.
set_rterm#
- set_rterm(self, delR)#
Modifies the model such that the rated input acts as the input.
Warning
Calling set_rterm() modifies the objective function as well as the state and input matrices.
Warning
It is not advisable to execute LQR in the inputRatePenalization mode if the model was converted from a DAE to an ODE system, because the converted model is itself already in the inputRatePenalization form.
The input rate penalization formulation is given as:
\[\begin{split}\begin{aligned} \tilde{x}(k+1) &= \tilde{A} \tilde{x}(k) + \tilde{B}\Delta u(k)\\ \text{where} \quad \tilde{A} &= \begin{bmatrix} A & B \\ 0 & I \end{bmatrix},\quad \tilde{B} = \begin{bmatrix} B \\ I \end{bmatrix} \end{aligned}\end{split}\]
We introduce the new states of this system as \(\tilde{x} = [x,u]^T\), where \(x\) and \(u\) are the original states and inputs of the system. After reformulating the system with set_rterm(), the discrete gain is calculated using discrete_gain().
As the system state matrix and input matrix are altered, the cost matrices are also modified accordingly:
\[\begin{split}\tilde{Q} = \begin{bmatrix} Q & 0 \\ 0 & R \end{bmatrix},\quad \tilde{R} = \Delta R\end{split}\]
- Parameters
delR (ndarray) – Rated input cost matrix (constant matrix with no variables)
- Return type
None
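For example (a minimal sketch; the matrix value is a placeholder):
lqr.set_rterm(delR = np.identity(2))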
set_setpoint#
- set_setpoint(self, xss=None, uss=None)#
Sets setpoints for states and inputs.
This method can be used to set set-points for the states, the inputs, or both at each time step. It can be called inside the simulation loop to change the set-point dynamically.
Note
If a set-point is not specified explicitly, it is set to zero (default).
For example:
# For ODE models:
lqr.set_setpoint(xss = np.array([[10],[15]]), uss = np.array([[2],[3]]))
- Parameters
xss (Optional[ndarray]) – Set-point for the states of the system (optional)
uss (Optional[ndarray]) – Set-point for the inputs of the system (optional)
- Return type
None
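A sketch of changing the set-point dynamically inside a simulation loop (assuming full state measurement and a configured simulator; names and values are illustrative):
for k in range(100):
    if k == 50:
        # switch to a new state set-point halfway through the simulation
        lqr.set_setpoint(xss = np.array([[5],[5]]))
    u0 = lqr.make_step(x0)
    x0 = simulator.make_step(u0)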
setup#
- setup(self)#
Prepares LQR for execution. This method initializes and checks that all parameters necessary to design the LQR are available.
- Raises
exception – mode must be standard, inputRatePenalization, None. you have {string value}
- Return type
None
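Putting the configuration steps together, a typical call order looks as follows (a minimal sketch; values are placeholders):
import numpy as np

lqr.settings.t_step = 0.5
lqr.settings.n_horizon = None   # infinite horizon LQR
lqr.set_objective(Q = np.identity(2), R = np.identity(2))
lqr.setup()
u0 = lqr.make_step(x0)          # x0: current state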
Attributes#
t0#
- LQR.t0#
Current time marker of the class. Use this property to set or query the time.
Set with int, float, numpy.ndarray or casadi.DM type.
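For example (a sketch):
lqr.t0 = 0.0   # reset the time marker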
u0#
- LQR.u0#
Initial input and current iterate. This is the numerical structure holding the information about the current input in the class. The property can be indexed according to the model definition.
Example:
model = do_mpc.model.Model('continuous')
model.set_variable('_u','heating', shape=(4,1))
...
mhe = do_mpc.estimator.MHE(model)
# or
mpc = do_mpc.controller.MPC(model)

# Get or set current value of variable:
mpc.u0['heating', 0]   # 0th element of variable
mpc.u0['heating']      # all elements of variable
mpc.u0['heating', 0:2] # 0th and 1st element
Useful CasADi symbolic structure methods:
.shape
.keys()
.labels()
x0#
- LQR.x0#
Initial state and current iterate. This is the numerical structure holding the information about the current states in the class. The property can be indexed according to the model definition.
Example:
model = do_mpc.model.Model('continuous')
model.set_variable('_x','temperature', shape=(4,1))
...
mhe = do_mpc.estimator.MHE(model)
# or
mpc = do_mpc.controller.MPC(model)

# Get or set current value of variable:
mpc.x0['temperature', 0]   # 0th element of variable
mpc.x0['temperature']      # all elements of variable
mpc.x0['temperature', 0:2] # 0th and 1st element
Useful CasADi symbolic structure methods:
.shape
.keys()
.labels()
z0#
- LQR.z0#
Initial algebraic state and current iterate. This is the numerical structure holding the information about the current algebraic states in the class. The property can be indexed according to the model definition.
Example:
model = do_mpc.model.Model('continuous')
model.set_variable('_z','temperature', shape=(4,1))
...
mhe = do_mpc.estimator.MHE(model)
# or
mpc = do_mpc.controller.MPC(model)

# Get or set current value of variable:
mpc.z0['temperature', 0]   # 0th element of variable
mpc.z0['temperature']      # all elements of variable
mpc.z0['temperature', 0:2] # 0th and 1st element
Useful CasADi symbolic structure methods:
.shape
.keys()
.labels()