MHE#

class MHE(model, p_est_list=[])[source]#

Bases: Optimizer, Estimator

Moving horizon estimator.

New in version >v4.5.1: New interface to settings. The class has an attribute settings which is an instance of MHESettings (please see this documentation for a list of available settings). Settings are now chosen as:

mhe.settings.n_horizon = 20

Previously, settings were passed to set_param(). This method is still available and wraps the new interface. The new method has important advantages:

  1. The mhe.settings attribute can be printed to see the current configuration.

  2. Context help is available in most IDEs (e.g. VS Code) to see the available settings, the type and a description.

  3. The MHESettings class has convenient methods, such as MHESettings.supress_ipopt_output() to silence the solver.

For general information on moving horizon estimation, please read our background article.

The MHE estimator extends the do_mpc.optimizer.Optimizer base class (which is also used for do_mpc.controller.MPC), as well as the Estimator base class. Use this class to configure and run the MHE based on a previously configured do_mpc.model.Model instance.

The class is initiated by passing a list of the parameters that should be estimated. This must be a subset (or all) of the parameters defined in do_mpc.model.Model. This makes it possible to distinguish between parameters that influence the model externally (e.g. weather predictions) and internal parameters (e.g. system parameters) that can be estimated. Passing an empty list (the default) means that no parameters are estimated.
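As an instantiation sketch (the parameter name Theta_1 is a hypothetical placeholder; the model setup details are omitted):

```python
import do_mpc

# Sketch: a model with one internal parameter that should be estimated.
# States, inputs, measurements and model.setup() are omitted here.
model = do_mpc.model.Model('continuous')
Theta_1 = model.set_variable(var_type='_p', var_name='Theta_1')
# ... complete the model definition and call model.setup() ...

# Estimate Theta_1. Any remaining (not estimated) parameters must be
# supplied via get_p_template() and set_p_fun().
mhe = do_mpc.estimator.MHE(model, ['Theta_1'])
```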

Note

Parameters influence the model equations at all timesteps but are constant over the entire horizon. Parameters could also be introduced as states without dynamics, but this would increase the total number of optimization variables.

Configuration and setup:

Configuring and setting up the MHE involves the following steps:

  1. Configure the MHE estimator with MHESettings. The MHE instance has the attribute settings which is an instance of MHESettings.

  2. Set the objective of the estimation problem with set_default_objective() or use the low-level interface set_objective().

  3. Set upper and lower bounds.

  4. Optionally, set further (non-linear) constraints with set_nl_cons().

  5. Use get_p_template() and set_p_fun() to set the function for the (not estimated) parameters.

  6. Use get_tvp_template() and set_tvp_fun() to create a method to obtain new time-varying parameters at each iteration.

  7. To finalize the class configuration there are two routes. The default approach is to call setup(). For deep customization use the combination of prepare_nlp() and create_nlp(). See the graph below for an illustration of the process.
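The steps above can be sketched as a minimal configuration sequence (the variable name 'x' and the tuning matrices P_x, P_v, P_p, P_w are placeholders for names and values from your model):

```python
# 1. Settings:
mhe.settings.n_horizon = 10
mhe.settings.t_step = 0.1

# 2. Objective (tuning matrices of appropriate dimension):
mhe.set_default_objective(P_x, P_v, P_p, P_w)

# 3. Bounds:
mhe.bounds['lower', '_x', 'x'] = 0.0

# 5. Not estimated parameters (constant in this sketch):
p_template = mhe.get_p_template()
mhe.set_p_fun(lambda t_now: p_template)

# 6. Time-varying parameters:
tvp_template = mhe.get_tvp_template()
mhe.set_tvp_fun(lambda t_now: tvp_template)

# 7. Finalize:
mhe.setup()
```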

[Figure: flowchart of the two setup routes. Either call setup() directly, or call prepare_nlp(), modify the NLP via the attributes opt_x, opt_p, nlp_cons and nlp_obj, and then call create_nlp(). Both routes result in a configured MHE class.]

Route to setting up the MHE class.#

Warning

Before running the estimator, make sure to supply a valid initial guess for all estimated variables (states, algebraic states, inputs and parameters). Simply set the initial values of x0, z0, u0 and p_est0 and then call set_initial_guess(). To take full control over the initial guess, modify the values of opt_x_num.
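For example (a sketch; the shapes and values are hypothetical and must match your model definition):

```python
import numpy as np

mhe.x0 = np.array([[1.0], [0.0], [0.0]])  # initial state guess
mhe.p_est0 = np.array([[1e-4]])           # guess for the estimated parameter

mhe.set_initial_guess()
```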

During runtime use make_step() with the most recent measurement to obtain the estimated states.

Parameters:
  • model (Union[Model, LinearModel]) – A configured and setup do_mpc.model

  • p_est_list (list) – List with names of parameters (_p) defined in model

Methods#

compile_nlp#

compile_nlp(self, overwrite=False, cname='nlp.c', libname='nlp.so', compiler_command=None)#

Compile the NLP. This may accelerate the optimization. As compilation is time consuming, the default option is to NOT overwrite (overwrite=False) an existing compilation. If an existing compilation with the name libname is found, it is used. This can be dangerous if the NLP has changed (e.g. the user tweaked the cost function or the model).

Warning

This feature is experimental and currently only supported on Linux and MacOS.

What happens here?

  1. The NLP is written to a C-file (cname).

  2. The C-file (cname) is compiled. The custom compiler uses:

gcc -fPIC -shared -O1 {cname} -o {libname}

  3. The compiled library is linked to the NLP. This overwrites the original NLP. Options from the previous NLP (e.g. linear solver) are kept.

self.S = nlpsol('solver_compiled', 'ipopt', f'{libname}', self.nlpsol_opts)
Parameters:
  • overwrite (bool) – If True, the existing compiled NLP will be overwritten.

  • cname (str) – Name of the C file that will be exported.

  • libname (str) – Name of the shared library that will be created after compilation.

  • compiler_command (str) – Command to use for compiling. If None, the default compiler command will be used. Please make sure to use matching strings for libname when supplying your custom compiler command.

Return type:

None
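A usage sketch (the filenames are the documented defaults; the custom command is an assumption mirroring the default compiler call shown above):

```python
# Compile once; subsequent runs reuse 'nlp.so' unless overwrite=True:
mhe.compile_nlp()

# Optionally supply a custom compiler command. Note that libname must
# match the output name used in the command:
mhe.compile_nlp(overwrite=True, cname='nlp.c', libname='nlp.so',
                compiler_command='gcc -fPIC -shared -O3 nlp.c -o nlp.so')
```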

create_nlp#

create_nlp(self)#

Create the optimization problem. Typically, this method is called internally from setup().

Users should only call this method if they intend to modify the objective with nlp_obj or the constraints with nlp_cons, nlp_cons_lb and nlp_cons_ub.

In that case, call prepare_nlp() first, apply the modifications, and finish the setup process by calling create_nlp().

Note

Do NOT call setup() if you intend to go the manual route with prepare_nlp() and create_nlp().

Note

Only AFTER calling prepare_nlp() do the previously mentioned attributes nlp_obj, nlp_cons, nlp_cons_lb and nlp_cons_ub become available.

Returns:

None – None

get_p_template#

get_p_template(self)#

Obtain output template for set_p_fun(). This is used to set the (not estimated) parameters. Use this structure as the return of a user defined parameter function (p_fun) that is called at each MHE step. Pass this function to the MHE by calling set_p_fun().

Note

The combination of get_p_template() and set_p_fun() is identical to the do_mpc.simulator.Simulator methods, if the MHE is not estimating any parameters.

Returns:

Union[SXStruct, MXStruct] – p_template
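A sketch of the combination (assuming, hypothetically, that the model defines a not estimated parameter named alpha):

```python
p_template = mhe.get_p_template()

def p_fun(t_now):
    # Constant in this sketch; the value could also depend on t_now:
    p_template['alpha'] = 1.0
    return p_template

mhe.set_p_fun(p_fun)
```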

get_tvp_template#

get_tvp_template(self)#

Obtain output template for set_tvp_fun().

The method returns a structured object with n_horizon+1 elements, and a set of time-varying parameters (as defined in do_mpc.model.Model) for each of these instances. The structure is initialized with all zeros. Use this object to define values of the time-varying parameters.

This structure (with numerical values) should be used as the output of the tvp_fun function which is set to the class with set_tvp_fun(). Use the combination of get_tvp_template() and set_tvp_fun().

Example:

# in model definition:
alpha = model.set_variable(var_type='_tvp', var_name='alpha')
beta = model.set_variable(var_type='_tvp', var_name='beta')

...
# in optimizer configuration:
tvp_temp_1 = optimizer.get_tvp_template()
tvp_temp_1['_tvp', :] = np.array([1,1])

tvp_temp_2 = optimizer.get_tvp_template()
tvp_temp_2['_tvp', :] = np.array([0,0])

def tvp_fun(t_now):
    if t_now < 10:
        return tvp_temp_1
    else:
        return tvp_temp_2

optimizer.set_tvp_fun(tvp_fun)
Returns:

Union[SXStruct, MXStruct] – Casadi SX or MX structure

get_y_template#

get_y_template(self)#

Obtain output template for set_y_fun().

Use this structure as the return of a user defined parameter function (y_fun) that is called at each MHE step. Pass this function to the MHE by calling set_y_fun().

The structure carries a set of measurements for each time step of the horizon and can be accessed as follows:

y_template['y_meas', k, 'meas_name']
# Slicing is possible, e.g.:
y_template['y_meas', :, 'meas_name']

where k runs from 0 to N_horizon and meas_name refers to the user-defined names in do_mpc.model.

Note

The structure is ordered, such that k=0 is the oldest measurement and k=N_horizon is the newest measurement.

By default, the following measurement function is chosen:

y_template = self.get_y_template()

def y_fun(t_now):
    n_steps = min(self.data._y.shape[0], self.n_horizon)
    # Fill the template with the most recent measurements:
    for k in range(-n_steps, 0):
        y_template['y_meas', k] = self.data._y[k]
    # Pad with the oldest available measurement while the horizon is not yet full:
    try:
        for k in range(self.n_horizon - n_steps):
            y_template['y_meas', k] = self.data._y[-n_steps]
    except IndexError:
        pass
    return y_template

This simply reads the most recent results from the MHE.data object.

Returns:

Union[SXStruct, MXStruct] – y_template

make_step#

make_step(self, y0)#

Main method of the class during runtime. This method is called at each timestep and returns the current state estimate for the current measurement y0.

The method prepares the MHE by setting the current parameters, calls solve() and updates the do_mpc.data.Data object.

Warning

Moving horizon estimation will only work reliably once a full sequence of measurements corresponding to the set horizon is available.

Parameters:

y0 (ndarray) – Current measurement.

Returns:

ndarray – x0, estimated state of the system.
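A typical runtime loop (sketch; simulator and u0 stand for a configured do_mpc.simulator.Simulator and the applied input, both assumed to exist):

```python
for k in range(100):
    # Obtain a new measurement, e.g. from a configured simulator:
    y0 = simulator.make_step(u0)
    # Estimate the current state from the measurement:
    x0_est = mhe.make_step(y0)
```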

prepare_nlp#

prepare_nlp(self)#

Prepare the optimization problem. Typically, this method is called internally from setup().

Users should only call this method if they intend to modify the objective with nlp_obj, the constraints with nlp_cons, nlp_cons_lb and nlp_cons_ub.

To finish the setup process, users MUST call create_nlp() afterwards.

Note

Do NOT call setup() if you intend to go the manual route with prepare_nlp() and create_nlp().

Note

Only AFTER calling prepare_nlp() do the previously mentioned attributes nlp_obj, nlp_cons, nlp_cons_lb and nlp_cons_ub become available.

Returns:

None – None

reset_history#

reset_history(self)#

Reset the history of the optimizer. All data from the do_mpc.data.Data instance is removed.

Return type:

None

set_default_objective#

set_default_objective(self, P_x, P_v=None, P_p=None, P_w=None)#

Configure the suggested default MHE formulation.

Use this method to pass tuning matrices for the MHE optimization problem:

\[\begin{split}\underset{ \begin{array}{c} \mathbf{x}_{0:N+1}, \mathbf{u}_{0:N}, p,\\ \mathbf{w}_{0:N}, \mathbf{v}_{0:N} \end{array} }{\mathrm{min}} &m(x_0,\tilde{x}_0, p,\tilde{p}) +\sum_{k=0}^{N-1} l(v_k, w_k, p, p_{\text{tv},k}),\\ &\left.\begin{aligned} \mathrm{s.t.}\quad x_{k+1} &= f(x_k,u_k,z_k,p,p_{\text{tv},k})+ w_k,\\ y_k &= h(x_k,u_k,z_k,p,p_{\text{tv},k}) + v_k, \\ &g(x_k,u_k,z_k,p_k,p_{\text{tv},k}) \leq 0 \end{aligned}\right\} k=0,\dots, N\end{split}\]

where we introduce the bold letter notation, e.g. \(\mathbf{x}_{0:N+1}=[x_0, x_1, \dots, x_{N+1}]^T\) to represent sequences and where \(\|x\|_P^2=x^T P x\) denotes the \(P\) weighted squared norm.

With the passed tuning matrices, the arrival cost and stage cost take the form:

\[m(x_0,\tilde{x}_0, p,\tilde{p}) = \|x_0-\tilde{x}_0\|_{P_x}^2 + \|p-\tilde{p}\|_{P_p}^2, \qquad l(v_k, w_k, p, p_{\text{tv},k}) = \|v_k\|_{P_v}^2 + \|w_k\|_{P_w}^2.\]

Pass the weighting matrices \(P_x\), \(P_p\), \(P_v\) and \(P_w\). The matrices must be of appropriate dimension and array-like.

Note

It is possible to pass parameters or time-varying parameters defined in the do_mpc.model.Model as weighting. You’ll probably choose time-varying parameters (_tvp) for P_v and P_w and parameters (_p) for P_x and P_p. Use set_p_fun() and set_tvp_fun() to configure how these values are determined at each time step.

General remarks:

If P_v, P_p or P_w are not passed (i.e. left at their default None), the respective terms are not present in the MHE formulation.

Note

Use set_objective() as a low-level alternative for this method, if you want to use a custom objective function.

Parameters:
  • P_x (Union[ndarray, SX, MX]) – Tuning matrix \(P_x\) of dimension \(n \times n\) \((x \in \mathbb{R}^{n})\)

  • P_v (Union[ndarray, SX, MX]) – Tuning matrix \(P_v\) of dimension \(m \times m\) \((v \in \mathbb{R}^{m})\)

  • P_p (Union[ndarray, SX, MX]) – Tuning matrix \(P_p\) of dimension \(l \times l\) \((p_{\text{est}} \in \mathbb{R}^{l})\))

  • P_w (Union[ndarray, SX, MX]) – Tuning matrix \(P_w\) of dimension \(k \times k\) \((w \in \mathbb{R}^{k})\)

Return type:

None
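For example, the tuning matrices can be constructed with numpy (the dimensions are hypothetical: n=3 states, m=2 measurements, l=1 estimated parameter, k=3 process noise terms):

```python
import numpy as np

P_x = np.diag([100.0, 100.0, 1.0])  # confidence in the prior state estimate
P_v = np.diag([1.0, 2.0])           # weight on the measurement noise
P_p = 1e-2 * np.eye(1)              # weak prior on the estimated parameter
P_w = 10.0 * np.eye(3)              # weight on the process noise

# With a configured MHE instance:
# mhe.set_default_objective(P_x, P_v, P_p, P_w)
```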

set_initial_guess#

set_initial_guess(self)#

Initial guess for optimization variables. Uses the current class attributes x0, z0, u0 and p_est0 to create an initial guess for the MHE. The initial guess is simply the initial values for all \(k=0,\dots,N\) instances of \(x_k\), \(u_k\), \(z_k\) and \(p_{\text{est},k}\).

Return type:

None

Warning

If no initial values for x0, z0 and u0 were supplied during setup, these default to zero.

Note

The initial guess is fully customizable by directly setting values on the class attribute: opt_x_num.

set_nl_cons#

set_nl_cons(self, expr_name, expr, ub=inf, soft_constraint=False, penalty_term_cons=1, maximum_violation=inf)#

Introduce a new constraint to the class. Further constraints are optional. Expressions must be formulated with respect to _x, _u, _z, _tvp, _p. They are implemented as:

\[m(x,u,z,p_{\text{tv}}, p) \leq m_{\text{ub}}\]

Setting the flag soft_constraint=True will introduce slack variables \(\epsilon\), such that:

\[\begin{split}m(x,u,z,p_{\text{tv}}, p)-\epsilon &\leq m_{\text{ub}},\\ 0 &\leq \epsilon \leq \epsilon_{\text{max}},\end{split}\]

Slack variables are added to the cost function and multiplied with the supplied penalty term. This formulation makes constraints soft, meaning that a certain violation is tolerated and does not lead to infeasibility. Typically, high values for the penalty are suggested to avoid significant violation of the constraints.

Parameters:
  • expr_name (str) – Arbitrary name for the given expression. Names are used for keyword indexing.

  • expr (Union[SX, MX]) – CasADi SX or MX function depending on _x, _u, _z, _tvp, _p.

  • ub (float) – Upper bound

  • soft_constraint (bool) – Flag to enable soft constraint

  • penalty_term_cons (int) – Penalty term constant

  • maximum_violation (float) – Maximum violation

Raises:
  • assertion – expr_name must be str

  • assertion – expr must be a casadi SX or MX type

Returns:

Union[SX, MX] – Returns the newly created expression. Expression can be used e.g. for the RHS.
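For example, a soft-constraint sketch (hypothetical: the model defines a state T that should stay below 100):

```python
# Soft upper bound on a (hypothetical) state 'T'. Violations up to 5
# are tolerated, but penalized with a weight of 1e4 in the objective.
T_ub = mhe.set_nl_cons('T_ub', model.x['T'], ub=100,
                       soft_constraint=True,
                       penalty_term_cons=1e4,
                       maximum_violation=5)
```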

set_objective#

set_objective(self, stage_cost, arrival_cost)#

Set the stage cost \(l(\cdot)\) and arrival cost \(m(\cdot)\) function for the MHE problem:

\[\begin{split}\underset{ \begin{array}{c} \mathbf{x}_{0:N+1}, \mathbf{u}_{0:N}, p,\\ \mathbf{w}_{0:N}, \mathbf{v}_{0:N} \end{array} }{\mathrm{min}} &m(x_0,\tilde{x}_0, p,\tilde{p}) +\sum_{k=0}^{N-1} l(v_k, w_k, p, p_{\text{tv},k}),\\ &\left.\begin{aligned} \mathrm{s.t.}\quad x_{k+1} &= f(x_k,u_k,z_k,p,p_{\text{tv},k})+ w_k,\\ y_k &= h(x_k,u_k,z_k,p,p_{\text{tv},k}) + v_k, \\ &g(x_k,u_k,z_k,p_k,p_{\text{tv},k}) \leq 0 \end{aligned}\right\} k=0,\dots, N\end{split}\]

Use the class attributes:

  • mhe._w as \(w_k\)

  • mhe._v as \(v_k\)

  • mhe._x_prev as \(\tilde{x}_0\)

  • mhe._x as \(x_0\)

  • mhe._p_est_prev as \(\tilde{p}_0\)

  • mhe._p_est as \(p_0\)

Use these attributes to formulate the objective function; the stage cost and arrival cost are passed as separate arguments.

Note

The retrieved attributes are symbolic structures, which can be queried with the given variable names, e.g.:

x1 = mhe._x['state_1']

For a vector of all states, use the .cat method as shown in the example below.

Example:

# Get variables:
v = mhe._v.cat

stage_cost = v.T@np.diag(np.array([1,1,1,20,20]))@v

x_0 = mhe._x
x_prev = mhe._x_prev
p_0 = mhe._p_est
p_prev = mhe._p_est_prev

dx = x_0.cat - x_prev.cat
dp = p_0.cat - p_prev.cat

arrival_cost = 1e-4*dx.T@dx + 1e-4*dp.T@dp

mhe.set_objective(stage_cost, arrival_cost)

Note

Use set_default_objective() as a high-level wrapper for this method, if you want to use the default MHE objective function.

Parameters:
  • stage_cost (Union[SX, MX]) – Stage cost that is added to the MHE objective at each stage.

  • arrival_cost (Union[SX, MX]) – Arrival cost that is added to the MHE objective at the initial state.

Return type:

None

set_p_fun#

set_p_fun(self, p_fun)#

Set the function which returns the (not estimated) parameters. The p_fun is called at each MHE time step and returns the fixed parameters. The function must return a numerical CasADi structure, which can be retrieved with get_p_template().

Parameters:

p_fun (Callable[[float], Union[SXStruct, MXStruct]]) – Parameter function.

Return type:

None

set_param#

set_param(self, **kwargs)#

Method to set the parameters of the MHE class. Parameters must be passed as pairs of valid keywords and respective argument.

Return type:

None

Deprecated since version >v4.5.1: This function will be deprecated in the future

Note

A comprehensive list of all available parameters can be found in do_mpc.estimator.MHESettings.

For example:

mhe.settings.n_horizon = 20

The old interface, as shown in the example below, can still be accessed until further notice.

For example:

mhe.set_param(n_horizon = 20)

Note

The only required parameters are n_horizon and t_step. All other parameters are optional.

Note

We highly suggest changing the linear solver for IPOPT from mumps to MA27. In many cases this will drastically boost the speed of do-mpc. Any available linear solver can be set using do_mpc.estimator.MHESettings.set_linear_solver(). For more details, please check the do_mpc.estimator.MHESettings.

Note

The output of IPOPT can be suppressed with do_mpc.estimator.MHESettings.supress_ipopt_output(). For more details, please check the do_mpc.estimator.MHESettings.

set_tvp_fun#

set_tvp_fun(self, tvp_fun)#

Set function which returns time-varying parameters.

The tvp_fun is called at each optimization step to get the current prediction of the time-varying parameters. The supplied function must be callable with the current time as the only input. Furthermore, the function must return a CasADi structured object which is based on the horizon and on the model definition. The structure can be obtained with get_tvp_template().

Example:

# in model definition:
alpha = model.set_variable(var_type='_tvp', var_name='alpha')
beta = model.set_variable(var_type='_tvp', var_name='beta')

...
# in optimizer configuration:
tvp_temp_1 = optimizer.get_tvp_template()
tvp_temp_1['_tvp', :] = np.array([1,1])

tvp_temp_2 = optimizer.get_tvp_template()
tvp_temp_2['_tvp', :] = np.array([0,0])

def tvp_fun(t_now):
    if t_now < 10:
        return tvp_temp_1
    else:
        return tvp_temp_2

optimizer.set_tvp_fun(tvp_fun)

Note

The method set_tvp_fun() must be called prior to setup() IF time-varying parameters are defined in the model. It is not required to call the method if no time-varying parameters are defined.

Parameters:

tvp_fun (Callable[[float], Union[SXStruct, MXStruct]]) – Function that returns the predicted tvp values at each timestep. Must have single input (float) and return a structure3.DMStruct (obtained with get_tvp_template()).

Return type:

None

set_y_fun#

set_y_fun(self, y_fun)#

Set the measurement function. The function must return a CasADi structure which can be obtained from get_y_template(). See the respective doc string for details.

Parameters:

y_fun (Callable[[float], Union[SXStruct, MXStruct]]) – measurement function.

Return type:

None
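A minimal sketch, mirroring the default measurement function shown in get_y_template():

```python
y_template = mhe.get_y_template()

def y_fun(t_now):
    # Fill the template with the n_horizon most recent measurements:
    n_steps = min(mhe.data._y.shape[0], mhe.n_horizon)
    for k in range(-n_steps, 0):
        y_template['y_meas', k] = mhe.data._y[k]
    return y_template

mhe.set_y_fun(y_fun)
```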

setup#

setup(self)#

The setup method finalizes the MHE creation. The optimization problem is created based on the configuration of the module.

Return type:

None

Note

After this call, the solve() and make_step() methods are applicable.

solve#

solve(self)#

Solves the optimization problem.

The current problem is defined by the parameters in the opt_p_num CasADi structured data.

Typically, opt_p_num is prepared for the current iteration in the make_step() method. It is, however, valid and possible to directly set parameters in opt_p_num before calling solve().

The method updates the opt_p_num and opt_x_num attributes of the class. By resetting opt_x_num to the current solution, the method implicitly enables warmstarting the optimizer for the next iteration, since this vector is always used as the initial guess.

Return type:

None

Warning

The method is part of the public API but it is generally not advised to use it. Instead we recommend to call make_step() at each iteration, which acts as a wrapper for solve().

Raises:

assertion – Optimizer was not setup yet.

Attributes#

bounds#

MHE.bounds#

Query and set bounds of the optimization variables. The bounds() method is an indexed property, meaning getting and setting this property requires an index and calls this function. The power index (elements are separated by commas) must contain at least the following elements:

  1. bound type – lower and upper

  2. variable type – _x, _u and _z (and _p_est for MHE)

  3. variable name – names defined in do_mpc.model.Model

Further indices are possible (but not necessary) when the referenced variable is a vector or matrix.

Example:

# Set with:
optimizer.bounds['lower','_x', 'phi_1'] = -2*np.pi
optimizer.bounds['upper','_x', 'phi_1'] = 2*np.pi

# Query with:
optimizer.bounds['lower','_x', 'phi_1']

lb_opt_x#

MHE.lb_opt_x#

Query and modify the lower bounds of all optimization variables opt_x. This is a more advanced method of setting bounds on optimization variables of the MPC/MHE problem. Users with less experience are advised to use bounds instead.

The attribute returns a nested structure that can be indexed using powerindexing. Please refer to opt_x for more details.

Note

The attribute automatically considers the scaling variables when setting the bounds. See scaling for more details.

Note

Modifications must be done after calling prepare_nlp() or setup() respectively.
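For example (a sketch; 'x_1' is a hypothetical state name and the index pattern follows opt_x):

```python
mhe.setup()
# Lower-bound all horizon instances (and collocation points) of state 'x_1':
mhe.lb_opt_x['_x', :, :, 'x_1'] = 0.0
```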

nlp_cons#

MHE.nlp_cons#

Query and modify (symbolically) the NLP constraints. Use the variables in opt_x and opt_p.

Prior to calling create_nlp() this attribute returns a list of symbolic constraints. After calling create_nlp() this attribute returns the concatenation of this list and the attribute cannot be altered anymore.

It is advised to append to the current list of nlp_cons:

mpc.prepare_nlp()

# Create new constraint: Input at timestep 0 and 1 must be identical.
extra_cons = mpc.opt_x['_u', 0, 0]-mpc.opt_x['_u',1, 0]
mpc.nlp_cons.append(
    extra_cons
)

# Create appropriate upper and lower bound (here they are both 0 to create an equality constraint)
mpc.nlp_cons_lb.append(np.zeros(extra_cons.shape))
mpc.nlp_cons_ub.append(np.zeros(extra_cons.shape))

mpc.create_nlp()

See the documentation of opt_x and opt_p on how to query these attributes.

Warning

This is a VERY low level feature and should be used with extreme caution. It is easy to break the code.

Be especially careful NOT to accidentially overwrite the default objective.

Note

Modifications must be done after calling prepare_nlp() and before calling create_nlp()

nlp_cons_lb#

MHE.nlp_cons_lb#

Query and modify the lower bounds of the nlp_cons.

Prior to calling create_nlp() this attribute returns a list of lower bounds matching the list of constraints obtained with nlp_cons. After calling create_nlp() this attribute returns the concatenation of this list.

Values for lower (and upper) bounds MUST be added when adding new constraints to nlp_cons.

Warning

This is a VERY low level feature and should be used with extreme caution. It is easy to break the code.

Note

Modifications must be done after calling prepare_nlp()

nlp_cons_ub#

MHE.nlp_cons_ub#

Query and modify the upper bounds of the nlp_cons.

Prior to calling create_nlp() this attribute returns a list of upper bounds matching the list of constraints obtained with nlp_cons. After calling create_nlp() this attribute returns the concatenation of this list.

Values for upper (and lower) bounds MUST be added when adding new constraints to nlp_cons.

Warning

This is a VERY low level feature and should be used with extreme caution. It is easy to break the code.

Note

Modifications must be done after calling prepare_nlp()

nlp_obj#

MHE.nlp_obj#

Query and modify (symbolically) the NLP objective function. Use the variables in opt_x and opt_p.

It is advised to add to the current objective, e.g.:

mpc.prepare_nlp()
# Modify the objective
mpc.nlp_obj += sum1(vertcat(*mpc.opt_x['_x', -1, 0])**2)
# Finish creating the NLP
mpc.create_nlp()

See the documentation of opt_x and opt_p on how to query these attributes.

Warning

This is a VERY low level feature and should be used with extreme caution. It is easy to break the code.

Be especially careful NOT to accidentially overwrite the default objective.

Note

Modifications must be done after calling prepare_nlp() and before calling create_nlp()

opt_p#

MHE.opt_p#

Full structure of (symbolic) MHE parameters.

The attribute can be used to alter the objective function or constraints of the NLP.

The attribute is a CasADi symbolic structure with nested power indices. It can be indexed as follows:

# previously estimated state:
opt_p['_x_prev', _x_name]
# previously estimated parameters:
opt_p['_p_est_prev', _p_name]
# known parameters
opt_p['_p_set', _p_name]
# time-varying parameters:
opt_p['_tvp', time_step, _tvp_name]
# sequence of measurements:
opt_p['_y_meas', time_step, _y_name]

The names refer to those given in the do_mpc.model.Model configuration. Further indices are possible if the variables are themselves vectors or matrices.

Warning

Do not tweak or overwrite this attribute unless you know what you are doing.

Note

The attribute is populated when calling setup() or create_nlp().

opt_p_num#

MHE.opt_p_num#

Full MHE parameter vector.

This attribute is used when calling the solver to pass all required parameters, including

  • previously estimated state(s)

  • previously estimated parameter(s)

  • known parameters

  • sequence of time-varying parameters

  • sequence of measurements

do-mpc handles setting these parameters automatically in the make_step() method. However, you can set these values manually and directly call solve().

The attribute is a CasADi numeric structure with nested power indices. It can be indexed as follows:

# previously estimated state:
opt_p_num['_x_prev', _x_name]
# previously estimated parameters:
opt_p_num['_p_est_prev', _p_name]
# known parameters
opt_p_num['_p_set', _p_name]
# time-varying parameters:
opt_p_num['_tvp', time_step, _tvp_name]
# sequence of measurements:
opt_p_num['_y_meas', time_step, _y_name]

The names refer to those given in the do_mpc.model.Model configuration. Further indices are possible if the variables are themselves vectors or matrices.

Warning

Do not tweak or overwrite this attribute unless you know what you are doing.

Note

The attribute is populated when calling setup()

opt_x#

MHE.opt_x#

Full structure of the (symbolic) MHE optimization variables.

The attribute is a CasADi symbolic structure with nested power indices. It can be indexed as follows:

# dynamic states:
opt_x['_x', time_step, collocation_point, _x_name]
# algebraic states:
opt_x['_z', time_step, collocation_point, _z_name]
# inputs:
opt_x['_u', time_step, _u_name]
# estimated parameters:
opt_x['_p_est', _p_name]
# slack variables for soft constraints:
opt_x['_eps', time_step, _nl_cons_name]

The names refer to those given in the do_mpc.model.Model configuration. Further indices are possible if the variables are themselves vectors or matrices.

The attribute can be used to alter the objective function or constraints of the NLP.

Note

The attribute opt_x carries the scaled values of all variables.

Warning

Do not tweak or overwrite this attribute unless you know what you are doing.

Note

The attribute is populated when calling setup() or prepare_nlp()

opt_x_num#

MHE.opt_x_num#

Full MHE solution and initial guess.

This is the core attribute of the MHE class. It is used as the initial guess when solving the optimization problem and then overwritten with the current solution.

The attribute is a CasADi numeric structure with nested power indices. It can be indexed as follows:

# dynamic states:
opt_x_num['_x', time_step, collocation_point, _x_name]
# algebraic states:
opt_x_num['_z', time_step, collocation_point, _z_name]
# inputs:
opt_x_num['_u', time_step, _u_name]
# estimated parameters:
opt_x_num['_p_est', _p_name]
# slack variables for soft constraints:
opt_x_num['_eps', time_step, _nl_cons_name]

The names refer to those given in the do_mpc.model.Model configuration. Further indices are possible if the variables are themselves vectors or matrices.

The attribute can be used to manually set a custom initial guess or for debugging purposes.

Note

The attribute opt_x_num carries the scaled values of all variables. See opt_x_num_unscaled for the unscaled values (these are not used as the initial guess).

Warning

Do not tweak or overwrite this attribute unless you know what you are doing.

Note

The attribute is populated when calling setup()

p_est0#

MHE.p_est0#

Initial value of estimated parameters and current iterate. This is the numerical structure holding the information about the current estimated parameters in the class. The property can be indexed according to the model definition.

Example:

model = do_mpc.model.Model('continuous')
model.set_variable('_p','temperature', shape=(4,1))

# Initiate MHE with list of estimated parameters:
mhe = do_mpc.estimator.MHE(model, ['temperature'])

# Get or set current value of variable:
mhe.p_est0['temperature', 0] # 0th element of variable
mhe.p_est0['temperature']    # all elements of variable
mhe.p_est0['temperature', 0:2]    # 0th and 1st element

Useful CasADi symbolic structure methods:

  • .shape

  • .keys()

  • .labels()

scaling#

MHE.scaling#

Query and set scaling of the optimization variables. The Optimizer.scaling() method is an indexed property, meaning getting and setting this property requires an index and calls this function. The power index (elements are separated by commas) must contain at least the following elements:

  1. variable type – _x, _u and _z (and _p_est for MHE)

  2. variable name – names defined in do_mpc.model.Model

Further indices are possible (but not necessary) when the referenced variable is a vector or matrix.

Example:

# Set with:
optimizer.scaling['_x', 'phi_1'] = 2
optimizer.scaling['_x', 'phi_2'] = 2

# Query with:
optimizer.scaling['_x', 'phi_1']

Scaling factors \(a\) affect the MHE / MPC optimization problem. The optimization variables are scaled variables:

\[\bar\phi = \frac{\phi}{a_{\phi}} \quad \forall \phi \in [x, u, z, p_{\text{est}}]\]

Scaled variables are used to formulate the bounds \(\bar\phi_{lb} \leq \bar\phi \leq \bar\phi_{ub}\) and for the evaluation of the ODE. For the objective function and the nonlinear constraints the unscaled variables are used. The algebraic equations are also not scaled.

Note

Scaling the optimization problem is suggested when states and / or inputs take on values which differ by orders of magnitude.

t0#

MHE.t0#

Current time marker of the class. Use this property to set or query the time.

Set with int, float, numpy.ndarray or casadi.DM type.

u0#

MHE.u0#

Initial input and current iterate. This is the numerical structure holding the information about the current input in the class. The property can be indexed according to the model definition.

Example:

model = do_mpc.model.Model('continuous')
model.set_variable('_u','heating', shape=(4,1))

...
mhe = do_mpc.estimator.MHE(model)
# or
mpc = do_mpc.controller.MPC(model)

# Get or set current value of variable:
mpc.u0['heating', 0] # 0th element of variable
mpc.u0['heating']    # all elements of variable
mpc.u0['heating', 0:2]    # 0th and 1st element

Useful CasADi symbolic structure methods:

  • .shape

  • .keys()

  • .labels()

ub_opt_x#

MHE.ub_opt_x#

Query and modify the upper bounds of all optimization variables opt_x. This is a more advanced method of setting bounds on optimization variables of the MPC/MHE problem. Users with less experience are advised to use bounds instead.

The attribute returns a nested structure that can be indexed using powerindexing. Please refer to opt_x for more details.

Note

The attribute automatically considers the scaling variables when setting the bounds. See scaling for more details.

Note

Modifications must be done after calling prepare_nlp() or setup() respectively.

x0#

MHE.x0#

Initial state and current iterate. This is the numerical structure holding the information about the current states in the class. The property can be indexed according to the model definition.

Example:

model = do_mpc.model.Model('continuous')
model.set_variable('_x','temperature', shape=(4,1))

...
mhe = do_mpc.estimator.MHE(model)
# or
mpc = do_mpc.controller.MPC(model)

# Get or set current value of variable:
mpc.x0['temperature', 0] # 0th element of variable
mpc.x0['temperature']    # all elements of variable
mpc.x0['temperature', 0:2]    # 0th and 1st element

Useful CasADi symbolic structure methods:

  • .shape

  • .keys()

  • .labels()

z0#

MHE.z0#

Initial algebraic state and current iterate. This is the numerical structure holding the information about the current algebraic states in the class. The property can be indexed according to the model definition.

Example:

model = do_mpc.model.Model('continuous')
model.set_variable('_z','temperature', shape=(4,1))

...
mhe = do_mpc.estimator.MHE(model)
# or
mpc = do_mpc.controller.MPC(model)

# Get or set current value of variable:
mpc.z0['temperature', 0] # 0th element of variable
mpc.z0['temperature']    # all elements of variable
mpc.z0['temperature', 0:2]    # 0th and 1st element

Useful CasADi symbolic structure methods:

  • .shape

  • .keys()

  • .labels()