Optimizer#

class Optimizer[source]#

Bases: object

The base class for optimization-based state estimation (MHE) and predictive control (MPC). This class establishes the jointly used attributes, methods and properties.

Warning

The Optimizer base class cannot be used independently. Its methods and properties are inherited by do_mpc.estimator.MHE and do_mpc.controller.MPC.

Methods#

compile_nlp#

compile_nlp(self, overwrite=False, cname='nlp.c', libname='nlp.so', compiler_command=None)#

Compile the NLP. This may accelerate the optimization. As compilation is time consuming, the default option is to NOT overwrite (overwrite=False) an existing compilation. If an existing compilation with the name libname is found, it is used. This can be dangerous if the NLP has changed (e.g. the user tweaked the cost function or the model).

Warning

This feature is experimental and currently only supported on Linux and MacOS.

What happens here?

  1. The NLP is written to a C-file (cname)

  2. The C-file (cname) is compiled. The default compiler command is:

gcc -fPIC -shared -O1 {cname} -o {libname}

  3. The compiled library is linked to the NLP. This overwrites the original NLP. Options from the previous NLP (e.g. linear solver) are kept.

self.S = nlpsol('solver_compiled', 'ipopt', f'{libname}', self.nlpsol_opts)
Parameters:
  • overwrite (bool) – If True, the existing compiled NLP will be overwritten.

  • cname (str) – Name of the C file that will be exported.

  • libname (str) – Name of the shared library that will be created after compilation.

  • compiler_command (str) – Command to use for compiling. If None, the default compiler command will be used. Please make sure to use matching strings for libname when supplying your custom compiler command.

Return type:

None
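
A minimal usage sketch (assuming a fully configured MPC instance mpc on Linux or MacOS; the custom command string follows the default shown above):

# Finish the regular setup first:
mpc.setup()

# Compile once; later runs reuse nlp.so unless overwrite=True:
mpc.compile_nlp(overwrite=True)

# Optionally supply a custom compiler command (libname must match):
mpc.compile_nlp(overwrite=True, cname='nlp.c', libname='nlp.so',
                compiler_command='gcc -fPIC -shared -O3 nlp.c -o nlp.so')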

create_nlp#

create_nlp(self)#

Create the optimization problem. Typically, this method is called internally from setup().

Users should only call this method if they intend to modify the objective with nlp_obj or the constraints with nlp_cons, nlp_cons_lb and nlp_cons_ub.

To finish the setup process, users MUST call create_nlp() afterwards.

Note

Do NOT call setup() if you intend to go the manual route with prepare_nlp() and create_nlp().

Note

Only AFTER calling prepare_nlp() do the previously mentioned attributes nlp_obj, nlp_cons, nlp_cons_lb and nlp_cons_ub become available.

Returns:

None
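
A sketch of the manual route (instead of calling setup()), reusing the objective modification shown under nlp_obj below:

from casadi import sum1, vertcat

mpc.prepare_nlp()
# Example modification: additionally penalize the terminal state.
mpc.nlp_obj += sum1(vertcat(*mpc.opt_x['_x', -1, 0])**2)
# Finish the setup process:
mpc.create_nlp()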

get_tvp_template#

get_tvp_template(self)#

Obtain output template for set_tvp_fun().

The method returns a structured object with n_horizon+1 elements and a set of time-varying parameters (as defined in do_mpc.model.Model) for each of these elements. The structure is initialized with all zeros. Use this object to define values of the time-varying parameters.

This structure (with numerical values) should be used as the output of the tvp_fun function which is set to the class with set_tvp_fun(). Use the combination of get_tvp_template() and set_tvp_fun().

Example:

# in model definition:
alpha = model.set_variable(var_type='_tvp', var_name='alpha')
beta = model.set_variable(var_type='_tvp', var_name='beta')

...
# in optimizer configuration:
tvp_temp_1 = optimizer.get_tvp_template()
tvp_temp_1['_tvp', :] = np.array([1,1])

tvp_temp_2 = optimizer.get_tvp_template()
tvp_temp_2['_tvp', :] = np.array([0,0])

def tvp_fun(t_now):
    if t_now < 10:
        return tvp_temp_1
    else:
        return tvp_temp_2

optimizer.set_tvp_fun(tvp_fun)
Returns:

Union[SXStruct, MXStruct] – CasADi SX or MX structure

prepare_nlp#

prepare_nlp(self)#

Prepare the optimization problem. Typically, this method is called internally from setup().

Users should only call this method if they intend to modify the objective with nlp_obj or the constraints with nlp_cons, nlp_cons_lb and nlp_cons_ub.

To finish the setup process, users MUST call create_nlp() afterwards.

Note

Do NOT call setup() if you intend to go the manual route with prepare_nlp() and create_nlp().

Note

Only AFTER calling prepare_nlp() do the previously mentioned attributes nlp_obj, nlp_cons, nlp_cons_lb and nlp_cons_ub become available.

Returns:

None

reset_history#

reset_history(self)#

Reset the history of the optimizer. All data from the do_mpc.data.Data instance is removed.

Return type:

None

set_nl_cons#

set_nl_cons(self, expr_name, expr, ub=inf, soft_constraint=False, penalty_term_cons=1, maximum_violation=inf)#

Introduce a new constraint to the class. Further constraints are optional. Expressions must be formulated with respect to _x, _u, _z, _tvp, _p. They are implemented as:

\[m(x,u,z,p_{\text{tv}}, p) \leq m_{\text{ub}}\]

Setting the flag soft_constraint=True will introduce slack variables \(\epsilon\), such that:

\[\begin{split}m(x,u,z,p_{\text{tv}}, p)-\epsilon &\leq m_{\text{ub}},\\ 0 &\leq \epsilon \leq \epsilon_{\text{max}},\end{split}\]

Slack variables are added to the cost function and multiplied by the supplied penalty term. This formulation makes constraints soft, meaning that a certain violation is tolerated and does not lead to infeasibility. Typically, high penalty values are suggested to avoid significant violation of the constraints.

Parameters:
  • expr_name (str) – Arbitrary name for the given expression. Names are used for key word indexing.

  • expr (Union[SX, MX]) – CasADi SX or MX function depending on _x, _u, _z, _tvp, _p.

  • ub (float) – Upper bound

  • soft_constraint (bool) – Flag to enable soft constraint

  • penalty_term_cons (int) – Penalty term constant weighting the slack variables in the cost function

  • maximum_violation (float) – Maximum violation

Raises:
  • assertion – expr_name must be str

  • assertion – expr must be a CasADi SX or MX type

Returns:

Union[SX, MX] – Returns the newly created expression. Expression can be used e.g. for the RHS.
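
A hedged sketch, assuming a model with a state named T (e.g. a temperature) that should stay below an upper limit of 350; set_nl_cons() must be called before setup():

# Hard constraint: T <= 350 (violations make the problem infeasible)
mpc.set_nl_cons('T_ub', model.x['T'], ub=350)

# Alternative soft formulation: violations up to 5 are tolerated but penalized
mpc.set_nl_cons('T_ub_soft', model.x['T'], ub=350, soft_constraint=True,
                penalty_term_cons=100, maximum_violation=5)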

set_tvp_fun#

set_tvp_fun(self, tvp_fun)#

Set function which returns time-varying parameters.

The tvp_fun is called at each optimization step to get the current prediction of the time-varying parameters. The supplied function must be callable with the current time as the only input. Furthermore, the function must return a CasADi structured object which is based on the horizon and on the model definition. The structure can be obtained with get_tvp_template().

Example:

# in model definition:
alpha = model.set_variable(var_type='_tvp', var_name='alpha')
beta = model.set_variable(var_type='_tvp', var_name='beta')

...
# in optimizer configuration:
tvp_temp_1 = optimizer.get_tvp_template()
tvp_temp_1['_tvp', :] = np.array([1,1])

tvp_temp_2 = optimizer.get_tvp_template()
tvp_temp_2['_tvp', :] = np.array([0,0])

def tvp_fun(t_now):
    if t_now < 10:
        return tvp_temp_1
    else:
        return tvp_temp_2

optimizer.set_tvp_fun(tvp_fun)
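
The predicted values may also vary along the horizon. A sketch, assuming a hypothetical reference function ref() and the setup values n_horizon and t_step:

tvp_template = optimizer.get_tvp_template()

def tvp_fun(t_now):
    # Fill each stage k of the prediction with a time-shifted reference value:
    for k in range(n_horizon + 1):
        tvp_template['_tvp', k, 'alpha'] = ref(t_now + k * t_step)
    return tvp_template

optimizer.set_tvp_fun(tvp_fun)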

Note

The method set_tvp_fun() must be called prior to setup() IF time-varying parameters are defined in the model. It is not required to call the method if no time-varying parameters are defined.

Parameters:

tvp_fun (Callable[[float], Union[SXStruct, MXStruct]]) – Function that returns the predicted tvp values at each timestep. Must have a single input (float) and return a structure3.DMStruct (obtained with get_tvp_template()).

Return type:

None

solve#

solve(self)#

Solves the optimization problem.

The current problem is defined by the parameters in the opt_p_num CasADi structured data.

Typically, opt_p_num is prepared for the current iteration in the make_step() method. It is, however, valid and possible to directly set parameters in opt_p_num before calling solve().

The method updates the opt_p_num and opt_x_num attributes of the class. By resetting opt_x_num to the current solution, the method implicitly enables warmstarting the optimizer for the next iteration, since this vector is always used as the initial guess.

Warning

The method is part of the public API, but it is generally not advised to use it directly. Instead, we recommend calling make_step() at each iteration, which acts as a wrapper for solve().
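
If solve() is nevertheless called directly, a minimal sketch might look as follows (assuming the MPC case, where opt_p_num is expected to contain an _x0 element for the initial state):

# Set the current initial state, then solve the NLP directly:
mpc.opt_p_num['_x0'] = x0
mpc.solve()

# The solution is stored in opt_x_num and reused as the next initial guess.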

Raises:

assertion – Optimizer was not setup yet.

Return type:

None

Attributes#

bounds#

Optimizer.bounds#

Query and set bounds of the optimization variables. The bounds() method is an indexed property: getting and setting this property requires an index and calls this function. The power index (elements are separated by commas) must contain at least the following elements:

order  index name     valid options
1      bound type     lower and upper
2      variable type  _x, _u and _z (and _p_est for MHE)
3      variable name  Names defined in do_mpc.model.Model

Further indices are possible (but not necessary) when the referenced variable is a vector or matrix.

Example:

# Set with:
optimizer.bounds['lower','_x', 'phi_1'] = -2*np.pi
optimizer.bounds['upper','_x', 'phi_1'] = 2*np.pi

# Query with:
optimizer.bounds['lower','_x', 'phi_1']

lb_opt_x#

Optimizer.lb_opt_x#

Query and modify the lower bounds of all optimization variables opt_x. This is a more advanced method of setting bounds on optimization variables of the MPC/MHE problem. Users with less experience are advised to use bounds instead.

The attribute returns a nested structure that can be indexed using power indexing. Please refer to opt_x for more details.

Note

The attribute automatically considers the scaling variables when setting the bounds. See scaling for more details.

Note

Modifications must be done after calling prepare_nlp() or setup() respectively.
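
A hedged sketch, reusing the state name phi_1 from the bounds example above (the exact power index depends on the structure of opt_x; for MPC, states are typically indexed by time step, scenario and collocation point):

mpc.setup()

# Equivalent in spirit to bounds['lower', '_x', 'phi_1'], but set directly
# on the decision-variable structure (scaling is applied automatically):
mpc.lb_opt_x['_x', :, :, :, 'phi_1'] = -2*np.pi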

nlp_cons#

Optimizer.nlp_cons#

Query and modify (symbolically) the NLP constraints. Use the variables in opt_x and opt_p.

Prior to calling create_nlp() this attribute returns a list of symbolic constraints. After calling create_nlp() this attribute returns the concatenation of this list and the attribute cannot be altered anymore.

It is advised to append to the current list of nlp_cons:

mpc.prepare_nlp()

# Create new constraint: Input at timestep 0 and 1 must be identical.
extra_cons = mpc.opt_x['_u', 0, 0] - mpc.opt_x['_u', 1, 0]
mpc.nlp_cons.append(
    extra_cons
)

# Create appropriate upper and lower bound (here they are both 0 to create an equality constraint)
mpc.nlp_cons_lb.append(np.zeros(extra_cons.shape))
mpc.nlp_cons_ub.append(np.zeros(extra_cons.shape))

mpc.create_nlp()

See the documentation of opt_x and opt_p on how to query these attributes.

Warning

This is a VERY low level feature and should be used with extreme caution. It is easy to break the code.

Be especially careful NOT to accidentally overwrite the default constraints.

Note

Modifications must be done after calling prepare_nlp() and before calling create_nlp().

nlp_cons_lb#

Optimizer.nlp_cons_lb#

Query and modify the lower bounds of the nlp_cons.

Prior to calling create_nlp() this attribute returns a list of lower bounds matching the list of constraints obtained with nlp_cons. After calling create_nlp() this attribute returns the concatenation of this list.

Values for lower (and upper) bounds MUST be added when adding new constraints to nlp_cons.

Warning

This is a VERY low level feature and should be used with extreme caution. It is easy to break the code.

Note

Modifications must be done after calling prepare_nlp().

nlp_cons_ub#

Optimizer.nlp_cons_ub#

Query and modify the upper bounds of the nlp_cons.

Prior to calling create_nlp() this attribute returns a list of upper bounds matching the list of constraints obtained with nlp_cons. After calling create_nlp() this attribute returns the concatenation of this list.

Values for upper (and lower) bounds MUST be added when adding new constraints to nlp_cons.

Warning

This is a VERY low level feature and should be used with extreme caution. It is easy to break the code.

Note

Modifications must be done after calling prepare_nlp().

nlp_obj#

Optimizer.nlp_obj#

Query and modify (symbolically) the NLP objective function. Use the variables in opt_x and opt_p.

It is advised to add to the current objective, e.g.:

mpc.prepare_nlp()
# Modify the objective
mpc.nlp_obj += sum1(vertcat(*mpc.opt_x['_x', -1, 0])**2)
# Finish creating the NLP
mpc.create_nlp()

See the documentation of opt_x and opt_p on how to query these attributes.

Warning

This is a VERY low level feature and should be used with extreme caution. It is easy to break the code.

Be especially careful NOT to accidentally overwrite the default objective.

Note

Modifications must be done after calling prepare_nlp() and before calling create_nlp().

scaling#

Optimizer.scaling#

Query and set the scaling of the optimization variables. The Optimizer.scaling() method is an indexed property: getting and setting this property requires an index and calls this function. The power index (elements are separated by commas) must contain at least the following elements:

order  index name     valid options
1      variable type  _x, _u and _z (and _p_est for MHE)
2      variable name  Names defined in do_mpc.model.Model

Further indices are possible (but not necessary) when the referenced variable is a vector or matrix.

Example:

# Set with:
optimizer.scaling['_x', 'phi_1'] = 2
optimizer.scaling['_x', 'phi_2'] = 2

# Query with:
optimizer.scaling['_x', 'phi_1']

Scaling factors \(a\) affect the MHE / MPC optimization problem. The optimization variables are scaled variables:

\[\bar\phi = \frac{\phi}{a_{\phi}} \quad \forall \phi \in [x, u, z, p_{\text{est}}]\]

Scaled variables are used to formulate the bounds \(\bar\phi_{lb} \leq \bar\phi \leq \bar\phi_{ub}\) and for the evaluation of the ODE. For the objective function and the nonlinear constraints the unscaled variables are used. The algebraic equations are also not scaled.
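
For example, with optimizer.scaling['_x', 'phi_1'] = 2 and optimizer.bounds['lower', '_x', 'phi_1'] = -2*np.pi, the solver internally works with the scaled variable \(\bar\phi_1 = \phi_1 / 2\) and the corresponding scaled lower bound \(-\pi\).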

Note

Scaling the optimization problem is suggested when states and / or inputs take on values which differ by orders of magnitude.

ub_opt_x#

Optimizer.ub_opt_x#

Query and modify the upper bounds of all optimization variables opt_x. This is a more advanced method of setting bounds on optimization variables of the MPC/MHE problem. Users with less experience are advised to use bounds instead.

The attribute returns a nested structure that can be indexed using power indexing. Please refer to opt_x for more details.

Note

The attribute automatically considers the scaling variables when setting the bounds. See scaling for more details.

Note

Modifications must be done after calling prepare_nlp() or setup() respectively.