1. Dynamic optimization of DAEs using direct collocation with JMUs

The direct collocation method supported by JModelica.org can be used to solve dynamic optimization problems, including optimal control problems and parameter optimization problems. In the collocation method, the dynamic model variable profiles are approximated by piecewise polynomials. This method of approximating a differential equation corresponds to a fixed-step implicit Runge-Kutta scheme, where the mesh defines the length of each step. Also, the number of collocation points in each element, or step, needs to be provided. This number corresponds to the stage order of the Runge-Kutta scheme. The selection of the mesh is analogous to the choice of step length in a one-step algorithm for solving differential equations. Accordingly, the mesh needs to be fine-grained enough to ensure a sufficiently accurate approximation of the differential constraint. For an overview of simultaneous optimization algorithms, see [2]. The algorithm IPOPT is used to solve the non-linear program resulting from collocation.
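To make the polynomial approximation concrete, the following sketch (plain Python, not part of the JModelica.org API) evaluates the Lagrange interpolation polynomial that collocation uses to represent a variable profile within one finite element; the placement of the collocation points (e.g., Radau points) is left out.

```python
def lagrange_eval(tau_points, values, tau):
    """Evaluate the Lagrange interpolation polynomial through the
    collocation points (tau_points, values) at normalized time tau.
    Within each finite element, the collocation method represents
    each variable profile by such a polynomial."""
    total = 0.0
    for i, ti in enumerate(tau_points):
        basis = 1.0
        for j, tj in enumerate(tau_points):
            if j != i:
                basis *= (tau - tj) / (ti - tj)
        total += values[i] * basis
    return total

# Three collocation points reproduce a quadratic exactly:
# lagrange_eval([0.0, 0.5, 1.0], [0.0, 0.25, 1.0], 0.25) == 0.0625
```

With n_cp points per element, polynomials of degree n_cp are represented exactly within each element, which is why increasing n_cp (or refining the mesh) improves the accuracy of the discretized dynamics.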

The collocation method implemented in JModelica.org requires that the model to be optimized does not contain discontinuities such as if equations, when clauses or integer variables.

The mathematical formulation of the algorithm can be found in the JMI API documentation.

The collocation algorithm provides a number of options, summarized in Table D.1, “Options for the JMU and collocation-based optimization algorithm”.

Table D.1. Options for the JMU and collocation-based optimization algorithm

n_e (default 50): Number of elements of the finite element mesh.

n_cp (default 3): Number of collocation points in each element. Values between 1 and 10 are supported.

hs (default: equidistant points using the default n_e): A vector containing n_e elements representing the finite element lengths. The sum of all elements should equal 1.

blocking_factors (default None, i.e., not used): A vector of blocking factors. Blocking factors are specified by a vector of integers, where each entry in the vector corresponds to the number of elements for which the control profile should be kept constant. For example, the blocking factor specification [2,1,5] means that u_0=u_1 and u_3=u_4=u_5=u_6=u_7, assuming that the number of elements is 8. Notice that specification of blocking factors implies that controls are present in only one collocation point (the first) in each element. The number of constant control levels in the optimization interval is equal to the length of the blocking factor vector. In the example above, this implies that there are three constant control levels. If the sum of the entries in the blocking factor vector is not equal to the number of elements, the vector is normalized, either by truncation (if the sum of the entries is larger than the number of elements) or by increasing the last entry of the vector. For example, if the number of elements is 4, the normalized blocking factor vector in the example is [2,1,1]. If the number of elements is 10, then the normalized vector is [2,1,7].

init_traj (default None, i.e., not used; set this argument to activate initialization): Variable trajectory data used for initialization of the optimization problem. The data is represented by an object of the type pyjmi.common.io.ResultDymolaTextual.

result_mode (default 'default'): Specifies the output format of the optimization result. 'default' gives the optimization result at the collocation points. 'element_interpolation' computes the values of the variable trajectories using the collocation interpolation polynomials. The option 'n_interpolation_points' is used to specify the number of evaluation points within each finite element. 'mesh_interpolation' computes the values of the variable trajectories at points defined by the option 'result_mesh'.

n_interpolation_points (default 20): Number of interpolation points in each finite element if the result reporting option result_mode is set to 'element_interpolation'.

result_mesh (default None): A vector of time points at which the optimization result is computed. This option is used if result_mode is set to 'mesh_interpolation'.

result_file_name (default: empty string, in which case a generated file name will be used): Specifies the name of the file where the optimization result is written. Setting this option to an empty string results in a default file name that is based on the name of the optimization class.

result_format (default 'txt'): Specifies in which format to write the result. Currently only textual mode is supported.

write_scaled_result (default False): Write the scaled optimization result if set to True. This option is only applicable when automatic variable scaling is enabled. Only for debugging use.
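The normalization rule for blocking factors described above can be sketched in plain Python (normalize_blocking_factors is a hypothetical helper written for illustration, not part of the JModelica.org API):

```python
def normalize_blocking_factors(factors, n_e):
    """Adjust a blocking factor vector so that its entries sum to the
    number of elements n_e: truncate if the sum is too large, otherwise
    grow the last entry (the rule described in Table D.1)."""
    out = []
    total = 0
    for f in factors:
        if total + f >= n_e:
            out.append(n_e - total)  # truncate the remainder
            total = n_e
            break
        out.append(f)
        total += f
    if total < n_e:
        out[-1] += n_e - total  # pad by growing the last entry
    return out

# Examples from the table:
# normalize_blocking_factors([2, 1, 5], 4)  -> [2, 1, 1]
# normalize_blocking_factors([2, 1, 5], 10) -> [2, 1, 7]
```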

In addition to the options for the collocation algorithm, IPOPT options can also be set by modifying the dictionary IPOPT_options contained in the collocation algorithm options object. Here, all valid IPOPT options can be specified, see the IPOPT documentation for further information. For example, setting the option max_iter:

opts['IPOPT_options']['max_iter'] = 300

makes IPOPT terminate after 300 iterations even if no optimal solution has been found.

Some statistics from IPOPT can be obtained by issuing the command:


The return argument of this function can be found by using the interactive help:

Get statistics from the last optimization run.
    return_status -- 
        Return status from IPOPT.
    nbr_iter -- 
        Number of iterations.
    objective -- 
       Final value of objective function.
    total_exec_time -- 
        Execution time.

1.1. Examples

1.1.1. Optimal control

This tutorial is based on the Hicks-Ray Continuously Stirred Tank Reactors (CSTR) system. The model was originally presented in [1]. The system has two states, the concentration, c, and the temperature, T. The control input to the system is the temperature, Tc, of the cooling flow in the reactor jacket. The chemical reaction in the reactor is exothermic and temperature dependent; high temperature results in a high reaction rate. The CSTR dynamics are given by:
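The equations below restate the Hicks-Ray CSTR dynamics; the parameter symbols (feed flow F0, volume V, rate constant k0, activation energy, reaction enthalpy, and heat transfer coefficients) are assumptions based on the standard formulation and should be checked against CSTR.mop:

```latex
\begin{aligned}
\dot{c} &= \frac{F_0\,(c_0 - c)}{V} - k_0\, c\, e^{-E/(R T)} \\
\dot{T} &= \frac{F_0\,(T_0 - T)}{V}
          - \frac{\Delta H}{\rho C_p}\, k_0\, c\, e^{-E/(R T)}
          + \frac{2 U}{r \rho C_p}\,(T_c - T)
\end{aligned}
```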

This tutorial will cover the following topics:

  • How to solve a DAE initialization problem. The initialization model has equations specifying that all derivatives should be identically zero, which implies that a stationary solution is obtained. Two stationary points, corresponding to different inputs, are computed. We call the stationary points A and B, respectively. Point A corresponds to operating conditions where the reactor is cold and the reaction rate is low, whereas point B corresponds to a higher temperature where the reaction rate is high. For more information about the DAE initialization algorithm, see the JMI API documentation.

  • An optimal control problem is solved where the objective is to transfer the state of the system from stationary point A to point B. The challenge is to ignite the reactor while avoiding uncontrolled temperature increase. It is also demonstrated how to set parameter and variable values in a model. More information about the simultaneous optimization algorithm can be found in the JModelica.org API documentation.

  • The optimization result is saved to file and then the important variables are plotted.

The Python commands in this tutorial may be copied and pasted directly into a Python shell, in some cases with minor modifications. Alternatively, you may copy the commands into a text file, e.g., cstr.py.

Start the tutorial by creating a working directory and copying the file $JMODELICA_HOME/Python/pyjmi/examples/files/CSTR.mop to your working directory. An on-line version of CSTR.mop is also available (depending on which browser you use, you may have to accept the site certificate by clicking through a few steps). If you choose to create a Python script file, save it to the working directory.

Compile and instantiate a model object

The functions and classes used in the tutorial script need to be imported into the Python script. This is done by the following Python commands. Copy them and paste them either directly into your Python shell or, preferably, into your Python script file.

import numpy as N
import matplotlib.pyplot as plt

from pymodelica import compile_jmu
from pyjmi import JMUModel

Before we can do operations on the model, such as optimizing it, the model file must be compiled and the resulting DLL file loaded in Python. These steps are described in more detail in Section 4.

# Compile the stationary initialization model into a JMU
jmu_name = compile_jmu("CSTR.CSTR_Init", "CSTR.mop",
    compiler_options={"enable_variable_scaling": True})

# Load the JMU
init_model = JMUModel(jmu_name)

Notice that automatic scaling of the model is enabled by setting the compiler option enable_variable_scaling to true. At this point, you may open the file CSTR.mop, containing the CSTR model and the static initialization model used in this section. Study the classes CSTR.CSTR and CSTR.CSTR_Init and make sure you understand the models. Before proceeding, have a look at the interactive help for one of the functions you used:

help(compile_jmu)

Solve the DAE initialization problem

In the next step, we would like to specify the first operating point, A, by means of a constant input cooling temperature, and then solve the initialization problem assuming that all derivatives are zero.

# Set inputs for Stationary point A
Tc_0_A = 250
init_model.set('Tc', Tc_0_A)

# Solve the DAE initialization system with Ipopt
init_result = init_model.initialize()

# Store stationary point A
c_0_A = init_result['c'][0]
T_0_A = init_result['T'][0]

# Print some data for stationary point A
print(' *** Stationary point A ***')
print('Tc = %f' % Tc_0_A)
print('c = %f' % c_0_A)
print('T = %f' % T_0_A)

Notice how the method set is used to set the value of the control input. The initialization algorithm is invoked by calling the JMUModel method initialize, which returns a result object from which the initialization result can be accessed. The initialize method relies on the algorithm IPOPT for computing the solution of the initialization problem. The values of the states corresponding to point A can then be extracted from the result object. Look carefully at the output in the Python shell to verify the stationary values. Display the help text for the initialize method and take a moment to look through it. The procedure is now repeated for operating point B:

# Set inputs for Stationary point B
Tc_0_B = 280
init_model.set('Tc', Tc_0_B)

# Solve the DAE initialization system with Ipopt
init_result = init_model.initialize()
# Store stationary point B
c_0_B = init_result['c'][0]
T_0_B = init_result['T'][0]

# Print some data for stationary point B
print(' *** Stationary point B ***')
print('Tc = %f' % Tc_0_B)
print('c = %f' % c_0_B)
print('T = %f' % T_0_B)

We have now computed two stationary points for the system based on constant control inputs. In the next section, these will be used to set up an optimal control problem.

Solving an optimal control problem

The optimal control problem we are about to solve is given by:
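In outline, the problem penalizes deviations from the target stationary point B over the optimization horizon; the quadratic cost terms, reference values, and the upper bound on the reactor temperature below are assumptions made for illustration; the authoritative formulation is the CSTR.CSTR_Opt class itself:

```latex
\begin{aligned}
\min_{T_c(\cdot)} \quad & \int_{0}^{t_f}
  \left(c^{\mathrm{ref}} - c(t)\right)^2
  + \left(T^{\mathrm{ref}} - T(t)\right)^2
  + \left(T_c^{\mathrm{ref}} - T_c(t)\right)^2 \, dt \\
\text{subject to} \quad & \text{the CSTR dynamics}, \quad
  c(0) = c_0^{A}, \quad T(0) = T_0^{A}, \\
 & T(t) \le T^{\max}
\end{aligned}
```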

and is expressed in Optimica format in the class CSTR.CSTR_Opt in the CSTR.mop file above. Have a look at this class and make sure that you understand how the optimization problem is formulated and what the objective is.

Direct collocation methods often require good initial guesses in order to ensure robust convergence. Since initial guesses are needed for all discretized variables along the optimization interval, simulation provides a convenient means to generate state and derivative profiles given an initial guess for the control input(s). It is then convenient to set up a dedicated model for computation of initial trajectories. In the model CSTR.CSTR_Init_Optimization in the CSTR.mop file, a step input is applied to the system in order to obtain an initial guess. Notice that the variable names in the initialization model must match those in the optimal control model. Therefore, the cost function is also included in the initialization model.

First, compile the model and set model parameters:

# Compile the optimization initialization model
jmu_name = compile_jmu("CSTR.CSTR_Init_Optimization","CSTR.mop")

# Load the model
init_sim_model = JMUModel(jmu_name)

# Set model parameters

Having initialized the model parameters, we can simulate the model using the simulate function.

res = init_sim_model.simulate(start_time=0.,final_time=150.)

The method simulate first computes consistent initial conditions and then simulates the model in the interval 0 to 150 seconds. Take a moment to read the interactive help for the simulate method.

The simulate method returns a simulation result object; the variable trajectories are retrieved from it using Python dictionary access.

# Extract variable profiles
# (the variable names below assume that the CSTR instance in the
#  initialization model is called 'cstr', as in CSTR.mop)
t_init_sim = res['time']
c_init_sim = res['cstr.c']
T_init_sim = res['cstr.T']
Tc_init_sim = res['cstr.Tc']

# Plot the results
plt.figure(1); plt.clf()
plt.subplot(311); plt.plot(t_init_sim, c_init_sim); plt.ylabel('Concentration')
plt.subplot(312); plt.plot(t_init_sim, T_init_sim); plt.ylabel('Temperature')
plt.subplot(313); plt.plot(t_init_sim, Tc_init_sim); plt.ylabel('Cooling temperature')
plt.xlabel('time')
plt.show()
Look at the plots and try to relate the trajectories to the optimal control problem. Why is this a good initial guess?

Once the initial guess is generated, we compile the model containing the optimal control problem:

# Compile model
jmu_name = compile_jmu("CSTR.CSTR_Opt", "CSTR.mop")

# Load model
cstr = JMUModel(jmu_name)

We will now initialize the parameters of the model so that their values correspond to the optimization objective of transferring the system state from operating point A to operating point B. Accordingly, we set the parameters representing the initial values of the states to point A and the reference values in the cost function to point B:

# Set reference values

# Set initial values

Collocation-based optimization algorithms often require a good initial guess in order to achieve fast convergence. Also, if the problem is non-convex, initialization is even more critical. Initial guesses can be provided in Optimica by the initialGuess attribute; see the CSTR.mop file for an example of this. Notice that initialization in the case of collocation-based optimization methods means initialization of all the control and state profiles as functions of time. In some cases, it is sufficient to use constant profiles. For this purpose, the initialGuess attribute works well. In more difficult cases, however, it may be necessary to initialize the profiles using simulation data, where an initial guess for the input(s) has been used to generate the profiles for the dependent variables. This approach for initializing the optimization problem is used in this tutorial.

We are now ready to solve the actual optimization problem. This is done by invoking the method optimize:

n_e = 100 # Number of elements 

# Set options
opt_opts = cstr.optimize_options()
opt_opts['n_e'] = n_e
opt_opts['init_traj'] = res.result_data

res = cstr.optimize(options=opt_opts)

In this case, we would like to increase the number of finite elements in the mesh from 50 to 100. This is done by setting the corresponding option and providing it as an argument to the optimize method. You should see the output of Ipopt in the Python shell as the algorithm iterates to find the optimal solution. Ipopt should terminate with a message like 'Optimal solution found' or 'Solved to an acceptable level' in order for an optimum to be found. The optimization result object is returned and the optimization data are stored in res.

We can now retrieve the trajectories of the variables that we intend to plot:

# Extract variable profiles
# (the variable names below assume that the CSTR instance in
#  CSTR.CSTR_Opt is called 'cstr', as in CSTR.mop)
time_res = res['time']
c_res = res['cstr.c']
T_res = res['cstr.T']
Tc_res = res['cstr.Tc']

Finally, we plot the result using the functions available in matplotlib:

# Plot the result
plt.figure(2); plt.clf()
plt.subplot(311); plt.plot(time_res, c_res); plt.ylabel('Concentration')
plt.subplot(312); plt.plot(time_res, T_res); plt.ylabel('Temperature')
plt.subplot(313); plt.plot(time_res, Tc_res); plt.ylabel('Cooling temperature')
plt.xlabel('time')
plt.show()

Notice that parameters are returned as scalar values whereas variables are returned as vectors and that this must be taken into account when plotting. You should now see the plot shown in Figure D.1, “Optimal profiles for the CSTR problem.”.

Figure D.1. Optimal profiles for the CSTR problem.

Optimal profiles for the CSTR problem.

Take a minute to analyze the optimal profiles and to answer the following questions:

  1. Why is the concentration high in the beginning of the interval?

  2. Why is the input cooling temperature high in the beginning of the interval?

Verify optimal control solution

Solving optimal control problems by means of direct collocation implies that the differential equation is approximated by a discrete time counterpart. The accuracy of the solution is dependent on the method of collocation and the number of elements. In order to assess the accuracy of the discretization, we may simulate the system using a DAE solver using the optimal control profile as input. With this approach, the state profiles are computed with high accuracy and the result may then be compared with the profiles resulting from optimization. Notice that this procedure does not verify the optimality of the resulting optimal control profiles, but only the accuracy of the discretization of the dynamics.

The procedure for setting up and executing this simulation is similar to above:

# Simulate to verify the optimal solution
# Set up the input trajectory
t = time_res 
u = Tc_res
u_traj = N.transpose(N.vstack((t,u)))

# Compile the Modelica model to a JMU
jmu_name = compile_jmu("CSTR.CSTR", "CSTR.mop")

# Load model
sim_model = JMUModel(jmu_name)


res = sim_model.simulate(start_time=0., final_time=150.,
                         input=('Tc', u_traj))
Finally, we load the simulated data and plot it to compare with the optimized trajectories:

# Extract variable profiles
# (CSTR.CSTR has the top-level variables c, T and Tc)
time_sim = res['time']
c_sim = res['c']
T_sim = res['T']
Tc_sim = res['Tc']

# Plot the simulated trajectories
plt.figure(3); plt.clf()
plt.subplot(311); plt.plot(time_sim, c_sim); plt.ylabel('Concentration')
plt.subplot(312); plt.plot(time_sim, T_sim); plt.ylabel('Temperature')
plt.subplot(313); plt.plot(time_sim, Tc_sim); plt.ylabel('Cooling temperature')
plt.xlabel('time')
plt.show()

You should now see the plot shown in Figure D.2, “Optimal control profiles and simulated trajectories corresponding to the optimal control input.”.

Figure D.2. Optimal control profiles and simulated trajectories corresponding to the optimal control input.

Optimal control profiles and simulated trajectories corresponding to the optimal control input.

Discuss why the simulated trajectories differ from their optimized counterparts.

Exercises

After completing the tutorial you may continue to modify the optimization problem and study the results.

  1. Remove the constraint on cstr.T. What is then the maximum temperature?

  2. Play around with weights in the cost function. What happens if you penalize the control variable with a larger weight? Do a parameter sweep for the control variable weight and plot the optimal profiles in the same figure.

  3. Add terminal constraints ('cstr.T(finalTime)=someParameter') for the states so that they are equal to point B at the end of the optimization interval. Now reduce the length of the optimization interval. How short can you make the interval?

  4. Try varying the number of elements in the mesh and the number of collocation points in each interval. 2-10 collocation points are supported.

References

[1] G.A. Hicks and W.H. Ray. Approximation Methods for Optimal Control Synthesis. Can. J. Chem. Eng., 40:522–529, 1971.

[2] L.T. Biegler, A.M. Cervantes, and A. Wächter. Advances in Simultaneous Strategies for Dynamic Optimization. Chemical Engineering Science, 57:575–593, 2002.

1.1.2. Minimum time problems

Minimum time problems are dynamic optimization problems where not only the control inputs are optimized, but also the final time. Typically, elements of such problems include initial and terminal state constraints and an objective function where the transition time is minimized. The following example will be used to illustrate how minimum time problems are formulated in Optimica. We consider the optimization problem:

subject to the Van der Pol dynamics:

and the constraints:
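Collecting the pieces from the Optimica specification below (the terminal conditions, elided in the listing, are assumed here to drive both states to the origin, as is standard for this benchmark), the problem reads:

```latex
\begin{aligned}
\min_{u(\cdot),\, t_f} \quad & t_f \\
\text{subject to} \quad
 & \dot{x}_1 = (1 - x_2^2)\, x_1 - x_2 + u, \qquad \dot{x}_2 = x_1, \\
 & x_1(0) = 0, \quad x_2(0) = 1, \\
 & x_1(t_f) = 0, \quad x_2(t_f) = 0, \\
 & -1 \le u(t) \le 1
\end{aligned}
```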

This problem is encoded in the following Optimica specification:

optimization VDP_Opt_Min_Time (objective = finalTime,
                               startTime = 0,
                               finalTime(free=true,initialGuess=1))

  // The states
  Real x1(start = 0,fixed=true);
  Real x2(start = 1,fixed=true);

  // The control signal
  input Real u(free=true,min=-1,max=1);

equation
  // Dynamic equations
  der(x1) = (1 - x2^2) * x1 - x2 + u;
  der(x2) = x1;

constraint
  // terminal constraints
  x1(finalTime) = 0;
  x2(finalTime) = 0;
end VDP_Opt_Min_Time;

Notice how the class attribute finalTime is set to be free in the optimization. The problem is solved by the following Python script:

# Import numerical libraries
import numpy as N
import matplotlib.pyplot as plt

# Import the JModelica.org Python packages
from pymodelica import compile_jmu
from pyjmi import JMUModel

model_name = 'VDP_pack.VDP_Opt_Min_Time'

jmu_name = compile_jmu('VDP_Opt_Min_Time', 'VDP_Opt_Min_Time.mop')
vdp = JMUModel(jmu_name)
res = vdp.optimize()

# Extract variable profiles
x1, x2, u, t = res['x1'], res['x2'], res['u'], res['time']

# Plot
plt.figure(1); plt.clf()
plt.subplot(311); plt.plot(t, x1); plt.ylabel('x1')
plt.subplot(312); plt.plot(t, x2); plt.ylabel('x2')
plt.subplot(313); plt.plot(t, u); plt.ylabel('u')
plt.xlabel('time'); plt.show()
The resulting control and state profiles are shown in Figure D.3, “Minimum time profiles for the Van der Pol Oscillator.”. Notice the difference as compared to Figure 6.1, “Optimal profiles for the VDP oscillator”, where the Van der Pol oscillator system is optimized using a quadratic objective function.

Figure D.3. Minimum time profiles for the Van der Pol Oscillator.

Minimum time profiles for the Van der Pol Oscillator.

1.1.3. Parameter optimization

This tutorial demonstrates how to solve parameter estimation problems. We consider a quadruple tank system depicted in Figure D.4, “A schematic picture of the quadruple tank process.”.

Figure D.4. A schematic picture of the quadruple tank process.

A schematic picture of the quadruple tank process.

The dynamics of the system are given by the differential equations:
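The equations below restate the standard quadruple-tank model (Johansson); it is assumed to match QuadTankPack.QuadTank, with tank cross-sections A_i, outlet areas a_i, pump gains k_i, and valve settings gamma_i:

```latex
\begin{aligned}
\dot{x}_1 &= -\frac{a_1}{A_1}\sqrt{2 g x_1} + \frac{a_3}{A_1}\sqrt{2 g x_3} + \frac{\gamma_1 k_1}{A_1}\, u_1 \\
\dot{x}_2 &= -\frac{a_2}{A_2}\sqrt{2 g x_2} + \frac{a_4}{A_2}\sqrt{2 g x_4} + \frac{\gamma_2 k_2}{A_2}\, u_2 \\
\dot{x}_3 &= -\frac{a_3}{A_3}\sqrt{2 g x_3} + \frac{(1-\gamma_2)\, k_2}{A_3}\, u_2 \\
\dot{x}_4 &= -\frac{a_4}{A_4}\sqrt{2 g x_4} + \frac{(1-\gamma_1)\, k_1}{A_4}\, u_1
\end{aligned}
```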

The parameter values are given in Table D.2, “Parameters for the quadruple tank process.”.

Table D.2. Parameters for the quadruple tank process.

Parameter name    Value    Unit

The states of the model are the tank water levels x1, x2, x3, and x4. The control inputs, u1 and u2, are the flows generated by the two pumps.

The Modelica model for the system is located in QuadTankPack.mop. Download the file to your working directory and open it in a text editor. Locate the class QuadTankPack.QuadTank and make sure you understand the model. In particular, notice that all model variables and parameters are expressed in SI units.

Measurement data, available in qt_par_est_data.mat, has been logged in an identification experiment. Download also this file to your working directory.

Open a text file and name it qt_par_est.py. Then enter the imports:

from scipy.io import loadmat
import matplotlib.pyplot as plt
import numpy as N

from pymodelica import compile_jmu
from pyjmi import JMUModel

into the file. Next, we enter code to open the data file, extract the measurement time series and plot the measurements:

# Load measurement data from file
data = loadmat('qt_par_est_data.mat',appendmat=False)

# Extract data series
t_meas = data['t'][6000::100,0]-60
y1_meas = data['y1_f'][6000::100,0]/100
y2_meas = data['y2_f'][6000::100,0]/100
y3_meas = data['y3_d'][6000::100,0]/100
y4_meas = data['y4_d'][6000::100,0]/100
u1 = data['u1_d'][6000::100,0]
u2 = data['u2_d'][6000::100,0]    

# Plot measurements and inputs
plt.figure(1); plt.clf()
plt.plot(t_meas, y1_meas, t_meas, y2_meas, t_meas, y3_meas, t_meas, y4_meas)
plt.ylabel('Tank levels'); plt.xlabel('time')
plt.figure(2); plt.clf()
plt.plot(t_meas, u1, t_meas, u2)
plt.ylabel('Inputs'); plt.xlabel('time')
plt.show()
You should now see two plots showing the measured state profiles and the control input profiles, similar to Figure D.5, “Measured state profiles.” and Figure D.6, “Control inputs used in the identification experiment.”.

Figure D.5. Measured state profiles.

Measured state profiles.

Figure D.6. Control inputs used in the identification experiment.

Control inputs used in the identification experiment.

In order to evaluate the accuracy of nominal model parameter values, start by simulating the model, assuming that the start values of the states are given by the state measurement at the start of the experiment. This assumption can be expressed in the model:

model Sim_QuadTank
  QuadTank qt;
  input Real u1 = qt.u1;
  input Real u2 = qt.u2;
initial equation
  qt.x1 = 0.0627;
  qt.x2 = 0.06044;
  qt.x3 = 0.024;
  qt.x4 = 0.023;
end Sim_QuadTank;

Notice that initial equations have been added to the model. Before the model is simulated, a matrix containing the input trajectories is created:

# Build input trajectory matrix for use in simulation
u = N.transpose(N.vstack((t_meas,u1,u2)))

Now, the model can be simulated:

# compile JMU
jmu_name = compile_jmu('QuadTankPack.Sim_QuadTank','QuadTankPack.mop')

# Load model
model = JMUModel(jmu_name)

# Simulate model response with nominal parameters
res = model.simulate(input=(['u1','u2'],u),start_time=0.,final_time=60)

The simulation result can now be extracted:

# Load simulation result
x1_sim = res['qt.x1']
x2_sim = res['qt.x2']
x3_sim = res['qt.x3']
x4_sim = res['qt.x4']
t_sim  = res['time']
u1_sim = res['u1']
u2_sim = res['u2']

and then plotted:

# Plot simulation result together with the measurements
plt.figure(1); plt.clf()
plt.subplot(221); plt.plot(t_sim, x3_sim, t_meas, y3_meas, '.'); plt.ylabel('x3')
plt.subplot(222); plt.plot(t_sim, x4_sim, t_meas, y4_meas, '.'); plt.ylabel('x4')
plt.subplot(223); plt.plot(t_sim, x1_sim, t_meas, y1_meas, '.'); plt.ylabel('x1')
plt.subplot(224); plt.plot(t_sim, x2_sim, t_meas, y2_meas, '.'); plt.ylabel('x2')
plt.show()
Figure D.7, “Simulation result for the nominal model.” shows the result of the simulation.

Figure D.7. Simulation result for the nominal model.

Simulation result for the nominal model.

Here, the simulated profiles are given by the green curves. Clearly, there is a mismatch in the response, especially for the two lower tanks. Think about why the model does not match the data, i.e., which parameters may have wrong values.

The next step towards solving a parameter estimation problem is to identify which parameters to tune. Typically, parameters which are not known precisely are selected. Also, the selected parameters must, of course, affect the mismatch between model response and data when tuned. In a first attempt, we aim at decreasing the mismatch for the two lower tanks, and therefore we select the lower tank outflow areas, a1 and a2, as parameters to optimize. The Optimica specification for the estimation problem is contained in the class QuadTankPack.QuadTank_ParEst:

optimization QuadTank_ParEst (objective=sum((y1_meas[i] - qt.x1(t_meas[i]))^2 + 
                                            (y2_meas[i] - qt.x2(t_meas[i]))^2 for i in 1:N_meas),
                              startTime=0, finalTime=60)

  // Initial tank levels
  parameter Modelica.SIunits.Length x1_0 = 0.06255;
  parameter Modelica.SIunits.Length x2_0 = 0.06045;
  parameter Modelica.SIunits.Length x3_0 = 0.02395;
  parameter Modelica.SIunits.Length x4_0 = 0.02325;

  QuadTank qt(x1(fixed=true),x1_0=x1_0,
              x2(fixed=true),x2_0=x2_0,
              x3(fixed=true),x3_0=x3_0,
              x4(fixed=true),x4_0=x4_0,
              a1(free=true,initialGuess = 0.03e-4,min=0,max=0.1e-4),
              a2(free=true,initialGuess = 0.03e-4,min=0,max=0.1e-4));

  // Number of measurement points
  parameter Integer N_meas = 61;
  // Vector of measurement times
  parameter Real t_meas[N_meas] = 0:60.0/(N_meas-1):60;
  // Measurement values for x1 
  // Notice that dummy values are entered here:
  // the real measurement values will be set from Python
  parameter Real y1_meas[N_meas] = ones(N_meas);
  // Measurement values for x2 	
  parameter Real y2_meas[N_meas] = ones(N_meas);
  // Input trajectory for u1 
  PRBS1 prbs1;
  // Input trajectory for u2
  PRBS2 prbs2;	
end QuadTank_ParEst;

The cost function is here given as a squared sum of the difference between the measured profiles for x1 and x2 and the corresponding model profiles. Also, the parameters a1 and a2 are set to be free, and are given initial guesses as well as bounds. As for the measurement data, parameter vectors are declared, but only dummy data is provided in the model; the actual data values will be set from the Python script. Also, the input profiles are connected to signal generators that output the same input profiles as those used in the experiment. Take some time to look at QuadTankPack.mop and locate the classes used above.

Before the optimization problem can be solved, the Optimica specification needs to be compiled:

# Compile parameter optimization model
jmu_name = compile_jmu("QuadTankPack.QuadTank_ParEst","QuadTankPack.mop")

# Load the model
qt_par_est = JMUModel(jmu_name)

Next, we load the measurement data into the model:

# Number of measurement points
N_meas = N.size(u1,0)

# Set measurement data into model
for i in range(0,N_meas):
    qt_par_est.set("y1_meas["+str(i+1)+"]", y1_meas[i])
    qt_par_est.set("y2_meas["+str(i+1)+"]", y2_meas[i])

We are now ready to solve the optimization problem:

n_e = 100 # Number of elements in the collocation algorithm

# Get an options object for the optimization algorithm
opt_opts = qt_par_est.optimize_options()
# Set the number of elements
opt_opts['n_e'] = n_e

# Solve parameter optimization problem
res = qt_par_est.optimize(options=opt_opts)

Now, let's extract the optimal values of the parameters a1 and a2 and print them to the console:

# Extract optimal values of parameters
a1_opt = res.final("qt.a1")
a2_opt = res.final("qt.a2")

# Print optimal parameter values
print('a1: ' + str(a1_opt*1e4) + 'cm^2')
print('a2: ' + str(a2_opt*1e4) + 'cm^2')

You should get an output similar to:

a1: 0.0266cm^2
a2: 0.0272cm^2

The estimated values are slightly smaller than the nominal values - think about why this may be the case. Also note that the estimated values do not necessarily correspond to the physically true values. Rather, the parameter values are adjusted to compensate for all kinds of modeling errors in order to minimize the mismatch between model response and measurement data.

Next we plot the optimized profiles:

# Load state profiles
x1_opt = res["qt.x1"]
x2_opt = res["qt.x2"]
x3_opt = res["qt.x3"]
x4_opt = res["qt.x4"]
u1_opt = res["qt.u1"]
u2_opt = res["qt.u2"]
t_opt  = res["time"]

# Plot

You will see the plot shown in Figure D.8, “State profiles corresponding to estimated values of a1 and a2.”.

Figure D.8. State profiles corresponding to estimated values of a1 and a2.

State profiles corresponding to estimated values of a1 and a2.

The profiles corresponding to the estimated values of a1 and a2 are shown as black curves. As can be seen, the match between the model response and the measurement data has improved significantly. Is the behavior of the model consistent with the estimated parameter values?

Nevertheless, there is still a mismatch for the upper tanks, especially for tank 4. In order to improve the match, a second estimation problem may be formulated, where the parameters a1, a2, a3, and a4 are free optimization variables, and where the squared errors of all four tank levels are penalized. Take a minute to locate the class QuadTankPack.QuadTank_ParEst2 and make sure that you understand the model. Solve the optimization problem by typing the Python code:

# Compile second parameter estimation model
jmu_name = compile_jmu("QuadTankPack.QuadTank_ParEst2", "QuadTankPack.mop")

# Load model
qt_par_est2 = JMUModel(jmu_name)

# Number of measurement points
N_meas = N.size(u1,0)

# Set measurement data into model
for i in range(0,N_meas):
    qt_par_est2.set("y1_meas["+str(i+1)+"]", y1_meas[i])
    qt_par_est2.set("y2_meas["+str(i+1)+"]", y2_meas[i])
    qt_par_est2.set("y3_meas["+str(i+1)+"]", y3_meas[i])
    qt_par_est2.set("y4_meas["+str(i+1)+"]", y4_meas[i])

# Solve parameter estimation problem
res_opt2 = qt_par_est2.optimize(options=opt_opts)

Next, we print the optimal parameter values:

# Get optimal parameter values
a1_opt2 = res_opt2.final("qt.a1")
a2_opt2 = res_opt2.final("qt.a2")
a3_opt2 = res_opt2.final("qt.a3")
a4_opt2 = res_opt2.final("qt.a4")

# Print optimal parameter values 
print('a1:' + str(a1_opt2*1e4) + 'cm^2')
print('a2:' + str(a2_opt2*1e4) + 'cm^2')
print('a3:' + str(a3_opt2*1e4) + 'cm^2')
print('a4:' + str(a4_opt2*1e4) + 'cm^2')

The output in the console should be similar to:


Think about the result - can you explain why the estimated value of a4 is slightly smaller than the nominal value? Finally, plot the state profiles corresponding to the estimated parameters:

# Extract state and input profiles
x1_opt2 = res_opt2["qt.x1"]
x2_opt2 = res_opt2["qt.x2"]
x3_opt2 = res_opt2["qt.x3"]
x4_opt2 = res_opt2["qt.x4"]
u1_opt2 = res_opt2["qt.u1"]
u2_opt2 = res_opt2["qt.u2"]
t_opt2  = res_opt2["time"]

# Plot

The resulting plot is shown in Figure D.9, “State profiles corresponding to estimated values of a1, a2, a3 and a4.”.

Figure D.9. State profiles corresponding to estimated values of a1, a2, a3 and a4.

State profiles corresponding to estimated values of a1, a2, a3 and a4.

The red curves represent the case where a1, a2, a3 and a4 have been estimated.

Take a moment to think about the results. Are there other parameters that could have been selected for estimation?

Having computed the parameter values that fit the data, we proceed to compute the standard deviations of the parameter estimates. This information is valuable when judging how accurate the estimates are. For an introduction to statistical inference in parameter estimation problems, see [Eng2001].

The covariance matrix of the estimated parameter vector is given by the expression:

    P = σ²(JᵀJ)⁻¹

where J is the Jacobian of the error residual and σ² is the estimated measurement noise variance. In order to compute the residual Jacobian, the sensitivity equations need to be computed.

The model QuadTankPack.QuadTank_Sens2 is used for the sensitivity simulation. Notice that the free attribute is used to mark the parameters for which sensitivities should be computed:

optimization QuadTank_Sens2 
   extends QuadTank (x1(fixed=true),x1_0 = 0.0627,
                     x2(fixed=true),x2_0 = 0.06044,
                     x3(fixed=true),x3_0 = 0.024,
                     x4(fixed=true),x4_0 = 0.023,
                     a1(free=true),
                     a2(free=true),
                     a3(free=true),
                     a4(free=true));
end QuadTank_Sens2;

In a first step to simulating the sensitivity equations for the model, we compile the model and set the optimal parameter values:

# compile JMU
jmu_name = compile_jmu('QuadTankPack.QuadTank_Sens2',
                       'QuadTankPack.mop')

# Load model
model = JMUModel(jmu_name)

# Set the optimal parameter values from the second estimation
model.set('a1', a1_opt2)
model.set('a2', a2_opt2)
model.set('a3', a3_opt2)
model.set('a4', a4_opt2)


Next, we set the IDA_options entry sensitivity to True and simulate the model:

# Get an options object
sens_opts = model.simulate_options()

# Enable sensitivity computations
sens_opts['IDA_options']['sensitivity'] = True

# Simulate sensitivity equations
sens_res = model.simulate(input=(['u1','u2'],u),start_time=0.,
                          final_time=60, options = sens_opts)

Using the results of sensitivity simulation, the Jacobian and the residual error vector can be created:

# Get result trajectories
x1_sens = sens_res['x1']
x2_sens = sens_res['x2']
x3_sens = sens_res['x3']
x4_sens = sens_res['x4']

dx1da1 = sens_res['dx1/da1']
dx1da2 = sens_res['dx1/da2']
dx1da3 = sens_res['dx1/da3']
dx1da4 = sens_res['dx1/da4']

dx2da1 = sens_res['dx2/da1']
dx2da2 = sens_res['dx2/da2']
dx2da3 = sens_res['dx2/da3']
dx2da4 = sens_res['dx2/da4']

dx3da1 = sens_res['dx3/da1']
dx3da2 = sens_res['dx3/da2']
dx3da3 = sens_res['dx3/da3']
dx3da4 = sens_res['dx3/da4']

dx4da1 = sens_res['dx4/da1']
dx4da2 = sens_res['dx4/da2']
dx4da3 = sens_res['dx4/da3']
dx4da4 = sens_res['dx4/da4']
t_sens = sens_res['time']

# Create a trajectory object for interpolation of the simulation result
# (TrajectoryLinearInterpolation is assumed to live in pyjmi.common.core;
#  the column order matches the indexing in the extraction loop below)
from pyjmi.common.core import TrajectoryLinearInterpolation
traj = TrajectoryLinearInterpolation(
    t_sens,
    N.transpose(N.vstack((x1_sens, x2_sens, x3_sens, x4_sens,
                          dx1da1, dx1da2, dx1da3, dx1da4,
                          dx2da1, dx2da2, dx2da3, dx2da4,
                          dx3da1, dx3da2, dx3da3, dx3da4,
                          dx4da1, dx4da2, dx4da3, dx4da4))))

# Create Jacobian
jac = N.zeros((61*4,4))

# Error vector
err = N.zeros(61*4)

# Extract Jacobian and residual error information
i = 0
for t_p in t_meas:
    vals = traj.eval(t_p)
    # Columns 0-3 hold the states, columns 4-19 the sensitivities
    for j in range(4):
        for k in range(4):
            jac[i+j,k] = vals[0,4*j+k+4]
    err[i]   = vals[0,0] - y1_meas[i//4]
    err[i+1] = vals[0,1] - y2_meas[i//4]
    err[i+2] = vals[0,2] - y3_meas[i//4]
    err[i+3] = vals[0,3] - y4_meas[i//4]
    i = i + 4

Notice the convention for how the sensitivity variables are named.

Finally, we compute and print the standard deviations for the estimated parameters:

# Compute estimated variance of measurement noise
v_err = N.sum(err**2)/(61*4-2)

# Compute J^T*J
A = N.dot(N.transpose(jac),jac)

# Compute parameter covariance matrix
P = v_err*N.linalg.inv(A)

# Compute standard deviations for parameters
sigma_a1 = N.sqrt(P[0,0])
sigma_a2 = N.sqrt(P[1,1])
sigma_a3 = N.sqrt(P[2,2])
sigma_a4 = N.sqrt(P[3,3])

print("a1: " + str(sens_res.final('a1')) + ", standard deviation: " + str(sigma_a1))
print("a2: " + str(sens_res.final('a2')) + ", standard deviation: " + str(sigma_a2))
print("a3: " + str(sens_res.final('a3')) + ", standard deviation: " + str(sigma_a3))
print("a4: " + str(sens_res.final('a4')) + ", standard deviation: " + str(sigma_a4))

You should now see the standard deviations for the estimated parameters printed.