Extract the Model
After loading the model and the measurement data, we are ready for the extraction of the model parameters.
The extraction is started by choosing Optimize > Optimize Model (⌘E), or by pressing the ▶︎ Optimize button in the toolbar above the Graphics pane.
During the extraction process, the parameter values in the model pane and the graph are updated regularly
to show the progress being made.
Sometimes, especially with complex and strongly non-linear models,
the extraction process goes off into the weeds.
Then the extraction can be interrupted by choosing Optimize > Stop Optimization (⌘E), or by pressing the ◼︎ Optimize button again.
Pressing the ⟲ Reset button resets the model parameters to their values before the last optimization started.
Pressing the ⤓ Default button resets the model parameters to their default values as specified in the model file.
Note that only the parameters that are not fixed are reset.
This allows for iteration with small groups of parameters for stepwise refinement.
Before starting the extraction, it is advisable to choose initial values for the model parameters that bring the model curves into proximity of the measured data points, and, if possible, to limit their range. Non-linear optimization is a hard problem, and is grateful for all the help it can get. Nevertheless, we expect you will be pleasantly surprised by the robustness of the implementation.
Parameters that are expected, or have been shown, to have little or no effect on the model curves in the region of interest should be fixed by toggling their fixed button. The values of fixed parameters are not varied during the optimization.
The extraction algorithm tries to reduce the value of the objective, which is a measure of the sum of the distances of each data point to the model curve(s). The current value of the objective is displayed at the bottom of the Graphics pane. When a minimum is reached, that is, when no change in the parameter values results in a lower value of the objective, the extraction is either done, or we can start throwing out data points until the fitting criterion is satisfied. This second option is what makes ParX kind of special.
If we decide that a data point has to go, which one do we choose? It all comes down to a measure of “consensus” between the remaining data points. Each data point constrains the possible values of the model parameters, but the data points do not generally agree. This disagreement can be measured with respect to the average, and used to exclude data points that are in a minority. The elimination of data points stops when the fitting criterion is satisfied.
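The elimination loop described above can be sketched as follows. This is a minimal illustration, not ParX's actual algorithm: the function names, the simple residual/tolerance distance measure, and the mean-distance stopping rule (the ModeSelection-style criterion described later) are all assumptions made for the example.

```python
import numpy as np

def weighted_distances(residuals, tolerances):
    """Normalized distance of each data point to the model curve.

    A distance of 1.0 means the point deviates by exactly its tolerance.
    (Illustrative stand-in; ParX's internal distance measure may differ.)
    """
    return np.abs(residuals) / tolerances

def eliminate_outliers(residuals, tolerances):
    """Repeatedly drop the point farthest from the consensus until the
    average distance of the remaining points falls below 1.0.
    Returns the indices of the points that are kept."""
    keep = np.arange(len(residuals))
    while len(keep) > 1:
        d = weighted_distances(residuals[keep], tolerances[keep])
        if d.mean() < 1.0:                   # consensus reached: done
            break
        keep = np.delete(keep, d.argmax())   # worst offender goes
    return keep
```

For example, with residuals `[0.1, 0.2, 0.1, 5.0]` and unit tolerances, the fourth point is eliminated and the remaining three satisfy the criterion.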
Set the Fitting Tolerance
When the data file contains no accuracy information (no error intervals), the distance between a data point and the model curve is by definition infinite. This is where the tolerance fields in the data pane come in: they allow you to specify a relative and absolute error on all data values after the fact.
The tolerances can also be used to specify the required accuracy of the model. With perfect measurement data, we do not expect the model curve to pass exactly through all the data points. All models have their limitations, and often we are interested in where its limitations are located. Setting the tolerance to the required accuracy of the model will remove the data points that fall outside the validity domain of the model during extraction.
Note that the relative tolerances are specified as a percentage of the measured values.
When both a measurement error is specified in the data file and a tolerance value is given, the larger of the two is used for each coordinate value and data point.
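The rule for combining a file-supplied measurement error with the pane tolerances can be sketched as below. This is a hypothetical helper for illustration only; in particular, the assumption that the relative and absolute tolerances combine additively is mine, not documented ParX behavior.

```python
def effective_error(value, file_error, rel_tol_pct, abs_tol):
    """Effective error interval for one coordinate of one data point.

    The pane tolerance (assumed here to combine as absolute tolerance
    plus a percentage of the measured value) competes with the error
    from the data file; the larger of the two wins.
    """
    pane_tol = abs_tol + (rel_tol_pct / 100.0) * abs(value)
    return max(file_error, pane_tol)
```

With a measured value of 10.0, a file error of 0.05, and a 1% relative tolerance, the tolerance (0.1) dominates; with a file error of 0.5, the file error wins.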
The error intervals in the graph are adjusted to reflect the tolerance values.
Select the Fitting Criterion
The selection of the optimization method is located in the Settings window: ParX > Settings (⌘,).
There are several fitting criteria available with different effects on the result:
- BestFit: the classical method, which stops when the optimum parameter values are identified. No data points are eliminated.
- ModeSelection (default): data points are eliminated until the average distance over all remaining points is smaller than one (1.0).
- Strict: data points are eliminated until no point remains with an individual distance greater than one (1.0).
- ChiSquare: data points are eliminated until the chi-square (χ²) test probability for the remaining distances exceeds 0.5. This assumes Normally distributed errors for the remaining data points.
The default is ModeSelection, which represents a nice compromise in the handling of coinciding stochastic and systematic errors.
The ChiSquare criterion is most appropriate when Normal stochastic errors dominate.
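The stopping conditions of the four criteria can be sketched as simple predicates over the per-point distances. These are illustrative interpretations of the descriptions above, not ParX's code; in particular, taking the number of remaining points as the chi-square degrees of freedom is an assumption, and the closed-form survival function below is limited to an even number of points for simplicity.

```python
import math

def best_fit_done(distances):
    """BestFit never eliminates points, so it is always 'done'."""
    return True

def mode_selection_done(distances):
    """ModeSelection: average distance of remaining points below 1.0."""
    return sum(distances) / len(distances) < 1.0

def strict_done(distances):
    """Strict: no remaining point with an individual distance above 1.0."""
    return max(distances) <= 1.0

def chi_square_sf(x, k):
    """Chi-square survival probability P(X > x) for even k (closed form)."""
    assert k % 2 == 0, "closed form shown only for even degrees of freedom"
    term, total = 1.0, 1.0
    for i in range(1, k // 2):
        term *= (x / 2.0) / i
        total += term
    return math.exp(-x / 2.0) * total

def chi_square_done(distances):
    """ChiSquare: test probability for the remaining distances above 0.5,
    assuming Normal errors (degrees of freedom = number of points, an
    assumption made for this sketch)."""
    n = len(distances)
    return chi_square_sf(sum(d * d for d in distances), n) > 0.5
```

For instance, distances `[0.5, 1.2]` satisfy ModeSelection (average 0.85) but not Strict (one point exceeds 1.0), which shows how the criteria differ in how many points they discard.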