When you perform an optimization, you need to decide when to stop. One way to check whether your solution is good enough is to check whether it is still changing significantly. There are two ways to measure how much a solution changes: relative change (i.e., percent change) or absolute change.
It makes a lot of sense to check for relative change, since a change of 5 means something very different when the solution is around 1 than when it is around 100000. Thus, the optimization routine checks, at every iteration i, whether abs(1 - x(i)/x(i-1)) < relTol, i.e. by what fraction the new solution has changed since the last iteration. Note that x can be an array of solutions if you're optimizing multiple parameters at the same time (the solution thus has "multiple components"). Of course, you want the condition to be fulfilled for all "solution components" before you stop optimizing further.
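The relative-change test above can be sketched as a small helper. This is an illustrative implementation, not any particular solver's API; the function name and tolerance value are assumptions for the example.

```python
import numpy as np

def rel_change_met(x_new, x_old, rel_tol):
    """Return True when every solution component has changed by less than
    rel_tol relative to its previous value (hypothetical helper)."""
    return bool(np.all(np.abs(1 - x_new / x_old) < rel_tol))

# A change of 4 is tiny when the solution is around 100000...
print(rel_change_met(np.array([100004.0]), np.array([100000.0]), 0.01))  # True
# ...but a change of 4 is huge when the solution is around 1.
print(rel_change_met(np.array([5.0]), np.array([1.0]), 0.01))  # False
```

Note the `np.all`: with multiple parameters, a single component that is still moving keeps the optimization going.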
The relative tolerance, however, becomes problematic when the solution is around zero, since dividing by x(i-1) = 0 is undefined. Thus, it makes sense to also look at the absolute change in value, and quit optimizing when abs(x(i) - x(i-1)) < absTol. If you choose absTol small enough, only relTol counts for large solutions, while absTol only becomes relevant if the solution comes to lie around 0.
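Combining the two tests might look like the sketch below. Again, the function name and the default tolerances are illustrative assumptions; the guard against dividing by zero is the point of the example.

```python
import numpy as np

def converged(x_new, x_old, rel_tol=1e-3, abs_tol=1e-4):
    """Return True when, for every component, either the relative or the
    absolute change is below its tolerance (illustrative sketch)."""
    abs_change = np.abs(x_new - x_old)
    # Guard the division: where x_old is zero, the relative test cannot
    # apply, so mark its change as infinite and let the absolute test decide.
    with np.errstate(divide="ignore", invalid="ignore"):
        rel_change = np.abs(np.where(x_old != 0, 1 - x_new / x_old, np.inf))
    return bool(np.all((rel_change < rel_tol) | (abs_change < abs_tol)))

# First component sits at zero (absTol takes over); the second is large
# (relTol decides). Both pass, so the solver would stop.
print(converged(np.array([0.00005, 1000.5]), np.array([0.0, 1000.0])))  # True
```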
Since the solver stops when either of the two criteria is fulfilled, how close you get to a (locally) optimal solution is determined by absTol or relTol. For example, if relTol is 10%, you will never get much closer than 10% to the optimal solution, unless your solution is around zero, in which case the absTol criterion (of, say, 0.0001) is satisfied before the relTol criterion.
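The interplay can be seen in a toy iteration that halves the solution toward an optimum at 0 (an assumed update rule, chosen only for illustration). The relative change is always 50%, so relTol = 10% is never met; it is absTol that eventually stops the loop.

```python
# Toy iteration: x(i) = x(i-1) / 2, converging toward an optimum at 0.
x_old = 1.0
rel_tol, abs_tol = 0.1, 1e-4
iterations = 0
while True:
    x_new = x_old / 2  # assumed update rule for illustration
    iterations += 1
    # Stop when either criterion is met; here the relative change is
    # always 0.5, so only the absolute criterion can fire.
    if abs(1 - x_new / x_old) < rel_tol or abs(x_new - x_old) < abs_tol:
        break
    x_old = x_new
print(iterations, x_new)
```

The loop runs until the step size (which equals x_new itself here) drops below absTol, so the final solution lands near absTol rather than within 10% of the optimum.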