
Newton's Method

The minimization condition can be converted into the problem of solving a nonlinear system of equations by requiring that the derivative of $\chi^{2}$ with respect to each of the parameters vanish. We have

$\displaystyle 0 = \frac{\partial\chi^{2}}{\partial a_{j}} = -2\sum_{i=1}^{N}
\frac{[\bar y_i-\bar y_{i,exp}]}{\sigma_i^2}
\frac{\partial\bar y_{i,exp}}{\partial a_{j}}.$     (33)

We can try to solve this nonlinear system by using Newton's method. This method starts with a trial value for the parameter vector ${\bf a}$ and then seeks the vector change $\delta{\bf a}$ that improves the trial value. Expanding the gradient to first order in $\delta{\bf a}$, we seek the solution to
\begin{displaymath}
0 = \frac{\partial\chi^{2}({\bf a} + \delta{\bf a})}{\partial a_{j}}
\approx \frac{\partial\chi^{2}({\bf a})}{\partial a_{j}}
+ \sum_{k}\frac{\partial^{2}\chi^{2}({\bf a})}{\partial
a_{j}\partial a_{k}}\delta a_{k}.
\end{displaymath} (34)

Thus the change is found by solving the linear system
\begin{displaymath}
c_{j} = \sum_{k} M_{j,k} \delta a_{k}
\end{displaymath} (35)

where
\begin{displaymath}
c_{j} = -\frac{1}{2}\frac{\partial\chi^{2}}{\partial a_{j}}
= \sum_{i=1}^{N}\frac{[\bar y_i-\bar y_{i,exp}]}{\sigma_i^2}
\frac{\partial\bar y_{i,exp}}{\partial a_{j}}.
\end{displaymath} (36)

and
\begin{displaymath}
M_{j,k} = \frac{1}{2}\frac{\partial^{2}\chi^{2}({\bf a})}{\partial a_{j}\partial a_{k}}
= \sum_{i=1}^{N}\frac{1}{\sigma_i^2}\left[
\frac{\partial\bar y_{i,exp}}{\partial a_{j}}
\frac{\partial\bar y_{i,exp}}{\partial a_{k}}
- [\bar y_i-\bar y_{i,exp}]
\frac{\partial^{2}\bar y_{i,exp}}{\partial a_{j}\partial a_{k}}\right].
\end{displaymath} (37)
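
As a concrete illustration, here is a minimal sketch in Python with NumPy that builds ${\bf c}$ and $M$ from Eqs. (36) and (37). The data arrays x, y, sigma and the callbacks model, dmodel, d2model (returning $\bar y_{i,exp}$ and its first and second parameter derivatives) are hypothetical names assumed for this example, not part of the notes.

\begin{verbatim}
import numpy as np

def build_c_and_M(a, x, y, sigma, model, dmodel, d2model):
    # model(x, a)   -> model values  y_exp_i,          shape (N,)
    # dmodel(x, a)  -> d y_exp / d a_j,                shape (N, p)
    # d2model(x, a) -> d^2 y_exp / d a_j d a_k,        shape (N, p, p)
    r = (y - model(x, a)) / sigma**2            # weighted residuals
    J = dmodel(x, a)                            # first-derivative matrix
    H = d2model(x, a)                           # second-derivative array
    c = J.T @ r                                 # Eq. (36)
    M = (J.T * sigma**-2) @ J - np.einsum('i,ijk->jk', r, H)  # Eq. (37)
    return c, M
\end{verbatim}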

Actually this linear system is the same as the one we had to solve for the linear least squares problem. The connection can be made more explicit by realizing that a Taylor expansion of $\chi^{2}({\bf a})$ in small shifts about the vector ${\bf a}$ is just
\begin{displaymath}
\chi^{2}({\bf a} + \delta{\bf a}) = \chi^{2}({\bf a})
- 2 {\bf c} \cdot \delta {\bf a}
+ \delta {\bf a}\cdot M \cdot \delta {\bf a}
\end{displaymath} (38)

to second order in $\delta{\bf a}$. Thus $M$ is one-half the Hessian matrix of $\chi^{2}$, just as before. However, in the case of a linear least squares problem, the Taylor series was exact, and solving the linear system once led directly to the optimum value of the parameter vector. In the present case, solving the linear system is just one step in the Newton iteration, which is supposed to lead us to the solution after a number of steps. To construct the components of the vector ${\bf c}$ and the matrix $M$, we must be able to evaluate the first and second partial derivatives of the fitting function with respect to the fitting parameters. Then, to proceed with the Newton iteration, we must solve the linear system anew each time we take a step.
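
A sketch of the full iteration, assuming the hypothetical build_c_and_M helper above and a simple step-size stopping criterion, might read:

\begin{verbatim}
def newton_fit(a0, x, y, sigma, model, dmodel, d2model,
               tol=1.0e-8, max_iter=50):
    # Repeat Newton steps a -> a + delta a until the step is tiny.
    a = np.array(a0, dtype=float)
    for _ in range(max_iter):
        c, M = build_c_and_M(a, x, y, sigma, model, dmodel, d2model)
        da = np.linalg.solve(M, c)    # solve Eq. (35) for delta a
        a += da
        if np.linalg.norm(da) <= tol * (1.0 + np.linalg.norm(a)):
            break
    return a, M
\end{verbatim}

Note that far from the minimum the matrix $M$ need not be positive definite, so a raw Newton step may fail to decrease $\chi^{2}$; this is part of the motivation for the Levenberg-Marquardt method of the next section.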

The error in the $i$th fitted parameter is found from the corresponding diagonal element of the inverse of the matrix $M$, evaluated at the minimum of $\chi^{2}$:

\begin{displaymath}
\sigma_i^2 = (M^{-1})_{ii}.
\end{displaymath} (39)
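
Continuing the sketch above, the parameter errors would come from the inverse of $M$ returned at the converged fit:

\begin{verbatim}
a_fit, M = newton_fit(a0, x, y, sigma, model, dmodel, d2model)
errors = np.sqrt(np.diag(np.linalg.inv(M)))   # Eq. (39)
\end{verbatim}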


Carleton DeTar 2009-11-23