Although the least squares method gives us the best estimate of the parameters $a$ and $b$, it is also very important to know how well determined these best values are. In other words, if we repeated the experiment many times under the same conditions, what range of values of these parameters would we get? To answer this question, we use a maximum likelihood method.

We start by assuming a probability distribution for the entire set of measurements $y_1, y_2, \ldots, y_N$. We assume that the measurements of the $N$ data points are independent of each other, and that each one follows a Gaussian (normal) distribution with mean value $\mu_i$ and standard deviation $\sigma_i$. The probability that a single experiment results in the set of values $y_1, \ldots, y_N$ is then just a product of the individual Gaussians:

$$P(y_1, \ldots, y_N) = \prod_{i=1}^{N} \frac{1}{\sigma_i \sqrt{2\pi}} \exp\left[-\frac{(y_i - \mu_i)^2}{2\sigma_i^2}\right]$$
The idea of maximum likelihood is to replace the ideal mean values $\mu_i$ with the theoretically “expected” values $a + b x_i$ predicted by the linear-function model. The probability distribution then becomes a conditional probability $P(y_1, \ldots, y_N \mid a, b)$. In other words, assuming that the intercept and slope are $a$ and $b$, it gives the probability of getting the result $y_1, \ldots, y_N$ in a single measurement. But then we use the power of Bayes's theorem to turn it around and reinterpret it as the probability $P(a, b \mid y_1, \ldots, y_N)$ that, given the experimental result, the linear relationship is described by the parameters $a$ and $b$. Dropping the list of data points, we write this probability as $P(a, b)$:

$$P(a, b) = \prod_{i=1}^{N} \frac{1}{\sigma_i \sqrt{2\pi}} \exp\left[-\frac{(y_i - a - b x_i)^2}{2\sigma_i^2}\right] \qquad (21)$$

$$P(a, b) = \left(\prod_{i=1}^{N} \frac{1}{\sigma_i \sqrt{2\pi}}\right) \exp\left(-\frac{\chi^2}{2}\right), \qquad \chi^2 = \sum_{i=1}^{N} \frac{(y_i - a - b x_i)^2}{\sigma_i^2} \qquad (22)$$

The probability $P(a, b)$ is called the likelihood function for the parameter values $a$ and $b$. We want to find the values $a$ and $b$ that are most probable, i.e. those that maximize the likelihood function. Clearly this condition is equivalent to requiring that we minimize $\chi^2$, and it leads to the result discussed in the first section. But now we also have a way to estimate the reliability of our determination of the best values $a$ and $b$.
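This equivalence can be checked numerically. The sketch below (Python with NumPy; the data, the "true" values 1.0 and 2.0, and all variable names are illustrative assumptions, not part of the text) evaluates $\chi^2$ on a grid of $(a, b)$ values and confirms that the grid minimum of $\chi^2$ — i.e. the maximum of the likelihood — coincides with the analytic weighted least-squares solution.

```python
import numpy as np

# Made-up data: y = 1.0 + 2.0*x plus Gaussian noise (assumed for illustration).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)
sigma = np.full_like(x, 0.5)            # per-point standard deviations
y = 1.0 + 2.0 * x + rng.normal(0.0, sigma)

def chi2(a, b):
    """Chi-squared for intercept a and slope b."""
    return np.sum(((y - a - b * x) / sigma) ** 2)

# Analytic weighted least-squares solution: scale the design matrix and the
# data by 1/sigma, then solve the ordinary least-squares problem.
A = np.vstack([np.ones_like(x), x]).T / sigma[:, None]
a_hat, b_hat = np.linalg.lstsq(A, y / sigma, rcond=None)[0]

# Brute-force likelihood maximization: since log P = -chi^2/2 + const,
# the (a, b) that maximize P are exactly the (a, b) that minimize chi^2.
a_grid = a_hat + np.linspace(-0.5, 0.5, 101)
b_grid = b_hat + np.linspace(-0.5, 0.5, 101)
chi2_vals = np.array([[chi2(a, b) for b in b_grid] for a in a_grid])
ia, ib = np.unravel_index(np.argmin(chi2_vals), chi2_vals.shape)
print("chi^2 grid minimum at a =", a_grid[ia], ", b =", b_grid[ib])
```

Because $\chi^2$ is a strictly convex quadratic in $(a, b)$, the grid minimum lands on the grid point closest to the analytic solution.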

From the expressions (19) we see that $P(a, b)$ is similar to a normal distribution in the variables $a$ and $b$, except that instead of one variable we have two, and instead of a simple quadratic in the exponent we have a quadratic form. Once we have realized this, we can use standard results for the multivariate Gaussian to estimate the error in the best-fit values.
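The quadratic-form observation can be made concrete. Since $P(a, b) \propto e^{-\chi^2/2}$ and $\chi^2$ is quadratic in $(a, b)$, the covariance matrix of the fitted parameters is the inverse of half the Hessian of $\chi^2$. A minimal sketch (with made-up $x_i$ and $\sigma_i$; the variable names are assumptions for illustration) — note that the covariance depends only on the $x_i$ and $\sigma_i$, not on the measured $y_i$:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
sigma = np.array([0.2, 0.2, 0.3, 0.3, 0.4])   # per-point standard deviations

w = 1.0 / sigma**2
S, Sx, Sxx = w.sum(), (w * x).sum(), (w * x**2).sum()

# For chi^2(a, b) = sum_i (y_i - a - b*x_i)^2 / sigma_i^2, half the Hessian is
# constant: [[S, Sx], [Sx, Sxx]]. Its inverse is the covariance matrix of (a, b).
half_hessian = np.array([[S, Sx], [Sx, Sxx]])
cov = np.linalg.inv(half_hessian)

delta = S * Sxx - Sx**2
print(cov[0, 0], Sxx / delta)   # variance of a, computed two ways
print(cov[1, 1], S / delta)     # variance of b, computed two ways
```

Inverting the 2x2 matrix by hand reproduces the closed-form variances: $\sigma_a^2 = S_{xx}/\Delta$, $\sigma_b^2 = S/\Delta$, and covariance $-S_x/\Delta$ with $\Delta = S S_{xx} - S_x^2$.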

The variance in the parameter $a$ is determined from the error-propagation formula

$$\sigma_a^2 = \sum_{i=1}^{N} \sigma_i^2 \left(\frac{\partial a}{\partial y_i}\right)^2 \qquad (23)$$

Using the best-fit expression for the intercept, $a = (S_{xx} S_y - S_x S_{xy})/\Delta$, with the shorthand sums $S = \sum_i 1/\sigma_i^2$, $S_x = \sum_i x_i/\sigma_i^2$, $S_y = \sum_i y_i/\sigma_i^2$, $S_{xx} = \sum_i x_i^2/\sigma_i^2$, $S_{xy} = \sum_i x_i y_i/\sigma_i^2$, and $\Delta = S S_{xx} - S_x^2$, we find

$$\frac{\partial a}{\partial y_i} = \frac{S_{xx} - S_x x_i}{\sigma_i^2 \, \Delta} \qquad (24)$$

so that

$$\sigma_a^2 = \frac{S_{xx}}{\Delta} \qquad (25)$$

Similarly, for the slope $b = (S S_{xy} - S_x S_y)/\Delta$,

$$\sigma_b^2 = \sum_{i=1}^{N} \sigma_i^2 \left(\frac{\partial b}{\partial y_i}\right)^2 \qquad (26)$$

$$\frac{\partial b}{\partial y_i} = \frac{S x_i - S_x}{\sigma_i^2 \, \Delta} \qquad (27)$$

$$\sigma_b^2 = \frac{S}{\Delta} \qquad (28)$$

Written explicitly, we have

$$\sigma_a^2 = \frac{1}{\Delta} \sum_{i=1}^{N} \frac{x_i^2}{\sigma_i^2}, \qquad \sigma_b^2 = \frac{1}{\Delta} \sum_{i=1}^{N} \frac{1}{\sigma_i^2}, \qquad \Delta = \left(\sum_{i=1}^{N} \frac{1}{\sigma_i^2}\right)\left(\sum_{i=1}^{N} \frac{x_i^2}{\sigma_i^2}\right) - \left(\sum_{i=1}^{N} \frac{x_i}{\sigma_i^2}\right)^2$$

This is an important result, since it allows us to assign confidence ranges to the best-fit parameters $a$ and $b$ and to determine how they are correlated.
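A short worked example pulls the whole calculation together. The sketch below (Python with NumPy; the data and true parameter values are made up for illustration) computes $a$, $b$, $\sigma_a$, and $\sigma_b$ from the explicit sums, then cross-checks them against NumPy's weighted polynomial fit, which returns the same covariance matrix when given weights $1/\sigma_i$ and `cov="unscaled"`.

```python
import numpy as np

# Made-up data: y = 0.5 + 1.5*x plus Gaussian noise (assumed for illustration).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 12)
sigma = np.full_like(x, 0.3)
y = 0.5 + 1.5 * x + rng.normal(0.0, sigma)

# Shorthand sums for the weighted least-squares solution
w = 1.0 / sigma**2
S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
delta = S * Sxx - Sx**2

a = (Sxx * Sy - Sx * Sxy) / delta       # best-fit intercept
b = (S * Sxy - Sx * Sy) / delta         # best-fit slope
sigma_a = np.sqrt(Sxx / delta)          # uncertainty of the intercept
sigma_b = np.sqrt(S / delta)            # uncertainty of the slope

print(f"a = {a:.3f} +/- {sigma_a:.3f}")
print(f"b = {b:.3f} +/- {sigma_b:.3f}")

# Cross-check: polyfit with w = 1/sigma and cov="unscaled" reproduces both
# the best-fit values and the covariance matrix (coef = [slope, intercept]).
coef, cov = np.polyfit(x, y, 1, w=1 / sigma, cov="unscaled")
```

The off-diagonal element `cov[0, 1]` gives the correlation between the two parameters mentioned above.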