Estimating the error takes at
least two measurements, and even then we would be putting ourselves at
the mercy of the gods of statistics by taking a chance with only two
measurements. If we weren't so rash as to assume a Gaussian, we would
have to make many measurements of the length of the table top to map out
the probability distribution in the measured length.
We now ask, what is the most probable value of $\lambda$, given that we
just found $n_1$? This is the value with maximum likelihood. If we
examine the probability distribution, we see that it peaks at $\lambda = n_1$,
just as we might have expected. We may then ask, what is the error in
the determination of this value? This is a tricky question, because
the Poisson distribution is not shaped like a Gaussian distribution.
However, for large $n_1$ it looks more and more like a Gaussian.
Expanding the logarithm of the Poisson distribution for large $n_1$ and fixed
$\lambda - n_1$ gives
\begin{displaymath}
  P(\lambda|n_1) \approx \frac{1}{\sqrt{2\pi n_1}}
    \exp\left[-\frac{(\lambda - n_1)^2}{2 n_1}\right],
\end{displaymath}
so for large $n_1$ the error is
\begin{displaymath}
  \sigma = \sqrt{n_1}.
\end{displaymath}
To summarize, a single measurement yields the entire probability distribution.
For large enough $n_1$ we can say that
\begin{displaymath}
  \lambda = n_1 \pm \sqrt{n_1}.
\end{displaymath}
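These single-measurement results can be checked numerically. The following is a minimal sketch (not part of the original notes): it treats the Poisson probability as a likelihood function of the mean $\lambda$ for a hypothetical single measurement of 100 counts; the grid range, the count value, and the function name are all illustrative choices.

```python
import math

def poisson_likelihood(lam, n):
    """Poisson probability exp(-lam) * lam**n / n!, evaluated through its
    logarithm for numerical stability, viewed as a likelihood in lam for
    fixed observed count n."""
    return math.exp(-lam + n * math.log(lam) - math.lgamma(n + 1))

n1 = 100                                  # hypothetical single measurement
grid = [0.1 * k for k in range(1, 3001)]  # scan lam from 0.1 to 300
lam_hat = max(grid, key=lambda lam: poisson_likelihood(lam, n1))

print(lam_hat)        # the likelihood peaks at lam = n1
print(math.sqrt(n1))  # large-n1 error estimate sqrt(n1)
```

A finer grid or a proper one-dimensional maximizer would sharpen the peak location, but the coarse scan already lands on $\lambda = n_1$.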
To see how Bayesian statistics works, suppose we repeated the
experiment and got a new value $n_2$. What is the probability
distribution for $\lambda$ in light of the new result? Now things have
changed, since the \emph{a priori} probability for $\lambda$ is no
longer constant, because we already made one measurement and got $n_1$.
Instead we have
\begin{displaymath}
  P(\lambda) = P(\lambda|n_1) = \frac{e^{-\lambda}\lambda^{n_1}}{n_1!},
\end{displaymath}
so
\begin{displaymath}
  P(\lambda|n_1,n_2) = \frac{1}{N}\,
    \frac{e^{-\lambda}\lambda^{n_2}}{n_2!}\,
    \frac{e^{-\lambda}\lambda^{n_1}}{n_1!}.
\end{displaymath}
Notice that the likelihood function is now the product of the
individual likelihood functions. A more systematic notation would
write this function as $P(\lambda|n_1,n_2)$, i.e. the
probability for $\lambda$ having a particular value, given that we made
two measurements and found $n_1$ and $n_2$. The normalization
factor $N$ is obtained by requiring the total probability to be 1.
The most likely value of $\lambda$ is easily shown to be just the
average
\begin{displaymath}
  \lambda = \frac{n_1 + n_2}{2},
\end{displaymath}
as we should have expected.
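The two-measurement update is easy to verify numerically. In this sketch (with made-up counts, not from the original notes), multiplying the two Poisson likelihoods means adding their logarithms, and the combined likelihood indeed peaks at the average of the two counts.

```python
import math

def log_poisson(lam, n):
    # log of the Poisson probability exp(-lam) * lam**n / n!
    return -lam + n * math.log(lam) - math.lgamma(n + 1)

n1, n2 = 95, 105                          # two hypothetical count measurements
grid = [0.1 * k for k in range(1, 3001)]  # scan lam from 0.1 to 300
# The posterior after both measurements is proportional to the product of
# the individual likelihoods, i.e. the sum of their logs.
lam_hat = max(grid, key=lambda lam: log_poisson(lam, n1) + log_poisson(lam, n2))
print(lam_hat)  # peak at the average (n1 + n2) / 2
```

Working with log-likelihoods rather than the likelihoods themselves avoids floating-point underflow when many measurements are combined.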
The Bayesian approach insists that we fold together all of our
knowledge about a parameter in constructing its likelihood function.
Thus a generalization of these results would state that the likelihood
function for the parameter set $\lambda$, given the independently measured
results $x_1$, $x_2$, $x_3$, etc., is just
\begin{displaymath}
  P(\lambda|x_1, x_2, x_3, \ldots) = \frac{1}{N}\prod_i P(x_i|\lambda),
\end{displaymath}
where $N$ is a normalization factor. Again, this is just the product
of the separate likelihood functions. The result is completely
general and applies to any probability distribution, not just a
Poisson distribution. We will use this result in discussing
$\chi^2$ fits to data as a maximum likelihood search.
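As a preview of that connection, here is a small sketch with purely illustrative numbers. For independent Gaussian measurements $x_i$ with errors $\sigma_i$, the log of the product likelihood is, up to a constant, $-\chi^2/2$, so maximizing the likelihood is the same as minimizing $\chi^2$.

```python
# Hypothetical independent Gaussian measurements x_i, with errors sigma_i,
# of a single quantity mu (values invented for illustration).
xs     = [9.8, 10.3, 10.1]
sigmas = [0.2, 0.3, 0.2]

def chi_squared(mu):
    # chi^2 = sum_i ((x_i - mu) / sigma_i)^2; the product of the Gaussian
    # likelihoods is proportional to exp(-chi^2 / 2).
    return sum(((x - mu) / s) ** 2 for x, s in zip(xs, sigmas))

# For this one-parameter model the error-weighted average minimizes chi^2.
weights = [1.0 / s ** 2 for s in sigmas]
mu_hat = sum(w * x for w, x in zip(weights, xs)) / sum(weights)
```

Minimizing $\chi^2$ here reproduces the familiar weighted average, just as the two-count Poisson example reproduced the plain average.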