LS itself does not assume a Normal error distribution; I guess we all
agree on that. And the Bayesian viewpoint doesn't actually make any
assumptions about the error distribution either, but it does make strong
statements about the current state of knowledge, which is a probability
distribution. The way you summarise data or knowledge should allow anyone
(the concept of an agent or robot is often introduced here to guarantee
objectivity) to reach the same conclusion. The true B-factors should not
be negative, but that is domain-specific knowledge which is lost if only
the first two moments are provided. Any unbiased robot would have to
state its state of knowledge about the B-factor distribution as being
Normal if only the mean and variance were provided, since the Normal is
the maximum-entropy distribution under those constraints.
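
As a small numerical illustration of that maximum-entropy point (my own
sketch, not part of the original argument; the comparison distributions
are just arbitrary choices): among distributions constrained to the same
mean and variance, the Normal has the largest differential entropy.

import numpy as np
from scipy import stats

# Three distributions with identical mean (0) and variance (sigma**2);
# the Normal should come out with the largest differential entropy.
sigma = 1.0
candidates = {
    "normal":  stats.norm(loc=0.0, scale=sigma),
    "laplace": stats.laplace(loc=0.0, scale=sigma / np.sqrt(2)),  # var = 2*b**2
    "uniform": stats.uniform(loc=-sigma * np.sqrt(3),
                             scale=2 * sigma * np.sqrt(3)),       # var = w**2/12
}
for name, dist in candidates.items():
    print(f"{name:8s} mean={float(dist.mean()):+.3f}  "
          f"var={float(dist.var()):.3f}  entropy={float(dist.entropy()):.4f}")

Running this, the Normal wins (about 1.419 nats against 1.347 for the
Laplace and 1.242 for the uniform).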

If we assume - and there are good reasons for this - that the Bayesian
approach a la Jaynes (there are many inconsistent flavours that confuse
the Bayesian inference approach with traditional statistical concepts and
get caught in contradictions from continuous limiting theorems) provides
an optimal method for testing hypotheses (optimal in the sense of not
drawing conclusions that are not warranted by the data and/or the
specified domain prior knowledge), then one can analyse traditional
methods in this framework and see for which cases the results coincide.
When the results are the same, I would say that the conditions that led
to this equivalence reveal the hidden assumptions for the optimal use of
the method (I'm afraid I can't provide any textbooks or webpages to back
this up - it just seems right).

In this case, I guess I'm asking for trouble as the terminology is so
different. Representing a 'state of knowledge' doesn't make much sense in
traditional statistics. Maximum Likelihood is sort of half way between
traditional and Bayesian and can be seen as a limiting case of
uninformative (not necessarily uniform) priors; however, its normal use
is closer to traditional statistics and would indeed aim at modelling an
error distribution. Within this framework LS gives the same results as
MaxLik for normal errors (the sketch below makes this concrete). All this
says is that the results coincide for normal errors (you hardly ever
actually know the real distribution of errors). However, from the
Bayesian point of view, an unbiased state of knowledge would be the
Normal distribution (which says nothing about the errors other than that
you have no reason to assume they are not Normal). Now if LS gives the
same results only under the condition of Normal errors or a Normal state
of knowledge, then that is the case in which LS is the method of choice.
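
Here is the sketch I mentioned (entirely my own toy example, with
made-up data): for a straight-line model with i.i.d. Gaussian errors of
known sigma, the negative log-likelihood is a constant plus
sum(residuals**2)/(2*sigma**2), so minimising it numerically lands on
the same slope and intercept as the closed-form least-squares solution.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)  # Gaussian errors

# Least squares: closed-form solution of the normal equations.
A = np.vstack([x, np.ones_like(x)]).T
ls_fit, *_ = np.linalg.lstsq(A, y, rcond=None)

# Maximum likelihood for i.i.d. Gaussian errors: minimise -log L, which
# is 0.5 * sum(r**2) / sigma**2 plus a parameter-independent constant.
def neg_log_like(theta, sigma=0.5):
    slope, intercept = theta
    r = y - (slope * x + intercept)
    return 0.5 * np.sum(r**2) / sigma**2

ml_fit = minimize(neg_log_like, x0=[0.0, 0.0]).x

print("LS    :", ls_fit)  # slope, intercept
print("MaxLik:", ml_fit)  # agrees up to optimiser tolerance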

Maybe. Maybe not ;o)

Richard

PS
Talking about bias is also... well, biased. Maximum likelihood estimates
are often biased, but for normally distributed errors they are superior
to unbiased estimates in the mean-square-error sense.
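
The textbook example (again just an illustrative sketch of mine): for
Normal data the MaxLik variance estimator divides by n and is biased
low, yet its mean square error is smaller than that of the unbiased
divide-by-(n-1) estimator.

import numpy as np

rng = np.random.default_rng(1)
true_var = 4.0
n, trials = 10, 200_000

samples = rng.normal(scale=np.sqrt(true_var), size=(trials, n))
mle = samples.var(axis=1, ddof=0)       # divide by n   (MaxLik, biased)
unbiased = samples.var(axis=1, ddof=1)  # divide by n-1 (unbiased)

for name, est in (("MaxLik (1/n)", mle), ("unbiased (1/(n-1))", unbiased)):
    bias = est.mean() - true_var
    mse = np.mean((est - true_var) ** 2)
    print(f"{name:20s} bias={bias:+.3f}  MSE={mse:.3f}")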
