***  For details on how to be removed from this list visit the  ***
***          CCP4 home page http://www.ccp4.ac.uk         ***


 
The statement "least-squares is a special case of maximum likelihood" is
not an accurate statement of the facts, because it implies that LS is
only applicable to a subset of the problems in which ML is applicable,
the implication being that LS assumes the observational errors are
normally distributed and is only applicable in that special case.  LS &
ML are different methods of parameter estimation which indeed give
identical results in the particular case of normally distributed errors.
However LS is still applicable in the non-normal cases; you just get
different results from what ML would give.
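To make that concrete, here is a minimal Python sketch (the data values are made up for illustration) of estimating a single location parameter: the LS estimate is the sample mean, and a brute-force grid search over a Gaussian log-likelihood lands on the same value, since the mu-dependent part of the log-likelihood is just the negated sum of squares.

```python
# Toy sketch (made-up numbers): estimate a location parameter mu.
import math

data = [4.1, 3.9, 4.3, 3.8, 4.0, 5.0]  # hypothetical observations

# Least squares: minimize sum((x - mu)^2).  Setting the derivative to
# zero shows the minimizer is simply the sample mean.
ls_estimate = sum(data) / len(data)

# Gaussian ML: log L(mu) is a sum of log normal densities.  The only
# mu-dependent term is -sum((x - mu)^2)/(2 sigma^2), so maximizing it
# is the same problem as the LS minimization above.
def gauss_loglik(mu, xs, sigma=1.0):
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

# Crude grid search stands in for a proper optimizer.
grid = [i / 1000 for i in range(3000, 6000)]
ml_estimate = max(grid, key=lambda mu: gauss_loglik(mu, data))

print(ls_estimate, ml_estimate)  # agree to within the grid spacing
```

Note that sigma scales the log-likelihood but does not move its maximum, which is why the two estimates coincide whatever the (constant) error variance.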

The ML results are by definition the most likely (i.e. most consistent
with the data), but the method is restricted to cases where the
algebraic form of the error distribution is known (fortunately, in
crystallography we usually do know it reasonably accurately!).  LS has
no such restriction: far from assuming that the error distribution is
normal, it makes no assumptions whatever about the error distribution,
and hence it will give a result whatever the actual distribution may be.
So LS is a fallback for ML in the many practical cases where the error
distribution is not known sufficiently accurately.
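The same toy setup (again with made-up numbers) illustrates the divergence in a non-normal case: if the errors follow a double-exponential (Laplace) distribution, maximizing the likelihood amounts to minimizing the sum of absolute residuals, whose solution is the sample median, while LS, knowing nothing of the error distribution, still returns the mean.

```python
# Toy sketch (made-up numbers): same location problem, but now suppose
# the errors are double-exponential (Laplace) rather than normal.
import math

data = [3.8, 3.9, 4.0, 4.1, 9.0]  # hypothetical observations, one wild point

# LS makes no distributional assumption: it still returns the mean.
ls_estimate = sum(data) / len(data)

# Laplace ML: log L(mu) = -n*log(2b) - sum(|x - mu|)/b, so maximizing
# it means minimizing sum(|x - mu|), whose solution is the MEDIAN.
def laplace_loglik(mu, xs, b=1.0):
    return -len(xs) * math.log(2 * b) - sum(abs(x - mu) for x in xs) / b

grid = [i / 1000 for i in range(3000, 10000)]
ml_estimate = max(grid, key=lambda mu: laplace_loglik(mu, data))

print(ls_estimate, ml_estimate)  # mean vs median: genuinely different
```

Both are perfectly well-defined estimates; they simply answer the question under different assumptions, which is the sense in which LS and ML are different methods rather than one being a special case of the other.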

For further info, see e.g. this: 

http://en.wikipedia.org/wiki/Gauss-Markov_theorem

-- Ian

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On 
> Behalf Of Peter Adrian Meyer
> Sent: 18 August 2005 14:11
> To: [email protected]
> Subject: [ccp4bb]: maximum likelihood question
> 
> Hi all,
> 
> Anthony Duff wrote (regarding R-fac and R-free from mapfiles?)
> > 2.  CNS does a worse job of refining a structure in the late stages,
> > even accounting for differences in default restraint weights.  (I
> > don't know why this would be so, with both using maximum
> > likelihood... maybe the CNS algorithms are inferior?)
> 
> This reminded me of a question I've been wondering about for a bit:
> does maximum likelihood refer to a scoring function (generating
> gradients to optimize during refinement), or to both a scoring
> function and a refinement method?  As far as I understand, it's the
> first (based on what I've seen of poking around in the internals of
> programs that do ML refinement vs other types of refinement).  But
> least-squares is a special case of maximum likelihood, and
> least-squares (again as far as I know) is both a scoring function and
> a refinement method.
> 
> Could somebody more knowledgeable about maximum likelihood clear this
> up?
> 
> Thanks,
> 
> Pete
> 
> 
> Pete Meyer
> Fu Lab
> BMCB grad student
> Cornell University
> 
