I think many people would disagree, arguing that LS does represent a
choice of error distribution when one is not otherwise known.  In fact, LS
makes several assumptions about the errors (they are independent, have the
same variance, have zero expectation, and so on; see the Wikipedia page
from the original message).  Just because we do not actively choose an
error distribution does not mean that one is not chosen.  When we use LS
and claim a "best fit" to the data, we are assuming that the errors are
normal.  Like it or not, you never escape an error distribution when you
fit data.  Even if it isn't stated formally, it is there.
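
To make the point concrete, here is a minimal sketch (my own illustration,
not from any refinement program; it assumes numpy and scipy are available)
showing that minimizing a sum of squared residuals is the same thing as
maximizing a Gaussian likelihood:

    # Fit a constant to noisy data: LS vs. Gaussian ML (hypothetical example)
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)
    data = 5.0 + rng.normal(scale=2.0, size=100)  # true value 5, normal errors

    # Least squares: minimize the sum of squared residuals
    ls = minimize_scalar(lambda m: np.sum((data - m) ** 2)).x

    # Gaussian ML: minimize the negative log-likelihood with sigma held
    # fixed; up to a constant factor this is the same function as above
    ml = minimize_scalar(lambda m: 0.5 * np.sum((data - m) ** 2) / 2.0**2).x

    print(ls, ml)  # both equal the sample mean

The two objectives differ only by a constant factor, so they have the same
minimizer; assuming normal errors is exactly what makes the LS fit "best".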

What bothers me most about the claim that LS and ML are separate is that
it clouds the reason why one does better with "ML" methods: one has
treated the error distribution more correctly.  If you claim that LS says
nothing about error, that connection is not so clear.
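
As a hypothetical illustration of that point: if the errors actually
follow a Laplace (double-exponential) distribution, the ML estimator of a
location parameter is the median rather than the LS mean, and it is
visibly more precise (numpy assumed available):

    # ML under the correct (Laplace) error model vs. LS (hypothetical example)
    import numpy as np

    rng = np.random.default_rng(1)
    samples = 5.0 + rng.laplace(scale=2.0, size=(10000, 25))  # Laplace errors

    ls_est = samples.mean(axis=1)        # least-squares estimate (the mean)
    ml_est = np.median(samples, axis=1)  # ML estimate under the Laplace model

    print(ls_est.var(), ml_est.var())  # ML scatters roughly half as much

"ML" wins here precisely because it treats the error distribution
correctly, not because it is a different kind of fitting.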

Furthermore, what's wrong with making an assumption?  Rather than deny
what's really going on, we can simply say, "In lieu of better knowledge,
we assume."

Marcus Collins

*****************************************************************************
                              Marcus D. Collins
     Gruner Biophysics Group, Cornell University Dept. of Physics, LASSP
             (h) 607.347.4720 (w) 607.255.8678 (c) 607.351.8650
           "You have opened a new door, and I share this with you,
                    for I have been where you are now."
*****************************************************************************

On Mon, 22 Aug 2005, Ian Tickle wrote:

> The statement "least-squares is a special case of maximum likelihood" is
> not an accurate statement of the facts, because it implies that LS is
> only applicable to a subset of the problems in which ML is applicable,
> the implication being that LS assumes that the observational errors are
> normally distributed, and is only applicable in that special case.  LS &
> ML are different methods of parameter estimation which indeed give
> identical results in the particular case of normally distributed errors.
> However, LS is still applicable in the non-normal cases; you just get
> different results from what ML would give.
>
> The ML results are by definition the most likely (i.e. most consistent
> with the data), but the method is restricted to cases where the
> algebraic form of the error distribution is known (fortunately in
> crystallography we usually do know it reasonably accurately!).  LS has
> no such restriction because, far from assuming that the error
> distribution is normal, it makes no assumptions whatever concerning the
> error distribution; hence it will give a result whatever the actual
> distribution.  So LS is a fallback to ML in the many practical cases
> where the error distribution is not known sufficiently accurately.
>
> For further info, see e.g. this:
>
> http://en.wikipedia.org/wiki/Gauss-Markov_theorem
>
> -- Ian
>
> > -----Original Message-----
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
> > Behalf Of Peter Adrian Meyer
> > Sent: 18 August 2005 14:11
> > To: [email protected]
> > Subject: [ccp4bb]: maximum likelihood question
> >
> > Hi all,
> >
> > Anthony Duff wrote (regarding R-fac and R-free from mapfiles?)
> > > 2.  CNS does a worse job of refining a structure in the late stages,
> > > even accounting for differences in default restraint weights.  (I
> > > don't know why this would be so, with both using maximum
> > > likelihood... maybe the CNS algorithms are inferior?)
> >
> > This reminded me of a question I've been wondering about for a bit:
> > does maximum likelihood refer to a scoring function (something that
> > generates gradients to optimize during refinement), or to both a
> > scoring function and a refinement method?  As far as I understand,
> > it's the first (based on what I've seen poking around in the internals
> > of programs that do ML refinement vs. other types of refinement).  But
> > least-squares is a special case of maximum likelihood, and
> > least-squares (again, as far as I know) is both a scoring function and
> > a refinement method.
> >
> > Could somebody more knowledgeable about maximum likelihood clear this
> > up?
> >
> > Thanks,
> >
> > Pete
> >
> >
> > Pete Meyer
> > Fu Lab
> > BMCB grad student
> > Cornell University
> >
>