*** For details on how to be removed from this list visit the ***
***           CCP4 home page http://www.ccp4.ac.uk           ***
>> I think many people would disagree, arguing that LS does represent a
>> choice of error distribution when it is not otherwise known. In fact,
>> LS makes several assumptions about error (errors are independent, have
>> the same variance, expectation is zero...; see the Wikipedia page from
>> the original message). Just because we do not actively choose an error
>> distribution does not mean that one is not chosen. When we use LS, and
>> claim a "best fit" of the data, we are making the assumption that the
>> errors are normal.
>
> This is incorrect. The statistical justification for LS (the Gauss-Markov
> theorem) assumes nothing about the form of the error distribution aside
> from (1) zero expectation, (2) noncorrelation (*not* independence), and
> (3) equal variance. In fact, weighted least squares can correct exactly
> for violations of (2) and (3), so WLS only assumes (1). The error
> distribution can certainly be non-normal and the optimal properties
> guaranteed by the G-M theorem will still hold.

Yes, but given an expectation value and a variance, Maximum Entropy methods
say that the only unbiased error distribution is Gaussian. So, although it
is never explicitly stated in least squares, the fact that you consider
only an expectation and a variance (and the lack of correlation) in a
sense already nails down the error model (if you want to remain unbiased,
that is). If the errors are non-normal you can still apply the method and
get good results, but I am not sure those results are optimal: they do not
have the highest likelihood and are certainly not the most probable.

Richard
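The Gauss-Markov point above can be checked with a small simulation (my own
sketch, not from the thread): fit a line by ordinary least squares to data
whose errors are Laplace-distributed (zero mean, equal variance, uncorrelated,
but clearly non-normal), and the slope estimate is still unbiased. The true
parameter values and the Laplace choice here are arbitrary illustrations.

```python
# Sketch: OLS stays unbiased under non-normal errors, since the
# Gauss-Markov theorem only needs zero-mean, uncorrelated,
# equal-variance errors -- not normality.
import numpy as np

rng = np.random.default_rng(0)
true_intercept, true_slope = 1.0, 2.0
x = np.linspace(0.0, 10.0, 50)
X = np.column_stack([np.ones_like(x), x])  # design matrix [1, x]

slopes = []
for _ in range(2000):
    # Laplace errors: zero mean, finite variance, fat-tailed (non-normal).
    errors = rng.laplace(loc=0.0, scale=1.0, size=x.size)
    y = true_intercept + true_slope * x + errors
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    slopes.append(beta[1])

# The average of the LS slope estimates is very close to the true slope,
# even though the errors were never Gaussian.
print(np.mean(slopes))
```

Whether those estimates are *optimal* is a separate question, as Richard
says: among linear unbiased estimators they are (Gauss-Markov), but under
Laplace errors the maximum-likelihood fit would minimise absolute deviations
rather than squares.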
