In article <[EMAIL PROTECTED]>,
Ken Reed  <[EMAIL PROTECTED]> wrote:
>It's not really possible to explain this in lay person's terms. The
>difference between principal components analysis (PCA) and common factor
>analysis is roughly that PCA uses raw scores, whereas factor analysis uses
>scores predicted from the other variables and does not include the
>residuals. That's as close to lay terms as I can get.

>I have never heard a simple explanation of maximum likelihood estimation,
>but -- MLE compares the observed covariance matrix with a covariance
>matrix predicted by probability theory and uses that information to
>estimate factor loadings etc. that would 'fit' a multivariate normal
>distribution.

>MLE factor analysis is commonly used in structural equation modelling, hence
>Tracey Continelli's conflation of it with SEM. This is not correct though.

>I'd love to hear simple explanation of MLE!

MLE is triviality itself, if you do not make any attempt to
state HOW it is to be carried out.

For each possible value X of the observation, and each state
of nature \theta, there is a probability (or density with 
respect to some base measure) P(X | \theta).  There is no
assumption that X is a single real number; it can be anything;
the same holds for \theta.

What MLE does is to choose the \theta which makes P(X | \theta)
as large as possible.  That is all there is to it.
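Under that definition, a toy sketch: let X be 7 heads in 10 Bernoulli flips, so P(X | \theta) is the binomial likelihood, and choose \theta by brute-force grid search. The grid size and helper names are just for illustration:

```python
import math

def log_likelihood(theta, heads, flips):
    # log P(X | theta) for X = `heads` out of `flips` Bernoulli(theta) trials
    # (the binomial coefficient is omitted: it does not depend on theta)
    return heads * math.log(theta) + (flips - heads) * math.log(1 - theta)

def mle(heads, flips, grid_size=10000):
    # choose the theta that makes P(X | theta) as large as possible
    candidates = (i / grid_size for i in range(1, grid_size))
    return max(candidates, key=lambda t: log_likelihood(t, heads, flips))

print(round(mle(heads=7, flips=10), 3))  # 0.7, the closed-form answer 7/10
```

In this simple case calculus gives the answer directly (heads/flips), but the grid search shows the definition itself: try each \theta, keep the one with the largest likelihood.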

-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558


=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================
