Ken Reed wrote:
> 
> It's not really possible to explain this in lay person's terms. The
> difference between principal factor analysis and common factor analysis is
> roughly that PCA uses raw scores, whereas factor analysis uses scores
> predicted from the other variables and does not include the residuals.
> That's as close to lay terms as I can get.


If you have a correlation matrix of

             item 1  item 2
    item 1 |  1       0.8  |  =  R
    item 2 |  0.8     1    |

then you can factor it into a loading matrix L, so that L*L' = R.
There are infinitely many options for L; in particular, infinitely
many of them differ only in rotational position (pairs of columns
rotated). The list below shows 5 examples: the two extreme solutions
and 3 rotational positions between them.
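A quick numerical check of the factoring (a minimal sketch in Python with numpy, which I'll assume here; configuration 1 refers to the first row-pair in the list below):

```python
import numpy as np

R = np.array([[1.0, 0.8],
              [0.8, 1.0]])

# Configuration 1 from the list below: item 1 loads only on factor 1.
L = np.array([[1.0, 0.0],
              [0.8, 0.6]])

# L @ L.T reproduces the correlation matrix.
print(np.allclose(L @ L.T, R))  # True

# Any orthogonal rotation G yields another valid loading matrix,
# since (L @ G) @ (L @ G).T = L @ G @ G.T @ L.T = L @ L.T = R.
theta = 0.3
G = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
L2 = L @ G
print(np.allclose(L2 @ L2.T, R))  # True
```

This is why the rotational position of L is not determined by R alone.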

============================================================================
        PCA                             CFA
        factor   factor                 factor  factor  factor
          1        2                       1      2       3
      ----------------------------------------------------------   
1)
Item 1  1.000    0                      1.000   0       0
Item 2  0.800    0.600                  0.800   0       0.600

2)
Item 1  0.987   -0.160                  0.949   0.316   0
Item 2  0.886    0.464                  0.843   0       0.537

3)------------------------------------------------------------------+
Item 1  0.949   -0.316                  0.894   0.447   0           |
Item 2  0.949    0.316                  0.894   0       0.447       |
--------------------------------------------------------------------+

4)
Item 1  0.886    0.464                  0.843   0.537   0
Item 2  0.987   -0.160                  0.949   0       0.316

5)
Item 1  0.800    0.600                  0.800   0.600   0
Item 2  1.000    0                      1.000   0       0
===========================================================================

PCA:
The left list shows how a components analyst would attack the problem: 
two factors; *principal* components analysis starts from configu-
ration 3, where the sum of the squared loadings is at a maximum.
The reduced number of factors which a PCA analyst selects for 
further work is determined by criteria like "keep all factors whose
sum of squared loadings exceeds 1" (equivalently: eigenvalue > 1),
the scree test, or something like that.
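The principal-axes position (configuration 3) can be obtained directly from the eigendecomposition of R; a sketch, again assuming numpy:

```python
import numpy as np

R = np.array([[1.0, 0.8],
              [0.8, 1.0]])

# PCA loadings = eigenvectors scaled by the square roots of their
# eigenvalues; sort the eigenpairs in descending order.
vals, vecs = np.linalg.eigh(R)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]

loadings = vecs * np.sqrt(vals)
print(np.round(np.abs(loadings), 3))
# columns match the |0.949| and |0.316| of configuration 3

# The "eigenvalue > 1" criterion keeps only the first factor here:
# the eigenvalues are 1.8 and 0.2.
```

Note that the sum of squared loadings in each column equals the corresponding eigenvalue, which is why the two selection criteria in the text are equivalent.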



CFA:
The right list shows how a *common* factor analyst would attack the
problem: one common factor, plus an item-specific factor for each item
(capturing measurement error etc.). Again there are infinitely many
options for positioning the factors. One might assume that example 3
is again taken as the default; but this is not the case, since the
algorithms that identify item-specific variance are iterative and not
restricted to a particular outcome (only the starting position is
given). The only restriction is that there must be an item-specific
factor *for each item*. (In the two-item case, however, the CFA
iteration converges to position 3.)
The item-specific factors are not used in further analysis, only the
common one(s). Hence the name.
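The iteration can be sketched as principal-axis factoring (one of several extraction methods; a minimal, hypothetical sketch in numpy, with squared multiple correlations as starting communalities):

```python
import numpy as np

R = np.array([[1.0, 0.8],
              [0.8, 1.0]])

# Principal-axis iteration: put the current communality estimates on
# the diagonal, extract the first principal axis of the reduced
# matrix, and update the communalities from the loadings.
h = np.array([0.64, 0.64])    # start: squared multiple correlations
for _ in range(50):
    Rr = R.copy()
    np.fill_diagonal(Rr, h)
    vals, vecs = np.linalg.eigh(Rr)
    load = vecs[:, -1] * np.sqrt(vals[-1])   # one common factor
    h = load ** 2                            # updated communalities

print(np.round(np.abs(load), 3))    # common loadings, ~0.894 each
print(np.round(np.sqrt(1 - h), 3))  # item-specific loadings, ~0.447
```

For this two-item matrix the iteration indeed converges to position 3 of the right-hand list: communality 0.8 per item, uniqueness 0.2.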




--------




> 
> I have never heard a simple explanation of maximum likelihood estimation,
> but --  MLE compares the observed covariance matrix with a  covariance
> matrix predicted by probability theory and uses that information to estimate
> factor loadings etc that would 'fit' a normal (multivariate) distribution.

It estimates the population covariance matrix in such a way that your
empirical matrix is the most likely one if random samples were
repeatedly drawn from that population.
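This can be illustrated with the multivariate-normal log-likelihood, which (up to a constant) is -n/2 * (log det Sigma + tr(Sigma^-1 S)) for a candidate population matrix Sigma and observed matrix S; it peaks when Sigma equals S. A sketch with a hypothetical sample size:

```python
import numpy as np

S = np.array([[1.0, 0.8],
              [0.8, 1.0]])   # observed (empirical) covariance matrix
n = 100                      # hypothetical sample size

def loglik(Sigma):
    """Normal log-likelihood of the sample, up to an additive constant."""
    return -0.5 * n * (np.log(np.linalg.det(Sigma))
                       + np.trace(np.linalg.solve(Sigma, S)))

# The likelihood is maximized when the candidate population matrix
# equals the observed one:
candidate = np.array([[1.0, 0.5],
                      [0.5, 1.0]])
print(loglik(S) > loglik(candidate))  # True
```

ML factor analysis restricts Sigma to the form L*L' + U (loadings plus item-specific variances) and searches for the L that maximizes this likelihood.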


Gottfried Helms


=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================
