Re: Factor Analysis

2001-06-15 Thread Tracey Continelli

Hi there,

would someone please explain in lay person's terms the difference
between principal components, common factors, and maximum likelihood
estimation procedures for factor analysis?

Should I expect my factors obtained through maximum likelihood
estimation to be highly correlated?  Why?  When should I use a maximum
likelihood estimation procedure, and when should I not use it?

Thanks.

Rita

[EMAIL PROTECTED]


Unlike the other methods, maximum likelihood allows you to estimate
the entire structural model *simultaneously* [i.e., the effects of
every independent variable upon every dependent variable in your
model].  Most other methods only permit you to estimate the model in
pieces, i.e., as a series of regressions in which you regress each
dependent variable upon every independent variable that has an arrow
pointing directly to it.  Moreover, maximum likelihood provides an
actual statistical test of significance, unlike many other methods,
which offer only generally accepted cut-off points rather than a test
of statistical significance.  There are very few cases in which I
would use anything except a maximum likelihood approach.  You can use
it in LISREL or, if you use SPSS, in the add-on module AMOS.
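
To make the significance-test point concrete, here is a minimal sketch
of maximum likelihood factor analysis with its chi-square test of exact
fit. It assumes Python with NumPy/SciPy rather than LISREL or AMOS, and
the single-factor data are simulated; everything here is illustrative,
not from the original post:

import numpy as np
from scipy import optimize, stats

# Simulated single-factor data (illustrative only): p = 6 items, q = 1 factor.
rng = np.random.default_rng(0)
n, p, q = 500, 6, 1
lam_true = rng.uniform(0.5, 0.9, size=(p, q))
X = rng.standard_normal((n, q)) @ lam_true.T + 0.6 * rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)

def f_ml(params):
    # ML discrepancy: log|Sigma| + tr(S Sigma^-1) - log|S| - p,
    # with Sigma = Lambda Lambda' + Psi (Psi diagonal, kept positive).
    lam = params[:p * q].reshape(p, q)
    psi = np.exp(params[p * q:])
    sigma = lam @ lam.T + np.diag(psi)
    return (np.linalg.slogdet(sigma)[1]
            + np.trace(S @ np.linalg.inv(sigma))
            - np.linalg.slogdet(S)[1] - p)

res = optimize.minimize(f_ml, np.concatenate([np.full(p * q, 0.5), np.zeros(p)]),
                        method="L-BFGS-B")
chi2 = (n - 1) * res.fun                 # likelihood-ratio test of exact fit
df = ((p - q) ** 2 - (p + q)) // 2       # = 9 here
print(f"chi2 = {chi2:.2f}, df = {df}, p = {stats.chi2.sf(chi2, df):.3f}")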


Tracey





Re: Factor Analysis

2001-06-16 Thread Alexandre Moura

Dear Haytham,

Another issue concerning the measurement of a latent construct is
unidimensionality.  Hair et al. (1998): "Unidimensionality is an
assumption underlying the calculation of reliability and is
demonstrated when indicators of a construct have acceptable fit on a
single-factor (one-dimensional) model. (...) The use of reliability
measures, such as Cronbach's alpha, does not ensure unidimensionality
but instead assumes it exists. The researcher is encouraged to perform
unidimensionality tests on all multiple-indicator constructs before
assessing their reliability."
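
To see why alpha alone cannot establish unidimensionality, here is a
small sketch (assuming Python with NumPy; the data are simulated, not
from any of the cited sources): six items built from two *independent*
factors still reach an alpha above the conventional 0.70 cut-off.

import numpy as np

def cronbach_alpha(X):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(1)
n, loading = 2000, 0.9
f = rng.standard_normal((n, 2))          # two INDEPENDENT latent factors
e = np.sqrt(1 - loading ** 2)
X = np.hstack([loading * f[:, [0]] + e * rng.standard_normal((n, 3)),
               loading * f[:, [1]] + e * rng.standard_normal((n, 3))])
print(round(cronbach_alpha(X), 2))       # about 0.74, yet clearly two-dimensional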

This reference is very important:

Gerbing, David W., & Anderson, James C. An updated paradigm for scale
development incorporating unidimensionality and its assessment.

Best regards,

Alexandre Moura.
P.S. Please accept my apologies for my English mistakes.



- Original Message -
From: "haytham siala" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, June 15, 2001 5:40 PM
Subject: Factor Analysis


> Hi,
> I would appreciate it if someone could help me with this question: if
> factors extracted from a factor analysis are found to be reliable
> (using an internal consistency test like Cronbach's alpha), can they
> be used to represent a measure of the latent construct? If so, are
> there any references or books that justify this technique?







Re: Factor Analysis

2001-06-16 Thread Alexandre Moura

The complete reference:

Gerbing, David W., & Anderson, James C. An updated paradigm for scale
development incorporating unidimensionality and its assessment. Journal
of Marketing Research, Vol. XXV (May 1988).

Alexandre Moura.







Re: Factor Analysis

2001-06-17 Thread Ken Reed

It's not really possible to explain this in lay person's terms. The
difference between principal components analysis and common factor
analysis is roughly that PCA uses the raw scores, whereas factor
analysis uses scores predicted from the other variables and does not
include the residuals. That's as close to lay terms as I can get.
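
A quick way to see that distinction in practice (a sketch assuming
Python with scikit-learn, which none of the posters used; the data and
the one-factor choice are made up):

import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

# Made-up data: one common factor behind four observed variables.
rng = np.random.default_rng(0)
f = rng.standard_normal((300, 1))
X = f @ np.array([[0.9, 0.8, 0.7, 0.6]]) + 0.5 * rng.standard_normal((300, 4))

pca = PCA(n_components=1).fit(X)            # models total variance, residuals in
fa = FactorAnalysis(n_components=1).fit(X)  # models shared variance only

print(pca.components_)      # direction of maximum total variance
print(fa.components_)       # estimated common-factor loadings
print(fa.noise_variance_)   # item-specific residual variances, set aside by FA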

I have never heard a simple explanation of maximum likelihood
estimation, but -- MLE compares the observed covariance matrix with
the covariance matrix implied by the model and uses that information
to estimate factor loadings etc. that would 'fit' a (multivariate)
normal distribution.

MLE factor analysis is commonly used in structural equation modelling,
hence Tracey Continelli's conflation of it with SEM. This is not
correct, though.

I'd love to hear a simple explanation of MLE!









Re: Factor Analysis

2001-06-15 Thread Timothy W. Victor

_Psychometric Theory_, by Jum Nunnally, to name one.

haytham siala wrote:
> 
> Hi,
> I would appreciate it if someone could help me with this question: if
> factors extracted from a factor analysis are found to be reliable
> (using an internal consistency test like Cronbach's alpha), can they
> be used to represent a measure of the latent construct? If so, are
> there any references or books that justify this technique?

-- 
Timothy Victor
[EMAIL PROTECTED]
Policy Research, Evaluation, and Measurement
Graduate School of Education
University of Pennsylvania





Re: Factor Analysis

2001-06-15 Thread Alexandre Moura

Dear Haytham,

you should assess construct validity. Internal consistency via
Cronbach's alpha is the first step. Next, you should verify nomological
and discriminant validity through confirmatory factor analysis.

Please read these articles; they are important references concerning
construct validity.

1. Bagozzi, Richard P., Yi, Youjae, & Phillips, Lynn. Assessing
construct validity in organizational research. Administrative Science
Quarterly, 36 (1991): 421-458.

2. Churchill Jr., Gilbert A. Marketing research: Methodological
foundations. 6th ed. Dryden Press, 1995.

Best regards and good luck in your work.

Alexandre Moura.
P.S. Please accept my apologies for my English mistakes.







Re: Factor Analysis

2001-06-18 Thread Gottfried Helms

Ken Reed wrote:
> 
> It's not really possible to explain this in lay person's terms. The
> difference between principal components analysis and common factor
> analysis is roughly that PCA uses the raw scores, whereas factor
> analysis uses scores predicted from the other variables and does not
> include the residuals. That's as close to lay terms as I can get.


If you have a correlation matrix R of

                item 1   item 2
    item 1       1.0      0.8      =  R
    item 2       0.8      1.0

then you can factor it into a loadings matrix L, so that L*L' = R.
There is an infinite number of options for L; in particular, infinitely
many solutions can be found that differ only in rotational position
(pairs of columns rotated).  The list below shows five examples: the
two extreme solutions and three rotational positions in between.

                     PCA                      CFA
               factor   factor      factor   factor   factor
                  1        2           1        2        3
   ----------------------------------------------------------
1) Item 1       1.000    0           1.000    0        0
   Item 2       0.800    0.600       0.800    0        0.600

2) Item 1       0.987   -0.160       0.949    0.316    0
   Item 2       0.886    0.464       0.843    0        0.537

3) Item 1       0.949   -0.316       0.894    0.447    0      <--+
   Item 2       0.949    0.316       0.894    0        0.447  <--+

4) Item 1       0.886    0.464       0.843    0.537    0
   Item 2       0.987   -0.160       0.949    0        0.316

5) Item 1       0.800    0.600       0.800    0.600    0
   Item 2       1.000    0           1.000    0        0
   ==========================================================

PCA:
The left list shows how a components analyst would attack the problem:
two factors; *principal* components analysis starts from configuration
3 (marked above), where the sum of the squared entries is at a maximum.
The reduced number of factors which a PCA analyst will select for
further work is determined by criteria like "use all factors with a sum
of squared loadings > 1" (equivalently, eigenvalues > 1), "apply the
scree test", or something like that.

CFA:
The right list shows how a *common* factor analyst would attack the
problem: one common factor, plus an item-specific factor for each item
(capturing measurement error etc.).  There are again infinitely many
options for positioning the factors.  One might assume that example 3
is again taken as the default; but this is not the case, as the
algorithms that identify item-specific variance are iterative and not
restricted to a special outcome (only the start position is given).
The only restriction is that *for each item* there must be an
item-specific factor.  (In a two-item case, however, the CFA iteration
converges to position 3.)
The item-specific factors are not used in further analysis, only the
common one(s).  Hence the name.
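
A two-line check of the L*L' = R identity for the loadings listed above
(a sketch assuming Python with NumPy, not part of the original post):

import numpy as np

R = np.array([[1.0, 0.8],
              [0.8, 1.0]])

# Two of the loading matrices listed above (PCA side, positions 1 and 3).
L1 = np.array([[1.000,  0.000],
               [0.800,  0.600]])
L3 = np.array([[0.949, -0.316],
               [0.949,  0.316]])

# Both reproduce R up to rounding: rotation changes L, but not L @ L.T.
print(np.round(L1 @ L1.T, 2))
print(np.round(L3 @ L3.T, 2))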









> 
> I have never heard a simple explanation of maximum likelihood
> estimation, but -- MLE compares the observed covariance matrix with
> the covariance matrix implied by the model and uses that information
> to estimate factor loadings etc. that would 'fit' a (multivariate)
> normal distribution.

It estimates the population covariance matrix in such a way that your
empirical matrix is the most likely one to occur if random samples
were drawn.


Gottfried Helms





Re: factor analysis

2001-06-16 Thread Chong Yu

On consistency and unidimensionality, these articles explain it well:



Gardner, P. L. (1995). Measuring attitudes to science: Unidimensionality and internal consistency revisited. Research in Science Education, 25, 283-9.

Gardner, P. L. (1996). The dimensionality of attitude scales: A widely misunderstood idea. International Journal of Science Education, 18, 913-9.

Yu, C. H. (2001). An introduction to computing and interpreting Cronbach coefficient alpha in SAS. Proceedings of the 26th SAS Users Group International Conference. [On-line] Available: http://seamonkey.ed.asu.edu/alex/pub/cronbach.html






Normality in Factor Analysis

2001-06-16 Thread haytham siala

Hi,

I have a question regarding factor analysis: Is normality an important
precondition for using factor analysis?

If not, are there any books that justify this?







factor analysis of dichotomous variables

2001-04-27 Thread Johannes Hartig

Hi all,
I have a (hopefully simple but not stupid) software question
related to factor analysis of dichotomous variables:
Christofferson (1975) described a GLS estimator for the
factorization of dichotomous data based on the marginal
distributions of single items and item pairs. Muthen (1978)
suggested another method based on the same information.
I just need to know: have these estimators ever been
implemented in some statistical software package?
Thanks a lot for any hint,
Johannes Hartig







Re: Normality in Factor Analysis

2001-06-16 Thread Eric Bohlman

In sci.stat.consult haytham siala <[EMAIL PROTECTED]> wrote:
> I have a question regarding factor analysis: Is normality an important
> precondition for using factor analysis?

It's necessary for testing hypotheses about factors extracted by 
Joreskog's maximum-likelihood method.  Otherwise, no.

> If not, are there any books that justify this?

Any book on factor analysis or multivariate statistics in general.






Re: Normality in Factor Analysis

2001-06-17 Thread Haytham Siala

I have checked some of the books but could not find this statement
(e.g., Using Multivariate Statistics (Tabachnick 1996), Latent Variable
Models (Loehlin 1998), An Easy Guide to Factor Analysis (Kline 1994)).

Can you please give me examples of references? I really need a
reference because I have already conducted a factor analysis on a
sample of data containing some non-normal variables.








Re: Normality in Factor Analysis

2001-06-17 Thread Herman Rubin

In article <9gg7ht$qa3$[EMAIL PROTECTED]>,
haytham siala <[EMAIL PROTECTED]> wrote:
>Hi,

>I have a question regarding factor analysis: Is normality an important
>precondition for using factor analysis?

>If not, are there any books that justify this?

Factor analysis is quite robust against non-normality.
The essential factor structure is little affected by it
at all, although the representation may get somewhat
sensitive if data-dependent normalizations are used, such
as using correlations rather than covariances, or forcing
normalization on the covariance matrix of the factors.

Some of this is in my paper with Anderson in the
Proceedings of the Third Berkeley Symposium.  The result
on the asymptotic distribution, not at all difficult to
derive, is in one of my abstracts in _Annals of
Mathematical Statistics_, 1955.  It is basically this:

Suppose the factor model is 

x = \Lambda f + s,

f the common factors and s the specific factors.  Further
suppose that f and s, and also the elements of s, are
uncorrelated, and there is adequate normalization and
smooth identification of the model by the elements of
\Lambda alone.  Now estimate \Lambda, M, the covariance
matrix of f, and S, the diagonal covariance matrix of s.
Assuming the usual assumptions for asymptotic normality of
the sample covariances of the elements of f with s, and of
the pairs of different elements of s, the asymptotic
distribution of the estimates of \Lambda and of the SAMPLE
values of M and S about their actual values will have the
expected asymptotic joint normal distribution.  This makes
no assumption about the distribution of M and S about
their expected values, which is the main place where there
is an effect of normality.



-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054   FAX: (765)494-0558





Re: Normality in Factor Analysis

2001-06-22 Thread Robert Ehrlich

Calculation of eigenvalues and eigenvectors requires no assumption.
However, evaluation of the results IMHO implicitly assumes at least a
unimodal distribution and reasonably homogeneous variance, for the same
reasons as ANOVA or regression.  So think of the consequences of
calculating means and variances of a strongly bimodal distribution
where no sample occurs near the mean and all samples are tens of
standard deviations from the mean.

> Hi,
>
> I have a question regarding factor analysis: Is normality an important
> precondition for using factor analysis?
>
> If not, are there any books that justify this?






Re: Normality in Factor Analysis

2001-06-22 Thread Herman Rubin

In article <[EMAIL PROTECTED]>,
Robert Ehrlich  <[EMAIL PROTECTED]> wrote:
>Calculation of eigenvalues and eigenvectors requires no assumption.
>However, evaluation of the results IMHO implicitly assumes at least a
>unimodal distribution and reasonably homogeneous variance, for the same
>reasons as ANOVA or regression.  So think of the consequences of
>calculating means and variances of a strongly bimodal distribution
>where no sample occurs near the mean and all samples are tens of
>standard deviations from the mean.

Unimodality is not a concern at all.  Asymptotic
distributions of moments only involve moments, and factor
analysis is carried out on sample moments.

One cannot have all observations "tens of standard
deviations from the mean".  The Chebyshev inequality limits
how large the tails can be.

There are problems if the covariance matrix varies from
observation to observation, even with the same sample
structure.  See my previous posting on what can be done
with weak assumptions.

>> I have a question regarding factor analysis: Is normality an important
>> precondition for using factor analysis?

>> If not, are there any books that justify this?



-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054   FAX: (765)494-0558





Re: Normality in Factor Analysis

2001-06-24 Thread Glen Barnett


Robert Ehrlich <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Calculation of eigenvalues and eigenvectors requires no assumption.
> However, evaluation of the results IMHO implicitly assumes at least a
> unimodal distribution and reasonably homogeneous variance, for the
> same reasons as ANOVA or regression.  So think of the consequences of
> calculating means and variances of a strongly bimodal distribution
> where no sample occurs near the mean and all samples are tens of
> standard deviations from the mean.

The largest number of standard deviations that *all* the data can be
from the mean is 1.

To get some data further away than that, some of it has to be less than
1 s.d. from the mean.
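
A quick numeric check of that bound (a sketch assuming Python with
NumPy; the four-point data set is made up):

import numpy as np

x = np.array([0, 0, 1, 1])        # "bimodal": no observation near the mean
z = (x - x.mean()) / x.std()      # population s.d. (ddof=0)
print(z)                          # [-1. -1.  1.  1.] -- each exactly 1 s.d. away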

Glen








Re: factor analysis of dichotomous variables

2001-04-27 Thread Michael Babyak

In sci.stat.edu Johannes Hartig <[EMAIL PROTECTED]> wrote:
: I just need to know: have these estimators ever been
: implemented in some statistical software package?

Muthen's approach is available in his MPlus software, which can be
purchased at www.statmodel.com.

-- 
_
  Michael A. Babyak, PhD(919) 684-8843 (Voice)  
  Box 3119  (919) 684-8629 (Fax)
  Department of Psychiatry  
  Duke University Medical Center[EMAIL PROTECTED]
  Durham, NC 27710  
_






Re: factor analysis of dichotomous variables

2001-04-29 Thread David Duffy

In sci.stat.edu Johannes Hartig <[EMAIL PROTECTED]> wrote:
> Christofferson (1975) described a GLS estimator for the
> factorization of dichotomous data based on the marginal
> distributions of single items and item pairs. Muthen (1978)
> suggested another method based on the same information.

The NAG library contains a routine for the full ML single-factor
solution under the threshold model.  Mx (http://griffin.vcu.edu/mx - a
free SEM program) will also do this, but you would have to write a
script (it does have the advantage of dealing with missing (MCAR) data
in its "raw data" approach), and you are probably limited to 20-30
variables tops. LISREL allows you to fit the WLS ("ADF") model to the
tetrachoric correlation matrix, where the weight matrix is over all
pairs of variables (for single-factor models, this is very close to
that from the NAG routine).

David Duffy.

-- 
| David Duffy. ,-_|\
| email: [EMAIL PROTECTED]  ph: INT+61+7+3362-0217 fax: -0101/ *
| Epidemiology Unit, The Queensland Institute of Medical Research \_,-._/
| 300 Herston Rd, Brisbane, Queensland 4029, Australia v 





Re: factor analysis of dichotomous variables

2001-05-01 Thread John Uebersax

A list of such programs and discussion can be found at:

http://ourworld.compuserve.com/homepages/jsuebersax/binary.htm

The results of Knol & Berger (1991) and Parry & McArdle (1991) 
(see above web page for citations) suggest that there is not much 
difference in results between the Muthen method and the simpler 
method of factoring tetrachoric correlations.  For additional 
information (including examples using PRELIS/LISREL and SAS) on 
factoring tetrachorics, see

http://ourworld.compuserve.com/homepages/jsuebersax/irt.htm 
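
As a rough illustration of the simpler route, here is a minimal sketch
of estimating a single tetrachoric correlation (assuming Python with
NumPy/SciPy; the function and the 2x2 table are hypothetical, not from
the cited pages). It chooses the bivariate-normal correlation whose
(1,1) quadrant probability matches the observed cell, with thresholds
set from the margins:

import numpy as np
from scipy import stats, optimize

def tetrachoric(table):
    # table[i, j] = count of observations with x = i and y = j (0/1 each).
    n = table.sum()
    a = stats.norm.isf(table[1, :].sum() / n)   # threshold for x from its margin
    b = stats.norm.isf(table[:, 1].sum() / n)   # threshold for y from its margin
    p11 = table[1, 1] / n                       # observed P(x = 1, y = 1)

    def gap(rho):
        biv = stats.multivariate_normal(cov=[[1.0, rho], [rho, 1.0]])
        # P(z1 > a, z2 > b) by inclusion-exclusion on the bivariate CDF
        model = 1 - stats.norm.cdf(a) - stats.norm.cdf(b) + biv.cdf([a, b])
        return model - p11

    return optimize.brentq(gap, -0.999, 0.999)

# Hypothetical 2x2 table; prints roughly 0.81.
print(tetrachoric(np.array([[40, 10], [10, 40]])))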

Hope this helps.

John Uebersax





Maximum likelihood Was: Re: Factor Analysis

2001-06-18 Thread Herman Rubin

In article <[EMAIL PROTECTED]>,
Ken Reed  <[EMAIL PROTECTED]> wrote:

>I'd love to hear a simple explanation of MLE!

MLE is triviality itself, if you do not make any attempt to
state HOW it is to be carried out.

For each possible value X of the observation, and each state
of nature \theta, there is a probability (or density with
respect to some base measure) P(X | \theta).  There is no
assumption that X is a single real number; it can be anything,
and the same holds for \theta.

What MLE does is choose the \theta which makes P(X | \theta)
as large as possible.  That is all there is to it.
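
A toy illustration of exactly that recipe (a sketch assuming Python
with NumPy/SciPy; the ten coin flips are made up): choose the Bernoulli
\theta that makes P(X | \theta) as large as possible.

import numpy as np
from scipy import optimize

x = np.array([1, 1, 0, 1, 0, 1, 1, 1, 0, 1])    # made-up observations

def neg_log_lik(theta):
    # -log P(X | theta) for independent Bernoulli(theta) observations
    return -np.sum(x * np.log(theta) + (1 - x) * np.log(1 - theta))

res = optimize.minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6),
                               method="bounded")
print(res.x)    # about 0.7 -- the sample mean, as the closed form predicts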

-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054   FAX: (765)494-0558





Latent trait models and factor analysis

2001-06-25 Thread Michael Preminger

Hello!

In their book from 1999, "Latent Variable Models and Factor Analysis"
(pages 85 and 110-119), David Bartholomew and Martin Knott describe a
method for approximating maximum likelihood estimation of the model
coefficients within dichotomous/polytomous latent trait models.

I have a somewhat sparse background in statistics, and even though the
method is very well described, some of the details seem implicit to me.

What I am looking for is a little (but detailed) numeric example that
uses the method, possibly programmed in Matlab or SPSS (this is of
course no requirement; a text would be marvelous).

I would be grateful for any help.

Thanks

Michael







PCA and factor analysis: when to use which

2001-04-18 Thread Ken Reed

What is the basis for deciding when to use principal components
analysis and when to use factor analysis? Could anyone describe a
problem that illustrates the difference?






RE: PCA and factor analysis: when to use which

2001-04-18 Thread Dale Glaser

References you may want to look at that address this:

Gorsuch, R. L.  (1990).  Common factor analysis versus component analysis:
Some well and little known facts.  Multivariate Behavioral Research, 25(1),
33-39.

Snook, S. C., & Gorsuch, R. L.  (1989).  Component analysis versus common
factor analysis: A Monte Carlo study.  Psychological Bulletin, 106(1),
148-154.

Velicer, W. F., & Jackson, D. N.  (1990).  Component analysis versus common
factor analysis: Some issues in selecting an appropriate procedure.
Multivariate Behavioral Research, 25(1), 1-28.

For some, the bottom line may be: if your intention is to maximize
variance, then PCA may be appropriate, whereas if estimation of the
common factor variance plus uniqueness is the primary objective, then
factor analysis may be warranted.  I have heard many opinions about
this issue, and even though it is emphasized that just relying on a
generic default (e.g., PCA with varimax) may belie the researcher's
primary intention, more times than not I have found the result and my
ultimate interpretation to be very similar regardless of whether I
used PCA or factor analysis...



Dale N. Glaser, Ph.D.
Senior Statistician
Pacific Science & Engineering Group
6310 Greenwich Drive; Suite 200
San Diego, CA 92122
Phone: (858) 535-1661
Fax: (858) 535-1665
e-mail: [EMAIL PROTECTED]







Re: PCA and factor analysis: when to use which

2001-04-19 Thread Eric Bohlman

Ken Reed <[EMAIL PROTECTED]> wrote:
> What is the basis for deciding when to use principal components
> analysis and when to use factor analysis? Could anyone describe a
> problem that illustrates the difference?

PCA is simply a reparameterization of your data, sort of analogous to 
taking the Fourier transform of a time series.  It retains all the 
properties of your data; it simply lets you look at them from a different 
perspective.

FA, OTOH, involves assuming that your data can be described by a very 
specific kind of linear model and then fitting such a model to your data.  
Like all models, it will be wrong, but it might be useful.  By doing FA, 
you're choosing to discard some information from your data in the hopes 
that what remains will be interpretable.  You're blurring some of the 
trees in order to get a better idea of the shape of the forest.
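
A sketch of that contrast (assuming Python with NumPy/scikit-learn;
the data are made up): keeping all principal components reproduces the
data exactly, while a fitted two-factor model only approximates the
sample covariances.

import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 5))  # made-up data

# PCA with all components kept is a pure reparameterization: nothing lost.
pca = PCA(n_components=5).fit(X)
print(np.allclose(X, pca.inverse_transform(pca.transform(X))))   # True

# FA fits a restrictive linear model; its implied covariance matrix only
# approximates the sample covariance (wrong, but possibly useful).
fa = FactorAnalysis(n_components=2).fit(X)
implied = fa.components_.T @ fa.components_ + np.diag(fa.noise_variance_)
print(np.abs(np.cov(X, rowvar=False) - implied).max())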


