Re: comparing 2 slopes

2001-06-20 Thread Tracey Continelli

mccovey@psych [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]...
 in article [EMAIL PROTECTED], Tracey
 Continelli at [EMAIL PROTECTED] wrote on 6/13/01 4:14 PM:
 
  Mike Tonkovich [EMAIL PROTECTED] wrote in message
  news:3b20f210_1@newsfeeds...
  Was hoping someone might be able to confirm that my approach for comparing 2
  slopes was correct.
  
  I ran an analysis of covariance using PROC GLM (in SAS) with an interaction
  statement.  My understanding was that a nonsignificant interaction term
  meant that the slopes were the same, and vice versa for a significant
  interaction term.  Is this correct and is this the best way to approach this
  problem with SAS?  Any help would certainly be appreciated.
  
  Mike Tonkovich
  
  --
  Michael J. Tonkovich, Ph.D.
  Wildlife Research Biologist
  ODNR, Division of Wildlife
  [EMAIL PROTECTED]
  
  The slopes need not be the same if the interaction term is
  non-significant, but the difference between them will not be
  statistically significant.  If the differences between the slopes
  *are* statistically significant, this will be reflected in a
  statistically significant product term.
  with interaction terms, which can be easily incorporated by simply
  multiplying the variables together and then running the regression
  equation with each independent variable plus the product term [which
  is simply another name for the interaction term].  The results are
  much more straightforward in my mind.
  
  Tracey Continelli
  SUNY at Albany
 
 
 I agree completely but there can be problems interpreting the regression
 Output (e.g., mistakes like talking about main effects).  For advice on
 avoiding the common interpretation pitfalls, see
 
 Aiken & West (1991).  Multiple regression: Testing and interpreting
 interactions.  Sage.
 
 Irwin & McClelland (2001).  Journal of Marketing Research.
 
 Gary McClelland
 Univ of Colorado


Quite so.  Once you add the product term, the interpretation changes,
and the parameter estimates are now known as simple main effects.
The interpretation is still fairly straightforward, however.  The
parameter estimate, or slope, for your focal independent variable in
the interaction model represents the effect of that variable upon
your dependent variable when your moderator variable equals zero,
holding constant all other independent variables in your model.  The
same may be said for the slope of your moderator variable - it
represents the effect of that variable upon your dependent variable
when your focal independent variable equals zero.

In my research [the social science variety] that information isn't
terribly useful, because most of the time you won't realistically see
the moderator variable at zero, i.e., a zero crime rate or a zero
poverty rate.  So I use a mean-centering trick: I subtract the mean
from the moderator variable, rerun the equation with the new
mean-centered variable and its product term, and NOW the parameter
estimates of the simple main effects are meaningful for me.  When I
look at the parameter estimate of the focal independent variable, it
tells me the effect of that independent variable upon the dependent
variable when my moderator variable is at its mean.  The product term
itself remains identical to the original equation [of course], but
now the simple main effects are realistically meaningful.

I'll also apply the same technique with the moderator recentered at 2
standard deviations below the mean, 1 below the mean, and so on up to
2 standard deviations above the mean.  This gives one a nice graphic
sense of the way in which the slope between your focal independent
variable and your dependent variable changes with successive changes
in your moderator variable.
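The centering trick is just a change of variables, which a few lines of
Python with numpy can illustrate (this is a sketch on invented simulated
data, not the SAS/PROC GLM setup from the thread; the variable names and
true coefficients are all made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(10, 2, n)   # focal independent variable
m = rng.normal(50, 5, n)   # moderator; never realistically zero, like a poverty rate
y = 1.0 + 0.5 * x + 0.2 * m + 0.1 * x * m + rng.normal(0, 1, n)

def ols(cols, y):
    """Ordinary least squares with an intercept; returns the coefficients."""
    X = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Raw model: the slope on x is its effect when m == 0 (not meaningful here).
b_raw = ols([x, m, x * m], y)

# Centered model: the slope on x is now its effect when m is at its mean.
mc = m - m.mean()
b_cen = ols([x, mc, x * mc], y)

# The product-term coefficient is identical in both models; re-centering m at
# mean - 2*SD, mean - 1*SD, ..., mean + 2*SD traces out the simple slopes.
```

The two fits span the same column space, so the product-term coefficient is
unchanged and the centered slope on x equals the raw slope plus the product
term times the moderator's mean.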


Tracey Continelli
Doctoral candidate
SUNY at Albany


=
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
  http://jse.stat.ncsu.edu/
=



Re: Factor Analysis

2001-06-15 Thread Tracey Continelli

Hi there,

would someone please explain in lay person's terms the difference
between principal components, common factors, and maximum likelihood
estimation procedures for factor analyses?

Should I expect my factors obtained through maximum likelihood
estimation to be highly correlated?  Why?  When should I use a
maximum likelihood estimation procedure, and when should I not use it?

Thanks.

Rita

[EMAIL PROTECTED]


Unlike the other methods, maximum likelihood allows you to estimate
the entire structural model *simultaneously* [i.e., the effects of
every independent variable upon every dependent variable in your
model].  Most other methods only permit you to estimate the model in
pieces, i.e., as a series of regressions in which you regress each
dependent variable upon every independent variable that has an arrow
pointing directly to it.  Moreover, maximum likelihood provides an
actual statistical test of significance, unlike many other methods,
which only offer generally accepted cut-off points rather than a test
of statistical significance.  There are very few cases in which I
would use anything except a maximum likelihood approach, which is
available in LISREL or, if you use SPSS, in the add-on module AMOS.
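To make the ML idea concrete outside LISREL or AMOS, here is a hedged
sketch in Python with numpy/scipy: a one-factor model fitted by minimizing
the standard ML discrepancy F = log|Sigma(theta)| + tr(S Sigma(theta)^-1)
- log|S| - p, whose minimized value yields the chi-square test of fit
mentioned above.  The loadings, sample size, and simulated data are all
invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
p, n = 4, 2000
lam_true = np.array([0.8, 0.7, 0.6, 0.5])  # invented factor loadings
psi_true = 1.0 - lam_true ** 2             # uniquenesses, standardized scale

# Simulate data from the one-factor model x = lam * f + e
f = rng.standard_normal(n)
X = np.outer(f, lam_true) + rng.standard_normal((n, p)) * np.sqrt(psi_true)
S = np.cov(X, rowvar=False)
_, logdet_S = np.linalg.slogdet(S)

def discrepancy(theta):
    """ML fit function; log-parameterizing psi keeps uniquenesses positive."""
    lam, psi = theta[:p], np.exp(theta[p:])
    Sigma = np.outer(lam, lam) + np.diag(psi)
    _, logdet = np.linalg.slogdet(Sigma)
    return logdet + np.trace(S @ np.linalg.inv(Sigma)) - logdet_S - p

res = minimize(discrepancy, np.concatenate([np.full(p, 0.5), np.full(p, -1.0)]))
lam_hat = res.x[:p]

# (n - 1) * F at the minimum is the chi-square test of model fit,
# with df = p*(p+1)/2 - 2*p degrees of freedom.
chi2 = (n - 1) * res.fun
```

The sign of the loadings is only identified up to reflection, which is why
comparisons below use absolute values.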


Tracey





Re: comparing 2 slopes

2001-06-13 Thread Tracey Continelli

Mike Tonkovich [EMAIL PROTECTED] wrote in message news:3b20f210_1@newsfeeds...
 Was hoping someone might be able to confirm that my approach for comparing 2
 slopes was correct.
 
 I ran an analysis of covariance using PROC GLM (in SAS) with an interaction
 statement.  My understanding was that a nonsignificant interaction term
 meant that the slopes were the same, and vice versa for a significant
 interaction term.  Is this correct and is this the best way to approach this
 problem with SAS?  Any help would certainly be appreciated.
 
 Mike Tonkovich
 
 --
 Michael J. Tonkovich, Ph.D.
 Wildlife Research Biologist
 ODNR, Division of Wildlife
 [EMAIL PROTECTED]

The slopes need not be the same if the interaction term is
non-significant, but the difference between them will not be
statistically significant.  If the differences between the slopes
*are* statistically significant, this will be reflected in a
statistically significant product term.  I have preferred using regression analyses
with interaction terms, which can be easily incorporated by simply
multiplying the variables together and then running the regression
equation with each independent variable plus the product term [which
is simply another name for the interaction term].  The results are
much more straightforward in my mind.
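The product-term test can be sketched outside SAS as well.  A minimal
Python/numpy version with made-up data follows; the t statistic on the
product term is the equivalent of the interaction test in the PROC GLM
ANCOVA table:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500                          # observations per group, invented
g = np.repeat([0.0, 1.0], n)     # group indicator
x = rng.normal(0, 1, 2 * n)      # covariate
# The true slopes of y on x differ by 0.4 between the two groups
y = 2.0 + 1.0 * x + 0.5 * g + 0.4 * g * x + rng.normal(0, 1, 2 * n)

# Regression with intercept, covariate, group, and product (interaction) term
X = np.column_stack([np.ones(2 * n), x, g, g * x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (len(y) - X.shape[1])
se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))

# t statistic for the product term tests H0: the two slopes are equal
t_interaction = beta[3] / se[3]
```

A non-significant t here means the slope difference is within sampling
error, not that the slopes are literally identical.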

Tracey Continelli
SUNY at Albany
 
 
 
 





Re: multivariate techniques for large datasets

2001-06-13 Thread Tracey Continelli

Sidney Thomas [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]...
 srinivas wrote:
  
  Hi,
  
   I have a problem in identifying the right multivariate tools to
  handle a dataset of dimension 100,000 x 500. The problem is further
  complicated by a lot of missing data. Can anyone suggest a way to
  reduce the data set and also to estimate the missing values? I need
  to know which clustering tool is appropriate for grouping the
  observations (based on the 500 variables).

One of the best ways in which to handle missing data is to impute the
mean taken from other cases that share the same value on a related
variable.  If I'm doing psychological research and I am missing some
values on my depression scale for certain individuals, I can look at,
say, their reported locus of control and impute the conditional mean.
Suppose [a common finding] I find a pattern - individuals with a high
locus of control report low levels of depression - and I have a
locus-of-control scale ranging from 1-100.  If depression is missing
for a case whose locus of control is 75, I can take the mean
depression level for all individuals at level 75 of locus of control
and impute that value for every missing case in which 75 is the
listed locus-of-control value.  I'm not sure why you'd want to reduce
the size of the data set, though, since for the most part the larger
the N the better.
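The conditional-mean idea can be sketched in a few lines of Python; the
depression and locus-of-control numbers below are invented for the
example:

```python
from statistics import mean

# (locus_of_control, depression); None marks a missing depression score
records = [(75, 40.0), (75, 44.0), (75, None),
           (60, 55.0), (60, None), (60, 53.0)]

# Mean depression at each locus-of-control level, over observed cases only
observed = {}
for loc, dep in records:
    if dep is not None:
        observed.setdefault(loc, []).append(dep)
level_means = {loc: mean(vals) for loc, vals in observed.items()}

# Impute the level-specific mean wherever depression is missing
imputed = [(loc, dep if dep is not None else level_means[loc])
           for loc, dep in records]
```

Each missing case receives the mean of the observed cases at its own
locus-of-control level, rather than the grand mean.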


Tracey Continelli

