On Tue, 17 Feb 2004, Aleks Jakulin wrote:

> if the only objective is *testing* an interaction, is there a good
> reason why not just compare the improvement in fit achieved by the
> model
>  Z = aX + bY + c(X*Y)
> over
>  Z = eX + fY?  This gets you the uncontaminated improvement explainable
> solely with the interaction effect. Because we're looking only at the
> quality of fit, the correlations among predictors are irrelevant, and
> a good fitting procedure doesn't mind multicollinearity.
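The nested-model comparison described above amounts to a partial F-test: fit both models, and ask whether the drop in residual sum of squares from adding the X*Y column is larger than chance. A minimal sketch with numpy, using simulated data (the coefficients, noise level, and sample size here are illustrative assumptions, not from the thread):

```python
import numpy as np

# Hypothetical data: Z depends on X, Y, and a genuine X*Y interaction.
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=n)
Y = rng.normal(size=n)
Z = 1.0 + 2.0 * X - 1.5 * Y + 0.8 * (X * Y) + rng.normal(scale=0.5, size=n)

ones = np.ones(n)
reduced = np.column_stack([ones, X, Y])          # Z = eX + fY  (plus intercept)
full    = np.column_stack([ones, X, Y, X * Y])   # Z = aX + bY + c(X*Y)

def rss(design, z):
    """Residual sum of squares from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(design, z, rcond=None)
    resid = z - design @ beta
    return float(resid @ resid)

rss_r, rss_f = rss(reduced, Z), rss(full, Z)

# Partial F-test: one extra parameter (the interaction coefficient c).
df_extra = 1
df_resid = n - full.shape[1]
F = ((rss_r - rss_f) / df_extra) / (rss_f / df_resid)
print(f"RSS reduced = {rss_r:.1f}, RSS full = {rss_f:.1f}, F = {F:.1f}")
```

Note that collinearity between X, Y, and X*Y affects the individual coefficient estimates, but not this fit comparison: the improvement in RSS attributable to the interaction column is the same however the shared variance is apportioned, which is the point being made above.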

No, can't think of a compelling reason.  And I'd usually do it myself in
the way you suggest.  I was trying to answer the somewhat inchoate
question put by Arthur Tabachneck, who appeared to want to carry out an
analysis without attending to the main effects.  I'm not convinced that
what I thought he wanted was especially sensible, in the absence of more
detail from him;  and I was also trying (inter alia) to point out that
what one gets depends heavily on the formal construction of the variable
that carries the interaction information.

> But I agree that orthogonalization is important when you try to
> decompose the variance explained among the predictors.

 ------------------------------------------------------------
 Donald F. Burrill                              [EMAIL PROTECTED]
 56 Sebbins Pond Drive, Bedford, NH 03110      (603) 626-0816
