In article <[EMAIL PROTECTED]>,
Rich Ulrich  <[EMAIL PROTECTED]> wrote:
>On Tue, 23 Nov 1999 04:39:28 GMT, [EMAIL PROTECTED] wrote:

>> Does anyone know how one might test for significant differences
>> between two multiple R's (or R squares) generated from two sets of data?
>> I need to determine if two R's generated on two separate occasions
>> using the same DV and IV's differ significantly from one another.

>Correlations are not very good candidates for comparisons, since it is
>so easy to do tests that are more precise.
> - to test whether the predictive relations are different, you would
>test the regressions -- do a Chow test or the equivalent, to see if a
>different set of regression coefficients is needed for the second sample.
> - to test whether the variances are different (which is something
>that would change the correlations), you might test variances
>directly.

This is correct.  In fact, correlations generally have no real
meaning except as measures of how well the model fits.
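
A minimal sketch of the Chow-style comparison suggested above (the
function, variable names, and use of numpy/scipy are my own
illustration, not anything from the original posts): fit the pooled
regression and the two separate regressions, then compare residual
sums of squares with an F test.

    import numpy as np
    from scipy import stats

    def chow_test(X1, y1, X2, y2):
        # F-test of whether a single regression (same DV and IVs)
        # fits both samples as well as two separate regressions do.
        # X1, X2: predictors, shape (n_i,) or (n_i, p); y1, y2: responses.
        def fit_ssr(X, y):
            X = np.column_stack([np.ones(len(y)), X])   # add intercept
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            return resid @ resid, X.shape[1]            # SSR, # parameters
        ssr1, k = fit_ssr(X1, y1)
        ssr2, _ = fit_ssr(X2, y2)
        ssr_p, _ = fit_ssr(np.concatenate([X1, X2]),
                           np.concatenate([y1, y2]))    # pooled fit
        df2 = len(y1) + len(y2) - 2 * k
        F = ((ssr_p - (ssr1 + ssr2)) / k) / ((ssr1 + ssr2) / df2)
        return F, stats.f.sf(F, k, df2)                 # statistic, p-value

For the direct variance comparison also mentioned above,
scipy.stats.levene(y1, y2) is one robust option.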

Even the proportion of variance explained can change
drastically with a change in design, while the parameters of
the model do not change, provided no normalizations are done.
For example, suppose one has a "normal" model with correlation
coefficient .5, so that 25% of the variance is explained.  Now
suppose that the predictor variable is instead selected to be
2 standard deviations away from the mean, equally likely to be
in either direction.  This selection quadruples the variance of
the predictor, and with it the explained part of the variance
of the response, so the correlation becomes .756 and the
proportion of variance explained rises to 57%.  But the
prediction model is still the same.
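
To check that arithmetic by simulation (a sketch; the particular
model Y = .5 X + e with Var(e) = .75 is my own choice of a model
whose correlation is .5 when X is standard normal):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    e = rng.normal(0.0, np.sqrt(0.75), n)      # noise with Var(e) = .75

    x = rng.normal(0.0, 1.0, n)                # X ~ N(0,1)
    print(np.corrcoef(x, 0.5 * x + e)[0, 1])   # ~ .50, so R^2 ~ 25%

    x = rng.choice([-2.0, 2.0], n)             # X selected 2 SD from the mean
    print(np.corrcoef(x, 0.5 * x + e)[0, 1])   # ~ .756, so R^2 ~ 57%

The regression coefficient .5 is the same in both designs; only
the correlation moves, because selection quadruples Var(X).
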
-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN 47907-1399
[EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558
