On Thu, 25 Mar 2004, don allen wrote:

> On page 84 the authors state,
>
> "There are several commonly used measures of effect size, any of
> which can be applied to experimental, correlational and longitudinal
> types of studies. To provide a common metric for this discussion, we
> have converted all effect sizes to correlation coefficients (rs)."
>
> I haven't seen this type of transform before. My questions are:
>
> 1. How is it done?
> 2. Is it a reasonable thing to do?
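On the "how is it done" part: for a simple two-group comparison with equal group sizes, the usual Cohen-style conversion is r = d / sqrt(d^2 + 4), and it inverts as d = 2r / sqrt(1 - r^2). A minimal sketch (the function names are mine, for illustration only):

```python
# Sketch of the standard two-group conversion between Cohen's d and r,
# assuming equal group sizes (the formula changes for unequal n).
import math

def d_to_r(d: float) -> float:
    """Convert Cohen's d to an r-type effect size: r = d / sqrt(d^2 + 4)."""
    return d / math.sqrt(d ** 2 + 4)

def r_to_d(r: float) -> float:
    """Inverse conversion: d = 2r / sqrt(1 - r^2)."""
    return 2 * r / math.sqrt(1 - r ** 2)

print(d_to_r(0.8))  # a "large" d of 0.8 corresponds to r of about 0.37
```

Note that the mapping is monotone but nonlinear, so benchmarks on the d scale and the r scale don't line up one-to-one.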
There's good precedent for it. In the standard text _Statistical Power Analysis for the Behavioral Sciences_, Jacob Cohen accompanied each of his effect-size measures with its equivalent in terms of r, r-squared, or both.

But I'm trying to figure out whether it makes sense to translate every r-squared into an r. For example, suppose you've derived your r-squared value from an ANOVA table on these imaginary data:

Level of media exposure    Mean degree of aggression
<5 hrs a week              90 +/- 5
5-10 hrs a week            10 +/- 5
>10 hrs a week             90 +/- 5

The r-squared value from the ANOVA will correctly tell you that media exposure accounts for a lot of the variance in aggression, but it doesn't imply a linear "dose-response" relationship. Converting it to a correlation coefficient would imply exactly that, and would thus be misleading.

Am I making sense?

--David Epstein
[EMAIL PROTECTED]
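The U-shaped example can be checked numerically. With hypothetical scores matching the table's group means (the exact numbers below are made up for illustration), eta-squared from the one-way ANOVA is near 1 while the linear Pearson r is essentially 0:

```python
# Illustration: a U-shaped group pattern (means 90, 10, 90) yields a large
# eta-squared (variance explained by group) but a Pearson r near zero,
# since there is no linear dose-response trend. Scores are invented.
from statistics import mean

# Exposure coded 1 (<5 hrs), 2 (5-10 hrs), 3 (>10 hrs); five scores per group
groups = {1: [85, 88, 90, 92, 95],
          2: [5, 8, 10, 12, 15],
          3: [85, 88, 90, 92, 95]}

x = [g for g, scores in groups.items() for _ in scores]
y = [s for scores in groups.values() for s in scores]
grand = mean(y)

# One-way ANOVA decomposition: eta-squared = SS_between / SS_total
ss_total = sum((v - grand) ** 2 for v in y)
ss_between = sum(len(s) * (mean(s) - grand) ** 2 for s in groups.values())
eta_sq = ss_between / ss_total

# Pearson correlation between exposure code and aggression score
mx, my = mean(x), mean(y)
num = sum((a - mx) * (b - my) for a, b in zip(x, y))
den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
r = num / den

print(f"eta-squared = {eta_sq:.3f}")  # about 0.992: group explains the variance
print(f"Pearson r   = {r:.3f}")       # about 0.000: no linear trend
```

So reporting sqrt(eta-squared) as if it were a correlation would badly overstate any linear relationship here.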
