Hello all,
I am hoping to get some expert opinions on a
relatively simple problem. Suppose that you want to
combine Pearson r and r-squared values from 20 imputed
data sets. The standard advice (at least in small to
moderate samples) seems to be to first transform each r
using Fisher's (1915) r-to-z transformation, pool on the
z scale, and then back-transform. Similarly, ln(r-squared)
seems to be an appropriate transformation for pooling
R-squared.
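
To make the pooling step concrete, here is a rough sketch in
Python (just an illustration of the point estimates; the
function names and the use of numpy are my own, and the
variance/CI side of Rubin's rules is omitted):

import numpy as np

def pool_r_fisher(r_values):
    # Fisher's r-to-z: z = atanh(r). Average the z values
    # across imputations (Rubin's rule for the point
    # estimate), then back-transform the pooled z with tanh.
    z = np.arctanh(np.asarray(r_values, dtype=float))
    return np.tanh(z.mean())

def pool_rsq_log(rsq_values):
    # Average ln(R-squared) across imputations, then
    # exponentiate to get back to the R-squared scale.
    logs = np.log(np.asarray(rsq_values, dtype=float))
    return np.exp(logs.mean())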
After combining and back-transforming, it is quite
possible -- perhaps likely -- that you get
inconsistent estimates. What I mean by that is that
squaring the combined Pearson r value can be quite
different from the combined R-square value that you
get from back-transforming ln(R-sq).
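
For instance, with made-up r values from 20 imputations:

rs = np.random.default_rng(1).uniform(0.30, 0.45, size=20)  # hypothetical
r_pooled = pool_r_fisher(rs)
rsq_pooled = pool_rsq_log(rs ** 2)
print(r_pooled ** 2, rsq_pooled)  # the two R-squared figures disagree

Squaring the pooled r amounts to squaring tanh(mean(z)),
while pooling ln(R-sq) gives the geometric mean of the
r-squared values, so the two will generally not coincide.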
The fact that these two estimates differ does not surprise
me. However, I'm looking for some practical advice on how
to deal with it. How might you explain the discrepancy in a
manuscript, for example? Is there a better way to handle
this (e.g., report the combined r value and square it to
get r-squared, rather than pooling the r-squared values
separately)?
Any opinions on this would be much appreciated.
Amy Rangel