Radford Neal wrote:
>I'll go for three out of four of these. But "important non-significant
>effects"?
Perhaps what Thom means is that a nonsignificant effect can itself be an
important finding. If the research was conducted under conditions giving high
power (say, .95) even for the smallest effect you would consider practically
meaningful, then failing to get a significant effect means the effect is
likely quite small -- so small as to be, for practical purposes, zero.
Determining that an effect is essentially zero can indeed be important.
However, I think we would be better served here by reporting confidence
intervals for effects rather than p-values for tests of nil hypotheses: if
the confidence interval shows that the effect is at best very small in either
direction, or even that it is very small in one particular direction (in
which case the effect is 'significant'), then we know that the effect is of
trivial magnitude.
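The confidence-interval reasoning above can be sketched in a few lines of
Python. This is only an illustration with made-up numbers: the z-based
interval for a difference of two means, the sample values, and the "smallest
effect of practical interest" (2.0 units) are all hypothetical assumptions,
not anything from the original discussion.

```python
# Hedged sketch: judging practical triviality from a confidence interval
# rather than from a p-value. All numbers below are hypothetical.
import math

def mean_diff_ci(m1, m2, sd1, sd2, n1, n2, z=1.96):
    """Approximate 95% z-based CI for the difference of two independent means."""
    diff = m1 - m2
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return diff - z * se, diff + z * se

# Suppose the smallest effect of any practical interest is 2.0 units.
SESOI = 2.0
lo, hi = mean_diff_ci(m1=50.3, m2=50.0, sd1=10, sd2=10, n1=800, n2=800)

# If the whole interval sits inside (-SESOI, +SESOI), the effect is of
# trivial magnitude -- whether or not the interval happens to exclude zero.
trivial = (-SESOI < lo) and (hi < SESOI)
```

With these hypothetical large samples the interval is narrow, so even though
the point estimate is nonzero, the entire interval lies inside the trivial
band and we can conclude the effect is practically negligible.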
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++ Karl L. Wuensch, Department of Psychology, East Carolina University,
Greenville NC 27858-4353 Voice: 252-328-4102 Fax: 252-328-6283
[EMAIL PROTECTED] http://core.ecu.edu/psyc/wuenschk/klw.htm
=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
http://jse.stat.ncsu.edu/
=================================================================