Now let me jump on Andy!


On Thu, 23 Mar 2000 17:51:04 GMT, [EMAIL PROTECTED] (Andy Gilpin)
wrote:
 < snip, problem; comment >
> 
>   Still, it seems to me that, other things equal, (a) measuring data
> costs a researcher something, and (b) there are clear diminishing
> returns in terms of increased power.  Consider the following estimated
> sample sizes for an independent-groups t-test with 2-tailed alpha=.05
> and a moderate effect size (in Cohen's terms) of d=.5.  Incidentally,
> values were generated using g*power
> 
> http://www.psychologie.uni-trier.de:8000/projects/gpower.html
> 
> Step        Power         Total sample size
> ...
>    2              0.8000            128.0000
> ...
>    9              0.9749            248.0000
>   10              0.9999            518.0000
> 
> Below the lowest power level indicated, the relationship between N and
> power is approximately linear, but N accelerates precipitously above
> about power of .9 (visually).
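
For anyone who wants to check those numbers without G*Power, here is
a minimal Python sketch (statsmodels assumed here, not the program
Andy used; it should land within rounding of the quoted values):

  from statsmodels.stats.power import TTestIndPower

  analysis = TTestIndPower()
  for power in (0.80, 0.9749, 0.9999):
      # solve_power() returns the per-group n; total N is twice that
      n1 = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                power=power, alternative='two-sided')
      print("power=%.4f  total N ~ %.0f" % (power, 2 * n1))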

Oh, Andy, this is such a naive *scaling* conclusion.  How can you
regard "power" as a metric that ought to be equal-interval?  By my way
of thinking, extra power gets cheaper and cheaper as the magnitude of
N increases.  In your table, I am looking at successive (rough)
doublings of N, and at the odds and odds ratios for power:

The power at .80, in terms of an "odds",
is 0.80/0.20 or 4:1, with N=128;
it is 39:1 when the N is (nearly) doubled, to N=248;
it is 9999:1 if N is roughly doubled again, to N=518.

The first doubling corresponds to an odds ratio, for your increase in
power, of about 10:1, which might seem sizable, but the second
doubling provides an OR of about 250:1.  That is *one* way to say
that I disagree.
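
The same arithmetic as a minimal Python sketch (the 250:1 above is a
round number; the exact ratio prints as about 257:1):

  steps = [(128, 0.80), (248, 0.9749), (518, 0.9999)]
  prev = None
  for n, power in steps:
      odds = power / (1 - power)          # 4:1, ~39:1, 9999:1
      line = "N=%3d  odds %7.0f:1" % (n, odds)
      if prev:
          line += "  OR vs previous step %.0f:1" % (odds / prev)
      print(line)
      prev = odds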

Andy>
> So once you have a sample size of about
> 180 (90 per sample), the ground gained by dumping in more cases really
> decreases rapidly.  Sure, the more cases, the more power, but you
> could easily double or triple the N (and perhaps your costs of
> administration) without increasing power very appreciably.
> 
> It may still be worth it, in terms of the nature of the conceptual
> implications of Type II errors.  But it does seem to make sense to ask
> the question.
> 

The ground gained by quadrupling the number of cases is, for a
t-test, basically the reduction of the width of the confidence
interval by half.
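
A minimal sketch of that scaling (scipy assumed; sd=1 and equal
groups, so the standard error of the difference is sqrt(2/n)):

  from scipy import stats

  def ci_halfwidth(n_per_group, sd=1.0):
      se = sd * (2.0 / n_per_group) ** 0.5  # SE of the mean difference
      df = 2 * n_per_group - 2
      return stats.t.ppf(0.975, df) * se    # 95% CI half-width

  for n in (64, 256):                       # total N = 128, then 512
      print("n/group=%3d  half-width=%.3f" % (n, ci_halfwidth(n)))
  # the ratio is ~0.5: quadrupling N cuts the interval about in half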

Do you want a smaller CI?  Do you need a smaller CI?

"Barely not-overlapping zero" is what you have for the usual rejection
of the null hypothesis in psychology.  That's not too bad with tiny N,
because it works out neatly:  the 5% rejection when d=1.0, implying
"d>0.0"  may be equivent to a 15% test (say) that "d>0.5".   But to
set up a CI > 0.5  with a large sample is going to assume that there
is a *huge*  power for detecting  an effect that is merely  " > 0.0"

To put it another way, you can't assume that the only goal is to
detect an effect as being non-zero.  In fact, I think it is pretty
useless to cite 95% CIs as an "effect" when the test is barely at 5%;
the range is just LARGE.
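
To see just how large: a sketch (scipy assumed; treating the
standardized difference like a plain mean difference with sd=1, a
rough stand-in for the SE of d) of the CI that goes with a test that
comes in right at 5%:

  from scipy import stats

  n = 64                             # per group (hypothetical)
  df = 2 * n - 2
  t_crit = stats.t.ppf(0.975, df)
  se = (2.0 / n) ** 0.5              # approximate SE of d-hat
  d_hat = t_crit * se                # the estimate that gives p=.05 exactly
  lo, hi = d_hat - t_crit * se, d_hat + t_crit * se
  print("d-hat=%.2f  95%% CI=(%.2f, %.2f)" % (d_hat, lo, hi))
  # prints d-hat=0.35  95% CI=(0.00, 0.70): the lower limit sits at
  # zero and the interval spans twice the estimate -- a LARGE range.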

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html

