On Wed, 10 Apr 2013, Marc Carter wrote:

If my data suggest I should reject the null, why is low power a
concern?  If I *fail* to reject then the first thing I look at is
power, but if I can reject with confidence, then I'm not concerned
about the power of the test.

That's addressed in this statement from the Button et al. paper:

|The relationship between study power and the veracity of the
|resulting finding is under-appreciated. Low statistical power
|(because of low sample size of studies, small effects or both)
|negatively affects the likelihood that a nominally statistically
|significant finding actually reflects a true effect.

And:

|...the lower the power of a study, the lower the probability that an
|observed effect that passes the required threshold of claiming its
|discovery (that is, reaching nominal statistical significance, such
|as p < 0.05) actually reflects a true effect.
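
To check whether I'm reading that correctly, here is a quick
simulation I sketched of what I *think* the claim amounts to: a batch
of studies, some testing real effects and some testing nothing, run
at two sample sizes, then asking what fraction of the "significant"
results came from the real effects.  The setup is entirely mine, not
from the paper, and the numbers (20% of tested effects real, a true
effect of half a standard deviation) are invented purely for
illustration.

# My own sketch, not from Button et al.: simulate many studies, some
# of which test a real effect and some a null effect, then look at
# what share of the nominally significant (p < .05) results come from
# the real ones.  The 20% "real" rate and 0.5 SD effect are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
prop_real = 0.20   # assumed share of tested effects that actually exist
effect = 0.5       # assumed true effect size, in SD units
alpha = 0.05

for n in (10, 100):
    true_hits = false_hits = 0
    for _ in range(20000):
        real = rng.random() < prop_real
        mu = effect if real else 0.0
        sample = rng.normal(mu, 1.0, n)
        if stats.ttest_1samp(sample, 0.0).pvalue < alpha:
            if real:
                true_hits += 1
            else:
                false_hits += 1
    share_real = true_hits / (true_hits + false_hits)
    print(f"n={n}: share of significant results that reflect a real "
          f"effect = {share_real:.2f}")

If I've set that up properly, the small-sample runs should end up
with a noticeably smaller share of real effects among the significant
results, which I take to be what the quoted passage is saying.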

They elaborate on that point, which I admit I find deeply difficult
to grasp.  Here's the best I can do in plain words; someone please
tell me if I'm on the right track:

(a) A p value is basically "the probability of getting a result at
least as extreme as the present one by chance in a universe where the
null is true."  (It says nothing about the probability that we live
in that universe.)

(b) In such a "null is true" universe, you'd get an extreme result
more frequently by grabbing tiny samples than by grabbing large ones.

(c) By that logic, in the "null is true" universe, it's easier to get
an extreme result (say, p = .01) with a sample size of ten than with
a sample size of a hundred.  (I've sketched a quick simulation below
to try to check (b) and (c).)
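
Here is the quick check of (b) and (c) I mentioned: simulated studies
drawn entirely from a "null is true" universe (population mean of
exactly zero), analyzed with a one-sample t-test at n = 10 and
n = 100.  The normal data and the t-test are just illustrative
assumptions of mine, not anything from the paper.

# My own sketch of (b) and (c): in a "null is true" universe, compare
# n = 10 with n = 100 on (b) how spread out the sample means are and
# (c) how often a run reaches p < .01 on a one-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
reps = 20000

for n in (10, 100):
    means = np.empty(reps)
    hits = 0
    for i in range(reps):
        sample = rng.normal(0.0, 1.0, n)  # null is true: population mean is 0
        means[i] = sample.mean()
        if stats.ttest_1samp(sample, 0.0).pvalue < 0.01:
            hits += 1
    print(f"n={n}: SD of the sample means = {means.std():.3f}, "
          f"share of runs reaching p < .01 = {hits / reps:.3f}")

Comparing those two numbers across the two sample sizes seemed like
the most direct way to see whether (b) and (c) actually hold.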

Is that the idea?  I'm still struggling with it.

--David Epstein
  da...@neverdave.com
