I think the problem is in the use of the term "power" as a synonym for sample
size. It is true that some of the components of power (such as sample size)
may affect the likelihood of both Type I and Type II errors. However, power is
the probability of finding a significant effect if one exists in the
population. Depending on the inferential statistic you use, power is
determined by a number of factors, such as where you set the alpha level, the
effect size in the population, the within-group variability, and the sample
size. Power is inversely related to the probability of making a Type II error
(failing to find an effect in your sample when one exists in the population).
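
If it helps to see how those factors trade off, here is a rough simulation
sketch in Python (assuming a two-sample t test with equal group sizes; the
particular effect size, SD, and n values are arbitrary illustrations, not
recommendations):

    # Rough sketch: estimate power by simulation for a two-sample t test.
    # "Power" here is the proportion of simulated studies reaching p < alpha
    # when a true effect of the stated size exists in the population.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def simulated_power(n_per_group, effect, sd, alpha=0.05, reps=10_000):
        hits = 0
        for _ in range(reps):
            control = rng.normal(0.0, sd, n_per_group)
            treated = rng.normal(effect, sd, n_per_group)
            _, p = stats.ttest_ind(control, treated)
            hits += p < alpha
        return hits / reps

    # Each argument corresponds to one determinant listed above: alpha
    # level, population effect size, within-group SD, and sample size.
    print(simulated_power(n_per_group=20, effect=0.5, sd=1.0))
    print(simulated_power(n_per_group=100, effect=0.5, sd=1.0))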

In addition to affecting the power of a study, a small sample size might also
make your statistics less reliable (more variable or extreme), leading to the
possibility of a spurious finding (a Type I error). However, because many
factors besides sample size go into determining the power of a study, I think
it just confuses things to say that a low-power study increases the
probability of a Type I error when what you mean is that a small sample size,
by making the statistics more variable, is likely to increase it.
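
To illustrate what I mean by "more variable," here is a companion sketch
(same arbitrary numbers as above): the estimated mean difference swings much
more widely at small n, so extreme-looking effects turn up more often in
small samples.

    # Rough sketch: spread of the estimated effect (mean difference)
    # across many small vs. many large studies of the same population.
    import numpy as np

    rng = np.random.default_rng(0)

    def effect_estimate_sd(n_per_group, effect=0.5, sd=1.0, reps=10_000):
        diffs = [rng.normal(effect, sd, n_per_group).mean()
                 - rng.normal(0.0, sd, n_per_group).mean()
                 for _ in range(reps)]
        return float(np.std(diffs))

    print(effect_estimate_sd(10))    # wide spread around the true 0.5
    print(effect_estimate_sd(100))   # much tighter spread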

Rick

Dr. Rick Froman, Chair
Division of Humanities and Social Sciences 
Professor of Psychology 
Box 3519
John Brown University 
2000 W. University, Siloam Springs, AR 72761
rfro...@jbu.edu 
(479) 524-7295
http://bit.ly/DrFroman 

-----Original Message-----
From: David Epstein [mailto:da...@neverdave.com] 
Sent: Wednesday, April 10, 2013 4:23 PM
To: Teaching in the Psychological Sciences (TIPS)
Subject: RE: [tips] Why Neuroscience Research Sucks

On Wed, 10 Apr 2013, Marc Carter went:

> If my data suggest I should reject the null, why is low power a 
> concern?  If I *fail* to reject then the first thing I look at is 
> power, but if I can reject with confidence, then I'm not concerned 
> about the power of the test.

That's addressed in this statement from the Button et al. paper:

|The relationship between study power and the veracity of the resulting 
|finding is under-appreciated. Low statistical power (because of low 
|sample size of studies, small effects or both) negatively affects the 
|likelihood that a nominally statistically significant finding actually 
|reflects a true effect.

And:

|...the lower the power of a study, the lower the probability that an 
|observed effect that passes the required threshold of claiming its 
|discovery (that is, reaching nominal statistical significance, such as 
|p < 0.05) actually reflects a true effect.
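
If I follow, the arithmetic behind that sentence is just Bayes' rule applied
to study outcomes. Here is my attempt at a sketch in Python (the 0.2 prior
probability that the tested effect is real is my own arbitrary illustration,
not a figure from the paper):

    # Sketch: positive predictive value of a "significant" finding,
    # i.e., P(effect is real | p < alpha), given power and a prior.
    def ppv(power, alpha=0.05, prior=0.2):
        true_hits = power * prior           # real effects that get detected
        false_hits = alpha * (1 - prior)    # null effects that cross alpha
        return true_hits / (true_hits + false_hits)

    print(ppv(power=0.80))   # well-powered study
    print(ppv(power=0.20))   # underpowered study: PPV drops

On those numbers, dropping power from .80 to .20 drops the PPV of a
significant result from .80 to .50.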

They elaborate on that point, which I admit I find deeply difficult to grasp.
Here's the best I can do; someone please tell me if I'm on the right track:

(a) A p value is basically "the likelihood of getting results at least as
extreme as the present ones by chance in a universe where the null was true."
(It says nothing about the likelihood that we live in that universe.)

(b) In such a "null is true" universe, you'd get an extreme result more 
frequently by grabbing tiny samples than by grabbing large ones.

(c) By that logic, in the "null is true" universe, it's easier to get an 
extreme result (say, p = .01) with a sample size of ten than with a sample size 
of a hundred.

Is that the idea?  I'm still struggling with it.
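
The only way I can think of to check (b) and (c) for myself would be a
simulation along these lines (assuming a two-sample t test, and counting how
often it crosses p < .05 in a "null is true" universe at each sample size):

    # Sketch: in a universe where the null is true, how often does a
    # two-sample t test reach p < .05 at n = 10 vs. n = 100 per group?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def false_alarm_rate(n_per_group, alpha=0.05, reps=10_000):
        hits = 0
        for _ in range(reps):
            a = rng.normal(0.0, 1.0, n_per_group)  # both groups drawn
            b = rng.normal(0.0, 1.0, n_per_group)  # from the same population
            _, p = stats.ttest_ind(a, b)
            hits += p < alpha
        return hits / reps

    print(false_alarm_rate(10))
    print(false_alarm_rate(100))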

--David Epstein
   da...@neverdave.com
