Hi

I wondered what the difference is between x replications of y observations
each and a single study of x*y observations.  Logically, it seems they should
produce equivalent statistical results.  So I generated 25 samples of 10
observations from a population with mu = 53 and sigma = 10 and tested each
sample against the null hypothesis that mu = 50.  About 20% of the ts were
significant (i.e., low power?).  I used Fisher's method to combine the p
values, and the result was p = .000122, highly significant.  There are other
ways to combine p values that can produce lower aggregate p values than
Fisher's method, but I haven't tried to program them yet.
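
In case it's useful, here is a minimal sketch of the simulation in
Python/scipy (not my actual code, and the p values will vary with the
random seed):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2013)            # arbitrary seed
    k, n, mu0, mu, sigma = 25, 10, 50, 53, 10

    # k "studies" of n observations each, drawn from N(mu, sigma).
    data = rng.normal(mu, sigma, size=(k, n))

    # One-sample t test of each study against the null mu = mu0.
    t, pvals = stats.ttest_1samp(data, mu0, axis=1)
    print("proportion significant:", np.mean(pvals < .05))

    # Fisher's method: -2 * sum(ln p) ~ chi-square on 2k df under H0.
    chi2 = -2 * np.log(pvals).sum()
    print("Fisher combined p:", stats.chi2.sf(chi2, df=2 * k))

(For the record, scipy's stats.combine_pvalues implements Fisher's rule
along with several of the alternatives, e.g. Stouffer's and Tippett's.)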

Then I simply treated the 250 observations as a single sample, which produced
a p value of .000021, much lower than Fisher's (though of unknown relationship
to the other methods of aggregating ps).
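
Continuing the sketch above, the pooled test uses exactly the same 250
observations:

    # Pool the same k*n = 250 observations into one sample, test once.
    t_all, p_pooled = stats.ttest_1samp(data.ravel(), mu0)
    print("pooled-sample p:", p_pooled)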

Qualitatively, then, a collection of low-power studies produces a significant
result, as does a high-power test on exactly the same data.  And logically I'm
not able to see a substantive difference between the two scenarios.  So perhaps
multiple modest replications do provide an alternative to insisting on
sufficient (expensive?) power in individual studies, although the dangers
would be drawing inappropriate or premature conclusions from the early studies,
or failing to carry out and/or publish the replications?
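
One way to firm up that comparison is to repeat the whole exercise many
times and estimate the long-run power of each approach.  A rough Monte
Carlo sketch, continuing the setup above (alpha = .05; the rejection
rates are only estimates):

    # Long-run power of (a) Fisher-combining k small studies versus
    # (b) one pooled study of k*n observations, over many repetitions.
    reps, alpha = 2000, .05
    hit_fisher = hit_pooled = 0
    for _ in range(reps):
        d = rng.normal(mu, sigma, size=(k, n))
        _, ps = stats.ttest_1samp(d, mu0, axis=1)
        hit_fisher += stats.chi2.sf(-2 * np.log(ps).sum(), df=2 * k) < alpha
        _, p_all = stats.ttest_1samp(d.ravel(), mu0)
        hit_pooled += p_all < alpha
    print("power, Fisher-combined:", hit_fisher / reps)
    print("power, pooled study:", hit_pooled / reps)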

Take care
Jim


James M. Clark
Professor & Chair of Psychology
j.cl...@uwinnipeg.ca
Room 4L41A
204-786-9757
204-774-4134 Fax
Dept of Psychology, U of Winnipeg
515 Portage Ave, Winnipeg, MB
R3B 0R4  CANADA


>>> Michael Palij <m...@nyu.edu> 10-Apr-13 7:20 AM >>>
A paper published in Nature Reviews Neuroscience reports
a meta-analysis of neuroscience research studies and, in
keeping with old problems in experimental designs used
by people who perhaps don't know what they're doing (e.g.,
failing to appreciate the role of statistical power), the authors
report that they find (a) low levels of statistical power (around .20),
(b) exaggerated effect sizes, and (c) a lack of reproducibility.
But don't take my word for it; here is a link to the research article:
http://www.nature.com/nrn/journal/vaop/ncurrent/full/nrn3475.html

NOTE: you'll need to use your institution's library to access
the article.

There are popular media articles that focus on this research and
may be useful in classes such as critical thinking and maybe
even neuroscience; see:
http://www.guardian.co.uk/science/sifting-the-evidence/2013/apr/10/unreliable-neuroscience-power-matters
 

Jack Cohen pointed out some of these problems back in his 1962
review and updated them in subsequent publications; see:
http://classes.deonandan.com/hss4303/2010/cohen%201992%20sample%20size.pdf

Of course, this is a problem of researcher education, the politics of
funding research and publishing, and perhaps sociological factors,
such as trying to appear more "scientific" -- focusing on the brain is,
after all, more "scientific" than focusing on just behavior or the mind.

-Mike Palij
New York University
m...@nyu.edu 
