The other way to increase the effect size would be to improve experimental control 
(i.e., tighten the procedure so that error variance shrinks).
That would be consistent with this being basically a pilot study.
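
A minimal sketch of that point, with hypothetical numbers (not figures from the 
study under discussion): reducing the error SD through tighter procedure raises 
the standardized effect size d = (M1 - M2)/SD, and power rises with it even 
when n and alpha are held fixed.

# Approximate power of a two-sided, two-sample z-test. The mean difference,
# SDs, and n per group below are made up purely for illustration.
from scipy.stats import norm

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test with equal n."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5      # noncentrality parameter
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

raw_difference = 0.5                         # hypothetical mean difference
for sd in (2.0, 1.5, 1.0):                   # progressively tighter control
    d = raw_difference / sd
    print(f"SD={sd:.1f}  d={d:.2f}  power(n=30/group)={two_sample_power(d, 30):.2f}")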

On Dec 12, 2014, at 8:02 AM, Christopher Green <chri...@yorku.ca> wrote:

>  Wow. In an era where repeated failures to replicate “sensational” 
> psychological effects are all over the news, it is astonishing that any editor 
> would have accepted so sloppy an argument (whether they can cite articles 
> from the 1960s and ’70s that used it as well or not). The solution to high 
> Type II error rates is decidedly not to raise Type I error rates. The 
> solution is to raise power by raising the sample size. Although it is true 
> that the conventional alpha level of .05 is entirely arbitrary, in an era 
> where thousands of psychological studies are published every year (rather 
> than the mere dozens that were published annually back when Fisher first 
> proposed it), the conventional Type I error rate should probably be 
> tightened, not loosened (and the required sample sizes would have to go up 
> for all but the largest effects). The article should have been rejected until 
> the authors could demonstrate the same effect with an increased sample size. 
> 
> As the old saying goes, extraordinary claims require extraordinary evidence. 
> 
> Chris
> …..
> Christopher D Green
> Department of Psychology
> York University
> Toronto, ON M3J 1P3
> Canada
> 
> chri...@yorku.ca
> http://www.yorku.ca/christo
> ………………………………...
> 
> On Dec 11, 2014, at 2:18 PM, Ken Steele <steel...@appstate.edu> wrote:
> 
>> 
>> A colleague sent me a link to an article -
>> 
>> https://www.insidehighered.com/news/2014/12/10/study-finds-gender-perception-affects-evaluations
>> 
>> I took a look at the original article and found this curious footnote.
>> 
>> Quoting footnote 4 from the study:
>> 
>> "While we acknowledge that a significance level of .05 is conventional in 
>> social science and higher education research, we side with Skipper, 
>> Guenther, and Nass (1967), Labovitz (1968), and Lai (1973) in pointing out 
>> the arbitrary nature of conventional significance levels. Considering our 
>> study design, we have used a significance level of .10 for some tests where: 
>> 1) the results support the hypothesis and we are consequently more willing 
>> to reject the null hypothesis of no difference; 2) our hypothesis is 
>> strongly supported theoretically and by empirical results in other studies 
>> that use lower significance levels; 3) our small n may be obscuring large 
>> differences; and 4) the gravity of an increased risk of Type I error is 
>> diminished in light of the benefit of decreasing the risk of a Type II error 
>> (Labovitz, 1968; Lai, 1973).”
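
For what it's worth, a rough sketch of the arithmetic behind Chris's point, 
using statsmodels' power routines for an independent-samples t-test. The effect 
size (d = 0.4), the n of 30 per group, and the 80% power target are assumptions 
chosen for illustration, not figures from the study: moving alpha from .05 to 
.10 buys only a modest gain in power while doubling the nominal Type I error 
rate, whereas raising n gets power where it needs to be.

# Power arithmetic for an independent-samples t-test; d, n, and the 80%
# power target are hypothetical, chosen only to illustrate the trade-off.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
d = 0.4                                     # assumed standardized effect size

# Power of a small study (30 per group) at the two alpha levels:
for alpha in (0.05, 0.10):
    pwr = analysis.power(effect_size=d, nobs1=30, alpha=alpha, ratio=1.0)
    print(f"alpha={alpha:.2f}  n=30/group  power={pwr:.2f}")

# Per-group n required to reach 80% power instead:
for alpha in (0.05, 0.10):
    n = analysis.solve_power(effect_size=d, alpha=alpha, power=0.80, ratio=1.0)
    print(f"alpha={alpha:.2f}  n per group for 80% power = {n:.0f}")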


Paul Brandon
Emeritus Professor of Psychology
Minnesota State University, Mankato
pkbra...@hickorytech.net



