>
>I think that reading the scientific literature would disabuse one
>about the limited application of statistical significance.  My
>students tell me that learning about statistical inference
>greatly increases their capacity to read primary
>literature.  Perhaps it is different in your discipline.

but, you assume that this is a good thing ... i don't necessarily share 
that view


it is not different in my discipline ... and, therefore, the same mistake 
is made here as in most others

most empirical literature depends highly on statistical significance ... in 
fact, work does not get INTO the literature unless one shows one or more 
cases of "statistical significance". however, most 'honest' statisticians 
will admit that the importance of statistical significance is HIGHLY 
OVERRATED ... and has very limited applications ... if one disputes this, 
then follow the wave that has been mushrooming for years (actually decades) 
to include confidence intervals where possible and/or effect sizes ... 
since rejecting the typical null hypothesis (at the heart of significance 
testing) leaves one in a DEAD-END alley.

so, if you are saying that your students are saying that they are in a much 
better position to understand the literature that is dominated by 
hypothesis testing ... F tests, z tests, t tests, and on and on ... that is 
great. but, of course ... their increased confidence is in something that 
is far FAR less important than how we teach it or how we emphasize it when 
we disseminate it

when we have had extensive discussions about what the meaning of a p value 
is ... associated with the typical significance test ... i think it is fair 
to summarize (sort of by vote, the majority opinion) that the smaller the p 
(assuming the study is done well), the less plausible is the null hypothesis

personally, i like this view BUT, what does it really mean then? since in 
the typical case, we set up things hoping like the dickens to reject the 
null ... AND when we do, what can we say? let's assume that the null 
hypothesis is that the mean SAT M score in california is 500 ... and, in a 
decent study (moore and mccabe use this one), we reject the null. conclusion???

we don't think the mean SAT M score in california is 500 ... and we keep 
pressing because surely there has to be more than this? again ... we say 
... we don't think the mean SAT M score in california is 500 ... and, with 
a p value of .003 ... we are pretty darn sure of that.
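to make the SAT M example concrete, here is a minimal sketch with made-up 
summary numbers ... n, the sample mean, and the sd below are hypothetical, 
chosen only so the p value lands near .003; this is a large-sample z test, 
not moore and mccabe's actual data:

```python
from statistics import NormalDist

# hypothetical summary statistics (NOT real SAT data)
n = 100          # sample size
xbar = 485.0     # sample mean SAT M score
s = 50.0         # sample standard deviation
mu0 = 500.0      # null-hypothesis mean

se = s / n ** 0.5                  # standard error of the mean
z = (xbar - mu0) / se              # test statistic (z, since n is large)
p = 2 * NormalDist().cdf(-abs(z))  # two-sided p value

print(f"z = {z:.2f}, p = {p:.4f}")
# a small p lets us reject mu = 500 ... but note that the output says
# nothing about what the mean actually IS, only what it (probably) isn't
```

running the test rejects the null with room to spare ... and that is the 
whole of what the p value delivers.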

but, the real question here is NOT what it isn't ... but WHAT it might be 
... and the desirable result of rejecting the null helps you NOT in any way 
... to answer that question ... which is the REAL question of interest

this is true in most all of significance testing ... doing what we hope ... 
ie, rejecting the null, leaves you hanging

most will be quick to point out that, well, you could build a CI to go 
along with that and/or ... present an effect size ...

sure, but what this means is that without this additional information, the 
hypothesis testing exercise has yielded essentially no useful information
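to show what that additional information looks like, here is a sketch using 
the same kind of made-up summary numbers as the SAT M example (n, sample 
mean, and sd are hypothetical) ... the CI and the standardized effect size 
answer the "what might it be" question that the bare p value cannot:

```python
from statistics import NormalDist

# hypothetical summary statistics (NOT real SAT data)
n, xbar, s, mu0 = 100, 485.0, 50.0, 500.0

se = s / n ** 0.5
zcrit = NormalDist().inv_cdf(0.975)  # about 1.96 for a 95% interval

# 95% confidence interval: a range of plausible values for the true mean
ci = (xbar - zcrit * se, xbar + zcrit * se)

# standardized effect size (Cohen's d): how large the departure from the
# null value is, in standard-deviation units
d = (xbar - mu0) / s

print(f"95% CI: ({ci[0]:.1f}, {ci[1]:.1f}), d = {d:.2f}")
# the CI excludes 500 (consistent with rejecting the null) AND tells us
# where the mean plausibly lies; d tells us how big the difference is
```

note how the interval does double duty ... it reproduces the reject/retain 
decision (500 falls outside it) while actually estimating the quantity of 
interest.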

again ... if we help students to learn all about the logic of hypothesis 
testing, and the right way to go about it ... AS a way to make sure they 
read the literature correctly ... AND/OR be able to apply the correct 
methods in their own research ... all of this is great ...

BUT, it does not change the fact that this over-reliance on and dominance 
of ... significance testing in the literature is misplaced effort ... and, 
i submit, a poor practice for students to emulate





=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================
