In article <[EMAIL PROTECTED]>,
Lise DeShea <[EMAIL PROTECTED]> wrote:
>Alan McLean wrote:

>> ... In general, I emphasise the use of p values - in
>> many ways it is a more natural way than using critical values to carry
>> out a test. The p value is a direct measure of 'strength of evidence'.

>I disagree.  The p-value may be small when a study has enormous power yet a
>small effect size.  A p-value by itself doesn't say much.

A p-value tells me nothing of importance.  It is in no way
a measure of strength of evidence.  Besides, I need no
evidence to reject the point null hypothesis actually being
tested.  I would not believe that saccharin has ABSOLUTELY
NO effect on bladder cancer.  Even when the claimed null
hypothesis might well be true, such as that the speed of
light in a vacuum is constant, this is not what the
experiment actually tests.
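The power point raised in the quoted paragraph is easy to show
numerically: hold a negligible true effect fixed and let the sample
size grow, and the p-value of a point-null test goes to zero.  A
minimal sketch (my illustration, not part of the original exchange),
using a two-sided z-test with known sigma and treating the sample mean
as exactly the true shift:

```python
import math

def two_sided_p(effect, sigma, n):
    """Two-sided z-test p-value when the sample mean equals `effect`
    exactly (an idealization, for illustration only)."""
    z = abs(effect) / (sigma / math.sqrt(n))
    # Standard normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# A shift of 0.01 sigma is negligible by almost any practical standard,
# yet the p-value collapses as n grows:
for n in (100, 10_000, 1_000_000):
    print(n, two_sided_p(0.01, 1.0, n))
```

The same trivial effect yields p near 0.92 at n = 100 but an
astronomically small p at n = 1,000,000 - the p-value reflects the
sample size as much as the effect.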

Testing is important, but the test should be on whether one
should act as if the effect were negligible.  Even if one
can approximate this by testing a point null, which is often
the case, the significance level which should be used
depends on the sample size, and not negligibly so.
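One way to make "the significance level should depend on the sample
size" concrete: at a fixed alpha of 0.05, the smallest effect a z-test
will call significant shrinks like 1/sqrt(n), so with enormous n the
test rejects for effects far too small to matter.  A BIC-style
critical value of sqrt(log n) is one standard device that tightens
alpha as n grows; the sketch below is my own illustration of the idea,
not Rubin's decision-theoretic calculation:

```python
import math

def min_detectable_effect(crit_z, sigma, n):
    """Smallest mean shift a z-test rejects at critical value crit_z."""
    return crit_z * sigma / math.sqrt(n)

for n in (100, 10_000, 1_000_000):
    fixed = min_detectable_effect(1.96, 1.0, n)  # fixed alpha = 0.05
    bic = min_detectable_effect(math.sqrt(math.log(n)), 1.0, n)  # alpha shrinks with n
    print(n, fixed, bic)
```

Under the fixed-alpha rule the rejection boundary falls from 0.196 at
n = 100 to about 0.002 at n = 1,000,000; the sample-size-dependent
critical value keeps the boundary meaningfully larger at big n.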
-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN 47907-1399
[EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558


=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================
