In article <[EMAIL PROTECTED]>,
Alan McLean <[EMAIL PROTECTED]> wrote:


>Herman Rubin wrote:

>> In article <[EMAIL PROTECTED]>,
>> Alan McLean <[EMAIL PROTECTED]> wrote:

>> >"Robert J. MacG. Dawson" wrote:

>> >> > Alan McLean wrote:
>> >> > The p value is a direct measure of 'strength of evidence'.

>> >> and Lise DeShea responded:

                         ...................

>> >There is certainly no contradiction. A small p value indicates that the
>> >effect (whatever its size!) is (probably) valid. (Use the word 'genuine'
>> >if you prefer.)

>> The effect is (probably) valid in any case.  What is
>> actually being tested, which is often not what is said
>> to be tested, is almost certainly false.

>> >The effect may be too small to be of much use, but that is a very
>> >different question.

>> But this should be the only question.  What action should
>> be taken?

>It cannot possibly be the only question.

>One of the roles of statistics, and it is performed particularly by
>hypothesis testing, is to be conservative - to stop people from taking
>foolish actions by jumping to conclusions.

It should be to help people decide what action to take.  
The attitude above is opposed to this.

>If you observe a large
>effect, you shout whoopee! and jump in - invest your life savings, write
>your world shattering paper, or whatever. Then your friendly
>neighbourhood statistician does a test on your data and points out that
>this large effect appears to be mostly a matter of chance - it was not
>'significant'. He does say that it *might* be genuine! But you are more
>likely to get egg on your face.......

If it is a large effect, it is a large effect.  What 
you should do is to act on your posterior distribution.
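
To make that concrete, here is a minimal sketch in Python
(my own illustration, with made-up numbers, not anything from
the thread) of acting on a posterior: a normal-normal model
where the decision is made by comparing the expected gain to
a cost, not by a p value.

import math

# Assumed setup: observed mean effect xbar from n observations, known
# sigma, and a normal prior on the true effect theta (all hypothetical).
xbar, n, sigma = 2.0, 25, 5.0
prior_mean, prior_sd = 0.0, 10.0

# Conjugate normal-normal update: the posterior of theta is normal.
precision = 1 / prior_sd**2 + n / sigma**2
post_var = 1 / precision
post_mean = post_var * (prior_mean / prior_sd**2 + n * xbar / sigma**2)

# Decide by consequences: act if the expected gain (proportional to
# theta) exceeds a fixed cost c of acting.  Both numbers are made up.
c = 0.5
action = "act" if post_mean > c else "do not act"
print(f"posterior N({post_mean:.2f}, {math.sqrt(post_var):.2f}^2): {action}")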

>Of course the size of the (apparent) effect and its significance are
>related. But both are important.

The relation is usually present within a given situation,
but not across situations: with a large enough sample, a
trivial effect is 'significant', and with a small sample,
a large effect may not be.
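
A quick illustration in Python (my numbers, entirely
hypothetical): fix the observed effect and vary only the
sample size, and the p value moves from 'not significant'
to overwhelmingly 'significant'.

import math

def two_sided_p(effect, sigma, n):
    # z test of H0: mu = 0 against the observed mean `effect`
    z = effect / (sigma / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))

for n in (10, 100, 10000):
    print(n, round(two_sided_p(effect=0.2, sigma=1.0, n=n), 4))
# n = 10    -> p about 0.53 : same effect, "not significant"
# n = 100   -> p about 0.046
# n = 10000 -> p about 0.0  : same effect, "highly significant"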

>On a different issue, the frequent claim that 'the null is always false'
>is a meaningless statement - at best, irrelevant. A significance test
>compares two *models*, providing evidence as to which of them is
>(probably) the better choice. It does not pretend to say anything about
>'true' values of parameters, and does not deal with exactitude.
>Unfortunately it is usually taught in those terms - leading to such
>ideas as 'the null is always false'!

It is often the case that the models are nested.  In
that case, one model is equivalent to fixing a parameter
of the other at a specific value.
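
For example (a sketch with hypothetical data, not from the
thread): the normal model with the mean fixed at zero is
nested in the model with a free mean, and a likelihood-ratio
statistic compares the two fits.

import math

data = [0.3, -0.1, 0.8, 0.4, 0.2]   # hypothetical sample, sigma = 1 known
n = len(data)
xbar = sum(data) / n

def loglik(mu):
    # log likelihood of a N(mu, 1) model for the sample
    return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (x - mu)**2
               for x in data)

# The null model (mu fixed at 0) is the mu = 0 special case of the
# free-mean model; twice the log likelihood ratio is the usual test
# statistic, approximately chi-square(1) under the null.
lr = 2 * (loglik(xbar) - loglik(0.0))
print(f"LR statistic = {lr:.3f}")    # equals n * xbar**2 here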

The decision as to which model to use cannot depend on
fit alone, as the saturated model, which says only that
the results have some joint probability distribution,
gives a perfect fit.  The essence of decision theory is
that one needs to consider all the consequences of an
action; looking at a p value considers only one.  The
same is true of such things as confidence intervals:
interval estimation is reasonable, but confidence
intervals are not.
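
A toy version of that comparison in Python (every number
here is made up): score each action by its posterior
expected loss over all values of the effect, rather than
by a single tail probability.

# Hypothetical discretized posterior for the effect theta: (value, prob).
posterior = [(0.0, 0.2), (0.5, 0.3), (1.0, 0.3), (2.0, 0.2)]

# Loss of each action as a function of theta -- made-up consequences.
losses = {
    "act":        lambda theta: 1.0 - theta,  # pay 1, gain theta
    "do not act": lambda theta: 0.0,          # status quo costs nothing
}

def expected_loss(loss):
    return sum(p * loss(theta) for theta, p in posterior)

for action, loss in losses.items():
    print(f"{action}: expected loss {expected_loss(loss):+.3f}")
print("choose:", min(losses, key=lambda a: expected_loss(losses[a])))
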
-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN 47907-1399
[EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558

