On 21/03/2009, at 12:50 AM, Fredrik Karlsson wrote:

Dear list,

Sorry for posting a borderline statistical question on the list, but the SPSS people around me just stare at me blankly when I refer to tests by any term other than ANOVA and post-hoc. I would appreciate any insight into
how this all is possible:

I have a model fitted by aov() stored in "ppdur", which gives this result
when using ANOVA:

        <snip>

As you can see, I no longer get a significant p-value for this interaction
effect. How could that be?

        How?  Well it just could.  That's the way with statistics.  Remember
        you're not talking about things being definitely true, you're talking
        about there being ``significant'' evidence that they're true.  This
        can lead to apparent paradoxes.

        A simple example of such ``paradoxes'' arises in the context of
        multiple comparisons.  In a one-way anova these might show you that
        level A is ``the same as'' level B, and level B is ``the same as''
        level C, but nevertheless level A is *different from* level C.

        This is just saying that we have *evidence* that level A is different
        from level C.  Ergo it follows, as doth the night follow the day,
        that level B must differ either from level A or from level C, or
        both.  There just isn't enough information in the data to decide
        which of the possibilities is true.  A larger data set would be able
        to ``make the decision''.
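        (A small numerical sketch of that paradox -- in Python rather than R,
        purely so it is self-contained, and with made-up group means and a
        known-variance z-test rather than anything from the poster's model:)

```python
from statistics import NormalDist
from math import sqrt

# Illustrative (made-up) setting: three group means on a common scale,
# known sigma, equal group sizes, pairwise two-sided z-tests at 5%.
sigma, n = 1.0, 20
means = {"A": 0.00, "B": 0.45, "C": 0.90}

def pairwise_p(m1: float, m2: float) -> float:
    """Two-sided p-value for a z-test of a difference of two means."""
    z = abs(m1 - m2) / (sigma * sqrt(2 / n))
    return 2 * (1 - NormalDist().cdf(z))

for a, b in [("A", "B"), ("B", "C"), ("A", "C")]:
    p = pairwise_p(means[a], means[b])
    verdict = "differ" if p < 0.05 else "no evidence of a difference"
    print(f"{a} vs {b}: p = {p:.3f} -> {verdict}")
```

        With these numbers A vs B and B vs C are each non-significant while
        A vs C is significant: the data show A and C differ, but cannot say
        on which side of the fence B sits.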

        Your example of multiple comparison results ``contradicting'' the
        anova results is not an unheard-of phenomenon.  I like to illustrate
        what's going on via the diagram shown in the attached pdf file.

        Think of the null hypothesis of ``no difference between the levels''
        being rejected whenever the sample falls outside of a certain enclosure.
        In the diagram the circle represents the enclosure corresponding to the
        anova test; the square represents the enclosure corresponding to the
        multiple comparisons test.  If the sample lands outside both the circle
        and the square, then both tests reject the null.  But it can happen,
        rarely but not too rarely, that the sample lands inside one of the bits
        of the circle that stick out beyond the square.  In this case the
        multiple comparisons test will say that there are differences, but the
        anova test will say there are none.  Alternatively, the sample could
        land in one of the corners of the square that stick out beyond the
        circle.  In this case the anova test will say that there are
        differences, but the multiple comparisons test will find none.
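        (You can watch both kinds of disagreement happen in a toy Monte-Carlo
        sketch -- again Python, not the actual aov()/post-hoc machinery, and
        all the numbers are assumed illustrative values: three group means on
        a standardised N(0, 1) scale, an omnibus chi-squared(2) statistic with
        5% cutoff 5.991 standing in for the anova F, and the largest gap
        between means compared against 3.31, an assumed large-df 5% point of
        the studentized range for 3 groups, standing in for Tukey:)

```python
import random

# Toy null simulation: three standardised group means, each N(0, 1).
# "Anova-like" omnibus test: Q = sum of squared deviations from the grand
# mean, rejected when Q > 5.991 (chi-squared(2) 5% point).
# "Tukey-like" pairwise test: rejected when the largest gap between two
# means exceeds 3.31 (assumed studentized-range 5% point, k = 3, large df).
random.seed(1)
Q_CRIT, RANGE_CRIT = 5.991, 3.31

omnibus_only = pairwise_only = 0
for _ in range(50_000):
    m = [random.gauss(0.0, 1.0) for _ in range(3)]
    mbar = sum(m) / 3
    Q = sum((x - mbar) ** 2 for x in m)   # omnibus statistic
    gap = max(m) - min(m)                 # largest pairwise difference
    if Q > Q_CRIT and gap <= RANGE_CRIT:
        omnibus_only += 1                 # circle-bit beyond the square
    if gap > RANGE_CRIT and Q <= Q_CRIT:
        pairwise_only += 1                # square-corner beyond the circle

print("omnibus rejects, pairwise doesn't:", omnibus_only)
print("pairwise rejects, omnibus doesn't:", pairwise_only)
```

        Both counters come out nonzero: each test's rejection region pokes
        out beyond the other's, which is all the diagram is claiming.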

        That's just All Part of the Rich Tapestry of Life when you do
        statistical hypothesis testing.

        BTW don't take the circle and the square too literally.  They are just
        illustrative analogies; don't try to interpret them in terms of what's
        really going on in the actual hypothesis testing mechanism.

        HTH.

                cheers,

                        Rolf Turner



Attachment: diagram.pdf
Description: Adobe PDF document

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
