Hi Jim,

        I differ from Ryan in that I am generally more concerned about
Type II errors than Type I errors.  Accordingly, I think we have gone
way overboard in our attempts to cap familywise error, at great cost to
power, and would be better served by designing our research with a
small number of focused contrasts in mind and simply not worrying about
familywise error.  I have no fear of burning in hell for having made
one or more Type I errors.  :-)
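
        To put a rough number on that cost, here is a back-of-envelope
Python sketch.  All the numbers (d = 0.5, n = 20 per group, the family
sizes) are invented for illustration; it just compares the power of one
contrast tested at alpha = .05 with the same contrast tested at a
Bonferroni-corrected alpha:

from statsmodels.stats.power import TTestIndPower

# hypothetical two-group contrast: d = 0.5, n = 20 per group
analysis = TTestIndPower()
for k in (1, 7, 28):  # size of the "family" of contrasts
    alpha = 0.05 / k  # Bonferroni-corrected per-test alpha
    power = analysis.power(effect_size=0.5, nobs1=20, alpha=alpha)
    print(f"k = {k:2d}: per-test alpha = {alpha:.4f}, power = {power:.2f}")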

        I agree with your Bayesian reasoning, but it is slippery.  How
confident are you a priori that this contrast is big and that one is
trivial/zero?  What really qualifies as a "planned comparison"?  Suppose
I run a three-way ANOVA: the omnibus analysis involves seven tests of
effects (three main effects, three two-way interactions, and the
three-way interaction).  I treat these as planned comparisons, but did
I really expect all seven effects to be nontrivial, or can I even say
that each of the seven addressed a question I had posed a priori?
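
        For the curious, here is what those seven tests look like in a
toy statsmodels run (fake 2x2x2 data with hypothetical factors A, B,
and C; only A is given a real effect):

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
# 2x2x2 design, 10 observations per cell
rows = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)
        for _ in range(10)]
df = pd.DataFrame(rows, columns=["A", "B", "C"])
df["y"] = rng.normal(size=len(df)) + 0.5 * df["A"]

model = ols("y ~ C(A) * C(B) * C(C)", data=df).fit()
# seven effect rows (3 main, 3 two-way, 1 three-way) plus residual
print(sm.stats.anova_lm(model, typ=2))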

        Requiring the omnibus ANOVA to be significant before testing
contrasts can lead to faulty inference.  Suppose your research involved
three or four control groups and one experimental group.  You expect
the control groups not to differ from one another, as each controls for
a factor that you believe is not relevant.  If you are right, the
omnibus ANOVA might well be nonsignificant even though contrasts
between the treatment group and each control group would be
significant.
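
        A quick simulation makes the point.  In this Python sketch the
three controls, n = 10 per group, and the 0.9 SD treatment effect are
all arbitrary choices; the focused contrast pools the controls against
the treatment group:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, masked = 10, 5000, 0
for _ in range(reps):
    controls = rng.normal(0.0, 1.0, (3, n))  # three equivalent controls
    treatment = rng.normal(0.9, 1.0, n)      # modest true effect
    _, p_omni = stats.f_oneway(*controls, treatment)
    _, p_con = stats.ttest_ind(controls.ravel(), treatment)
    if p_omni > 0.05 and p_con < 0.05:
        masked += 1  # omnibus gate would have blocked a real finding
print(f"contrast significant but omnibus not: {masked / reps:.1%} of runs")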

-----Original Message-----
From: Jim Clark [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 04, 2007 3:54 AM
To: Teaching in the Psychological Sciences (TIPS)
Subject: [tips] RE: ANOVA, HSD, and LSD

Hi

Thanks to Karl for making this available ... now for a somewhat
alternative perspective from a non-statistician.

1.  I start with the following quote from Ryan, which concerns the
distinction between a priori and a posteriori comparisons.  He appears
to believe the distinction is a false one.

"There is no justification whatever for the notion that planning allows
us to use uncorrected t tests. This notion is perpetrated in a number of
textbooks but never given any logical justification. It is simply stated
that it is "self evident." It is a dangerous notion, since those
who want significance at all costs can always claim they planned their
tests in advance. Whether they did or not is actually irrelevant."

But is the distinction really without a rationale?  Using a quasi-
(pseudo?) Bayesian analogy, would not a planned comparison based on
previous findings or well-founded theory be akin to setting the prior
probability, and would not that mean that you need less evidence from
the present study to conclude in favor of Ha?  That is, a more liberal
test is justified.  Or, to use a perceptual analogy, if you have reason
to expect the presence of some object, you require less bottom-up
perceptual input to detect it.
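
One way to make the analogy concrete: in posterior odds = prior odds x
Bayes factor, the stronger the prior, the less evidence the data must
supply to reach the same conclusion.  A trivial Python illustration
(the .95 target posterior is an arbitrary choice):

# Bayes factor needed to push P(Ha) from a prior to a target posterior
def bf_needed(prior_p, target_p=0.95):
    prior_odds = prior_p / (1 - prior_p)
    target_odds = target_p / (1 - target_p)
    return target_odds / prior_odds

for prior in (0.2, 0.5, 0.8):  # hypothetical prior confidence in Ha
    print(f"P(Ha) = {prior}: Bayes factor needed = {bf_needed(prior):.1f}")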

2.  Continuing along this line of thinking, the decision about which
multiple comparison procedure to use is essentially about how strong
the evidence needs to be before you will conclude that a difference
(probably) exists.  In practice, this is a far less precise judgment
than the idealized concerns of mathematical statisticians, simulations,
and the like might suggest.  I just do not see that our judgment about
how conservative to be is so precise that we are likely to be ill
served by requiring the omnibus F to be significant, even though it is
not strictly required, assuming of course that we want to be
conservative (e.g., when we really have no prior rationale for a more
sensitive, liberal test, or when the cost of a Type I error is high).
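
For concreteness, that "protected" strategy is easy to write down.  A
simplified Python sketch (it uses plain two-sample t tests rather than
the pooled-error LSD statistic, so it only approximates Fisher's
procedure):

from itertools import combinations
from scipy import stats

def protected_tests(groups, alpha=0.05):
    """Run uncorrected pairwise t tests only if the omnibus F passes."""
    F, p = stats.f_oneway(*groups)
    if p >= alpha:
        return F, p, []  # omnibus gate failed; stop here
    results = [(i, j, *stats.ttest_ind(groups[i], groups[j]))
               for i, j in combinations(range(len(groups)), 2)]
    return F, p, results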

Take care
Jim

James M. Clark
Professor of Psychology
204-786-9757
204-774-4134 Fax
[EMAIL PROTECTED]

