I would think so.  Alternatively, use the Dunn/Bonferroni test, but it will
be a little more conservative.  Or you could declare those comparisons to be
"planned" or "a priori," like the various tests in a typical factorial
ANOVA, and then act as if you don't have to worry about inflation of the
familywise error rate in that case -- which does not really make any sense
to me, but is commonly done.  IMHO, we worry too much about Type I errors
and too little about Type II errors, anyhow.
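
In case it helps to see the two approaches side by side, here is a rough
sketch in Python (the group data are invented, and scipy.stats.dunnett
needs SciPy 1.11 or later):

    # Sketch only: each of several group means is contrasted with a single
    # reference mean, first with ordinary t tests plus a Dunn/Bonferroni
    # adjustment, then with Dunnett's test.  All data below are made up.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    reference = rng.normal(10.0, 3.0, size=20)      # the single reference group
    others = [rng.normal(m, 3.0, size=20) for m in (10.5, 12.0, 9.5)]
    k = len(others)                                  # number of comparisons

    # Dunn/Bonferroni: usual t test, then multiply each p value by the
    # number of comparisons (capped at 1) -- simple but conservative.
    for i, grp in enumerate(others, start=1):
        res = stats.ttest_ind(grp, reference)
        p_adj = min(res.pvalue * k, 1.0)
        print(f"group {i} vs reference: t = {res.statistic:.2f}, "
              f"Bonferroni p = {p_adj:.4f}")

    # Dunnett's test: same many-vs-one layout, but the p values come from
    # the multivariate t distribution, so it is a bit less conservative.
    dunnett_res = stats.dunnett(*others, control=reference)
    print("Dunnett p values:", dunnett_res.pvalue)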

Karl W.
-----Original Message-----
From: John Mercer [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, June 24, 2003 10:46 PM
To: [EMAIL PROTECTED]
Subject: Re: t-Test vs Dunnett's Test


In article <[EMAIL PROTECTED]>,
 [EMAIL PROTECTED] (Karl L. Wuensch) wrote:

> Dunnett's test represents an attempt to control familywise error in 
> the situation where each of several means is contrasted with a single 
> reference mean.  The test statistic is computed the same as the usual 
> t (possibly with pooled error), but the function relating t to p is 
> different, and dependent on the number of groups.

So would this be the proper choice if I have a single experimental group 
and three control groups? I am in this situation, with very large 
standard deviations caused by uncontrollable factors.
