On Tue, 13 Jun 2000, Kirsten Rewey wrote:

> Because I've conducted *32* Chi-square tests I'm concerned about
> alpha error.  Can anyone help me identify a rule-of-thumb to
> minimize the alpha error?  I have a couple of ideas but I'd like
> your input first.

I'm no statistician, but I feel like looking foolish in public today,
so I'll venture an answer--two different answers.

You're probably familiar with the Bonferroni correction, wherein you
divide your alpha (.05) by the number of tests you're conducting (32),
then accept as significant only the p values that fall below
(.05 / 32) = .00156.  This is a ferociously conservative procedure,
and several less punishing alternatives have been developed, including
Holm's step-down procedure and the Benjamini-Hochberg false discovery
rate.  Many of these are available in the SAS procedure PROC MULTTEST,
which will accept your list of 32 p values as input.
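
Something like this might do the trick (a rough, untested sketch; I'm
assuming your p values sit in a SAS data set, and as I recall MULTTEST
wants that variable named raw_p when you feed it p values directly):

   * dummy p values for illustration -- substitute your own 32 ;
   data pvals;
      input raw_p @@;
      datalines;
   .021 .004 .300 .048
   ;

   * Bonferroni, Holm step-down, and false-discovery-rate
     adjustments of the raw p values ;
   proc multtest inpvalues=pvals bonferroni holm fdr;
   run;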

But then there's the question of whether chi-squares are really what
you wanted to use.  If this is what you're doing:

> A student and I are comparing student and faculty definitions of
> cheating; a number of you were kind enough to complete a survey for
> me.  I used Chi-square tests to determine if students and faculty
> differed in their assessment of 32 individual behaviors.

...then perhaps the test you want is an "agreement statistic" like
Cohen's kappa, which measures the degree of "agreement
beyond chance" between two raters or instruments.  With agreement
statistics, the p value is usually of less interest than the actual
amount of agreement.  I've seen kappa values interpreted as follows:

<0        = below-chance agreement [this is rare, but would be interesting
            if you found it in your study]
0.0       = chance agreement only
>0-0.19   = poor agreement
0.20-0.39 = fair agreement
0.40-0.59 = moderate agreement
0.60-0.79 = substantial agreement
0.80-1.00 = almost perfect agreement

  source: Landis JR and Koch GG (1977).  The measurement of observer
  agreement for categorical data.  Biometrics 33: 159-174.
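
For what it's worth, kappa itself is just (observed agreement -
chance-expected agreement) / (1 - chance-expected agreement).  To take
made-up numbers: if two raters agreed on 80 of 100 items, and the
marginal frequencies implied 50% agreement by chance alone, kappa
would be (.80 - .50) / (1 - .50) = .60 -- just over the line into
"substantial" territory.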

If you have SAS, you can generate agreement statistics with PROC FREQ;
just add the AGREE option to the TABLES statement (and KAPPA in the
TEST statement, if you want a significance test for kappa).
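
A minimal sketch, assuming a data set SURVEY with one row per item
rated and two (hypothetical) variables RATER1 and RATER2 holding each
rater's classification:

   proc freq data=survey;
      /* square table of the two classifications; the AGREE option
         requests kappa along with other agreement statistics */
      tables rater1*rater2 / agree;
      test kappa;   /* asymptotic test for the simple kappa */
   run;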

But after calculating 32 separate kappas, what should you do?  Just
discuss their range?  I'm not sure.  I guess TIPSters wiser than
myself will have to take over from here.

--David Epstein
  [EMAIL PROTECTED]
