Ben,
The other posters give good advice, but you could also construct
simultaneous confidence intervals for all 7 categories.

If you are trying to establish "equivalence", then you could define
some range that would be appropriate (say, within 3% of the chance
proportion).  The CIs are easy enough to compute, and I think a
Bonferroni approach would work okay here.  There are various methods
of computing the intervals, but I would suggest a Wilson-type
interval.  (For a discussion, see the 2000 article in the Journal of
Statistical Software on setting CIs for the multinomial.)
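
A rough sketch of what I mean, in Python (the counts, the uniform 1/7
chance proportions, and the 3% margin are just made-up placeholders):

# Bonferroni-adjusted Wilson intervals for k = 7 multinomial categories.
from statsmodels.stats.proportion import proportion_confint

counts = [12, 18, 9, 25, 14, 11, 16]   # hypothetical observed counts
chance = [1/7] * 7                     # hypothetical "no effect" proportions
n = sum(counts)
k = len(counts)
alpha = 0.05 / k                       # Bonferroni: ~95% simultaneous coverage

for count, p0 in zip(counts, chance):
    low, upp = proportion_confint(count, n, alpha=alpha, method="wilson")
    # "Equivalence" in the spirit of the suggestion above: does the whole
    # interval fall within +/- 3 percentage points of the chance proportion?
    within = (low >= p0 - 0.03) and (upp <= p0 + 0.03)
    print(f"p0={p0:.3f}  CI=({low:.3f}, {upp:.3f})  within 3%: {within}")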

Warren May

[EMAIL PROTECTED] (Benjamin Kenward) wrote in message 
news:<9vnj9m$s2c$[EMAIL PROTECTED]>...
> Hi folks,
> 
> Let's say you have a repeatable experiment and each time the result can be
> classed into a number of discrete categories (in this real case, seven).
> If a treatment has no effect, the distribution of results across these
> categories expected by chance is known. I know that a good test to see
> whether the distribution of results from a particular treatment differs
> from the distribution expected by chance is a chi-squared test. What I
> want to know is: is it valid to compare just one category? In other
> words, for both the obtained and expected distributions, summarise them
> into two categories, one being the category you are interested in and
> the other containing all the remaining categories. If the chi-square
> comparison of these two categories is significant, can you say that
> your treatment produces significantly more results in that particular
> category, or can you only draw conclusions about the whole distribution?
> 
> Thanks,
> 
> Ben
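
For the question of collapsing to a single category, here is a rough
sketch of the two comparisons described above, again with made-up
counts and a uniform 1/7 chance distribution as a placeholder.  The
collapsed version is a 1-df chi-square, which is essentially a
binomial test of that one category's proportion:

# (1) chi-square goodness-of-fit over all 7 categories
# (2) the same test after collapsing to "category of interest" vs. the rest
from scipy.stats import chisquare, binomtest

observed = [12, 18, 9, 25, 14, 11, 16]   # hypothetical counts
n = sum(observed)
expected = [n / 7] * 7                   # hypothetical chance distribution

print(chisquare(observed, f_exp=expected))    # full 7-category test

obs2 = [observed[3], n - observed[3]]         # category 4 vs. all the rest
exp2 = [expected[3], n - expected[3]]
print(chisquare(obs2, f_exp=exp2))            # 1-df collapsed test

print(binomtest(observed[3], n, p=1/7))       # equivalent binomial view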

