Re: Question about kappa

2000-04-28 Thread David Cross/Psych Dept/TCU

I think I would consider using generalizability theory for this problem.
Shavelson and Webb have a good book out on the subject, published by Sage.
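
As a rough illustration of the kind of one-facet G-study Shavelson and
Webb work through, here is a minimal Python sketch.  It assumes a fully
crossed persons-by-raters layout with made-up data; the design in your
question, with raters varying across events, would call for a nested
model instead, so take this only as a starting point:

import numpy as np

# Illustrative scores: rows = persons (or events), columns = raters
scores = np.array([[3., 4.],
                   [2., 2.],
                   [5., 4.],
                   [1., 2.],
                   [4., 5.]])
n_p, n_r = scores.shape

grand = scores.mean()
person_means = scores.mean(axis=1)
rater_means = scores.mean(axis=0)

# Mean squares from the two-way (persons x raters) ANOVA, one score per cell
ss_p = n_r * np.sum((person_means - grand) ** 2)
ss_r = n_p * np.sum((rater_means - grand) ** 2)
ss_res = np.sum((scores - grand) ** 2) - ss_p - ss_r
ms_p = ss_p / (n_p - 1)
ms_r = ss_r / (n_r - 1)
ms_res = ss_res / ((n_p - 1) * (n_r - 1))

# Estimated variance components for the one-facet crossed design
var_res = ms_res
var_p = max((ms_p - ms_res) / n_r, 0.0)
var_r = max((ms_r - ms_res) / n_p, 0.0)

# Generalizability coefficient for relative decisions based on n_r raters
g_coef = var_p / (var_p + var_res / n_r)
print(var_p, var_r, var_res, g_coef)

The variance-component estimates are the main output of a G-study; the
coefficient plays much the same role that kappa or an ICC would.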

On Thu, 27 Apr 2000, Robert McGrath wrote:

> I am looking for a formula for kappa that applies for very special
> circumstances:
> 
> 1) Two raters rated each event, but the raters varied across events.
> 2) The study involved 100 subjects, each of whom generated approximately 17
> events, so multiple events were generated by the same subject.
> 
> I know Fleiss has developed a formula for kappa that allows for multiple
> sets of raters, but is there a formula that is appropriate for the
> circumstance I have described?  Thanks for your help!
> 
> Bob
> 
> -
> 
> Robert McGrath, Ph.D.
> School of Psychology T110A
> Fairleigh Dickinson University, Teaneck NJ 07666
> voice: 201-692-2445   fax: 201-692-2304
> 
> - Original Message -
> From: "Bob Wheeler" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Thursday, April 27, 2000 3:15 PM
> Subject: Sample size and distributions programs
> 
> 
> > I have uploaded two programs that some may find of
> > use:
> >
> > (1) Tables. A Windows program written quite a few
> > years ago. It covers 42 distributions in detail,
> > including plots and technical documentation.
> > (2) SSize. A sample-size program for Palm
> > devices. It handles linear models for several
> > distributions: normal, binomial, Poisson, and
> > chi-squared, as well as ANOVA, t-tests, logistic
> > regression, etc. There is fairly extensive
> > documentation in PDF format. This is a new
> > program, so there are undoubtedly bugs; I would
> > greatly appreciate hearing about them.
> >
> > They are at  http://www.bobwheeler.com/stat/
> >
> >
> > --
> > Bob Wheeler --- (Reply to: [EMAIL PROTECTED])
> > ECHIP, Inc.
> >
> >
> >
> ===
> > This list is open to everyone.  Occasionally, less thoughtful
> > people send inappropriate messages.  Please DO NOT COMPLAIN TO
> > THE POSTMASTER about these messages because the postmaster has no
> > way of controlling them, and excessive complaints will result in
> > termination of the list.
> >
> > For information about this list, including information about the
> > problem of inappropriate messages and information about how to
> > unsubscribe, please see the web page at
> > http://jse.stat.ncsu.edu/
> >
> ===
> >
> 






Re: Question about kappa

2000-04-28 Thread Rich Ulrich

On 27 Apr 2000 13:24:01 -0700, [EMAIL PROTECTED] (Robert McGrath)
wrote:

> I am looking for a formula for kappa that applies for very special
> circumstances:
> 
> 1) Two raters rated each event, but the raters varied across events.
> 2) The study involved 100 subjects, each of whom generated approximately 17
> events, so multiple events were generated by the same subject.
> 
> I know Fleiss has developed a formula for kappa that allows for multiple
> sets of raters, but is there a formula that is appropriate for the
> circumstance I have described?  Thanks for your help!

I think it was Fleiss who stated that for complex situations, kappa is
usually equal to the intraclass correlation (ICC) to the first two
decimal places.  So all you need to do is this: define the appropriate
ANOVA table, and decide on the appropriate version of the ICC.
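
As a rough sketch of that recipe in Python: when each event is rated by
a different pair of raters, the one-way random-effects version, ICC(1,1),
is the usual choice.  The function name and example data below are just
illustrations, not anything from Fleiss:

import numpy as np

def icc_oneway(ratings):
    """ICC(1,1) from a one-way random-effects ANOVA.
    ratings: events x raters array, each event rated by its own set of raters."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape                      # n events, k ratings per event
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    ms_between = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Illustrative call: two ratings per event, raters differing across events
print(icc_oneway([[1, 1], [0, 1], [2, 2], [1, 0], [2, 1]]))

Note that this still treats the roughly 17 events per subject as
independent; the clustering of events within subjects is a separate
issue.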

My stats-FAQ has a reference on ICC for an unbalanced design.  It
entails approximations, so I hope the design is not *too* unbalanced.

< snip, McGrath sig.>
< snip, Bob Wheeler post; included for no imaginable reason. > 
< snip, quoting of Edstat-L message from the bottom of Bob Wheeler's
post >
< snip, Edstat-L message >

-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html


===
This list is open to everyone.  Occasionally, less thoughtful
people send inappropriate messages.  Please DO NOT COMPLAIN TO
THE POSTMASTER about these messages because the postmaster has no
way of controlling them, and excessive complaints will result in
termination of the list.

For information about this list, including information about the
problem of inappropriate messages and information about how to
unsubscribe, please see the web page at
http://jse.stat.ncsu.edu/
===