As I recall, kappa is a measure of agreement. It is best used for
dichotomous outcomes, such as raters judging "mastery/non-mastery"
or "pass/fail". I am not sure it is appropriate for your data. If
the data are continuous-scaled and more than two raters are
involved, a repeated measures approach (for example, an intraclass
correlation) may be more appropriate.
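
For what it is worth, a weighted kappa for two raters on the 0-3
ordinal scale you describe is straightforward to compute in
software. A minimal sketch using scikit-learn's cohen_kappa_score
(the rater scores below are made up for illustration; scikit-learn
is assumed available and is not part of this thread):

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical data: two raters scoring the same ten cases on
    # the 0-3 ordinal scale from the post.
    rater_a = [0, 1, 2, 3, 1, 2, 0, 3, 2, 1]
    rater_b = [0, 1, 3, 3, 1, 2, 1, 3, 2, 0]

    # weights="linear" discounts disagreements in proportion to
    # their distance on the scale; weights="quadratic" penalizes
    # large gaps more heavily.
    print(cohen_kappa_score(rater_a, rater_b, weights="linear"))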
From your post it is not clear that kappa is the right statistic.
Usually one uses kappa when each rater/clinician rates a sample of
patients or cases. But you merely describe a questionnaire that
each clinician completes. Assuming each clinician completes the
questionnaire only one time, there are no repeated ratings with
which to assess intra-rater reliability.
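
To make that design point concrete: kappa compares observed
agreement on a common set of cases with the agreement expected by
chance from each rater's marginal distribution, so it needs paired
ratings of the same cases. A minimal sketch of weighted kappa
(a hypothetical helper written for this reply, not a standard
library routine; numpy assumed):

    import numpy as np

    def weighted_kappa(r1, r2, n_levels=4, scheme="linear"):
        # Weighted Cohen's kappa for two raters who score the SAME
        # cases on an ordinal scale 0..n_levels-1. Without paired
        # ratings of common cases there is nothing to feed it.
        r1, r2 = np.asarray(r1), np.asarray(r2)
        obs = np.zeros((n_levels, n_levels))
        for a, b in zip(r1, r2):
            obs[a, b] += 1
        obs /= len(r1)                    # observed proportions
        exp = np.outer(obs.sum(axis=1),   # chance-expected table
                       obs.sum(axis=0))   # from the marginals
        i, j = np.indices((n_levels, n_levels))
        # Disagreement weights: linear |i-j| or quadratic (i-j)^2.
        w = np.abs(i - j) if scheme == "linear" else (i - j) ** 2
        return 1.0 - (w * obs).sum() / (w * exp).sum()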
On 3 Mar 2000 11:36:25 -0800, [EMAIL PROTECTED] (Marie Elaine Rump)
wrote:
...
>
> We are in the middle of a study that compares 16 clinicians'
> answers to a questionnaire (answers selected from 0, 1, 2, 3) and
> would like to use weighted kappa to analyse our intra- and
> inter-rater results.
We are a group of undergraduate physio students and were
wondering if you could help us.
We are in the middle of a study that compares 16 clinicians'
answers to a questionnaire (answers selected from 0, 1, 2, 3) and
would like to use weighted kappa to analyse our intra- and
inter-rater results.