Allen,
You might refer to this paper.

        Burry-Stock, J. A., Shaw, D. G., Laurie, C., & Chissom, B. S. (1996). Rater agreement indexes for performance assessment. Educational & Psychological Measurement, 56, 251-262.
Peter Chen


        -----Original Message-----
        From:   Allen E Cornelius [SMTP:[EMAIL PROTECTED]]
        Sent:   Wednesday, January 19, 2000 11:22 AM
        To:     [EMAIL PROTECTED]
        Subject:        Interrater reliability

        Stat folks,

             I have an interrater reliability dilemma. We are examining a 3-item scale (each item scored 1 to 5) used to rate compliance behavior of patients. Two separate raters have used the scale to rate patients' behavior, and we now want to calculate the interrater agreement for the scale. Two problems:
             1) The majority of patients are compliant and receive either a 4 or a 5 for each of the three items from both raters. While this is high agreement, values for the ICC are very low due to the limited range of scores. Are there any indexes that would reflect the high agreement of the raters under these conditions? Perhaps something that accounts for the full range of the scale (1 to 5)? (See the first sketch after this list.)
             2) The dataset contains a total of about 100 observations, but there are multiple observations on the same patients at different times, probably about 5 to 6 observations per patient. Does this repeated assessment need to be accounted for in the interrater agreement, or can each observation be treated as independent for the purpose of interrater agreement? (See the second sketch after this list.)
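
        One index that does use the full scale range is r_WG (James, Demaree, & Wolf, 1984), which compares the variance observed between raters to the variance expected if ratings were spread uniformly over all five categories. Below is a minimal sketch in Python; the function name and the two-rater layout are illustrative assumptions, and r_WG is offered as one candidate, not as the specific index recommended in the Burry-Stock et al. paper.

        import numpy as np

        def rwg(ratings, n_categories=5):
            # Variance of a uniform null over the full 1..A scale:
            # (A**2 - 1) / 12, i.e. 2.0 for a 5-point scale.
            sigma_e2 = (n_categories ** 2 - 1) / 12.0
            # Observed variance across the raters' scores for one item.
            s2 = np.var(ratings, ddof=1)
            # 1.0 = perfect agreement; 0.0 = agreement no better than
            # raters spreading responses uniformly over the whole scale.
            return 1.0 - s2 / sigma_e2

        print(rwg([5, 5]))   # 1.0
        print(rwg([4, 5]))   # 0.75

        Because the denominator is anchored to the full 1-5 range rather than the restricted range actually observed, two raters who both give 4s and 5s still come out with high agreement, which is the behavior problem 1 asks for.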
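        On problem 2, the 5 to 6 repeated observations per patient are not independent, so agreement computed over all ~100 observations can be dominated by a few frequently assessed patients. A quick check, sketched below under the assumption of one row per observation with columns 'patient', 'rater1', and 'rater2' (hypothetical names), is to compare pooled agreement with a patient-weighted version:

        import pandas as pd

        def agreement_summary(df, tol=0):
            # Agreement per observation: exact match when tol=0,
            # within +/- tol scale points otherwise.
            agree = (df['rater1'] - df['rater2']).abs() <= tol
            # Pooled estimate: treats every observation as independent.
            pooled = agree.mean()
            # Patient-weighted estimate: average within each patient
            # first, so each patient counts once no matter how often
            # he or she was assessed.
            weighted = agree.groupby(df['patient']).mean().mean()
            return pooled, weighted

        If the two numbers diverge, the repeated assessments are shaping the result, and an approach that treats patients as a random factor (a generalizability-theory or mixed-model ICC) would be the safer route.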

             Any suggestions or references addressing this problem would be appreciated. Thanks.

        Allen Cornelius
