When calculating agreement between test and retest, Cohen's kappa adjusts for chance agreement. Is it reasonable to assume independence of responses, or is there an autocorrelation not accounted for by kappa that would affect the level of agreement, say where you had the same 20 items in a test and retest?
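To make the setup concrete, here is a minimal Python sketch of the standard kappa calculation for 20 items rated twice (the responses are hypothetical). Note that the chance-agreement term p_e is derived by treating the two ratings as independent draws from their marginals, which is exactly the assumption I'm asking about:

```python
# Cohen's kappa for 20 items rated at test and retest (hypothetical data).
# kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
# p_e is the agreement expected by chance from the two marginals.
from collections import Counter

test   = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1]
retest = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1]

n = len(test)
p_o = sum(a == b for a, b in zip(test, retest)) / n   # observed agreement

# Chance agreement: assumes the two ratings are independent draws from
# their respective marginal distributions.
m1, m2 = Counter(test), Counter(retest)
p_e = sum(m1[k] * m2[k] for k in m1) / n**2

kappa = (p_o - p_e) / (1 - p_e)
print(f"p_o = {p_o:.2f}, p_e = {p_e:.2f}, kappa = {kappa:.2f}")
```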

--

Douglas Rugh (D.Rugh)

