I think that the system you originally described is a very complex one ... one that involves many facets in the decision-making process. I will let others opine about whether it is overly complex or not. But what I would say is this: the more facets and/or steps in the process that applicants and reviewer/decision makers go through ... the HARDER it is to bring statistical evidence to bear on whether something biased (UNfair?) has been done.
In addition, we have to try to separate the concept of "bias" from that of "fairness" ... since they are not necessarily equivalent. Bias IS a statistical phenomenon. Say we find, for two subgroups who take a test ... subgroup A and subgroup B ... that A and B have the SAME average ability, but on some particular test ITEM, B has a much lower p value (proportion answering the item correctly) than A. We say the item is biased. But is it unfair? Maybe yes ... maybe no. Fairness seems to involve a value judgement that is not necessarily present in the concept of bias.

All I can suggest at this point (and I have not heard any other person respond to your inquiry) is to list out, IN order, each step of the overall process, and carefully examine, from a LOGICAL point of view first, what could go awry at each step ... what could make the final decision down the line something that is not desirable. Also ask, at each of these steps: would reasoned judgement say that the process at this step is a fair one or not? That is, is the process we use at (say) step 1 clearly flawed if we follow it to its logical end point?

At 02:29 PM 5/1/02 +0000, Stan Maxwell wrote:
>Dennis,
>Fairness has to be tested using a purely statistical metric. I have two
>numeric measures from the outcome of the employment interview process.
>The first is the yes-or-no decision to consider the applicant. For those
>applicants where the answer was yes, I have an integer score on a fixed
>interval. An applicant who scores in the upper 50th percentile can be
>hired. I have five groups of applicants. I have five groups of
>reviewers. Each combination has between 5 and 140 applications per
>wave. Applications are not randomly assigned to reviewers. A fair
>review assumes the joint distribution of yes/no and score is the same
>for each applicant group within reviewer group. I need distribution-free
>statistics and tests that control for both alpha and beta risk.
>The reviews come in waves. Applicants that fail in a wave can reapply.
>I have data on 18 waves.
>Stan
>
>>From: Dennis Roberts <[EMAIL PROTECTED]>
>>To: "Stan Maxwell" <[EMAIL PROTECTED]>, [EMAIL PROTECTED]
>>Subject: Re: I have a problem evaluating a Grading System.
>>Date: Tue, 30 Apr 2002 15:37:33 -0400
>>
>>before we can really attempt an answer to this problem ... the question
>>has to be answered ... what do YOU think, or what are YOU considering,
>>to be UNfair?
>>
>>without some rather clear operational definition of that term ... then i
>>don't think there is any good answer to your question ...
>>
>>At 07:31 PM 4/30/02 +0000, Stan Maxwell wrote:
>>>I have a problem evaluating a Grading System.
>>
>>Dennis Roberts, 208 Cedar Bldg., University Park PA 16802
>><Emailto: [EMAIL PROTECTED]>
>>WWW: http://roberts.ed.psu.edu/users/droberts/drober~1.htm
>>AC 8148632401

Dennis Roberts, 208 Cedar Bldg., University Park PA 16802
<Emailto: [EMAIL PROTECTED]>
WWW: http://roberts.ed.psu.edu/users/droberts/drober~1.htm
AC 8148632401

=================================================================
Instructions for joining and leaving this list, remarks about the
problem of INAPPROPRIATE MESSAGES, and archives are available at:
http://jse.stat.ncsu.edu/
=================================================================
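[Editor's note] As a minimal sketch of the kind of distribution-free test Stan asks about, the fragment below runs a permutation test on the difference in "yes" rates between two applicant groups reviewed by the same reviewer group. The function name and the two-group, yes/no-only simplification are mine, not from the thread; Stan's full problem has five applicant groups, five reviewer groups, a score dimension, and 18 waves, so this would have to be applied per reviewer group (and extended, e.g. with a two-sample test on scores) rather than taken as a complete solution. With cell sizes as small as 5 per wave, a resampling test like this avoids leaning on large-sample approximations.

```python
import random

def perm_test_diff_in_rates(group_a, group_b, n_perm=10000, seed=0):
    """Distribution-free permutation test for a difference in 'yes' rates.

    group_a, group_b: lists of 0/1 decisions (1 = applicant considered)
    for two applicant groups seen by the same reviewer group.
    Returns a two-sided p-value: the fraction of random relabelings whose
    absolute rate difference is at least as large as the observed one.
    """
    rng = random.Random(seed)  # fixed seed so the result is reproducible
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # reassign group labels at random
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            hits += 1
    return hits / n_perm
```

For example, two groups with very different yes rates (e.g. 20 of 20 vs 0 of 20) give a p-value near zero, while two groups with identical rates give a p-value of 1. Controlling beta risk, as Stan wants, would additionally require a power analysis at these small cell sizes; the test above only fixes alpha.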
