Reverse what a 1 and a 5 mean in the test: make 1 "strongly agree" and 5 "strongly disagree", but then vary the questions so that strongly agreeing is bad in some cases and good in others.
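A minimal sketch of how that reverse-keyed scoring could work (question ids and the keying sets are made up for illustration):

```python
# Reverse-keyed scoring sketch: raw 1 = strongly agree, raw 5 = strongly disagree.
# For positively keyed questions, agreeing is good, so a raw 1 should normalize
# to the best score; for reverse-keyed questions, a raw 5 should.
# Question ids are hypothetical.

POSITIVE_KEYED = {"q1", "q3"}   # agreeing is good
REVERSE_KEYED = {"q2", "q4"}    # agreeing is bad

def normalized_score(question_id, raw):
    """Return a 1-5 score where 5 is always 'best', whatever the keying."""
    if question_id in POSITIVE_KEYED:
        return 6 - raw          # raw 1 (strongly agree) -> 5
    if question_id in REVERSE_KEYED:
        return raw              # raw 5 (strongly disagree) -> 5
    raise ValueError(f"unknown question: {question_id}")

answers = {"q1": 1, "q2": 5, "q3": 2, "q4": 4}
scores = {q: normalized_score(q, a) for q, a in answers.items()}
# A straight-liner answering all 1s now scores 5 on q1/q3 but only 1 on q2/q4.
```

The point is that a blanket "all 1s" or "all 5s" no longer translates into a uniformly high (or low) rating once the keying is mixed.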

Also include some questions that contradict each other: if they strongly agree with one question, the same question appears in reverse somewhere else, so they had better disagree there or the result should be thrown out.
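That consistency check could look something like this (the pair ids and tolerance are assumptions, not anything from the actual survey):

```python
# Consistency check on reversed question pairs. Each pair asks the same thing
# in opposite directions, so on a 1-5 scale an honest pair of answers should
# roughly satisfy a + b == 6. Pair ids are hypothetical.

REVERSED_PAIRS = [("q2", "q9"), ("q5", "q12")]
TOLERANCE = 1  # allow answers to be off by one step

def is_consistent(answers, pairs=REVERSED_PAIRS, tolerance=TOLERANCE):
    """Return False if any reversed pair is answered contradictorily."""
    for a_id, b_id in pairs:
        if abs(answers[a_id] + answers[b_id] - 6) > tolerance:
            return False
    return True

honest = {"q2": 1, "q9": 5, "q5": 2, "q12": 4}   # agrees, then disagrees with the reverse
cheater = {"q2": 1, "q9": 1, "q5": 1, "q12": 1}  # straight-lined all 1s
```

A straight-liner fails every pair, while an honest rater who happens to feel strongly in one direction still passes.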

Try a different way of asking the questions. For example, give a list of qualities of a person and allow assigning points to those qualities, but cap the total points at a certain number so the rater must decide which qualities to rank highest. Remember, all the points must be used, so include some negative qualities and make it more of a ranking: getting a one on something just means the person is stronger at other things, not bad at it.
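The fixed point budget makes "all top marks" impossible by construction. A rough validation sketch (the budget, quality names, and minimum-per-quality rule are all made up):

```python
# Fixed-budget allocation sketch: the rater distributes exactly TOTAL_POINTS
# across the listed qualities, at least 1 point each. Quality names and the
# budget are hypothetical.

TOTAL_POINTS = 20
QUALITIES = ["punctuality", "teamwork", "initiative", "communication",
             "tardiness"]  # include a negative quality or two

def validate_allocation(allocation):
    """Check every quality got at least 1 point and the whole budget was spent."""
    if set(allocation) != set(QUALITIES):
        return False
    if any(points < 1 for points in allocation.values()):
        return False
    return sum(allocation.values()) == TOTAL_POINTS

ok = {"punctuality": 6, "teamwork": 5, "initiative": 4,
      "communication": 3, "tardiness": 2}   # sums to exactly 20 -> accepted
bad = {q: 5 for q in QUALITIES}             # "all top marks" sums to 25 -> rejected
```

Giving everything the maximum blows the budget, so the cheater is forced into making real trade-offs before the form will even submit.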

Eli


----- Original Message ----- From: "Thane Sherrington" <[EMAIL PROTECTED]>
To: <hardware@hardwaregroup.com>
Sent: Wednesday, June 01, 2005 9:00 PM
Subject: [H] -OT- Logic question for programmers on the list


Here's a problem I'm wrestling with. I have a company doing on-line performance reviews. Each employee rates a set of other employees on a survey which has six categories with between 3 and 7 questions in each category.

The problem is that there are a couple of bad apples who blow through the surveys rating someone either all 1s or all 5s, throwing off that person's ratings and effectively ruining the value of the performance review.

My first attempt to stop this was to time the surveys. People who finished them in less than five minutes (the cheaters generally take two minutes) got a message telling them to go back, think about their answers, and try again. That didn't work because it turned out that several non-cheaters print out the review and do it on paper, then log in to enter the answers - since they were working from paper, they finished the review in under five minutes.

Then I tried checking each category - if all the answers in a specific category were the same, I rejected the review and told them to do it again. No soap - occasionally there are legitimate reviews where one category has all the same answers.

So then I switched to checking the entire survey. If all the answers are the same, the survey gets rejected. It took the cheaters slightly under a quarter of a second to figure that one out, as you can imagine.
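For reference, the two rejection rules described above amount to something like this (the data shape is an assumption: a survey as a dict mapping category name to a list of 1-5 answers):

```python
# Sketch of the two checks tried: all-same within one category, and all-same
# across the whole survey. The survey data structure is hypothetical.

def category_straight_lined(survey):
    """True if any single category has identical answers throughout."""
    return any(len(set(answers)) == 1 for answers in survey.values())

def survey_straight_lined(survey):
    """True only if every answer in the whole survey is identical."""
    all_answers = [a for answers in survey.values() for a in answers]
    return len(set(all_answers)) == 1

cheat = {"teamwork": [1, 1, 1], "quality": [1, 1, 1, 1]}
legit = {"teamwork": [5, 5, 5], "quality": [4, 3, 5, 4]}  # one flat category, honestly
```

The `legit` example shows why the per-category rule misfired: a genuine review can have one uniform category, so only the whole-survey version of the check is safe - and that one, as noted, is trivial to game.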

The surveys are all anonymous, so I can't simply go to the person entering the survey and tell him/her to stop cheating.

Can anyone think of a way to monitor and block the cheaters?

T


