Robin <mixent...@aussiebroadband.com.au> wrote:

> My problem is with the whole line of research. This is just "a foot in the
> door" so to speak.


What door? What is the problem with this research? Why would there be any
harm if a computer program senses the emotions or attitude of the person
using the program? I should think that would be an advantage in things like
medical surveys. You want to have some indication if the respondent is
upset by the questions, or confused, or lying.

In the interface to a program that operates a large, dangerous factory tool,
you want the computer to know whether the operator appears upset, bored,
confused, or distracted. That should trigger an alarm. Having some sense of
the operator's mood seems like a useful feature. You could just ask in a
satisfaction survey:

"Did you find this interface easy or difficult (1 to 10)?
Did you find this procedure interesting or boring (1 to 10)?
Are you confident you understand how to operate [the gadget]?" . . .

You could ask, but most users will not bother to fill in a survey. It is
better to sense this from every operator in real time. It does not seem any
more invasive than having the user enter an ID which is verified and
recorded. I assume the control software for any large, dangerous factory
tool already includes registration and a record of the operator's actions,
in a black-box accident recorder.

I get that if they were trying to install artificial emotions in computers,
that would be a problem. It would be manipulative. In Japan, they are
making furry puppet robot animals to comfort old people, instead of cats or
dogs. I find that creepy!

The one thing they might do, which is not so manipulative, would be to have
the program say something like: "You appear to be having difficulty filling
in this form. Would you like me to ask a staff member to assist you?"
