In reply to  Jed Rothwell's message of Fri, 7 Jun 2024 16:08:57 -0400:
Hi Jed,

I have no problem with that as far as it goes; however, I fear that it will
be expanded. E.g., it would prove lucrative for script-writing AIs to be able
to emulate the emotions of the characters, ergo sooner or later we can expect
someone to start imbuing AIs with pseudo-emotions. (Given the
short-sightedness of most human beings, probably sooner rather than later.)
IOW this is just the first step along a dangerous path, and it won't be
obvious just how dangerous it is until after it has become so, by which time
it will already be too late.

Throughout human history we have been able to observe events and react
accordingly, so we expect that pattern of behaviour to serve us well in the
future too. It's part of our biological makeup. However, we have never before
been confronted with an adversary that can out-think us a thousand to one. We
would be dead before we even knew there was a threat...and that threat may
not even understand (on a human level), or care for that matter, what it was
doing. (Think of, e.g., a war-games scenario, which is a recurring SF plot.)


>Robin <mixent...@aussiebroadband.com.au> wrote:
>
>> My problem is with the whole line of research. This is just "a foot in the
>> door" so to speak.
>
>
>What door? What is the problem with this research? Why would there be any
>harm if a computer program senses the emotions or attitude of the person
>using the program? I should think that would be an advantage in things like
>medical surveys. You want to have some indication if the respondent is
>upset by the questions, or confused, or lying.
>
>In an interface to a program to operate a large, dangerous factory tool,
>you want the computer to know if the operator is apparently upset, bored,
>confused or distracted. That should trigger an alarm. Having some sense of
>the operator's mood seems like a useful feature. You could just ask in a
>satisfaction survey:
>
>"Did you find this interface easy or difficult (1 to 10)?
>Did you find this procedure interesting or boring (1 to 10)?
>Are you confident you understand how to operate [the gadget]?" . . .
>
>You could ask, but most users will not bother to fill in a survey. It is
>better to sense the results from every operator in real time. It does not
>seem any more invasive than having the user enter an ID which is verified
>and recorded. I assume any large, dangerous factory tool control software
>includes registration and a record of the operator's actions, in a black-box
>accident recorder.
>
>I get that if they were trying to install artificial emotions in computers,
>that would be a problem. It would be manipulative. In Japan, they are
>making furry puppet robot animals to comfort old people, instead of cats or
>dogs. I find that creepy!
>
>The one thing they might do, which is not so manipulative, would be to have
>the program say something like: "You appear to be having difficulty filling
>in this form. Would you like me to ask a staff member to assist you?"
Regards,

Robin van Spaandonk

Drive your electric car every second day and recharge it from solar panels on
your roof on the alternate days. The other days, drive your spouse's car, and
do the same with it.
