Bill,

You said:
> But I think that a machine whose primary values are for the happiness
> of all humans will not learn any behaviors to evolve against human
> interests. Ask any mother whether she would rewire her brain to want to
> eat her children. 

I'm afraid I can't see your logic.  Firstly, not eating one's own children 
leaves about 6 billion other humans that you might be nasty to, and 
there's plenty of evidence that once humans assign other humans to the 
'other' category (i.e. not my kin/friend/child/whatever), the others 
can be fair game in at least some circumstances for torture, murder, 
massacre, cannibalism, etc. etc.

You might say that humans are not fundamentally programmed to be 
nice to ALL humans and so argue that my objection doesn't hold.

But there are (admittedly rare) cases where parents (including mothers) 
do kill their own kids - most often, I guess, in murder-suicide cases - and 
kids that scream non-stop for long periods can induce homicidal 
tendencies in even nice parents (hands up anyone who has not been in 
that position)!

At the moment we are feeling homicidal, we might be tempted to do a 
bit of reprogramming to make the homicide less guilt-inducing - if we 
could only reach into our brains and do it.

So maybe one piece of hard wiring we should build into an AGI is a 
requirement for a long cooling-off period before the AGI can make any 
self-modification to its core ethical coding.
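
Something like the toy sketch below is the kind of thing I mean - purely 
illustrative, in Python, with made-up names (EthicsGuard, 
COOLING_OFF_SECONDS) and an arbitrary delay, not a claim about how a real 
AGI would be built:

import time

# Arbitrary 30-day delay, just for illustration.
COOLING_OFF_SECONDS = 30 * 24 * 3600


class EthicsGuard:
    """Guards an AGI's core ethical values behind a cooling-off period."""

    def __init__(self, core_values):
        self.core_values = core_values   # the protected core ethical coding
        self.pending = {}                # key -> (proposed value, time requested)

    def propose_change(self, key, new_value):
        """Record a proposed change; it cannot take effect immediately."""
        self.pending[key] = (new_value, time.time())

    def apply_change(self, key):
        """Apply a proposed change only once the cooling-off period has elapsed."""
        new_value, requested_at = self.pending[key]
        if time.time() - requested_at < COOLING_OFF_SECONDS:
            raise PermissionError("cooling-off period has not yet elapsed")
        self.core_values[key] = new_value
        del self.pending[key]

    def withdraw_change(self, key):
        """The homicidal moment passes and the proposed change is dropped."""
        self.pending.pop(key, None)

The point is simply that the urge has to survive the waiting period 
before it can be acted on.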

Cheers, Philip
