On 10/29/07, Benjamin Goertzel wrote:
> • The world appears to be sufficiently complex that it is essentially
> impossible for seriously resource-bounded systems like humans to
> guarantee that any system's actions are going to have beneficent
> outcomes.  I.e., guaranteeing (or coming anywhere near to guaranteeing)
> outcome-based Friendliness is effectively impossible.  And this
> conclusion holds for basically any highly specific property, not just
> for Friendliness as conventionally defined.  (What is meant by a
> "highly specific property" will be defined below.)
>

'Friendliness' also has to face the problem that humans are full of
contradictory wants. Wanting to 'have your cake and eat it too' is not
just a criticism; it's a fact of daily life.

Most humans do not carefully analyse their wants, project the
implications into the future, and then readjust their wants
accordingly. 'I want it all! And I want it now!' is far more common.

So how friendly is it for an AGI to act like a super-powerful father
figure, saying "You don't really want that, because it will be bad for
you"?  It might be true, but humans definitely won't like it,
especially if they see other people getting what the AGI says would be
bad for them.


BillK
