I think we all agree that, loosely speaking, we want our AGIs to have a
goal of respecting and promoting the survival and happiness of humans and
all intelligent and living beings.

However, no two minds interpret these general goals in the same way.  You
and I don't interpret them exactly the same way, and my children don't
interpret them exactly as I do, in spite of my explicit & implicit moral
instruction.  Similarly, an AGI will certainly have its own special twist on
the theme...

-- Ben G


>
> Ben Goertzel wrote:
> >
> > However, it's to be expected that an AGI's ethics will be
> different than any
> > human's ethics, even if closely related.
>
> What do a Goertzelian AGI's ethics and a human's ethics have in common
> that makes it a humanly ethical act to construct a Goertzelian AGI?
>
> --
> Eliezer S. Yudkowsky                          http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>
