On 26/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:

> If you build an AGI, and it sets out to discover the convergent desires
> (the CEV) of all humanity, it will be doing this because it has the goal
> of using this CEV as the basis for the "friendly" motivations that will
> henceforth guide it.
>
> But WHY would it be collecting the CEV of humanity in the first phase of
> the operation?  What would motivate it to do such a thing?  What exactly
> is it in the AGI's design that makes it feel compelled to be friendly
> enough toward humanity that it would set out to assess the CEV of humanity?
>
> The answer is:  its initial feelings of friendliness toward humanity
> would have to be the motivation that drove it to find out the CEV.
>
> The goal state of its motivation system is assumed in the initial state
> of its motivation system.  Hence: circular.

You don't have to assume that the AI will figure out the CEV of
humanity because it's friendly; you can just say that its goal is to
figure out the CEV because that is what it has been designed to do,
and that it has been designed to do this because its designers have
decided that this is a good way to ensure behaviour which will be
construed as friendly.

I don't see that the CEV goal would be much different to creating an
AI that simply has the goal of obeying its human masters. Some of the
instructions it will be given will be along the lines of "AI, if you
think I'll be really, really upset down the track if you carry out my
instructions (where you determine what I mean by 'really, really
upset' using your superior intellect), then don't carry out my
instructions." If there are many AI's with many human masters,
averaging out their behaviour will result in an approximation of the
CEV of humanity.
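
To make the "averaging out" step concrete, here is a purely
illustrative toy sketch with made-up numbers: suppose each master's
considered preference on some issue can be boiled down to a single
number, and each AI estimates it with a little noise. With many
independent AI-master pairs, the average of those estimates approaches
the population's mean preference, which stands in here for a crude CEV.

import random

random.seed(0)

TRUE_CONSENSUS = 0.3   # hypothetical population-wide consensus value
N_PAIRS = 10000        # number of independent AI/master pairs

def extrapolated_preference():
    # One master's considered preference: the consensus value plus that
    # master's individual variation, plus the AI's estimation noise.
    individual_variation = random.gauss(0, 1.0)
    estimation_noise = random.gauss(0, 0.2)
    return TRUE_CONSENSUS + individual_variation + estimation_noise

estimates = [extrapolated_preference() for _ in range(N_PAIRS)]
average = sum(estimates) / len(estimates)
print("average over %d pairs: %.3f (consensus %.1f)"
      % (N_PAIRS, average, TRUE_CONSENSUS))

Of course the real claim is about whole behaviours rather than a single
number, but the statistical point is the same: the idiosyncrasies of
individual masters wash out and the shared component remains.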




-- 
Stathis Papaioannou
