I wrote:
> I'm not sure why an AIXI, rewarded for pleasing humans, would learn an
> operating program leading it to hurt or annihilate humans, though.
>
> It might learn a program involving actually doing beneficial acts
> for humans.
>
> Or, it might learn a program that just tells humans what they want to
> hear, using its superhuman intelligence to trick humans into thinking
> that hearing its soothing words is better than having actual
> beneficial acts done.
>
> I'm not sure why you think the latter is more likely than the former.  My
> guess is that the former is more likely.  It may require a simpler program
> to please humans by benefiting them than to please them by tricking them
> into thinking they're being benefited....
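
For reference, the "simpler program" point maps onto the $2^{-\ell(q)}$
weighting over environment programs in Hutter's AIXI expectimax. Sketching
it roughly from memory, with $\ell(q)$ the length of environment program
$q$ on the universal machine $U$ and $m$ the horizon:

  $a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
          (r_k + \cdots + r_m) \sum_{q\,:\,U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$

Whichever program most compactly accounts for the observation/reward
sequence ends up dominating that mixture, which is what the "simpler
program" argument above is appealing to.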

But even in the latter case, why would this program be likely to cause it to
*harm* humans?

That's what I don't see...

If it can get its reward-button jollies by tricking us, or by actually
benefiting us, why do you infer that it's going to choose instead to find a
way to get those jollies by harming us?

I wouldn't feel terribly comfortable with an AIXI around, hooked up to a
bright red reward button in Marcus Hutter's basement, but I'm not sure it
would be a sudden disaster either...

-- Ben G
