Eliezer wrote:
>  > * a paper by Marcus Hutter giving a Solomonoff-induction-based theory
>  > of general intelligence
>
> Interesting you should mention that.  I recently read through Marcus
> Hutter's AIXI paper, and while Marcus Hutter has done valuable work on a
> formal definition of intelligence, it is not a solution to Friendliness
> (nor do I have any reason to believe Marcus Hutter intended it as one).
>
> In fact, as one who specializes in AI morality, I was immediately struck
> by two obvious-seeming conclusions on reading Marcus Hutter's formal
> definition of intelligence:
>
> 1)  There is a class of physically realizable problems, which humans can
> solve easily for maximum reward, but which - as far as I can tell - AIXI
> cannot solve even in principle;

I don't see this, nor do I believe it...

> 2)  While an AIXI-tl of limited physical and cognitive capabilities might
> serve as a useful tool,

AIXI-tl is a totally computationally infeasible algorithm.  (As opposed to
straight AIXI, which is an outright *uncomputable* algorithm.)  I'm sure you
realize this, but those who haven't read Hutter's stuff may not...
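
(A back-of-envelope sketch of why, not Hutter's actual construction: roughly,
each AIXI-tl interaction cycle considers every candidate program of length up
to l and runs each for up to t steps, so the per-cycle cost grows like t*2^l.
The little Python below just counts those operations; the program enumeration
and the parameter values are placeholders I made up for illustration.)

# Toy illustration of AIXI-tl's per-cycle cost.  The cost model (every
# bitstring of length <= l, each run for up to t steps) is a simplification,
# not a faithful implementation of Hutter's algorithm.

def aixitl_cycle_cost(l: int, t: int) -> int:
    """Rough operation count for one AIXI-tl interaction cycle:
    all programs of length 1..l, each run for up to t steps."""
    num_programs = 2 ** (l + 1) - 2   # number of bitstrings of length 1..l
    return num_programs * t

# Even modest-looking parameters explode: l = 100, t = 10**6 gives on the
# order of 10**36 operations per cycle.
print(aixitl_cycle_cost(100, 10 ** 6))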

If you haven't already, you should look at Juergen Schmidhuber's OOPS
system, which is similar in spirit to AIXI-tl but less computationally
infeasible.  (Although I don't think that OOPS is a viable pragmatic
approach to AGI either, it's a little closer.)
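
(For readers who haven't seen OOPS: it builds on Levin-style universal search,
i.e. give each candidate program runtime in rough proportion to its prior
probability and double the total budget each phase, plus tricks like reusing
code frozen from earlier tasks.  The toy loop below is only meant to convey
that flavor; the bitstring programs and the solves() test are stubs I made up,
not anything taken from Schmidhuber's papers.)

# Toy Levin-search-style loop, the kind of search OOPS builds on.
# "Programs" here are just bitstrings and solves() is a made-up success test.

import itertools

def candidate_programs():
    """Yield bitstrings in order of increasing length (shorter = higher prior)."""
    for length in itertools.count(1):
        for bits in itertools.product("01", repeat=length):
            yield "".join(bits)

def levin_search(solves, max_phase=40):
    """Give each program p about budget * 2**-len(p) steps, doubling the
    budget every phase, until solves(p, steps) reports success."""
    for phase in range(1, max_phase):
        budget = 2 ** phase
        for p in candidate_programs():
            steps = budget // (2 ** len(p))
            if steps == 0:
                break  # longer programs get no time in this phase
            if solves(p, steps):
                return p
    return None

# Example run: treat "contains the substring 101" as the success criterion.
print(levin_search(lambda p, steps: "101" in p))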

> AIXI is unFriendly and cannot be made Friendly
> regardless of *any* pattern of reinforcement delivered during childhood.

This assertion doesn't strike me as clearly false....  But I'm not sure why
it's true either.

Please share your argument...

-- Ben
