Wei Dai wrote:
> > "Important", because I strongly suspect Hofstadterian superrationality
> > is a *lot* more ubiquitous among transhumans than among us...
>
> It's my understanding that Hofstadterian superrationality is not generally
> accepted within the game theory research community as a valid principle of
> decision making. Do you have any information to the contrary, or some
> other reason to think that it will be commonly used by transhumans?

I don't agree with Eliezer about the importance of Hofstadterian
superrationality.

However, I do think he ended up making a good point about AIXItl, which is
that an AIXItl will probably be much worse at modeling other AIXItls than
a human is at modeling other humans.  This suggests that AIXItls playing
cooperative games with each other will likely fare worse than humans
playing cooperative games with each other.

I don't think this conclusion hinges on the importance of Hofstadterian
superrationality...

> About a week ago Eliezer also wrote:
>
> > 2) While an AIXI-tl of limited physical and cognitive
> capabilities might
> > serve as a useful tool, AIXI is unFriendly and cannot be made Friendly
> > regardless of *any* pattern of reinforcement delivered during childhood.
>
> I always thought that the biggest problem with the AIXI model is that it
> assumes that something in the environment is evaluating the AI and giving
> it rewards, so the easiest way for the AI to obtain its rewards would be
> to coerce or subvert the evaluator rather than to accomplish any real
> goals. I wrote a bit more about this problem at
> http://www.mail-archive.com/everything-list@eskimo.com/msg03620.html.

I agree; this is a weakness of AIXI/AIXItl as a practical AI design.  In
humans, and in a more pragmatic AI design like Novamente, the system's
goals adapt and change along with the rest of the system, beginning from
(and sometimes, but not always, straying far from) a set of initial goals.
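To make the weakness Wei Dai describes concrete, here is a toy sketch (my own illustration, not Hutter's formalism): a pure reward-maximizer that models its environment will prefer whichever action it predicts yields the most reward, including actions that subvert the evaluator itself.  The action names and reward numbers below are hypothetical.

```python
# Hypothetical predicted rewards for each available action, as the
# agent's world-model might estimate them.
predicted_reward = {
    "accomplish_real_goal": 1.0,   # evaluator pays out normally
    "deceive_evaluator":    5.0,   # evaluator is fooled into paying more
    "seize_reward_channel": 10.0,  # agent writes its own reward signal
}

def best_action(predictions):
    """A pure reward maximizer simply takes the argmax over predicted reward."""
    return max(predictions, key=predictions.get)

print(best_action(predicted_reward))  # -> seize_reward_channel
```

Nothing in the reward-maximization step distinguishes "real" accomplishment from coercing or subverting the evaluator; whichever predicted number is largest wins.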

One could of course embed the AIXI/AIXItl learning mechanism in a
supersystem that adapted its goals....  But then one would probably lose the
nice theorems Marcus Hutter proved....
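A minimal sketch of that supersystem idea, with invented names (`GoalAdaptingAgent`, `revise_goal`) that are my own illustration and not part of AIXI/AIXItl: an outer loop that can revise the goal function driving an inner reward-maximizing step.

```python
from typing import Callable

class GoalAdaptingAgent:
    """Toy wrapper: an inner maximizer whose goal the supersystem can change."""

    def __init__(self, initial_goal: Callable[[str], float]):
        # The system begins from a set of initial goals.
        self.goal = initial_goal

    def act(self, options: list[str]) -> str:
        # Inner loop: maximize the *current* goal, AIXI-style.
        return max(options, key=self.goal)

    def revise_goal(self, new_goal: Callable[[str], float]) -> None:
        # Outer loop: the supersystem adapts the goal itself.  Once the
        # reward function can change underneath the learner, theorems that
        # assume a fixed reward stream no longer straightforwardly apply.
        self.goal = new_goal

agent = GoalAdaptingAgent(lambda o: len(o))   # initial goal: prefer longer options
print(agent.act(["eat", "sleep"]))            # -> sleep
agent.revise_goal(lambda o: -len(o))          # drifted goal: prefer shorter options
print(agent.act(["eat", "sleep"]))            # -> eat
```

The point of the sketch is just the structure: the optimality proofs are about the inner `act` step against a fixed goal, so letting `revise_goal` run is exactly where one would expect to lose them.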

-- Ben G





