On Tue, 11 Feb 2003, Ben Goertzel wrote:

> Eliezer wrote:
> >  > * a paper by Marcus Hutter giving a Solomonoff induction based theory
> >  > of general intelligence
> >
> > Interesting you should mention that.  I recently read through Marcus
> > Hutter's AIXI paper, and while Marcus Hutter has done valuable work on a
> > formal definition of intelligence, it is not a solution of Friendliness
> > (nor do I have any reason to believe Marcus Hutter intended it as one).
> >
> > In fact, as one who specializes in AI morality, I was immediately struck
> > by two obvious-seeming conclusions on reading Marcus Hutter's formal
> > definition of intelligence:
> >
> > 1)  There is a class of physically realizable problems, which humans can
> > solve easily for maximum reward, but which - as far as I can tell - AIXI
> > cannot solve even in principle;
>
> I don't see this, nor do I believe it...

I don't believe it either. Is this a reference to Penrose's
argument from Goedel's Incompleteness Theorem (an argument
which is wrong)?
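
For list members who haven't read Hutter: the Solomonoff induction
underneath AIXI weights every program consistent with the observed
history by 2^-length and mixes their predictions. Here is a toy
Python sketch of that mixture, restricted to a tiny computable model
class (periodic bit patterns) instead of the programs of a universal
machine, which is what Hutter's actual construction mixes over:

  # Mixture predictor over the toy model class "the environment
  # repeats pattern p forever", with prior weight 2**-len(p) per
  # pattern. An illustration of the weighting only, NOT Hutter's
  # construction.
  from itertools import product

  def predictions(history, max_len=8):
      """Mixture probability that the next bit is 1."""
      mass_1 = total = 0.0
      for n in range(1, max_len + 1):
          for p in product('01', repeat=n):
              # pattern p is consistent if history[i] == p[i % n]
              if all(history[i] == p[i % n] for i in range(len(history))):
                  w = 2.0 ** -n          # shorter patterns weigh more
                  total += w
                  if p[len(history) % n] == '1':
                      mass_1 += w
      return mass_1 / total if total else 0.5

  print(predictions('01010'))   # ~0.82: the next bit is probably 1

The same 2^-length prior is what makes straight AIXI uncomputable:
the real mixture runs over all programs, and consistency with the
history is undecidable in general.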

> > 2)  While an AIXI-tl of limited physical and cognitive capabilities might
> > serve as a useful tool,
>
> AIXI-tl is a totally computationally infeasible algorithm.  (As opposed to
> straight AIXI, which is an outright *uncomputable* algorithm).  I'm sure you
> realize this, but those who haven't read Hutter's stuff may not...
>
> If you haven't already, you should look at Juergen Schmidhuber's OOPS
> system, which is similar in spirit to AIXI-tl but less computationally
> infeasible.  (Although I don't think that OOPS is a viable pragmatic
> approach to AGI either, it's a little closer.)
>
> > AIXI is unFriendly and cannot be made Friendly
> > regardless of *any* pattern of reinforcement delivered during childhood.
>
> This assertion doesn't strike me as clearly false....  But I'm not sure why
> it's true either.
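
On the feasibility point: if I am remembering Hutter's bound right,
AIXI-tl examines on the order of 2^l policies of length up to l and
runs each for up to t steps in every interaction cycle, so "totally
computationally infeasible" is fair. The Levin-style search that
OOPS builds on tames this a little by giving each candidate program
a slice of compute proportional to 2^-length. Here is a minimal
Python sketch of that allocation over a toy two-instruction program
space; OOPS proper works on a real instruction set and reuses
earlier solutions, which this omits:

  from itertools import count, product

  OPS = {'+1': lambda x: x + 1, '*2': lambda x: x * 2}

  def run(prog, x=0):
      for op in prog:
          x = OPS[op](x)
      return x

  def levin_search(target, max_len=12):
      # Shortest-first enumeration; with unit cost per instruction
      # this matches Levin's allocation of time proportional to
      # 2**-len(p), so total time is within a constant factor of
      # the fastest program's (time * 2**length).
      for n in count(1):
          if n > max_len:
              return None
          for prog in product(OPS, repeat=n):
              if run(prog) == target:
                  return prog

  print(levin_search(10))   # -> ('+1', '+1', '*2', '+1', '*2')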

On Friendliness: the formality of Hutter's definitions can give
the impression that they cannot evolve. But the agent is open to
interaction with its external environment, and can be influenced
by it (including evolving in response to it). If the reinforcement
values reward human happiness, then the formal system and the
humans supplying those rewards together form a symbiotic system,
and it is in that symbiotic system, not in the formal agent alone,
that you have to look for friendliness. This is part of an earlier
discussion at:

  http://www.mail-archive.com/agi@v2.listbox.com/msg00606.html
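
As a crude illustration of the coupling I mean, here is a toy
reinforcement loop in Python where the reward signal comes from
outside the agent; the happiness() function is of course a
hypothetical stand-in for real human feedback:

  import random

  values = {'help': 0.0, 'ignore': 0.0}   # the agent's learned values

  def happiness(action):
      # Hypothetical stand-in for feedback from the humans in the loop.
      return 1.0 if action == 'help' else -1.0

  for step in range(1000):
      # epsilon-greedy choice, then learn from the human-supplied reward
      if random.random() < 0.1:
          action = random.choice(list(values))
      else:
          action = max(values, key=values.get)
      reward = happiness(action)          # the human half of the symbiosis
      values[action] += 0.1 * (reward - values[action])

  print(values)   # 'help' wins because people, not the agent, set reward

The learned values live in the loop between the agent and the
humans, not in the agent's definition alone.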

Cheers,
Bill
