Bill Hibbard wrote:
> On Tue, 11 Feb 2003, Ben Goertzel wrote:
>> Eliezer wrote:
>>> Interesting you should mention that.  I recently read through Marcus
>>> Hutter's AIXI paper, and while Marcus Hutter has done valuable work on a
>>> formal definition of intelligence, it is not a solution of Friendliness
>>> (nor do I have any reason to believe Marcus Hutter intended it as one).
>>>
>>> In fact, as one who specializes in AI morality, I was immediately struck
>>> by two obvious-seeming conclusions on reading Marcus Hutter's formal
>>> definition of intelligence:
>>>
>>> 1)  There is a class of physically realizable problems, which humans can
>>> solve easily for maximum reward, but which - as far as I can tell - AIXI
>>> cannot solve even in principle;
>>
>> I don't see this, nor do I believe it...
>
> I don't believe it either. Is this a reference to Penrose's
> argument based on Goedel's Incompleteness Theorem (which is
> wrong)?
Oh, well, in that case, I'll make my statement more formal:

There exists a physically realizable, humanly understandable challenge C
on which a tl-bounded human outperforms AIXI-tl for humanly
understandable reasons.

Or, even more formally: there exists a computable process P which, given
either a tl-bounded uploaded human or an AIXI-tl, supplies the uploaded
human with a greater reward as the result of strategically superior
actions taken by the uploaded human.
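One way to render that second statement symbolically (notation is
illustrative and mine, not taken from Hutter's paper; R denotes the
cumulative reward that P supplies over the interaction, and H_tl denotes
the tl-bounded uploaded human):

```latex
\exists\, \text{computable } P \; \exists\, t, l :\quad
\mathbb{E}\!\left[ R\bigl(P,\, H_{tl}\bigr) \right]
\;>\;
\mathbb{E}\!\left[ R\bigl(P,\, \mathrm{AIXI}\text{-}tl\bigr) \right]
```

That is, for the same time and length bounds t and l, the expected
reward P pays out to the bounded human exceeds the expected reward it
pays out to AIXI-tl.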

:)

--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

