Ben Goertzel wrote:
>> Oh, well, in that case, I'll make my statement more formal:
>>
>> There exists a physically realizable, humanly understandable
>> challenge C on which a tl-bounded human outperforms AIXI-tl for
>> humanly understandable reasons. Or even more formally, there exists
>> a computable process P which, given either a tl-bounded uploaded
>> human or an AIXI-tl, supplies the uploaded human with a greater
>> reward as the result of strategically superior actions taken by the
>> uploaded human.
>>
>> :)
>>
>> -- Eliezer S. Yudkowsky
>
> Hmmm.
>
> Are you saying that given a specific reward function and a specific
> environment, the tl-bounded uploaded human with resources (t,l) will
> act so as to maximize the reward function better than AIXI-tl with
> resources (T,l), where T is as specified by Hutter's AIXI-tl
> optimality theorem?
>
> Presumably you're not saying that, because it would contradict his
> theorem?

Indeed. I would never presume to contradict Hutter's theorem.
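
For list members who haven't read the paper: the bound Ben is citing,
paraphrased from memory (Hutter's paper has the precise statement and
its technical side conditions, such as the requirement that a competing
policy carry a proof of its own validity), is roughly:

    For every policy p with length(p) <= l and per-cycle runtime <= t:
        expected reward of AIXI-tl  >=  expected reward of p,
    at the cost of AIXI-tl itself running in per-cycle time
        T = O(2^l * t).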

> So what clever loophole are you invoking?? ;-)

An intuitively fair, physically realizable challenge with important real-world analogues, solvable through a form of rational cognition that is inaccessible to AIXI-tl, with success strictly defined by reward (not a Friendliness-related issue). It wouldn't be interesting otherwise.

--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
