Eliezer S. Yudkowsky wrote:

An intuitively fair, physically realizable challenge with important real-world analogues, solvable by the use of rational cognitive reasoning inaccessible to AIXI-tl, with success strictly defined by reward (not a Friendliness-related issue). It wouldn't be interesting otherwise.

Give the AIXI a series of mathematical hypotheses, some of which are
Gödelian-like statements, ask it whether each statement is true, and
then reward it for each correct answer?
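If that's the idea, the protocol would look roughly like the sketch
below (Python; the Agent interface, the statements list, and the
ground_truth labels are hypothetical placeholders I'm inventing for
illustration, not anything defined for AIXI-tl):

    from typing import Protocol

    class Agent(Protocol):
        # Abstract agent: gives a true/false verdict and accepts a reward.
        def answer(self, statement: str) -> bool: ...
        def receive_reward(self, reward: float) -> None: ...

    def run_challenge(agent: Agent, statements: list[str],
                      ground_truth: list[bool]) -> float:
        # Reward the agent 1.0 for each statement it classifies correctly.
        total_reward = 0.0
        for statement, truth in zip(statements, ground_truth):
            verdict = agent.answer(statement)
            reward = 1.0 if verdict == truth else 0.0
            agent.receive_reward(reward)
            total_reward += reward
        return total_reward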

I'm just guessing here... this seems too Penrose-like; I suppose
you have something quite different?

Shane
