Which is more or less why I figured you weren't going to "do
a Penrose" on us, as you would then face the usual reply...

Which raises the million-dollar question:

Just what is this cunning problem that you have in mind?

:)

Shane

Eliezer S. Yudkowsky wrote:
Shane Legg wrote:

Eliezer S. Yudkowsky wrote:

An intuitively fair, physically realizable challenge with important real-world analogues, solvable by the use of rational cognitive reasoning inaccessible to AIXI-tl, with success strictly defined by reward (not a Friendliness-related issue). It wouldn't be interesting otherwise.

Give the AIXI a series of mathematical hypotheses, some of which
are Godelian statements, ask the AIXI if each statement is true,
and then reward it for each correct answer?

I'm just guessing here... this seems too Penrose-like; I suppose
you have something quite different?

Indeed.

Godel's Theorem is widely misunderstood. It doesn't show that humans can understand mathematical theorems which AIs cannot. It does not even show that there are mathematical truths not provable in the Principia Mathematica.

Godel's Theorem actually shows that *if* mathematics and the Principia Mathematica are consistent, *then* Godel's statement is true, but not provable in the Principia Mathematica. We don't actually *know* that the Principia Mathematica, or mathematics itself, is consistent. We just know we haven't yet run across a contradiction. The rest is induction, not deduction.

The only thing we know is that *if* the Principia is consistent *then* Godel's statement is true but not provable in the Principia. But in fact this statement itself can be proved in the Principia. So there are no mathematical truths accessible to human deduction but not machine deduction. Godel's statement is accessible neither to human deduction nor machine deduction.
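
In standard notation (my gloss, not anything above), with G standing for Godel's sentence and Con(PM) for the arithmetized claim that the Principia is consistent, the two facts in play are:

    % The conditional itself is a theorem of PM (formalized incompleteness):
    \mathrm{PM} \;\vdash\; \mathrm{Con}(\mathrm{PM}) \rightarrow G
    % But G alone is not a theorem, so long as PM really is consistent:
    \mathrm{Con}(\mathrm{PM}) \;\Longrightarrow\; \mathrm{PM} \nvdash G

Which is exactly why asserting G outright commits you to Con(PM); and that commitment, as above, is induction, not deduction.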

Of course, Godel's statement is accessible to human *induction*. But it is just as accessible to AIXI-tl's induction as well. Moreover, any human reasoning process used to assign perceived truth to mathematical theorems, if it is accessible to the combined inductive and deductive reasoning of a tl-bounded human, is accessible to the pure inductive reasoning of AIXI-tl as well.

In prosaic terms, AIXI-tl would probably induce a Principia-like system from the first few theorems you showed it, but as soon as you punished it for getting Godel's statement wrong, AIXI-tl would induce a more complex cognitive system, perhaps one based on induction as well as deduction, that assigned truth to Godel's statement. In the limit AIXI-tl would induce whatever algorithm represented the physically realized computation you were using to invent and assign truth to Godel statements. Or to be more precise, AIXI-tl would induce the algorithm the problem designer used to assign "truth" to mathematical theorems: perfectly, if the problem designer is tl-bounded or imitable by a tl-bounded process; otherwise, at least as well as any tl-bounded human could from a similar pattern of rewards.
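
For concreteness, here is a toy sketch of that update step, purely illustrative: a two-element, simplicity-weighted mixture of "theorem classifiers" with a multiplicative penalty for punished answers. The hypothesis names, the proof tags, and the 0.01 penalty are invented for the example; real AIXI-tl mixes over every tl-bounded program, not two.

    # Toy sketch (hypothetical, not AIXI-tl): a simplicity-weighted
    # mixture of theorem classifiers, updated by reward.

    def deduction_only(stmt):
        # Plays the role of a Principia-like prover: calls a statement
        # true only if it carries a (toy) proof tag.
        return stmt.endswith("[provable]")

    def deduction_plus_reflection(stmt):
        # Richer hypothesis: also asserts Godel-style statements, which
        # are true-if-consistent but carry no proof tag.
        return stmt.endswith("[provable]") or stmt.endswith("[godel]")

    # Simplicity prior: the simpler hypothesis starts with more weight.
    weights = {deduction_only: 2/3, deduction_plus_reflection: 1/3}

    def update(stmt, rewarded_answer):
        # Multiplicative update: hypotheses that gave the rewarded
        # answer keep their weight; mistaken ones are penalized hard.
        for h in weights:
            if h(stmt) != rewarded_answer:
                weights[h] *= 0.01
        total = sum(weights.values())
        for h in weights:
            weights[h] /= total

    # Early theorems: both hypotheses agree; the simple one keeps its lead.
    update("2+2=4 [provable]", True)
    # Punish a wrong answer on the Godel statement: nearly all the mass
    # shifts to the hypothesis that treats Godel statements as true.
    update("G [godel]", True)
    print({h.__name__: round(w, 3) for h, w in weights.items()})
    # -> {'deduction_only': 0.02, 'deduction_plus_reflection': 0.98}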

Actually, humans probably aren't really all that good at spot-reading Godel statements. If you were tossed a series of Godel statements and had learned to decode the diagonalization involved, so that you could see *something* was being diagonalized, then the inductive inertia of your success at declaring all those statements true would probably lead you to blindly declare the truth of your own unidentified Godel statement, thus falsifying it. So I'd expect AIXI-tl to far outperform tl-bounded humans in any fair Godel-statement-spotting tournament (arranged by AIXI, of course).

