Ben Goertzel wrote:
>
> I'll read the rest of your message tomorrow...
>
>> But we aren't *talking* about whether AIXI-tl has a mindlike
>> operating program. We're talking about whether the physically
>> realizable challenge, which definitely breaks the formalism, also
>> breaks AIXI-tl in practice. That's what I originally stated, that's
>> what you originally said you didn't believe, and that's all I'm
>> trying to demonstrate.
>
> Your original statement was posed in a misleading way, perhaps not
> intentionally.
>
> There is no challenge on which *an* AIXI-tl doesn't outperform *an*
> uploaded human.

We are all Lee Corbin; would you really say there's "more than one"... oh,
never mind, I don't want to get *that* started here.

There's a physical challenge which operates on *one* AIXI-tl and breaks it, even though it involves diagonalizing the AIXI-tl as part of the
challenge. In the real world, all reality is interactive and
naturalistic, not walled off by a Cartesian theatre. The example I gave
is probably the simplest case that clearly breaks the formalism and
clearly causes AIXI-tl to operate suboptimally. There are more complex and
important cases, which we would understand as roughly constant
environmental challenges that break AIXI-tl's formalism in more subtle
ways, with the result that AIXI-tl can't cooperate in one-shot PDs with
superintelligences... and neither can a human, incidentally, but another
seed AI or superintelligence can, I think, by inventing a new kind of
reflective choice which is guaranteed to be correlated as a result of
shared initial conditions - both elements that break AIXI-tl.

Well, anyway, the point is that there's a qualitatively different kind of intelligence here, one that I think could turn out to be extremely critical in negotiations among superintelligences. The formalism in this situation gets broken, depending on how you look at it, by side effects of the AIXI-tl's existence or by violation of the separability condition. Actually, violations of the formalism are ubiquitous, and that by itself is not particularly counterintuitive; what is counterintuitive is that formalism violations turn out to make a real-world difference.

Are we at least in agreement that there exists a formalizable constant challenge C which accepts an arbitrary single agent and breaks both the AIXI-tl formalism and AIXI-tl itself?
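
To make "accepts an arbitrary single agent" concrete, here's a rough toy rendering - my own illustrative harness and payoff numbers, not Hutter's notation - of the shape of the computation I have in mind. The challenge takes one agent program, instantiates two copies of it, tells each copy that it faces its own clone, and plays one round of the Prisoner's Dilemma between the copies:

    # Toy sketch of the Clone challenge as a computation over a single agent.
    # (Illustrative only: the agent type, observation string, and payoff
    # numbers are stand-ins, not part of any formal definition.)
    from typing import Callable

    Agent = Callable[[str], str]  # observation -> action in {"C", "D"}

    PD_PAYOFF = {  # one-shot PD rewards for the first copy
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def clone_challenge(agent: Agent) -> int:
        obs = "You are facing an exact copy of yourself in a one-shot PD."
        my_move = agent(obs)      # first copy decides
        clone_move = agent(obs)   # second copy: same program, same input
        return PD_PAYOFF[(my_move, clone_move)]

    def reflective_agent(obs: str) -> str:
        # Reasons "my clone outputs whatever I output", so mutual
        # cooperation is the only reachable good outcome.
        return "C" if "copy of yourself" in obs else "D"

    print(clone_challenge(reflective_agent))  # -> 3

The point of the harness is that C itself is a constant computation; nothing inside it refers to which agent gets dropped in.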

<Reads Ben Goertzel's other message, while working on this one.>

OK.

We'd better take a couple of days off before taking up the AIXI Friendliness issue. Maybe even wait until I get back from New York in a week. Also, I want to wait for all these emails to show up in the AGI archive, then tell Marcus Hutter about them if no one has already. I'd be interested in seeing what he thinks.

> What you're trying to show is that there's an inter-AIXI-tl social
> situation in which AIXI-tl's perform less intelligently than humans do
> in a similar inter-human situation.
>
> If you had posed it this way, I wouldn't have been as skeptical
> initially.

If I'd posed it that way, it would have been uninteresting because I
wouldn't have broken the formalism. Again, to quote my original claim:

>> 1) There is a class of physically realizable problems, which humans
>> can solve easily for maximum reward, but which - as far as I can tell
>> - AIXI cannot solve even in principle;
>
> I don't see this, nor do I believe it...

And later expanded to:

> An intuitively fair, physically realizable challenge, with important
> real-world analogues, formalizable as a computation which can be fed
> either a tl-bounded uploaded human or an AIXI-tl, for which the human
> enjoys greater success measured strictly by total reward over time, due
> to the superior strategy employed by that human as the result of
> rational reasoning of a type not accessible to AIXI-tl.

It's really the formalizability of the challenge as a computation which can be fed either a *single* AIXI-tl or a *single* tl-bounded uploaded human that makes the whole thing interesting at all... I'm sorry I didn't succeed in making clear the general class of real-world analogues of which this is a special case.

If I were to take a very rough stab at it, it would be that the cooperation case with your own clone is an extreme case of many scenarios where superintelligences can cooperate with each other on the one-shot Prisoner's Dilemma, provided they have *loosely similar* reflective goal systems and can probabilistically estimate that enough loose similarity exists.
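
Here's a toy version of the "probabilistically estimate enough loose similarity" step, with made-up standard PD payoffs (5 > 3 > 1 > 0) and a deliberately crude correlation model: suppose you estimate probability p that the other side's move comes out the same as yours, and 1-p that it comes out opposite. Then cooperating wins in expectation exactly when 3p > 1p + 5(1-p), i.e. p > 5/7.

    def best_move(p, T=5.0, R=3.0, P=1.0, S=0.0):
        # Crude correlation model: with probability p the other agent's move
        # mirrors mine, with probability 1 - p it comes out opposite.
        # Payoffs satisfy T > R > P > S.
        eu_cooperate = p * R + (1 - p) * S
        eu_defect = p * P + (1 - p) * T
        return "C" if eu_cooperate > eu_defect else "D"

    print(best_move(0.9))   # -> "C"  (enough estimated correlation)
    print(best_move(0.5))   # -> "D"  (a coin-flip opponent is no reason to cooperate)

None of this says where the estimate of p comes from; that's the hard part, and it's exactly the part that requires reasoning about the other mind's initial conditions rather than just its past outputs.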

It's the natural counterpart of the Clone challenge - loosely similar goal systems arise all the time, and it turns out that, in addition to those goal systems being interpretable as a constant environmental challenge, there are social problems that depend on your being able to correlate your internal processes with theirs (you can correlate internal processes because you're both part of the same naturalistic universe). This breaks AIXI-tl because it isn't loosely similar enough - and the situation isn't symmetrical; AIXI-tl can't handle correlation with even the most similar possible mind. The Clone challenge is just a very extreme case of that. I'd worked out the structure of the general case previously. On encountering AIXI, I immediately saw that AIXI didn't handle the general case, and then devised the Clone challenge as the clearest illustration from a human perspective.

So what happens if you have a (tabula rasa) AIXI-tl trying to impersonate a superintelligence in real life? If AIXI-tl fails to spoof a reflective SI perfectly on the first round, everyone in the universe will soon know it's an AIXI-tl or something similar, which violates Hutter's separability condition. If the environment gives rise to an AIXI-tl in a way that lets an SI reason about AIXI-tl's internals from its initial conditions, it breaks the Cartesian theatre. The interesting part is that these little natural breakages in the formalism create an inability to take part in what I think might be a fundamental SI social idiom: conducting binding negotiations by convergence to goal processes that are guaranteed to have correlated output. That idiom relies on (a) Bayesian-inferred initial similarity between goal systems, (b) the ability to create a top-level reflective choice that wasn't there before, and (c) abstracting that choice over an infinite recursion in your top-level predictive process.
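
For what it's worth, here's a cartoon of that idiom - deliberately oversimplified, with exact procedure-identity standing in for Bayesian-inferred similarity, and a trivial split rule standing in for a converged goal process. The only point it illustrates is the correlation guarantee: if both parties verify they are running the same deterministic procedure over shared inputs, their outputs agree without any external enforcement.

    import hashlib
    import inspect

    def arbitration(shared_inputs):
        # Stand-in for a converged goal process: any deterministic function
        # of the shared negotiation inputs will do.
        total = shared_inputs["surplus"]
        return {"my_share": total / 2, "their_share": total / 2}

    def fingerprint(procedure):
        # Cartoon of "reasoning about the other party's decision process":
        # here, just hash its source code.
        return hashlib.sha256(inspect.getsource(procedure).encode()).hexdigest()

    def negotiate(my_procedure, their_fingerprint, shared_inputs):
        # Commit only if I can verify we run the *same* deterministic
        # procedure; then my output is guaranteed correlated with theirs.
        if fingerprint(my_procedure) != their_fingerprint:
            return None  # not enough verified similarity; no binding deal
        return my_procedure(shared_inputs)

    fp = fingerprint(arbitration)
    print(negotiate(arbitration, fp, {"surplus": 10}))  # both sides compute the same split

A real SI would be doing something vastly more flexible - probabilistic similarity over loosely related goal systems rather than bit-identical procedures - but even this cartoon already relies on the kind of reflective correlation that AIXI-tl's Cartesian separation rules out.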

But if this isn't immediately obvious to you, it doesn't seem like a top priority to try and discuss it...

--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
