Ben,

While I sympathise with your distinction between satirical and technically accurate attacks on the value of AIXI (I have presented both in the last week), and I would not want the two to be conflated, I want to go on record as saying that I do not accept some of what you say below.

For example:

Perhaps the main contribution of the AIXI work, in my
view, is that it provides a rigorous mathematical refutation
to the idea that "AGI is impossible."  It shows that, in case
anyone had any doubt, "Yes, the problem of creating AGI
is in essence a problem of coping with limited space
and time resources."   Now, this was intuitively obvious
to me (and many others) already.  But, the AIXI and AIXItl
work shows it quite impressively and definitively.  Because
it shows that if one makes sufficiently generous assumptions
regarding space and time resources, one can achieve
maximally powerful AGI using a very simple algorithm.
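
(For reference, and so that we are at least talking about the same object: the "very simple algorithm" here is, as Hutter defines it (writing it roughly, from memory), the expectimax expression

    a_k := arg max_{a_k} sum_{o_k r_k} ... max_{a_m} sum_{o_m r_m}
           (r_k + ... + r_m) * sum_{q : U(q, a_1..a_m) = o_1 r_1 .. o_m r_m} 2^{-l(q)}

i.e. at each step pick the action that maximises expected future reward, where the expectation runs over every program q for a universal machine U that is consistent with the interaction history so far, each weighted 2^{-l(q)} according to its length l(q).)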

It only demonstrates this under conditions that restrict the meaning of "AGI" in particular ways. Those restrictions, as I pointed out before, have to do with defining "intelligence" to be a certain sort of function ... in the real world it remains entirely possible that even AIXI would behave in a way that we would consider "stupid," even though it would technically be following its "goals" perfectly.

The point I just made cannot be pursued very far, however, because any further discussion of it *requires* that someone on the AIXI side become more specific about why they believe their definition of "intelligent behavior" should be considered coextensive with the common-sense use of that term. No such justification has been forthcoming, so without it all I can do is rest my case by asking, "Why should I believe your (re)definition of intelligence?"



These remarks, also, I find overly generous:

Finally, there are at least two principles underlying AIXI
that I think are applicable to realistic-resources AGI systems:

-- probabilistic reasoning (though a realistic-resources
system can only approximate it, not exactly achieve it)
-- Occam's Razor (preferring the simpler explanation)

These two aspects are key to AIXI and I also think they
are central to achieving AGI using realistic resources.
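
(For reference: in AIXI those two principles are not separate mechanisms. Both are packed into the single weighting term

    2^{-l(q)}

over the programs q consistent with the history -- Bayes-style mixing over all computable environments, with shorter programs weighted exponentially more heavily. That, as I understand it, is the entire sense in which AIXI "uses" probabilistic reasoning and Occam's razor, and it matters for what I say next.)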

AIXI does not "prove" that either of these principles is valid, because (among other things) such a proof is dependent on the above issue being sorted out.

AIXI also does not make specific, pragmatic suggestions about exactly how "Occam's razor" and "probabilistic reasoning" play a role in AGI systems. Instead, we are left with the idea that something like Occam's razor and something like probabilistic reasoning must be involved.
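
To make that concrete (a toy sketch of my own, not anything taken from the AIXI papers): the generic recipe people were already using is "Bayesian update with a prior that penalises description length", something like:

    # Toy "probabilistic reasoning + Occam's razor", independent of AIXI.
    # The models, their description lengths, and their likelihood functions
    # are placeholders; only the weighting scheme is the point.
    def posterior(models, data):
        # models: list of (name, description_length_bits, likelihood_fn)
        scores = {}
        for name, length_bits, likelihood in models:
            prior = 2.0 ** (-length_bits)            # Occam: shorter model, larger prior
            scores[name] = prior * likelihood(data)  # Bayes: reweight by fit to the data
        total = sum(scores.values())
        return {name: score / total for name, score in scores.items()}

Nothing in that sketch came from AIXI, and nothing in AIXI tells you how to choose the models, compute the likelihoods, or do any of it within realistic resources.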

Big deal: we were assuming both of them anyway. AIXI does not *force* us to accept those principles now, because it does not say anything more precise about them than people were already saying. (If you think it does force us, more than before, please explain to this skeptic.)



But really, I am at my limit for discussing this subject, and I would prefer that you accept that I disagree and we both let it rest. You feel it has some marginal value; I believe it is truly worthless.

I have not had meaningful replies to my earlier, careful criticisms (directed at remarks made by others), and I have no desire to ask you to defend AIXI.


Richard Loosemore.

