Shane Legg wrote:
John,

I talk a little about Ned Block's argument in my journal paper about
formally defining intelligence; unfortunately, however, this paper
seems to have gone into some kind of infinite loop inside the journal
review process so I'm not sure when it will see the light of day :-(

The first objection to Block's argument that comes up is that it's
only an in-theory argument, as you could never build such a machine.
Even trivial problems quickly require that the machine have 2 ^
2 ^ 10000 bits of memory.  These are problems that any real AGI would
solve without the slightest effort, as it just has to record a few
bytes and apply a trivial function.  Thus, unless it turns out that
infinite computation is possible in reality, nobody with a practical
mind will ever have to worry about a Blockhead lookup table machine.
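The blow-up is easy to make concrete with a little arithmetic. Here is a minimal sketch (the task, alphabet size, and function names are hypothetical illustrations, not anything from the email): a Blockhead table must store one canned response for every possible input history, so its size grows exponentially with the history length, while a program solving the same trivial task keeps constant state.

```python
# Hypothetical illustration of the lookup-table blow-up.
# Task assumed for the example: "repeat back the last symbol you saw".

ALPHABET = 256  # assume one-byte input symbols


def table_entries(history_len: int) -> int:
    """Number of entries a lookup table needs to cover every
    possible input history of the given length: ALPHABET ** history_len."""
    return ALPHABET ** history_len


def echo_last(history: bytes) -> int:
    """The 'real AGI' approach: record a few bytes and apply a
    trivial function -- constant memory regardless of history length."""
    return history[-1]


for k in (1, 2, 4, 8, 16):
    # The table explodes; the function's state stays one byte.
    print(f"history length {k:2d}: table needs {table_entries(k)} entries")
```

Even at a history of 16 one-byte symbols the table already needs 256^16 (about 3.4 * 10^38) entries, which is why the in-practice objection bites long before anything like 2^2^10000 is reached.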

What if we change the Block argument to allow something slightly more
complex than just a lookup table, so as to avoid the problem above?
The problem then is that human intelligence appears to be the product
of a finite number of neurons firing etc. in the brain.  In other
words, we are more than a lookup table; however, we may well be just
the product of a large (but finite) number of reasonably simple things
working together.  So if you're not careful you may well define
intelligence in such a way that humans don't have it either.

What if infinite computation did become possible -- won't the Block
argument then become a serious problem? If you did have infinite
computation then you could just build an AIXI and be done.

Shane,

This is patently false. If you had the infinite computational power to build the hardware for an AIXI system, you would still have to program it to pick up the contingencies of the real world (just building the machine does not get you the functions that are assumed to have been *acquired* by an infinite AIXI). So you also require an infinite amount of time to get it programmed. Just how many infinities are you allowed to assume before this becomes nonsense?

Wait. Come to think of it, the machine itself would also be a part of the real physical universe (unless it is made of ectoplasm and exists only in a parallel dimension which has no interactions with the real world), so it would have to include, as part of its acquired programming, a complete knowledge of all the contingencies associated with its own behavior. Thinking carefully through the implications of this, and bearing in mind that the machine in the real world is already spending an infinite amount of time capturing the contingencies embedded in the real physical universe, I cannot help but conclude that the AIXI system would also be spending an *additional* infinite amount of time playing catchup in its attempts to model its own infinitely-changing self.

The whole concept of an AIXI machine is replete with these ludicrous implications.


Richard Loosemore.


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983