Ben Goertzel wrote:

I agree that, to compare humans versus AIXI on an IQ test in a fully fair way (that tests only intelligence rather than prior knowledge) would be hard, because there is no easy way to supply AIXI with the same initial knowledge state that the human has. Regarding whether AIXI, in order to solve an IQ test, would simulate the whole physical universe internally in order to simulate humans and thus figure out what a human would say for each question -- I really doubt it, actually. I am very close to certain that simulating a human is NOT the simplest possible way to create a software program scoring 100% on human-created IQ tests. So, the Occam prior embodied in AIXI would almost surely not cause it to take the strategy you suggest.
-- Ben
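
(For the list: the "Occam prior" in question is, in Hutter's standard formulation of AIXI, the Solomonoff-style weighting over environment programs,

    \xi(x_{1:n}) \;=\; \sum_{q \,:\, U(q) = x_{1:n}*} 2^{-\ell(q)}

where \ell(q) is the length in bits of program q on the reference machine U. Under that weighting, any program that must encode something as large as a physics or brain simulation pays an exponential penalty in its extra length relative to a shorter, more direct IQ-test solver; that is the substance of Ben's claim. I sketch this only as the textbook formulation, not as a claim about the particular proof under discussion.)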

Alas, that was not quite the question at issue...

In the proof of AIXI's ability to solve the IQ test, is AIXI *allowed* to go so far as to simulate most of the functionality of a human brain in order to acquire that ability?

I am not asking you to make a judgment call on whether or not it would do so in practice; I am asking whether the structure of the proof allows that possibility to occur, should the contingencies of the world oblige it to do so. (I would also be tempted to question your judgment call here, but I don't want to go that route :-)).

If the proof allows even the possibility that AIXI will do this, then AIXI has a homunculus stashed away deep inside it (or at least, it has one on call and ready to go when needed).

I only need the possibility that it will do this, and my conclusion holds.

So: a clear question. Does the proof implicitly allow it?


Richard Loosemore.
