Alas, that was not quite the question at issue...

In the proof of AIXI's ability to solve the IQ test, is AIXI *allowed* to go so far as to simulate most of the functionality of a human brain in order to acquire its ability?

I am not asking you to make a judgment call on whether or not it would do so in practice, I am asking whether the structure of the proof allows that possibility to occur, should the contingencies of the world oblige it to do so. (I would also be tempted to question your judgment call here, but I don't want to go that route :-)).

If the proof allows even the possibility that AIXI will do this, then AIXI has an homunculus stashed away deep inside it (or at least, it has one on call and ready to go when needed).

I only need the possibility that it will do this, and my conclusion holds.

So:  clear question.  Does the proof implicitly allow it?

Yeah, if AIXI is given initial knowledge or experiential feedback that is in principle adequate for internal reconstruction of simulated humans ... then its learning algorithm may, in principle, construct simulated humans.
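
(For concreteness, and glossing over the action-conditioning in the full AIXI definition: the environment model is a Solomonoff-style mixture that weights every program q by roughly its length,

    \xi(x_{1:n}) = \sum_{q \,:\, U(q) = x_{1:n}*} 2^{-\ell(q)}

where U is a universal Turing machine and \ell(q) is the length of program q, the sum running over all programs whose output begins with the observed history. A program that happens to simulate a human is just one more q in that sum; whether it ever accumulates enough posterior weight to drive AIXI's decisions depends entirely on the data AIXI is fed.)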

However, it is not at all clear that, in order to do well on an IQ test, AIXI would need to be given enough background data or experiential feedback to *enable* accurate simulation of humans...

It's not right to say "AIXI has a homunculus on call and ready to go when needed." Rather, it's right to say "AIXI has the capability to synthesize a homunculus if it is given adequate data to infer the properties of one, and judges this the best way to approach the problem at hand."

-- Ben G

