Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-11 Thread Cenny Wenner

I'm still relatively new to the field, so of course I do not put a lot of
weight behind my opinions. I think there are some clarifications here that
could be useful, although it is highly probable that you do not need them.



My overall argument is completely vindicated by what you say here.

(My wording was sometimes ambiguous in that last email, I confess, but
what I have been targeting is AIXI as proof, not AIXI as actual working
system).

I only care about where AIXI gets the power of its proof, so it does not
matter to me whether a practical implementation [sic] of AIXI would
actually need to build a cognitive system.

It is not important whether it would do so in practice, because if the
proof says that AIXI is allowed to build a complete cognitive system in
the course of solving the IQ test problem, then what is the meaning of
"AIXI would equal any other intelligence starting with the same initial
knowledge set"?  Well, yeah, of course it would, if it was allowed to
build something as sophisticated as that other intelligence!

It is like me saying "I can prove that I can make a jet airliner with my
bare hands" ... and then when you delve into the proof you find that
my definition of "make" includes the act of putting in a phone call to
Boeing and asking them to deliver one.  Such a proof is completely
valueless.



It is rather vacuous that AIXI's capability to simulate (to some degree)
another agent's intellect means it is on par with that agent's intellect.
However, AIXI also proves the antecedent; it is not merely an assumption.
This is different from similar arguments that are circular in disguise. On
top of this, it proves that we can do better than that intellect, and that
we may act optimally given limited data (optimally in terms of what can be
done with the data).
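
For concreteness, the optimality claims concern Hutter's expectimax rule
over the Solomonoff mixture, which (simplifying the notation, and as far as
I understand the formulation) reads:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           \left[ r_k + \cdots + r_m \right]
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Here U is a universal Turing machine, q ranges over all environment
programs consistent with the interaction history, the o_i and r_i are
observations and rewards, m is the horizon, and \ell(q) is the length of q
in bits. Note that the inner sum ranges over *every* consistent program, so
nothing in the rule itself forbids the dominant hypotheses from containing
simulated minds, which is, I believe, the point under dispute.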

I think your example is a strawman here. What is counter-intuitive, I
believe, is the definition of 'make'. If you allow the phone call to fall
within that definition, and take care to formulate a strict definition
rather than one of varying strength, then any path to the goal is of equal
worth. Not everything is able to phone Boeing, and it is therefore not
valueless to know that something can; it does contribute information, and
it is merely a choice of scale. What would you say if, in your process of
building the plane, your observation of birds and the wind led you to think
along similar lines as the Wrights, though not necessarily exactly the
same? If we do not allow a being to simulate another agent's train of
thought, even implicitly (i.e. without necessarily realizing that it merely
mimics that agent, though I do not personally consider this relevant), or
even to have the capability of doing so, then in a world where every
possible agent does exist (or has existed), any additional agents are
condemned.

AIXI is valueless.


QED.



Richard Loosemore.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983





Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Richard Loosemore

Ben Goertzel wrote:




Sorry, but I simply do not accept that you can make "do really well on 
a long series of IQ tests" into a computable function without getting 
tangled up in an implicit homuncular trap (i.e. accidentally assuming 
some real intelligence in the computable function).


Let me put it this way:  would AIXI, in building an implementation of 
this function, have to make use of a universe (or universe simulation) 
that *implicitly* included intelligences that were capable of creating 
the IQ tests?


So, if there were a question like this in the IQ tests:

Anna Nicole is to Monica Lewinsky as Madonna is to ...


Richard, perhaps your point is that IQ tests assume certain implicit 
background knowledge.  I stated in my email that "AIXI would equal any 
other intelligence starting with the same initial knowledge set".  So, 
your point is that IQ tests assume an initial knowledge set that is part 
and parcel of human culture.



No, that was not my point at all.

My point was much more subtle than that.

You claim that AIXI would equal any other intelligence "starting with 
the same initial knowledge set".  I am focusing on the "initial 
knowledge set."


So let's compare me, as the other intelligence, with AIXI.  What exactly 
is the "same initial knowledge set" that we are talking about here? 
Just the words I have heard and read in my lifetime?  The words that I 
have heard, read AND spoken in my lifetime?  The sum total of my sensory 
experiences, down at the neuron-firing level?  The sum total of my 
sensory experiences AND my actions, down at the neuron-firing level? 
All of the above, but also including the sum total of all my internal 
mental machinery, so as to relate the other fluxes of data in a coherent 
way?  All of the above, but including all the cultural information that 
is stored out there in other minds, in my society?  All of the above, 
but including simulations of all the related ...?


Where, exactly, does AIXI draw the line when it tries to emulate my 
performance on the test?


(I picked that particular example of an IQ test question in order to 
highlight the way that some tests involve a huge amount of information 
that requires understanding other minds ... my goal being to force AIXI 
into having to go a long way to get its information).


And if it does not draw a clear line around what "same initial knowledge 
set" means, but the process is open-ended, what is to stop the AIXI 
theorems from implicitly assuming that AIXI, if it needs to, can simulate 
my brain and the brains of all the other humans, in its attempt to do 
the optimisation?


What I am asking (non-rhetorically) is a question about how far AIXI 
goes along that path.  Do you know AIXI well enough to say?  My 
understanding (poor though it is) is that it appears to allow itself the 
latitude to go that far if the optimization requires it.


If it *does* allow itself that option, it would be parasitic on human 
intelligence, because it would effectively be simulating one in order to 
deconstruct it and use its knowledge to answer the questions.


Can you say, definitively, that AIXI draws a clear line around the 
meaning of "same initial knowledge set", and does not allow itself the 
option of implicitly simulating entire human minds as part of its 
infinite computation?


Now, I do have a second line of argument in readiness, in case you can 
confirm that it really is strictly limited, but I don't think I need to 
use it.  (In a nutshell, I would go on to say that if it does draw such 
a line, then I dispute that it really can be proved to perform as well 
as I do, because it redefines what I am trying to do in such a way as 
to weaken my performance, and then proves that it can perform better 
than *that*).






Richard Loosemore


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Richard Loosemore

Ben Goertzel wrote:


I agree that, to compare humans versus AIXI on an IQ test in a fully 
fair way (that tests only intelligence rather than prior knowledge) 
would be hard, because there is no easy way to supply AIXI with the same 
initial knowledge state that the human has.
Regarding whether AIXI, in order to solve an IQ test, would simulate the 
whole physical universe internally in order to simulate humans and thus 
figure out what a human would say for each question -- I really doubt 
it, actually.  I am very close to certain that simulating a human is NOT 
the simplest possible way to create a software program scoring 100% on 
human-created IQ tests.  So, the Occam prior embodied in AIXI would 
almost surely not cause it to take the strategy you suggest.
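
(For reference, the prior I mean is the Solomonoff-style mixture; modulo
notational details, something like

    \xi(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

where U is a universal prefix machine, \ell(p) is the length of program p
in bits, and the sum is over programs whose output starts with x. Every
extra bit of program length halves a hypothesis's weight, so a compact
special-purpose predictor would dominate a bit-for-bit human simulation by
an astronomical factor.)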

-- Ben


Alas, that was not quite the question at issue...

In the proof of AIXI's ability to solve the IQ test, is AIXI *allowed* 
to go so far as to simulate most of the functionality of a human brain 
in order to acquire its ability?


I am not asking you to make a judgment call on whether or not it would 
do so in practice, I am asking whether the structure of the proof allows 
that possibility to occur, should the contingencies of the world oblige 
it to do so.  (I would also be tempted to question your judgment call, 
here, but I don't want to go that route :-)).


If the proof allows even the possibility that AIXI will do this, then 
AIXI has an homunculus stashed away deep inside it (or at least, it has 
one on call and ready to go when needed).


I only need the possibility that it will do this, and my conclusion holds.

So:  clear question.  Does the proof implicitly allow it?


Richard Loosemore.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Ben Goertzel



Alas, that was not quite the question at issue...

In the proof of AIXI's ability to solve the IQ test, is AIXI *allowed* 
to go so far as to simulate most of the functionality of a human brain 
in order to acquire its ability?


I am not asking you to make a judgment call on whether or not it would 
do so in practice, I am asking whether the structure of the proof 
allows that possibility to occur, should the contingencies of the 
world oblige it to do so.  (I would also be tempted to question your 
judgment call, here, but I don't want to go that route :-)).


If the proof allows even the possibility that AIXI will do this, then 
AIXI has an homunculus stashed away deep inside it (or at least, it 
has one on call and ready to go when needed).


I only need the possibility that it will do this, and my conclusion 
holds.


So:  clear question.  Does the proof implicitly allow it?

Yeah, if AIXI is given initial knowledge or experiential feedback that 
is in principle adequate for internal reconstruction of simulated humans 
... then its learning algorithm may potentially construct simulated humans.


However, it is not at all clear that, in order to do well on an IQ test, 
AIXI would need to be given enough background data or experiential 
feedback to **enable** accurate simulation of humans...


It's not right to say AIXI has a homunculus "on call and ready to go 
when needed." 

Rather, it's right to say AIXI has the capability to synthesize an 
homunculus if it is given adequate data to infer the properties of one, 
and judges this the best way to approach the problem at hand.


-- Ben G


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Richard Loosemore

Ben Goertzel wrote:



Alas, that was not quite the question at issue...

In the proof of AIXI's ability to solve the IQ test, is AIXI *allowed* 
to go so far as to simulate most of the functionality of a human brain 
in order to acquire its ability?


I am not asking you to make a judgment call on whether or not it would 
do so in practice, I am asking whether the structure of the proof 
allows that possibility to occur, should the contingencies of the 
world oblige it to do so.  (I would also be tempted to question your 
judgment call, here, but I don't want to go that route :-)).


If the proof allows even the possibility that AIXI will do this, then 
AIXI has an homunculus stashed away deep inside it (or at least, it 
has one on call and ready to go when needed).


I only need the possibility that it will do this, and my conclusion 
holds.


So:  clear question.  Does the proof implicitly allow it?

Yeah, if AIXI is given initial knowledge or experiential feedback that 
is in principle adequate for internal reconstruction of simulated humans 
... then its learning algorithm may potentially construct simulated humans.


However, it is not at all clear that, in order to do well on an IQ test, 
AIXI would need to be given enough background data or experiential 
feedback to **enable** accurate simulation of humans...


It's not right to say AIXI has a homunculus "on call and ready to go 
when needed."
Rather, it's right to say AIXI has the capability to synthesize an 
homunculus if it is given adequate data to infer the properties of one, 
and judges this the best way to approach the problem at hand.


My overall argument is completely vindicated by what you say here.

(My wording was sometimes ambiguous in that last email, I confess, but 
what I have been targeting is AIXI as proof, not AIXI as actual working 
system).


I only care about where AIXI gets the power of its proof, so it does not 
matter to me whether a practical implementation [sic] of AIXI would 
actually need to build a cognitive system.


It is not important whether it would do so in practice, because if the 
proof says that AIXI is allowed to build a complete cognitive system in 
the course of solving the IQ test problem, then what is the meaning of 
"AIXI would equal any other intelligence starting with the same initial 
knowledge set"?  Well, yeah, of course it would, if it was allowed to 
build something as sophisticated as that other intelligence!


It is like me saying "I can prove that I can make a jet airliner with my 
bare hands" ... and then when you delve into the proof you find that 
my definition of "make" includes the act of putting in a phone call to 
Boeing and asking them to deliver one.  Such a proof is completely 
valueless.


AIXI is valueless.

QED.



Richard Loosemore.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Ben Goertzel




AIXI is valueless.

Well, I agree that AIXI provides zero useful practical guidance to those 
of us working on practical AGI systems.

However, as I clarified in a prior longer post, saying that mathematics 
is valueless is always a risky proposition.  Statements of this nature 
have been proved wrong plenty of times in the past, in spite of their 
apparent sensibleness at the time of utterance...

But I think we have all made our views on this topic rather clear, at 
this point ;-)


Time to agree to disagree and move on...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-08 Thread Russell Wallace

On 3/8/07, Richard Loosemore [EMAIL PROTECTED] wrote:


Let me put it this way:  would AIXI, in building an implementation of
this function, have to make use of a universe (or universe simulation)
that *implicitly* included intelligences that were capable of creating
the IQ tests?

So, if there were a question like this in the IQ tests:

Anna Nicole is to Monica Lewinsky as Madonna is to ...

Would AIXI have to build a solution by implicitly deconstructing (if you
see what I mean) the entire real universe, including its real human
societies and real (intelligent) human beings and real social
relationships?

If AIXI does a post-hoc deconstruction of some real intelligent
systems as part of building its own intelligent function, it is
parasitic on that intelligence.

You can confirm that it is not parasitic in that way?



If I understand you correctly, you ask two different questions here.

(Context: I'm assuming "IQ test" means a folder of IQ tests you might
actually buy from a real company today, not some hypothetical function of
arbitrary complexity.)

The first question is, consider the shortest program that would max out the
test. Does it consist of:

A) Start with the Big Bang, run 14 billion years, pick the Everett branch
that evolved English-speaking humans, send a UFO to abduct the smartest
human and present him with the test... (okay I'm being a little facetious
but you get the idea),

B) Some special-purpose hack that treats Anna Nicole etc as arbitrary
symbols without any of the connotations they have to us, _and does not
generalize to anything much other than IQ tests_.

Obviously it's unprovable, but I'm confident the answer is B based on
experience: the shortest program for any _particular_ task is almost always
a special-purpose hack that doesn't generalize.

And in case B, everyone would agree there is no great intelligence involved.
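
To put the intuition behind B in standard terms: the "shortest program" is
formalized by Kolmogorov complexity,

    K(x) = \min \{ \ell(p) : U(p) = x \},

and under a 2^{-\ell(p)} prior, a program even a few hundred bits shorter
than a rival outweighs it by a factor of 2^{a few hundred}. A
special-purpose hack versus a simulated universe is not a close call. (The
numbers are illustrative, not calculated.)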

You then seem to be saying that even in case A, the intelligence would
reside in the genius evolved in the simulated universe, and
the apparent intelligence of AIXI would be parasitical on that, i.e. AIXI
itself wouldn't really be intelligent. As I said recently, I agree with
that position, but it's also one of philosophy, of how one chooses to define
the word intelligence, not something amenable to proof or disproof; to
call AIXI intelligent in that scenario would effectively be a form of
pantheism.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-08 Thread Shane Legg

On 3/8/07, Peter Voss [EMAIL PROTECTED] wrote:


 It's about time that someone else said that the AIXI emperor has no clothes.




Infinite computing power arguments prove **nothing**.



That depends on exactly what you mean by "prove nothing".
For example, you can use the AIXI model to prove that no
real computable AGI (I'm talking finite computing power now)
is able to solve certain kinds of learning tasks.
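
(Sketch of one such argument, glossing over the details: take any
computable predictor \pi and define a binary sequence by

    x_n = 1 - \pi(x_1 \ldots x_{n-1}),

so each bit is the opposite of whatever \pi predicts from the bits so far.
The sequence is computable, because \pi is, yet \pi gets every single bit
wrong; hence no computable predictor can learn all computable sequences,
whereas Solomonoff induction converges on every one of them.)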

I consider that a real result, but I guess you do not.

Shane

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-08 Thread Shane Legg

On 3/8/07, Peter Voss [EMAIL PROTECTED] wrote:


AIXI certainly doesn't prove that AGI is possible.


I agree.

The human brain is what makes me think that it's possible.

Shane

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983