Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Richard Loosemore

Ben Goertzel wrote:




Sorry, but I simply do not accept that you can make "do really well on 
a long series of IQ tests" into a computable function without getting 
tangled up in an implicit homuncular trap (i.e. accidentally assuming 
some real intelligence in the computable function).


Let me put it this way:  would AIXI, in building an implementation of 
this function, have to make use of a universe (or universe simulation) 
that *implicitly* included intelligences that were capable of creating 
the IQ tests?


So, if there were a question like this in the IQ tests:

Anna Nicole is to Monica Lewinsky as Madonna is to ...


Richard, perhaps your point is that IQ tests assume certain implicit 
background knowledge.  I stated in my email that AIXI would equal any 
other intelligence starting with the same initial knowledge set.  So, 
your point is that IQ tests assume an initial knowledge set that is part 
and parcel of human culture.



No, that was not my point at all.

My point was much more subtle than that.

You claim that AIXI would equal any other intelligence "starting with 
the same initial knowledge set."  I am focussing on the "initial 
knowledge set."


So let's compare me, as "the other intelligence," with AIXI.  What exactly 
is the "same initial knowledge set" that we are talking about here? 
Just the words I have heard and read in my lifetime?  The words that I 
have heard, read AND spoken in my lifetime?  The sum total of my sensory 
experiences, down at the neuron-firing level?  The sum total of my 
sensory experiences AND my actions, down at the neuron-firing level? 
All of the above, but also including the sum total of all my internal 
mental machinery, so as to relate the other fluxes of data in a coherent 
way?  All of the above, but including all the cultural information that 
is stored out there in other minds, in my society?  All of the above, 
but including simulations of all the related ...


Where, exactly, does AIXI draw the line when it tries to emulate my 
performance on the test?


(I picked that particular example of an IQ test question in order to 
highlight the way that some tests involve a huge amount of information 
that requires understanding other minds ... my goal being to force AIXI 
into having to go a long way to get its information.)


And if it does not draw a clear line around what "same initial knowledge 
set" means, but the process is open-ended, what is to stop the AIXI 
theorems from implicitly assuming that AIXI, if it needs to, can simulate 
my brain and the brains of all the other humans, in its attempt to do 
the optimisation?


What I am asking (non-rhetorically) is a question about how far AIXI 
goes along that path.  Do you know AIXI well enough to say?  My 
understanding (poor though it is) is that it appears to allow itself the 
latitude to go that far if the optimization requires it.


If it *does* allow itself that option, it would be parasitic on human 
intelligence, because it would effectively be simulating one in order to 
deconstruct it and use its knowledge to answer the questions.


Can you say, definitively, that AIXI draws a clear line around the 
meaning of "same initial knowledge set," and does not allow itself the 
option of implicitly simulating entire human minds as part of its 
infinite computation?


Now, I do have a second line of argument in readiness, in case you can 
confirm that it really is strictly limited, but I don't think I need to 
use it.  (In a nutshell, I would go on to say that if it does draw such 
a line, then I dispute that it really can be proved to perform as well 
as I do, because it redefines what I am trying to do in such a way as 
to weaken my performance, and then proves that it can perform better 
than *that*).






Richard Loosemore




Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Richard Loosemore

Ben Goertzel wrote:


I agree that to compare humans versus AIXI on an IQ test in a fully 
fair way (one that tests only intelligence rather than prior knowledge) 
would be hard, because there is no easy way to supply AIXI with the same 
initial knowledge state that the human has.

Regarding whether AIXI, in order to solve an IQ test, would simulate the 
whole physical universe internally in order to simulate humans and thus 
figure out what a human would say for each question -- I really doubt 
it, actually.  I am very close to certain that simulating a human is NOT 
the simplest possible way to create a software program scoring 100% on 
human-created IQ tests.  So, the Occam prior embodied in AIXI would 
almost surely not cause it to take the strategy you suggest.
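
For concreteness: AIXI picks its actions by expectimax over a 
length-weighted mixture of environment programs.  Roughly, in Hutter's 
notation, with horizon m,

  a_k = \arg\max_{a_k} \sum_{o_k r_k} \ldots \max_{a_m} \sum_{o_m r_m}
        (r_k + \cdots + r_m) \sum_{q : U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine and q ranges over environment 
programs.  That last factor is the Occam prior I mean: an environment 
program q long enough to encode simulated humans is penalized 
exponentially in its length, relative to any shorter q that predicts 
the same test answers.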

-- Ben


Alas, that was not quite the question at issue...

In the proof of AIXI's ability to solve the IQ test, is AIXI *allowed* 
to go so far as to simulate most of the functionality of a human brain 
in order to acquire its ability?


I am not asking you to make a judgment call on whether or not it would 
do so in practice, I am asking whether the structure of the proof allows 
that possibility to occur, should the contingencies of the world oblige 
it to do so.  (I would also be tempted to question your judgment call, 
here, but I don't want to go that route :-)).


If the proof allows even the possibility that AIXI will do this, then 
AIXI has an homunculus stashed away deep inside it (or at least, it has 
one on call and ready to go when needed).


I only need the possibility that it will do this, and my conclusion holds.

So:  clear question.  Does the proof implicitly allow it?


Richard Loosemore.



Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Ben Goertzel



Alas, that was not quite the question at issue...

In the proof of AIXI's ability to solve the IQ test, is AIXI *allowed* 
to go so far as to simulate most of the functionality of a human brain 
in order to acquire its ability?


I am not asking you to make a judgment call on whether or not it would 
do so in practice, I am asking whether the structure of the proof 
allows that possibility to occur, should the contingencies of the 
world oblige it to do so.  (I would also be tempted to question your 
judgment call, here, but I don't want to go that route :-)).


If the proof allows even the possibility that AIXI will do this, then 
AIXI has an homunculus stashed away deep inside it (or at least, it 
has one on call and ready to go when needed).


I only need the possibility that it will do this, and my conclusion 
holds.


So:  clear question.  Does the proof implicitly allow it?

Yeah, if AIXI is given initial knowledge or experiential feedback that 
is in principle adequate for internal reconstruction of simulated humans 
... then its learning algorithm may potentially construct simulated humans.


However, it is not at all clear that, in order to do well on an IQ test, 
AIXI would need to be given enough background data or experiential 
feedback to **enable** accurate simulation of humans.


It's not right to say AIXI has a homunculus "on call and ready to go 
when needed."

Rather, it's right to say AIXI has the capability to synthesize an 
homunculus if it is given adequate data to infer the properties of one, 
and judges this the best way to approach the problem at hand.
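
To make "may potentially construct" concrete, here is a toy, purely 
illustrative sketch of the kind of length-weighted program mixture that 
AIXI's learning rests on.  The bit-tuple "programs" and the cycling 
"universal machine" below are stand-ins of mine; the real Solomonoff 
mixture runs over all programs on a true universal machine and is 
incomputable.  The point is only that the hypothesis class excludes 
nothing: a program that simulated a human would be in it too, merely 
down-weighted by 2^-length.

  from itertools import product

  def run(program, steps):
      # Stand-in "universal machine": interpret a program (a bit tuple)
      # by cycling it to emit predictions.
      return [program[t % len(program)] for t in range(steps)]

  def mixture_prediction(history, max_len=11):
      # Weight every program by 2**-length, keep those that reproduce
      # the observed history, and return the mixture probability that
      # the next bit is a 1.
      total = ones = 0.0
      for length in range(1, max_len + 1):
          for program in product((0, 1), repeat=length):
              out = run(program, len(history) + 1)
              if out[:-1] == history:
                  w = 2.0 ** -length
                  total += w
                  ones += w * out[-1]
      return ones / total if total else 0.5

  print(mixture_prediction([1, 0, 1, 0, 1, 0]))  # near 1: shortest fit is (1, 0)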


-- Ben G




Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Richard Loosemore

Ben Goertzel wrote:



[...]

Yeah, if AIXI is given initial knowledge or experiential feedback that 
is in principle adequate for internal reconstruction of simulated humans 
... then its learning algorithm may potentially construct simulated humans.


However, it is not at all clear that, in order to do well on an IQ test, 
AIXI would need to be given enough background data or experiential 
feedback to **enable** accurate simulation of humans.


It's not right to say AIXI has a homunculus "on call and ready to go 
when needed."
Rather, it's right to say AIXI has the capability to synthesize an 
homunculus if it is given adequate data to infer the properties of one, 
and judges this the best way to approach the problem at hand.


My overall argument is completely vindicated by what you say here.

(My wording was sometimes ambiguous in that last email, I confess, but 
what I have been targeting is AIXI as proof, not AIXI as actual working 
system).


I only care about where AIXI gets the power of its proof, so it does not 
matter to me whether a practical implementation [sic] of AIXI would 
actually need to build a cognitive system.


It is not important whether it would do so in practice, because if the 
proof says that AIXI is allowed to build a complete cognitive system in 
the course of solving the IQ test problem, then what is the meaning of 
"AIXI would equal any other intelligence starting with the same initial 
knowledge set"?  Well, yeah, of course it would, if it was allowed to 
build something as sophisticated as that other intelligence!


It is like me saying I can prove that I can make a jet airliner with my 
bare hands ... and then, when you delve into the proof, you find that 
my definition of "make" includes the act of putting in a phone call to 
Boeing and asking them to deliver one.  Such a proof is completely 
valueless.


AIXI is valueless.

QED.



Richard Loosemore.




Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Ben Goertzel




AIXI is valueless.

Well, I agree that AIXI provides zero useful practical guidance to those 
of us working on practical AGI systems.

However, as I clarified in a prior longer post, saying that mathematics 
is valueless is always a risky proposition.  Statements of this nature 
have been proved wrong plenty of times in the past, in spite of their 
apparent sensibleness at the time of utterance...

But I think we have all made our views on this topic rather clear, at 
this point ;-)


Time to agree to disagree and move on...

-- Ben



Re: [singularity] Why We are Almost Certainly not in a Simulation

2007-03-09 Thread Charles D Hixson

Stathis Papaioannou wrote:



On 3/7/07, *Charles D Hixson* [EMAIL PROTECTED] wrote:


With so many imponderables, the most reasonable thing to do is to just
ignore the possibility, and, after all, that may well be what is
desired by the simulation.  (What would our ancestors' lives have been
like if Teddy Roosevelt had won the presidential election?)


While it's quite an assumption that we are in a simulation, it's an 
even more incredible assumption that we are somehow at the centre of 
it. It is analogous to comparing belief in a deistic God to belief in 
Jehovah the sky god, who wants us to make sacrifices to him and eat 
certain things but not others. The more closely we specify something 
of which we can have no knowledge, the more foolish it becomes.


Stathis Papaioannou

Point.  But we *could* be.  If it's a simulation, perhaps only a local 
area of interest is simulated.  In a good simulation, you couldn't 
tell.  And you can't put a boundary around the "local area," either.  It 
could be just the internal workings of your brain (well, of my brain, 
since I'm the one active at the moment ... but when you are reading, then 
you are the one active, so...).


That's sort of the point.  If it's a simulation, we can't tell what's 
going on, so we (well, I) can't make choices based on that assumption, 
even if it were to seem more plausible ... UNLESS the argument that made 
it seem sufficiently plausible made some small sheaf of scenarios seem 
sufficiently probable.  So far this hasn't happened.




Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-09 Thread Charles D Hixson

Shane Legg wrote:

:-)

No offence taken, I was just curious to know what your position was.

I can certainly understand people with a practical interest not having
time for things like AIXI.  Indeed, as I've said before, my PhD is in AIXI
and related stuff, and yet my own AGI project is based on other things.
So even I am skeptical about whether it will lead to practical methods.
That said, I can see that AIXI does have some fairly theoretical uses;
perhaps Friendliness will turn out to be one of them?

Shane

...

As described (I haven't read, and probably couldn't read, the papers on 
AIXI; only the discussions on the list and, when I get that far, Ben's 
text), AIXI doesn't appear to be anything that a reasonable person would 
call intelligent.  As such, I don't see how it could shed any light on 
Friendliness.  Would you care to elaborate?  Or were the descriptions on 
the list, perhaps, unfair?




[singularity] Scenarios for a simulated universe

2007-03-09 Thread Keta Meme

i am familiar with the 'simulation argument', various modes of
philosophical/epistemological thinking about the nature of reality and
simulation, and the previous replies to this mailing list.  so am i prepared
to share some brief words about the subject??? X-P

do you ever get the sense that you are [merely] an instance of an (immortal
information) template that describes patterns in your DNA (or physical
hardware description) and thoughts (software)?

in some moments, do you 'wake up' feeling like you've been re-started with a
set of initial conditions (which don't necessarily need to relate to any
'real' past experience)?

do you ever feel like one of 'The Sims' characters in a multi-dimensional
simulation, that has video editing controls like play, pause, rewind,
save, modify?

that reality is not one monolithic continuous chain of events, but rather
simulation fragments that do not mean anything in relation to each other
aside from an extradimensional intelligence (ourselves?) that has
artistically designed them?

can you understand the concept of determinism (
http://en.wikipedia.org/wiki/Determinism), the 'block universe' model,
'virtual free will', and 'no-separate-self'?

how can one defend the position that this reality is NOT THE ONLY ONE?
can every possible reality exist in its own 'multiverse possibility
branch'?


Dr. Keta Meme
http://ketameme.com
[EMAIL PROTECTED]
