Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-09 Thread Charles D Hixson

Shane Legg wrote:

:-)

No offence taken, I was just curious to know what your position was.

I can certainly understand people with a practical interest not having
time for things like AIXI.  Indeed as I've said before, my PhD is in AIXI
and related stuff, and yet my own AGI project is based on other things.
So even I am skeptical about whether it will lead to practical methods.
That said, I can see that AIXI does have some fairly theoretical uses;
perhaps Friendliness will turn out to be one of them?

Shane

...

As described (I haven't read, and probably couldn't read, the papers on 
AIXI; only the discussion on the list and, when I get that far, Ben's text), 
AIXI doesn't appear to be anything that a reasonable person would call 
intelligent.  As such, I don't see how it could shed any light on 
Friendliness.  Would you care to elaborate?  Or were the descriptions on 
the list, perhaps, unfair?


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


[singularity] Scenarios for a simulated universe

2007-03-09 Thread Keta Meme

i am familiar with 'simulation argument', various modes of
philosophical/epistemological thinking about the nature of reality and
simulation, and the previous replies to this mailing list.  so am i prepared
to share some brief words about the subject??? X-P

do you ever get the sense that you are [merely] an instance of an (immortal
information) template that describes patterns about your DNA (or physical
hardware description) and thoughts (software)?

in some moments, do you 'wake up' feeling like you've been re-started with a
set of initial conditions (which don't necessarily need to relate to any
'real' past experience)?

do you ever feel like one of 'The Sims' characters in a multi-dimensional
simulation, that has video editing controls like play, pause, rewind,
save, modify?

that reality is not one monolithic continuous chain of events, but rather
simulation fragments that do not mean anything in relation to each other
aside from an extradimensional intelligence (ourselves?) that has
artistically designed them?

can you understand the concept of determinism (
http://en.wikipedia.org/wiki/Determinism), the block universe model and
virtual free will and no-separate-self ?

how can one defend the position that this reality is NOT THE ONLY ONE?
can every possible reality exist in its own multiverse possibility
branch?


Dr. Keta Meme
http://ketameme.com
[EMAIL PROTECTED]



Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-08 Thread Shane Legg

Ben,

So you really think AIXI is totally useless?  I haven't been reading
Richard's comments, indeed I gave up reading his comments some
time before he got himself banned from sl4, however it seems that you
in principle support what he's saying.  I just checked his posts and
can see why they don't make sense, however I know very well that
shouting rather than reasoning on the internet is a waste of time.

My question to you then is a bit different.  If you believe that AIXI is
totally a waste of time, why is it that you recently published a book
with a chapter on AIXI in it, and now think that AIXI and related study
should be a significant part of what the SIAI does in the future?

Shane



Re: [singularity] Scenarios for a simulated universe

2007-03-07 Thread Shane Legg

Ben started a new thread about AIXI so I'll switch over there to keep
this discussion in the same place and in sync with the subject line...

Shane

On 3/7/07, Mitchell Porter [EMAIL PROTECTED] wrote:



From: Shane Legg [EMAIL PROTECTED]

For sure.  Indeed my recent paper on whether there exists an elegant theory
of prediction tries to address that very problem.  In short the paper says
that if you want to convert something like Solomonoff induction or AIXI
into a nice computable system... well you can't.  Indeed my own work on
building an intelligent machine is taking a neuroscience-inspired approach
with just a few bits that are in some sense inspired by AIXI.

I think the value of AIXI is that it gives you a relatively simple set of
equations with which to mathematically study the properties of an ultra
intelligent machine.  In contrast something like Novamente can't be
expressed in a one line equation.  This makes it a much more difficult
mathematical object to work with if you want to do theoretical analysis.
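For readers who haven't seen it, the "one line equation" alluded to here is,
as best I can reproduce it from Hutter's papers (check the original for the
canonical statement):

```latex
\dot a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \,\cdots\, \max_{a_m} \sum_{o_m r_m}
\left( r_k + \cdots + r_m \right)
\sum_{q \,:\; U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here U is a universal Turing machine, q ranges over programs (candidate
environments), a/o/r are actions, observations and rewards, m is the horizon,
and \ell(q) is program length; the 2^{-\ell(q)} weighting is the Occam prior.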

This would be the paper, everyone:
http://www.vetta.org/documents/IDSIA-12-06-1.pdf

Shane - first you smack down the Goedel machine, and now AIXI! Is it genuinely
useless in practice, do you think? Hutter says one of his current research
priorities is to shrink it down into something that can run on existing
machines...







Re: [singularity] Scenarios for a simulated universe

2007-03-07 Thread Russell Wallace

On 3/7/07, Eugen Leitl [EMAIL PROTECTED] wrote:


I realize that this is sarcasm, but detecting the mere presence
of a species (nevermind their critical acclaim) from a trajectory,
then rather give me the infinite simians, and I will personally look
for Shakespeare sonnets in them.



And I've seen equally vigorous handwaving in claims about the wondrous
things AIXI on an infinite computer could do, so the analogy still holds
nicely!



Re: [singularity] Scenarios for a simulated universe

2007-03-07 Thread deering
Shane Legg recently (3-5-07) wrote: "...if you're not careful you may well 
define intelligence in such a way that humans don't have it either."

I think it would be a serious mistake to degrade the definition of intelligence 
to the point that it included humans.

Mike Deering,
General Editor, http://nano-catalog.com/   
Director, Singularity Action Group 
http://home.mchsi.com/~deering9/index.html  
Email: deering9 at mchsi dot com



Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Stathis Papaioannou

On 3/5/07, John Ku [EMAIL PROTECTED] wrote:

On 3/4/07, Ben Goertzel [EMAIL PROTECTED]  wrote:



 Richard, I long ago proposed a working definition of intelligence as
 "Achieving complex goals in complex environments."  I then went through
 a bunch of trouble to precisely define all the component terms of that
 definition; you can consult the Appendix to my 2006 book The Hidden
 Pattern.


I'm not sure if your working definition is supposed to be significantly
less ambitious than a philosophical definition or perhaps you even address
something like this in your appendix, but I'm wondering whether the
hypothetical example of Blockhead from philosophy of mind creates problems
for your definition. Imagine that a computer has a huge memory bank of what
actions to undertake given what inputs. With a big enough memory, it seems
it could be perfectly capable of achieving complex goals in complex
environments. Yet in doing so, there would be very little internal
processing, just the bare minimum needed to look up and execute the part of
its memory corresponding to its current inputs.

I think any intuitive notion of intelligence would not count such a
computer as being intelligent to any significant degree no matter how large
its memory bank is or how complex and diverse an environment its memory
allows it to navigate. There's simply too little internal processing going
on for it to count as much more intelligent than any ordinary database
application, though it might of course, do a pretty good job of fooling us
into thinking it is intelligent if we don't know the details.

I think this example actually poses a problem for any purely behavioristic
definition of intelligence. To fit our ordinary notion of intelligence, I
think there would have to be at least some sort of criteria concerning how
the internal processing for the behavior is being done.

I think the Blockhead example is normally presented in terms of looking up
information from a huge memory bank, but as I'm thinking about it just now
as I'm typing this up, I'm wondering if it could also be run with similar
conclusions for simple brute search algorithms. If instead of a huge memory
bank, it had enormous processing power and speed such that it could just
explore every single chain of possibilities for the one that will lead to
some specified goal, I'm not sure that would really count as intelligent to
any significant degree either.
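The lookup-table machine described above is easy to sketch. This is only a toy
illustration of the thought experiment (the table, function names, and replies
are invented for the example): behaviour that can look competent while the
entire internal processing is a single dictionary lookup.

```python
# Toy sketch of the "Blockhead" thought experiment: an agent whose whole
# competence is a precomputed (input history -> action) lookup table.
# The thought experiment imagines a table large enough to cover every
# input sequence a human could encounter; here it has just two entries.

def make_blockhead(table):
    """Return an agent that acts purely by table lookup -- no reasoning."""
    def agent(history):
        return table.get(tuple(history), "do nothing")
    return agent

table = {
    ("hello",): "hello to you",
    ("hello", "how are you?"): "fine, thanks",
}
blockhead = make_blockhead(table)

print(blockhead(["hello"]))
print(blockhead(["hello", "how are you?"]))
```

However large the table grows, the internal processing never exceeds a single
lookup; that is the intuition for denying it any significant degree of
intelligence.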



You seem to be equating intelligence with consciousness. Ned Block also
seems to do this in his original paper. I would prefer to reserve
"intelligence" for third-person observable behaviour, which would make the
Blockhead intelligent, and "consciousness" for the internal state: it is
possible that the Blockhead is unconscious or at least differently conscious
compared to the human.

Stathis Papaioannou



Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Stathis Papaioannou

On 3/6/07, John Ku [EMAIL PROTECTED] wrote:


On 3/5/07, Stathis Papaioannou [EMAIL PROTECTED] wrote:


 You seem to be equating intelligence with consciousness. Ned Block also
 seems to do this in his original paper. I would prefer to reserve
 "intelligence" for third-person observable behaviour, which would make the
 Blockhead intelligent, and "consciousness" for the internal state: it is
 possible that the Blockhead is unconscious or at least differently conscious
 compared to the human.


I think the argument also works for consciousness but I don't think you're
right if you are suggesting that our ordinary notion of intelligence is
merely third person observable behavior. (If you really were just voicing
your own idiosyncratic preference for how you happen to like to use the term
intelligence then I guess I don't really have a problem with that so long
as you are clear about it.)



Our ordinary notion of intelligence involves consciousness, but this term
until relatively recently was taboo in cognitive science, the implication
being that if it's not third person observable it doesn't exist, or at least
we should pretend that it doesn't exist. It was against such a behaviourist
view that the Blockhead argument was aimed.

Stathis Papaioannou



Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Shane Legg

Ben,

Would such an AIXI system have feelings or awareness?
I have no idea, indeed I don't even know how to define such
things outside of my own subjective experience of them...

Or to put it another way, if defining intelligence is hard, then
defining some of these other things seems to be even harder.

Shane



Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

What I wanted was a set of non-circular definitions of such terms as 
"intelligence" and "learning", so that you could somehow *demonstrate* 
that your mathematical idealization of these terms correspond with the 
real thing, ... so that we could believe that the mathematical 
idealizations were not just a fantasy.

The last time I looked at a dictionary, all definitions are circular.  So you
win.

Sigh!

This is a waste of time:  you just (facetiously) rejected the 
fundamental tenet of science.  Which means that the stuff you were 
talking about was just pure mathematical fantasy, after all, and nothing 
to do with science, or the real world.



Richard Loosemore.


What does the definition of intelligence have to do with AIXI?  AIXI is an
optimization problem.  The problem is to maximize an accumulated signal in an
unknown environment.  AIXI says the solution is to guess the simplest
explanation for past observation (Occam's razor), and that this solution is
not computable in general.  I believe these principles have broad
applicability to the design of machine learning algorithms, regardless of
whether you consider such algorithms intelligent.
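The principle Matt describes (prefer the simplest explanation consistent with
past observations) can be illustrated with a toy sketch. This is a hedged
illustration only: real Solomonoff induction weighs all programs by 2^-length
and is incomputable, so here "description length" is just the length of a
hand-written rule string, and both hypotheses are invented for the example.

```python
# Toy Occam's-razor predictor: among hand-supplied hypotheses, keep those
# consistent with the observed history, prefer the shortest description,
# and use it to predict the next symbol.  (A sketch only -- Solomonoff
# induction sums over *all* programs and is not computable.)

def predict_next(history, hypotheses):
    """hypotheses: list of (description, fn) pairs, fn(i) -> i-th symbol."""
    consistent = [(desc, fn) for desc, fn in hypotheses
                  if all(fn(i) == x for i, x in enumerate(history))]
    desc, fn = min(consistent, key=lambda p: len(p[0]))  # Occam: shortest wins
    return fn(len(history))

# Two explanations of the sequence 0,1,0,1: a short "alternate" rule and a
# longer "alternate, then stick at 1" rule.
hyps = [
    ("i%2",               lambda i: i % 2),
    ("i%2 if i<4 else 1", lambda i: i % 2 if i < 4 else 1),
]
print(predict_next([0, 1, 0, 1], hyps))  # the shorter rule decides
```

Once the data rules out the shorter hypothesis (say the sequence continues
0,1,0,1,1), the predictor falls back to the longer rule, which is the usual
trade-off between simplicity and fit.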


You're going around in circles.

If you were only talking about machine learning in the sense of an 
abstract mathematical formalism that has no relationship to learning, 
intelligence or anything going on in the real world, and in particular 
the real world in which some of us are interested in the problem of 
trying to build an intelligent system, then, fine, all power to you.  At 
*that* level you are talking about a mathematical fantasy, not about 
science.


But you did not do that:  you made claims that went far beyond the 
confines of a pure, abstract mathematical formalism:  you tried to 
relate that to an explanation of why Occam's Razor works (and remember, 
the original meaning of Occam's Razor was all about how an *intelligent* 
being should use its intelligence to best understand the world), and you 
also seemed to make inferences to the possibility that the real world 
was some kind of simulation.


It seems to me that you are trying to have your cake and eat it too.


Richard Loosemore.



Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Richard Loosemore [EMAIL PROTECTED] wrote:
  
  Matt Mahoney wrote:
  --- Richard Loosemore [EMAIL PROTECTED] wrote:
 
  What I wanted was a set of non-circular definitions of such terms as 
  "intelligence" and "learning", so that you could somehow *demonstrate* 
  that your mathematical idealization of these terms correspond with the 
  real thing, ... so that we could believe that the mathematical 
  idealizations were not just a fantasy.
  The last time I looked at a dictionary, all definitions are circular.  So
  you win.
  Sigh!
 
  This is a waste of time:  you just (facetiously) rejected the 
  fundamental tenet of science.  Which means that the stuff you were 
  talking about was just pure mathematical fantasy, after all, and nothing 
  to do with science, or the real world.
 
 
  Richard Loosemore.
  
  What does the definition of intelligence have to do with AIXI?  AIXI is an
  optimization problem.  The problem is to maximize an accumulated signal in
 an
  unknown environment.  AIXI says the solution is to guess the simplest
  explanation for past observation (Occam's razor), and that this solution
 is
  not computable in general.  I believe these principles have broad
  applicability to the design of machine learning algorithms, regardless of
  whether you consider such algorithms intelligent.
 
 You're going around in circles.
 
 If you were only talking about machine learning in the sense of an 
 abstract mathematical formalism that has no relationship to learning, 
 intelligence or anything going on in the real world, and in particular 
 the real world in which some of us are interested in the problem of 
 trying to build an intelligent system, then, fine, all power to you.  At 
 *that* level you are talking about a mathematical fantasy, not about 
 science.
 
 But you did not do that:  you made claims that went far beyond the 
 confines of a pure, abstract mathematical formalism:  you tried to 
 relate that to an explanation of why Occam's Razor works (and remember, 
 the original meaning of Occam's Razor was all about how an *intelligent* 
 being should use its intelligence to best understand the world), and you 
 also seemed to make inferences to the possibility that the real world 
 was some kind of simulation.
 
 It seems to me that you are trying to have your cake and eat it too.

I claim that AIXI has practical applications to machine learning.  I also
claim (implicitly) that machine learning has practical applications to the
real world.  Therefore, I claim that AIXI has practical applications to the
real world (i.e. as Occam's razor).

Further, because AIXI requires that the unknown environment be computable, I
claim that we cannot exclude the possibility that the universe is a
simulation.  If Occam's razor did not work in practice, then you could claim
that the universe is not computable, and therefore could not be a simulation.

This really has nothing to do with the definition of intelligence.  You can
accept Turing's definition, which would exclude all animals except Homo
sapiens.  You can accept a broader definition that would include machine
learning.  Both the human brain and linear regression algorithms make use of
Occam's razor.  I don't care if you call them intelligent or not.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Russell Wallace

On 3/5/07, Shane Legg [EMAIL PROTECTED] wrote:


Would such an AIXI system have feelings or awareness?
I have no idea, indeed I don't even know how to define such
things outside of my own subjective experience of them...



I don't know how to define them either, but I can answer your question.

What programs of the "please run this on an infinite computer" type (AIXI,
Blockhead, a bunch of others with acronyms and cutesy names that I don't
remember) actually amount to is "suppose I am Jehovah, then I will create
all possible universes [or all universes of a certain type] and select the
ones with relevant properties."  (Which is mathematically consistent though
of no practical relevance seeing as one is not actually Jehovah.)

In general, some of the universes so created will contain conscious beings.
So a simple infinite computation would contain conscious beings, even though
it would not itself be conscious.



Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread deering
Russell Wallace writes: "What programs of the 'please run this on an infinite 
computer' type (AIXI, Blockhead, a bunch of others with acronyms and cutesy 
names that I don't remember) actually amount to is 'suppose I am Jehovah, then 
I will create all possible universes [or all universes of a certain type] and 
select the ones with relevant properties.'  (Which is mathematically consistent 
though of no practical relevance seeing as one is not actually Jehovah.)"

It should be a fairly obvious implementation of a nested quantum computer to 
run any of these infinite processing programs.  We will soon have oracle-type 
computers that can answer any question, with the reservation that the top level 
of the nest will have to be large enough to hold both the question and the 
answer.  Current quantum computers are at the 3 or 4 bit level but scientists 
are confident of exponential advancement in future development.  Of course then 
the problem will be, "Are you smart enough to understand the answer?"


Mike Deering,
General Editor, http://nano-catalog.com/   
Director, Singularity Action Group 
http://home.mchsi.com/~deering9/index.html  
Email: deering9 at mchsi dot com







Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Bruce LaDuke

Excellent point about questions.

The question is the key to all of this, not knowledge or intelligence.

Will these machines answer questions about existing knowledge or about 
knowledge that does not yet exist?


Answering questions about existing knowledge is a search algorithm and 
enables learning.  Automation of logic, e.g., can make this more powerful 
and make existing knowledge more accessible and usable, but it's still a 
search for some knowledge that already exists.


Answering questions about knowledge that does not yet exist is knowledge 
creation and this is the process behind ALL social advance.  Absolutely 
nothing intellectual advances without it.  And there will be no singularity 
without understanding it.  In fact, this understanding will be singularity.


When we create knowledge today it occurs unconsciously or by accident.  Most 
people spend their lives playing in a circular field of 'experts,' bantering 
about existing knowledge, which is often a huge waste of time and effort in 
terms of social advance.


All of you just ask yourselves when you last created new knowledge.  Not 
learned it, or read it, or argued it, or compiled it, or shared it, etc., 
but created it.  What was it?  How did you do it?  Where did it come from? 
 Where did it go?  And if, by chance, you're not personally aware of when 
you did it, how could you possibly build a machine that is capable of doing 
it?


Without KC, you're looking at something to make existing knowledge more 
accessible, or interpretable, or practical/applicable, or learnable, etc., 
and you're leaving the rest up to the humans that by chance stumble upon 
this one process that moves everything intellectual forward.  At best, 
you're building a helper for stumbling humans.


Kind Regards,

Bruce LaDuke
Managing Director

Instant Innovation, LLC
Indianapolis, IN
[EMAIL PROTECTED]
http://www.hyperadvance.com




Original Message Follows
From: deering [EMAIL PROTECTED]
Reply-To: singularity@v2.listbox.com
To: singularity@v2.listbox.com
Subject: Re: [singularity] Scenarios for a simulated universe
Date: Mon, 5 Mar 2007 16:57:55 -0600

Russell Wallace writes: "What programs of the 'please run this on an 
infinite computer' type (AIXI, Blockhead, a bunch of others with acronyms 
and cutesy names that I don't remember) actually amount to is 'suppose I am 
Jehovah, then I will create all possible universes [or all universes of a 
certain type] and select the ones with relevant properties.'  (Which is 
mathematically consistent though of no practical relevance seeing as one is 
not actually Jehovah.)"


It should be a fairly obvious implementation of a nested quantum computer to 
run any of these infinite processing programs.  We will soon have oracle-type 
computers that can answer any question, with the reservation that the 
top level of the nest will have to be large enough to hold both the question 
and the answer.  Current quantum computers are at the 3 or 4 bit level but 
scientists are confident of exponential advancement in future development.  
Of course then the problem will be, "Are you smart enough to understand the 
answer?"



Mike Deering,
General Editor, http://nano-catalog.com/
Director, Singularity Action Group
http://home.mchsi.com/~deering9/index.html
Email: deering9 at mchsi dot com










Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Mitchell Porter



From: deering [EMAIL PROTECTED]

It should be a fairly obvious implementation of a nested quantum computer 
to run any of these infinite processing programs.  We will soon have oracle 
type computers that can answer any question with the reservation that the 
top level of the nest will have to be large enough to hold both the 
question and the answer.  Current quantum computers are at the 3 or 4 bit 
level but scientists are confident of exponential advancement in future 
development.  Of course then the problem will be, "Are you smart enough to 
understand the answer?"


You radically overstate the expected capabilities of quantum computers. They 
can't even do NP-complete problems in polynomial time.

http://scottaaronson.com/blog/?p=208
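The arithmetic behind Mitchell's point can be sketched. For unstructured
brute-force search over 2^n candidates, the BBBV lower bound shows that
Grover's roughly sqrt(N) oracle queries are optimal, a quadratic rather than
exponential speedup; a minimal sketch of the query counts:

```python
import math

def classical_queries(n_bits):
    """Worst-case brute force over all 2^n candidate solutions."""
    return 2 ** n_bits

def grover_queries(n_bits):
    """Grover's algorithm needs on the order of sqrt(2^n) oracle queries;
    by the BBBV bound this is optimal for unstructured search."""
    return math.isqrt(2 ** n_bits)

# A quadratic speedup still leaves the cost exponential in n:
for n in (20, 40, 60):
    print(n, classical_queries(n), grover_queries(n))
```

So even an ideal quantum computer brute-forcing an NP-complete instance still
pays 2^(n/2) queries, which is why "answer any question" is far too strong.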





Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Stathis Papaioannou

On 3/6/07, Mitchell Porter [EMAIL PROTECTED] wrote:



You radically overstate the expected capabilities of quantum computers.
They
can't even do NP-complete problems in polynomial time.
http://scottaaronson.com/blog/?p=208



What about a computer (classical will do) granted an infinity of cycles
through, for example, a Freeman Dyson or Frank Tipler type mechanism? No
matter how many cycles it takes to compute a particular simulated world, any
delay will be transparent to observers in that world. It only matters that
the computation doesn't stop before it is completed.

Stathis Papaioannou



Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Matt Mahoney

--- Stathis Papaioannou [EMAIL PROTECTED] wrote:

 On 3/6/07, Mitchell Porter [EMAIL PROTECTED] wrote:
 
 
  You radically overstate the expected capabilities of quantum computers.
  They
  can't even do NP-complete problems in polynomial time.
  http://scottaaronson.com/blog/?p=208
 
 
 What about a computer (classical will do) granted an infinity of cycles
 through, for example, a Freeman Dyson or Frank Tipler type mechanism? No
 matter how many cycles it takes to compute a particular simulated world, any
 delay will be transparent to observers in that world. It only matters that
 the computation doesn't stop before it is completed.

The computation would also require infinite memory (a Turing machine), or else
it would cycle.

Although our universe might be the product of a Turing machine, the physics of
our known universe will only allow finite memory.  The number of possible
quantum states of a closed system with finite size and mass is finite.  For
our universe (big bang model), the largest memory you could construct would be
on the order of c^5 T^2/hG ~ 10^122 bits (where c is the speed of light, T is
the age of the universe, h is Planck's constant and G is the gravitational
constant).  (Coincidentally, each bit would occupy about the volume of a proton
or neutron.)
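Matt's estimate is easy to sanity-check numerically with round-number SI
constants (the values below are the usual textbook figures; 13.7 Gyr is the
then-current age estimate):

```python
# Sanity check of the estimate c^5 T^2 / (h G) ~ 10^122 bits.
c = 3.0e8              # speed of light, m/s
T = 13.7e9 * 3.156e7   # age of the universe, seconds (~13.7 Gyr)
h = 6.626e-34          # Planck's constant, J*s
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2

bits = c**5 * T**2 / (h * G)
print(f"{bits:.2e} bits")  # on the order of 10^122
```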

A quantum computer is weaker than a finite state machine.  A quantum computer
is restricted to time-reversible computation, so operations like bit
assignment or copying are not allowed.

And even if you had a Turing machine, you still could not compute a solution
to AIXI.  It is not computable, like the halting problem.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

What I wanted was a set of non-circular definitions of such terms as 
"intelligence" and "learning", so that you could somehow *demonstrate* 
that your mathematical idealization of these terms correspond with the 
real thing, ... so that we could believe that the mathematical 
idealizations were not just a fantasy.


The last time I looked at a dictionary, all definitions are circular.  So you
win.


Sigh!

This is a waste of time:  you just (facetiously) rejected the 
fundamental tenet of science.  Which means that the stuff you were 
talking about was just pure mathematical fantasy, after all, and nothing 
to do with science, or the real world.



Richard Loosemore.



Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Richard Loosemore

Ben Goertzel wrote:

Richard Loosemore wrote:

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

What I wanted was a set of non-circular definitions of such terms as 
"intelligence" and "learning", so that you could somehow 
*demonstrate* that your mathematical idealization of these terms 
correspond with the real thing, ... so that we could believe that 
the mathematical idealizations were not just a fantasy.


The last time I looked at a dictionary, all definitions are 
circular.  So you win.


Richard, I long ago proposed a working definition of intelligence as 
"Achieving complex goals in complex environments."  I then went through 
a bunch of trouble to precisely define all the component terms of that 
definition; you can consult the Appendix to my 2006 book The Hidden 
Pattern.  Shane Legg and Marcus Hutter have proposed a related definition of 
intelligence in a recent paper...


Anyone can propose a definition.  The point of my objection is that a 
definition has to have some way to be compared against reality.


Suppose I define intelligence to be:

A function that maps goals G and world states W onto action states A, 
where G, W and A are any mathematical entities whatsoever.


That would make any function that maps X × Y into Z an intelligence.

Such a definition would be pointless.  The question is *why* would it be 
pointless?  What criteria are applied, in order to determine whether the 
definition has anything to do with the thing that in everyday life we call 
intelligence.
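Richard's point can be made concrete: under the bare mapping definition, the
following deliberately mindless functions (invented purely for illustration)
both type-check as "intelligences":

```python
# Two functions that satisfy "maps goals G and world states W onto action
# states A" while doing nothing anyone would call intelligent.

def constant_agent(goal, world_state):
    """Ignores both arguments entirely."""
    return "action_0"

def arbitrary_agent(goal, world_state):
    """Picks between two fixed actions by hashing the inputs."""
    return ["action_0", "action_1"][hash((goal, world_state)) % 2]

# Both are X x Y -> Z mappings, hence "intelligent" under the bare
# definition -- which is why extra grounding criteria are needed.
print(constant_agent("win at chess", "any board position"))
```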


My protest to Matt was that I did not believe his definition could be 
made to lead to anything like a reasonable grounding.  I tried to get 
him to do the grounding, but to no avail:  he eventually resorted to the 
blanket denial that any definition means anything ... which is a cop out 
if he wanted to defend the claim that the formalism was something more 
than a mathematical fantasy.



Richard Loosemore


P.S.  Quick sanity check:  you know the last comment in the quote you 
gave (about looking in the dictionary) was Matt's, not mine, right?




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Ben Goertzel




Richard, I long ago proposed a working definition of intelligence as 
Achieving complex goals in complex environments.  I then went 
through a bunch of trouble to precisely define all the component 
terms of that definition; you can consult the Appendix to my 2006 
book The Hidden Pattern...  Shane Legg and Marcus Hutter 
have proposed a related definition of intelligence in a recent paper...


Anyone can propose a definition.  The point of my objection is that a 
definition has to have some way to be compared against reality.


Suppose I define intelligence to be:

A function that maps goals G and world states W onto action states A, 
where G, W and A are any mathematical entities whatsoever.


That would make any function that maps X × Y into Z an 
intelligence.


Such a definition would be pointless.  The question is *why* would it 
be pointless?  What criteria are applied, in order to determine 
whether the definition has something to do with the thing that in everyday 
life we call intelligence.


The difficulty in comparing my definition against reality is that my 
definition defines intelligence relative to a complexity measure.


For this reason, it is fundamentally a subjective definition of 
intelligence, except in the unrealistic case where degree of complexity 
tends to infinity (in which case all reasonably general complexity 
measures become equivalent, because universal Turing machines can 
simulate one another: the invariance theorem).


To qualitatively compare my definition to the everyday life definition 
of intelligence, we can check its consistency with our everyday life 
definition of complexity.   Informally, at least, my definition seems 
to check out to me: intelligence according to an IQ test does seem to 
have something to do with the ability to achieve complex goals; and, the 
reason we think IQ tests mean anything is that we think the ability to 
achieve complex goals in the test-context will correlate with the 
ability to achieve complex goals in various more complex environments 
(contexts).


Anyway, if I accept for instance **Richard Loosemore** as a measurer of 
the complexity of environments and goals, then relative to 
Richard-as-a-complexity-measure, I can assess the intelligence of 
various entities, using my definition


In practice, in building a system like Novamente, I'm relying on modern 
human culture's consensus complexity measure and trying to make a 
system that, according to this measure, can achieve a diverse variety of 
complex goals in complex situations...


P.S.  Quick sanity check:  you know the last comment in the quote you 
gave (about looking in the dictionary) was Matt's, not mine, right?




Yes...

Ben



Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Bruce LaDuke

Definition is intelligence.

Kind Regards,

Bruce LaDuke
Managing Director

Instant Innovation, LLC
Indianapolis, IN
[EMAIL PROTECTED]
http://www.hyperadvance.com




Original Message Follows
From: Ben Goertzel [EMAIL PROTECTED]
Reply-To: singularity@v2.listbox.com
To: singularity@v2.listbox.com
Subject: Re: [singularity] Scenarios for a simulated universe
Date: Sun, 04 Mar 2007 14:26:33 -0500




Richard, I long ago proposed a working definition of intelligence as 
Achieving complex goals in complex environments.  I then went through a 
bunch of trouble to precisely define all the component terms of that 
definition; you can consult the Appendix to my 2006 book The Hidden 
Pattern...  Shane Legg and Marcus Hutter have proposed a related 
definition of intelligence in a recent paper...


Anyone can propose a definition.  The point of my objection is that a 
definition has to have some way to be compared against reality.


Suppose I define intelligence to be:

A function that maps goals G and world states W onto action states A, where 
G, W and A are any mathematical entities whatsoever.


That would make any function that maps X × Y into Z an 
intelligence.


Such a definition would be pointless.  The question is *why* would it be 
pointless?  What criteria are applied, in order to determine whether the 
definition has something to do with the thing that in everyday life we call 
intelligence.


The difficulty in comparing my definition against reality is that my 
definition defines intelligence relative to a complexity measure.


For this reason, it is fundamentally a subjective definition of 
intelligence, except in the unrealistic case where degree of complexity 
tends to infinity (in which case all reasonably general complexity 
measures become equivalent, because universal Turing machines can simulate 
one another: the invariance theorem).


To qualitatively compare my definition to the everyday life definition of 
intelligence, we can check its consistency with our everyday life definition 
of complexity.   Informally, at least, my definition seems to check out to 
me: intelligence according to an IQ test does seem to have something to do 
with the ability to achieve complex goals; and, the reason we think IQ tests 
mean anything is that we think the ability to achieve complex goals in the 
test-context will correlate with the ability to achieve complex goals in 
various more complex environments (contexts).


Anyway, if I accept for instance **Richard Loosemore** as a measurer of the 
complexity of environments and goals, then relative to 
Richard-as-a-complexity-measure, I can assess the intelligence of various 
entities, using my definition


In practice, in building a system like Novamente, I'm relying on modern 
human culture's consensus complexity measure and trying to make a system 
that, according to this measure, can achieve a diverse variety of complex 
goals in complex situations...


P.S.  Quick sanity check:  you know the last comment in the quote you gave 
(about looking in the dictionary) was Matt's, not mine, right?




Yes...

Ben






Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Richard Loosemore [EMAIL PROTECTED] wrote:
  
  What I wanted was a set of non-circular definitions of such terms as 
  intelligence and learning, so that you could somehow *demonstrate* 
  that your mathematical idealization of these terms corresponds with the 
  real thing, ... so that we could believe that the mathematical 
  idealizations were not just a fantasy.
  
  The last time I looked at a dictionary, all definitions are circular.  So
  you win.
 
 Sigh!
 
 This is a waste of time:  you just (facetiously) rejected the 
 fundamental tenet of science.  Which means that the stuff you were 
 talking about was just pure mathematical fantasy, after all, and nothing 
 to do with science, or the real world.
 
 
 Richard Loosemore.

What does the definition of intelligence have to do with AIXI?  AIXI is an
optimization problem.  The problem is to maximize an accumulated signal in an
unknown environment.  AIXI says the solution is to guess the simplest
explanation for past observation (Occam's razor), and that this solution is
not computable in general.  I believe these principles have broad
applicability to the design of machine learning algorithms, regardless of
whether you consider such algorithms intelligent.
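Matt's summary (guess the simplest explanation consistent with past
observation) can be illustrated with a deliberately tiny stand-in for AIXI's
program search. Here the "hypothesis class" is just periodic bit patterns,
with shorter periods playing the role of shorter programs; this toy is purely
illustrative and is not from Hutter's papers:

```python
def occam_predict(history):
    """Toy Occam induction: among periodic models of the observed bit
    string (our stand-in for 'programs'), pick the shortest period that
    reproduces the whole history, then predict the next bit.  Real AIXI
    searches over all Turing machines and is incomputable."""
    n = len(history)
    for period in range(1, n + 1):  # try the shortest model first
        if all(history[i] == history[i % period] for i in range(n)):
            return period, history[n % period]  # (model size, next bit)
    raise AssertionError("unreachable: period n always matches")

model_size, next_bit = occam_predict([0, 1, 0, 1, 0, 1])
# The simplest consistent model has period 2 and predicts 0 next.
```

Whether such a strategy deserves the word "intelligent" is exactly what the
thread disputes; the sketch only shows the optimization problem being posed.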


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Jef Allbright

On 3/4/07, Matt Mahoney wrote:


What does the definition of intelligence have to do with AIXI?  AIXI is an
optimization problem.  The problem is to maximize an accumulated signal in an
unknown environment.  AIXI says the solution is to guess the simplest
explanation for past observation (Occam's razor), and that this solution is
not computable in general.  I believe these principles have broad
applicability to the design of machine learning algorithms, regardless of
whether you consider such algorithms intelligent.


Matt, you might want to consider that while Occam's Razor is indeed a
very beautiful and powerful principle, it is a heuristic directly
applicable only to those situations of all else being equal (or made
effectively so by means of infinite computing power.)

[Observant readers may notice that I'm being slightly tongue in cheek
here, drawing a parallel with a recent mismatch of expressed views on
the AGI and Extropy lists regarding the elegance of the Principle of
Indifference. The analogy is sublime.]

My point is that nature never directly applies the perfect principle.
Every problem posed to nature carries an implicit bias, and this is
enough to start nature down the path toward a satisficing heuristic.

While the Principle of Parsimony and the Principle of Indifference
play unattainably objective roles in our epistemology, you may want to
consider their subjective cousin, the maximum-entropy principle, as one 
of your star 
players in any practical AI.

- Jef



Re: [singularity] Scenarios for a simulated universe

2007-03-03 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:

 What I wanted was a set of non-circular definitions of such terms as 
 intelligence and learning, so that you could somehow *demonstrate* 
  that your mathematical idealization of these terms corresponds with the 
 real thing, ... so that we could believe that the mathematical 
 idealizations were not just a fantasy.

The last time I looked at a dictionary, all definitions are circular.  So you
win.

 P.S.   The above definition is broken anyway:  what about unsupervised 
 learning?  What about learning by analogy?

I should have specified supervised learning as an application of AIXI.  There
are subsets, H, of Turing machines for which there are efficient algorithms
for finding a small h in H that is consistent with the training data. 
Examples include decision trees, neural networks, polynomial regression,
clustering, etc.  However AIXI does not necessarily imply learning.  There are
other approaches.
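The restricted-hypothesis-space point can be made concrete: for a small,
ordered class H, "find a small h consistent with the training data" is an
efficient search. The threshold-classifier class below is my own illustrative
choice, not something named in the thread:

```python
def smallest_consistent(hypotheses, training_pairs):
    """Return the first hypothesis h in H (assumed ordered from 'small'
    to 'large') with h(x) == y on every training pair, or None if no
    hypothesis in H fits.  Efficient searches of this shape are what
    make classes like decision trees or linear models practical."""
    for h in hypotheses:
        if all(h(x) == y for x, y in training_pairs):
            return h
    return None

# H: threshold rules h_t(x) = 1 iff x >= t, for t = 0..4
H = [lambda x, t=t: int(x >= t) for t in range(5)]
data = [(1, 0), (2, 0), (3, 1), (4, 1)]
h = smallest_consistent(H, data)
# The smallest consistent rule is the threshold t = 3.
```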


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Scenarios for a simulated universe

2007-03-02 Thread Matt Mahoney
--- Ben Goertzel [EMAIL PROTECTED] wrote:
 Matt, I really don't see why you think Hutter's work shows that Occam's 
 Razor holds in any
 context except AI's with unrealistically massive amounts of computing 
 power (like AIXI and AIXItl)
 
 In fact I think that it **does** hold in other contexts (as a strategy 
 for reasoning by modest-resources
 minds like humans or Novamente), but I don't see how Hutter's work shows 
 this...

I admit Hutter did not make claims about machine learning frameworks or
Occam's razor, but we should not view his work in such narrow context. 
Hutter's conclusions about the optimal behavior of rational agents were proven
for the following cases:

1. Unrestricted environments (in which case the solution is not computable),
2. Space and time bounded environments (in which case the solution is
intractable),
3. Subsets of (1) or (2) such that the environment is consistent with past
interaction.

But the same reasoning he used in his proofs could just as well be applied to
practical cases of machine learning for which efficient solutions are known. 
The proofs all use the fact that shorter Turing machines are more likely than
longer ones (a Solomonoff prior).

For example, Hutter does not tell us how to solve linear regression, fitting a
straight line to a set of points.  What Hutter tells us is two other things:

1. Linear regression is a good predictor, even though a higher order
polynomial might have a better fit (because a low order polynomial has lower
algorithmic complexity).
2. Linear regression is useful, even though other machine learning algorithms
might be better predictors (because a general solution is not computable, so
we have to settle for a suboptimal solution).
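Point 1 can be demonstrated directly: four points that lie nearly (but not
exactly) on a line are fit *exactly* by a degree-3 polynomial, yet the
"shorter" degree-1 model is the one we trust for prediction. A minimal
ordinary-least-squares sketch in pure Python, my own illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares for the degree-1 model y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Noisy samples of y = x.  A cubic would interpolate all four points
# exactly, but it has higher algorithmic complexity; Occam's razor
# favors the line for predicting new x.
xs, ys = [0, 1, 2, 3], [0.1, 0.9, 2.1, 2.9]
a, b = fit_line(xs, ys)
```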

So what I did was two things.  First, I used the fact that Occam's razor works
in both simulated and real environments (based on extensions of AIXI and
empirical observations respectively) to argue that the universe is consistent
with a simulation.  (This is disturbing because you are not programmed to
think this way).

Second, I used the same reasoning to guess about the nature of the universe
(assuming it is simulated), and the only thing we know is that shorter
simulation programs are more likely than longer ones.  My conclusion was that
bizarre behavior or a sudden end is unlikely, because such events would not
occur in the simplest programs.  This ought to at least be reassuring.

-- Matt Mahoney


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Scenarios for a simulated universe

2007-03-02 Thread Richard Loosemore


Matt,

When you said (in the text below):

  In every practical case of machine learning, whether it is with 
decision trees, neural networks, genetic algorithms, linear 
regression, clustering, or whatever, the problem is you are given 
training pairs (x,y) and you have to choose a hypothesis h from a 
hypothesis space H that best classifies novel test instances, h(x) = y.


... you did *exactly* what I was complaining about.  Correct me if I am 
wrong, but it looks like you just declared learning to be a particular 
class of mathematical optimization problem, without making reference to 
the fact that there is a more general meaning of learning that is 
vastly more complex than your above definition.


What I wanted was a set of non-circular definitions of such terms as 
intelligence and learning, so that you could somehow *demonstrate* 
that your mathematical idealization of these terms corresponds with the 
real thing, ... so that we could believe that the mathematical 
idealizations were not just a fantasy.


If what you gave was supposed to be a definition, then it was circular 
(you defined learning to *be* the idealization).


The rest of what you say (about Occam's Razor etc.) is irrelevant if you 
or Hutter cannot prove something more than a hand-waving connection 
between the mathematical idealizations of intelligence, learning, 
etc., and the original meanings of those words.


So my original request stands unanswered.


Richard Loosemore.



P.S.   The above definition is broken anyway:  what about unsupervised 
learning?  What about learning by analogy?





Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

As you probably know, Hutter proved that the optimal behavior of a
goal seeking agent in an unknown environment (modeled as a pair of
interacting Turing machines, with the environment sending an
additional reward signal to the agent that the agent seeks to
maximize) is for the agent to guess at each step that the environment
is modeled by the shortest program consistent with the observed
interaction so far.  The proof requires the assumption that the
environment be computable.  Essentially, the proof says that Occam's
Razor is the best general strategy for problem solving.  The fact
that this works in practice strongly suggests that the universe is
indeed a simulation.

It suggests nothing of the sort.

Hutter's theory is a mathematical fantasy with no relationship to the 
real world.

Hutter's theory makes a very general statement about the optimal behavior

of

rational agents.  Is this really irrelevant to the field of machine

learning?

Define rational agent.

Define optimal behavior.


In the framework of Hutter's AIXI, optimal behavior is the behavior that
maximizes the accumulated reward signal from the environment.  In general,
this problem is not computable.  (It is equivalent to computing the Kolmogorov
complexity of the environment).  An agent with limited computational resources
is rational if it chooses the best strategy within those limits for maximizing
its accumulated reward signal (in general, a suboptimal solution).

Then prove that a rational agent following optimal behavior is 
actually intelligent (as we in colloquial speech use the word 
intelligent), and do this *without* circularly defining the meaning of 
intelligence to be, in effect, the optimal behavior of a rational agent.


Turing defined an agent as intelligent if communication with it is
indistinguishable from human.  This is not the same as rational behavior, but
it is probably the best definition we have.


One caveat:

Don't come back and ask me to be precise about what we in colloquial 
speech mean when we use the word intelligent, because some of us who 
reject this theory would state that the term does not have an analytic 
definition, only an empirical one.


Your position, on the other hand, is that a precise definition does 
exist and that you know what it is when you say that a rational agent 
following optimal behavior is an intelligent system.


For this reason the onus is on you (and not me) to say what intelligence is.

My claim is that you cannot, without circularity, prove that rational 
agents following optimal behavior are the same thing as intelligent 
systems, and for that reason your use of all of these terms is just 
unsubstantiated speculation.  Labels attached to an abstract 
mathematical formalism with nothing but your intuition in the way of 
justification.


This unsubstantiated speculation then escalates into a zone of complete 
nonsense when it talks about hypothetical systems of infinite size and 
power, without showing in any way why we should believe that the 
properties of such infinitely large systems carry over to systems in the 
real world.


Hence, it is a mathematical fantasy with no relationship to the real world.

QED.



Richard Loosemore.



Re: [singularity] Scenarios for a simulated universe

2007-03-02 Thread Jef Allbright

On 3/2/07, Matt Mahoney [EMAIL PROTECTED] wrote:


Second, I used the same reasoning to guess about the nature of the universe
(assuming it is simulated), and the only thing we know is that shorter
simulation programs are more likely than longer ones.  My conclusion was that
bizarre behavior or a sudden end is unlikely, because such events would not
occur in the simplest programs.  This ought to at least be reassuring.


Consider that while the trunk of the universal tree of the probable
grows increasingly stable, the branches do often swing in the winds,
and many of the thinner branches of the possible do not survive.

Do you assume that humanity is presently nestled in the crook of a
highly probable branch? If our own branch were to break, would you
take comfort in knowing that the tree itself stands strong?

- Jef



Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Stathis Papaioannou

On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:


As you probably know, Hutter proved that the optimal behavior of a goal
seeking agent in an unknown environment (modeled as a pair of interacting
Turing machines, with the environment sending an additional reward signal to
the agent that the agent seeks to maximize) is for the agent to guess at
each step that the environment is modeled by the shortest program consistent
with the observed interaction so far.  The proof requires the assumption
that the environment be computable.  Essentially, the proof says that
Occam's Razor is the best general strategy for problem solving.  The fact
that this works in practice strongly suggests that the universe is indeed a
simulation.

With this in mind, I offer 5 possible scenarios ranked from least to most
likely based on the Kolmogorov complexity of the simulator.  I think this
will allay any fears that our familiar universe might suddenly be switched
off or behave in some radically different way.

1. Neurological level.  Your brain is connected to a computer at all the
input and output points, e.g. the spinal cord, optic and auditory nerves,
etc.  The simulation presents the illusion of a human body and a universe
containing billions of other people like yourself (but not exactly
alike).  The algorithmic complexity of this simulation would be of the same
order as the complexity of your brain, about 10^13 bits (by counting
synapses).

2. Cognitive level.  Rather than simulate the entire brain, the simulation
includes all of the low level sensorimotor processing as part of the
environment.  For example, when you walk you don't think about the
contraction of individual leg muscles.  When you read this, you think about
the words and not the arrangement of pixels in your visual field.  That type
of processing is part of the environment.  You are presented with a universe
at the symbolic level of words and high-level descriptions.  This is about
10^9 bits, based on the amount of verbal information you process in a
lifetime, and estimates of long term memory capacity by Standing and
Landauer.

3. Biological level.  Unlike 1 and 2, you are not the sole intelligent
being in the universe, but there is no life beyond Earth.  The environment
is a model of the Earth with just enough detail to simulate reality.  Humans
are modeled at the biological level.  The complexity of a human model is
that of our DNA.  I estimate 10^7 bits.  I know the genome is 6 x 10^9 bits
uncompressed, but only about 2% of our DNA is biologically active.  Also,
many genes are copied many times, and there are equivalent codons for the
same amino acids, genes can be moved and reordered, etc.

4. Physical level.  A program simulates the fundamental laws of physics,
with the laws tuned to allows life to evolve, perhaps on millions of
planets.  For example, the ratio of the masses of the proton and neutron is
selected to allow the distribution of elements like carbon and oxygen needed
for life to evolve.  (If the neutron were slightly heavier, there would be
no hydrogen fusion in stars.  If it were slightly lighter, the proton would
be unstable and all matter would decay into neutron bodies.)  Likewise the
force of gravity is set just right to allow matter to condense into stars
and planets and not all collapse into black holes.  Wolfram estimates that
the physical universe can be modeled with just a
few lines of code (see http://en.wikipedia.org/wiki/A_New_Kind_of_Science
), on the order of hundreds of bits.  This is comparable to the
information needed to set the free parameters of some string theories.

5. Mathematical level.  The universe we observe is one of an enumeration
of all Turing machines.  Some universes will support life and some
won't.  We must, of course, be in one that will.  The simulation is simply
expressed as N, the set of natural numbers.

Each level increases the computational requirements, while decreasing the
complexity of the program and making the universe more predictable.
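Under the Solomonoff prior Matt appeals to, a description of K bits carries
weight proportional to 2^-K, so the gap between these levels is astronomical.
A quick tabulation of the estimates as given (the numbers are the post's, the
code and labels are mine; level 5 is omitted since no bit figure is stated):

```python
import math

# Order-of-magnitude complexity estimates quoted in the post, in bits.
levels = {
    "1. neurological": 1e13,  # ~synapse count
    "2. cognitive":    1e9,   # lifetime verbal information
    "3. biological":   1e7,   # biologically active, compressed DNA
    "4. physical":     1e2,   # ~hundreds of bits of physical law
}

for name, k in sorted(levels.items(), key=lambda kv: kv[1]):
    # log10 of the prior weight 2^-K: more negative means less likely
    print(f"{name}: K ~ {k:.0e} bits, log10(2**-K) ~ {-k * math.log10(2):.3g}")
```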



You don't need much of a computer for level 5. A single physical state,
perhaps the null state, can be considered an infinitely parallel computer
mapping onto the natural numbers - indeed, mapping onto any computation you
like under the right interpretation. This is sort of trivially obvious, like
the assertion that a short string of symbols contains every possible book in
every possible language if you interpret and re-interpret the symbols in the
right way. In the case of the string, this isn't very interesting because
you need to have the book before you can find the book. But in the case of
computations, those which have observers will, as you suggest, self-select.

Stathis Papaioannou



Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Richard Loosemore

Matt Mahoney wrote:

As you probably know, Hutter proved that the optimal behavior of a
goal seeking agent in an unknown environment (modeled as a pair of
interacting Turing machines, with the environment sending an
additional reward signal to the agent that the agent seeks to
maximize) is for the agent to guess at each step that the environment
is modeled by the shortest program consistent with the observed
interaction so far.  The proof requires the assumption that the
environment be computable.  Essentially, the proof says that Occam's
Razor is the best general strategy for problem solving.  The fact
that this works in practice strongly suggests that the universe is
indeed a simulation.



It suggests nothing of the sort.

Hutter's theory is a mathematical fantasy with no relationship to the 
real world.




Richard Loosemore.



Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  As you probably know, Hutter proved that the optimal behavior of a
  goal seeking agent in an unknown environment (modeled as a pair of
  interacting Turing machines, with the environment sending an
  additional reward signal to the agent that the agent seeks to
  maximize) is for the agent to guess at each step that the environment
  is modeled by the shortest program consistent with the observed
  interaction so far.  The proof requires the assumption that the
  environment be computable.  Essentially, the proof says that Occam's
  Razor is the best general strategy for problem solving.  The fact
  that this works in practice strongly suggests that the universe is
  indeed a simulation.
 
 
 It suggests nothing of the sort.
 
 Hutter's theory is a mathematical fantasy with no relationship to the 
 real world.

Hutter's theory makes a very general statement about the optimal behavior of
rational agents.  Is this really irrelevant to the field of machine learning?

As for whether the universe is real or simulated, nobody can prove one way or
the other.  But your brain is programmed through evolution to believe the
universe is real.  If you were programming an autonomous agent for self
survival, wouldn't you program it that way?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Jef Allbright

On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:


What I argue is this: the fact that Occam's Razor holds suggests that the
universe is a computation.


Matt -

Would you please clarify how/why you think B follows from A in your
preceding statement?

- Jef



Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Jef Allbright

On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:


--- Jef Allbright [EMAIL PROTECTED] wrote:

 On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:

  What I argue is this: the fact that Occam's Razor holds suggests that the
  universe is a computation.

 Matt -

 Would you please clarify how/why you think B follows from A in your
 preceding statement?

Hutter's proof requires that the environment have a computable distribution.
http://www.hutter1.net/ai/aixigentle.htm

So in any universe of this type, Occam's Razor should hold.  If Occam's Razor
did not hold, then we could conclude that the universe is not computable.  The
fact that Occam's Razor does hold means we cannot rule out the possibility
that the universe is simulated.


Matt -

I think this answers my question to you, at least I think I see where
you're coming from.

I would say that you have justification for saying that interaction
with the universe demonstrates mathematically modelable regularities
(in keeping with the principle of parsimony), rather than saying that
it's a simulation (which involves additional assumptions.)

Do you think you have information to warrant taking it further?

- Jef



Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Matt Mahoney

--- Jef Allbright [EMAIL PROTECTED] wrote:

 On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:
 
  --- Jef Allbright [EMAIL PROTECTED] wrote:
 
   On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:
  
    What I argue is this: the fact that Occam's Razor holds suggests that
    the universe is a computation.
  
   Matt -
  
   Would you please clarify how/why you think B follows from A in your
   preceding statement?
 
  Hutter's proof requires that the environment have a computable
  distribution.  http://www.hutter1.net/ai/aixigentle.htm
 
  So in any universe of this type, Occam's Razor should hold.  If Occam's
  Razor did not hold, then we could conclude that the universe is not
  computable.  The fact that Occam's Razor does hold means we cannot rule
  out the possibility that the universe is simulated.
 
 Matt -
 
 I think this answers my question to you, at least I think I see where
 you're coming from.
 
 I would say that you have justification for saying that interaction
 with the universe demonstrates mathematically modelable regularities
 (in keeping with the principle of parsimony), rather than saying that
 it's a simulation (which involves additional assumptions.)
 
 Do you think you have information to warrant taking it further?
 
 - Jef

There is no way to know if the universe is real or simulated.  From our point
of view, there is no difference.  If the simulation is realistic then there is
no experiment we could do to make the distinction.  I am just saying that our
universe is consistent with a simulation in that it appears to be computable.

One disturbing implication is that the simulation might be suddenly turned off
or changed in some radical way you can't anticipate.  You really don't know
anything about the world in which the simulation is being run.  (The movie
The Matrix is based on this idea).  Maybe the Singularity has already
happened and what you observe as the universe is part of the resulting
computation.

My argument is that if the universe is simulated then these possibilities are
unlikely.  My reasoning is that if we know nothing about this computation then
we should assume a universal Solomonoff prior, i.e. a universal Turing machine
programmed by random coin flips.  This is what Hutter did to solve the problem
of rational agents.  I am applying the idea to understanding a universe about
which (if it is not real) we know nothing, except that shorter programs are
more likely than longer ones.
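The "random coin flips" prior above can be made concrete with a small sketch (my own illustration, not from the thread): feeding a universal Turing machine random bits means any one specific program of length n is selected with probability 2^-n, so shorter programs dominate the prior mass.

```python
# Toy illustration of a Solomonoff-style prior: a universal machine
# programmed by fair coin flips reaches any specific n-bit program
# with probability 2**-n, so short programs carry most of the weight.

def program_prior(n_bits: int) -> float:
    """Prior probability of one specific n-bit program under coin flips."""
    return 2.0 ** -n_bits

# A single 10-bit program outweighs any single 30-bit program by 2^20.
ratio = program_prior(10) / program_prior(30)
print(ratio)  # 1048576.0
```

This is why, under such a prior, radical rule changes (which need extra program bits to encode) are assigned correspondingly less probability.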


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Jef Allbright

On 3/1/07, Matt Mahoney [EMAIL PROTECTED] wrote:


--- Jef Allbright [EMAIL PROTECTED] wrote:



 Matt -

 I think this answers my question to you, at least I think I see where
 you're coming from.

 I would say that you have justification for saying that interaction
 with the universe demonstrates mathematically modelable regularities
 (in keeping with the principle of parsimony), rather than saying that
 it's a simulation (which involves additional assumptions.)

 Do you think you have information to warrant taking it further?

 - Jef

There is no way to know if the universe is real or simulated.  From our point
of view, there is no difference.  If the simulation is realistic then there is
no experiment we could do to make the distinction.


I think you mean if the simulation is consistent then there's no
experiment we could do to make the distinction.


I am just saying that our
universe is consistent with a simulation in that it appears to be computable.


I agree with you that it seems there's nothing more we can say in that
case about whether or not it's a simulation.



One disturbing implication is that the simulation might be suddenly turned off
or changed in some radical way you can't anticipate.


Hmm, I thought you just made the perfectly good point that there's
nothing further we can say about whether or not our world is a
simulation, so what basis do you have for worrying about whether the
simulation might be turned off?

In fact, I have it on good faith that it was in fact turned off, for
about 3M real years, just around breakfast time this morning.  It'll
probably be shut down again, but what could possibly be disturbing
about something that can't possibly be detected?

- Jef



[singularity] Scenarios for a simulated universe

2007-02-28 Thread Matt Mahoney
As you probably know, Hutter proved that the optimal behavior of a goal seeking 
agent in an unknown environment (modeled as a pair of interacting Turing 
machines, with the enviroment sending an additional reward signal to the agent 
that the agent seeks to maximize) is for the agent to guess at each step that 
the environment is modeled by the shortest program consistent with the observed 
interaction so far.  The proof requires the assumption that the environment be 
computable.  Essentially, the proof says that Occam's Razor is the best general 
strategy for problem solving.  The fact that this works in practice strongly 
suggests that the universe is indeed a simulation.
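The "shortest consistent program" strategy can be sketched in miniature (my own toy, not AIXI itself, which is incomputable; the hypotheses and their bit-lengths here are invented for illustration): the agent keeps, at each step, the shortest hypothesis still consistent with everything observed.

```python
# Toy Occam's-Razor learner: hypotheses are predicates with a
# hand-assigned "description length" in bits; the agent always
# adopts the shortest hypothesis consistent with all observations.

hypotheses = [
    (4, "all even", lambda x: x % 2 == 0),
    (6, "multiple of 4", lambda x: x % 4 == 0),
    (8, "power of two", lambda x: x > 0 and x & (x - 1) == 0),
]

def shortest_consistent(observations):
    """Return the name of the shortest hypothesis fitting the data, or None."""
    survivors = [(bits, name) for bits, name, pred in hypotheses
                 if all(pred(o) for o in observations)]
    return min(survivors)[1] if survivors else None

print(shortest_consistent([6, 10]))  # all even (only survivor)
print(shortest_consistent([4, 16]))  # all even (shortest of three survivors)
```

Note that in the second call all three hypotheses fit the data, and the agent still prefers the shortest one; that preference is the Occam's Razor step the proof formalizes.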

With this in mind, I offer 5 possible scenarios ranked from least to most 
likely based on the Kolmogorov complexity of the simulator.  I think this will 
allay any fears that our familiar universe might suddenly be switched off or 
behave in some radically different way.

1. Neurological level.  Your brain is connected to a computer at all the input 
and output points, e.g. the spinal cord, optic and auditory nerves, etc.  The 
simulation presents the illusion of a human body and a universe containing 
billions of other people like yourself (but not exactly alike).  The 
algorithmic complexity of this simulation would be of the same order as the 
complexity of your brain, about 10^13 bits (by counting synapses).

2. Cognitive level.  Rather than simulate the entire brain, the simulation 
includes all of the low level sensorimotor processing as part of the 
environment.  For example, when you walk you don't think about the contraction 
of individual leg muscles.  When you read this, you think about the words and 
not the arrangement of pixels in your visual field.  That type of processing is 
part of the environment.  You are presented with a universe at the symbolic 
level of words and high-level descriptions.  This is about 10^9 bits, based on 
the amount of verbal information you process in a lifetime, and estimates of 
long term memory capacity by Standing and Landauer.

3. Biological level.  Unlike 1 and 2, you are not the sole intelligent being in 
the universe, but there is no life beyond Earth.  The environment is a model of 
the Earth with just enough detail to simulate reality.  Humans are modeled at 
the biological level.  The complexity of a human model is that of our DNA.  I 
estimate 10^7 bits.  I know the genome is 6 x 10^9 bits uncompressed, but only 
about 2% of our DNA is biologically active.  Also, many genes are copied many 
times, and there are equivalent codons for the same amino acids, genes can be 
moved and reordered, etc.
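The arithmetic behind the 10^7-bit figure can be checked quickly (my sketch; the 2% active fraction and the further compression factor are the post's assumptions, not measured values).

```python
# Back-of-envelope check of scenario 3's complexity estimate.
genome_bits = 6e9        # ~3e9 base pairs * 2 bits per base
active_fraction = 0.02   # ~2% biologically active (the post's figure)
active_bits = genome_bits * active_fraction
print(active_bits)  # 120000000.0 -- ~1.2e8 bits before further compression
```

Squeezing out the redundancy the post lists (repeated genes, synonymous codons, reorderable genes) would plausibly buy another order of magnitude, landing near the stated 10^7 bits.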

4. Physical level.  A program simulates the fundamental laws of physics, with 
the laws tuned to allow life to evolve, perhaps on millions of planets.  For 
example, the ratio of the masses of the proton and neutron is selected to allow 
the distribution of elements like carbon and oxygen needed for life to evolve.  
(If the neutron were slightly heavier, there would be no hydrogen fusion in 
stars.  If it were slightly lighter, the proton would be unstable and all 
matter would decay into neutron bodies.)  Likewise the force of gravity is set 
just right to allow matter to condense into stars and planets and not all 
collapse into black holes.  Wolfram estimates that the physical universe can be 
modeled with just a few lines of code (see 
http://en.wikipedia.org/wiki/A_New_Kind_of_Science), on the order of hundreds 
of bits.  This is comparable to the information needed to set the free 
parameters of some string theories.

5. Mathematical level.  The universe we observe is one of an enumeration of all 
Turing machines.  Some universes will support life and some won't.  We must, of 
course, be in one that will.  The simulation is simply expressed as N, the set 
of natural numbers.
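The identification of "all candidate universe-programs" with N can be made concrete (my illustration of the standard encoding, not something from the post): every natural number names one binary program, so enumerating the naturals enumerates every candidate simulation.

```python
# Map each natural number n to the n-th binary string in
# length-then-lexicographic order, giving a bijection N <-> {0,1}*.
def nth_program(n: int) -> str:
    return bin(n + 1)[3:]  # strip the leading '0b1': 0->'', 1->'0', 2->'1', ...

print([nth_program(n) for n in range(7)])  # ['', '0', '1', '00', '01', '10', '11']
```

Under this encoding, "the set of natural numbers" and "the enumeration of all Turing machine programs" are the same object, which is all scenario 5 requires.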

Each level increases the computational requirements, while decreasing the 
complexity of the program and making the universe more predictable.


-- Matt Mahoney, [EMAIL PROTECTED]

