Re: [singularity] Vista/AGI

2008-04-24 Thread Eric B. Ramsay
Samantha Atkins wrote: I have been in conferences of futurists, no less, where over 70% of the audience raised their hands to say they would likely not avail themselves of immortality if it were immediately available! The conservative preservation of the known goes a lot deeper than we credit.

That's quite a percentage. I wonder what the number would be for the public at large. Did anyone ask this group of futurists what their major objection to immortality is? Religious reasons?

Eric B. Ramsay




  

  


  

  




Re: [singularity] Vista/AGI

2008-04-13 Thread Eric B. Ramsay
Mike:

I am a novice to this AGI business, so I am not being cute with the following 
question: what, in your opinion, would be the first AGI problem to tackle? 
Perhaps these various problems can't be priority ordered, but nonetheless, 
which problem stands out for you? Thanks.

Eric B. Ramsay

Mike Tintner [EMAIL PROTECTED] wrote:
  Samantha: From what you said above, $50M will do the entire job. If that is all
that is standing between us and AGI then surely we can get on with it in
all haste.

Oh for gawdsake, this is such a tedious discussion. I would suggest the 
following is a reasonable *framework* for any discussions - although it is 
also a framework to end discussions for the moment.

1) Given our general ignorance, everyone is, strictly, entitled to their 
opinions about the future of AGI. Ben is entitled to his view that it will 
only take $50M or thereabouts.

BUT

2) Not a SINGLE problem of AGI has been solved yet. Not a damn one. Is 
anyone arguing differently? And until you've solved one, you can hardly make 
*reasonable* predictions about how long it will take to solve the rest - 
predictions that anyone, including yourself, should take seriously - 
especially if you've got any sense, any awareness of AI's long, ridiculous 
and incorrigible record of crazy predictions here (and that's by Minsky and 
Simon as well as lesser lights) - by people also making predictions 
without having solved any of AGI's problems. All investors beware. Massive 
health and wealth warnings.

MEANWHILE

3) Others - and I'm not the only one here - take a view more like: the human 
brain/body is the most awesomely complex machine in the known universe, the 
product of billions of years of evolution. To emulate it, or parallel its 
powers, is going to take not just trillions but zillions of dollars - many 
times global output, many, many Microsofts. Now right now that's a 
reasonable POV too.

But until you've solved one, just a measly one of AGI's problems, there's 
not a lot of point in further discussion, is there? Nobody's really gaining 
from it, are they? It's just masturbation, isn't it? 



RE: [singularity] Vista/AGI

2008-04-08 Thread Eric B. Ramsay

John G. Rose [EMAIL PROTECTED] wrote:

If you look at the state of internet based intelligence now, all the data
and its structure, the potential for chain reaction or a sort of structural
vacuum exists and it is accumulating a potential at an increasing rate.
IMO...

So you see the arrival of a Tipping Point as per Malcolm Gladwell. Whether I 
physically benefit from the arrival of the Singularity or not, I just want to 
see the damn thing. I would invest some modest sums in AGI if we could get a 
huge collection plate going around (these collection plate amounts add up!).

Eric B. Ramsay



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Eric B. Ramsay
If I understand what I have read in this thread so far, there is Ben on the one 
hand suggesting $10 million with 10-30 people in 3 to 10 years, and on the other 
there is Matt saying $1 quadrillion, using a billion brains, in 30 years. I don't 
believe I have ever seen such a divergence of opinion before on what is 
required for a technological breakthrough (unless people are not being serious 
and I am being naive). I suppose this sort of non-consensus on such a scale 
could be part of investor reticence.

Eric B. Ramsay

Matt Mahoney [EMAIL PROTECTED] wrote: 
--- Mike Tintner  wrote:

 Matt : a super-google will answer these questions by routing them to
 experts on these topics that will use natural language in their narrow
 domains of expertise.
 
 And Santa will answer every child's request, and we'll all live happily ever
 after.  Amen.

If you have a legitimate criticism of the technology or its funding plan, I
would like to hear it.  I understand there will be doubts about a system I
expect to cost over $1 quadrillion and take 30 years to build.

The protocol specifies natural language.  This is not a hard problem in narrow
domains.  It dates back to the 1960's.  Even in broad domains, most of the
meaning of a message is independent of word order.  Google works on this
principle.
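
As a rough sketch of the bag-of-words idea behind that claim (illustrative
Python only; the toy expert descriptions and the routing rule below are
assumptions of this sketch, not part of the actual proposal):

from collections import Counter
import math

def bag_of_words(text):
    # Count words, ignoring order entirely.
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    # Similarity of two word-count vectors; word order plays no role.
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

experts = {
    "chess-peer":   "chess openings endgames tactics strategy",
    "physics-peer": "quantum mechanics photon interference relativity",
}

query = bag_of_words("why does a photon show interference in the double slit experiment")
best = max(experts, key=lambda name: cosine_similarity(query, bag_of_words(experts[name])))
print(best)  # -> physics-peer, picked purely from unordered word overlap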

But this is beside the point.  The critical part of the design is an incentive
for peers to provide useful services in exchange for resources.  Peers that
appear most intelligent and useful (and least annoying) are most likely to
have their messages accepted and forwarded by other peers.  People will
develop domain experts and routers and put them on the net because they can
make money through highly targeted advertising.
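
In very schematic form, that incentive loop might look like the sketch below
(a toy Python illustration; the reputation numbers, the acceptance rule, and
the peer names are assumptions made here, not details of the design):

import random

reputation = {"google": 0.9, "hobby-peer": 0.5, "spam-bot": 0.1}

def deliver(sender, message):
    # Forward with probability equal to the sender's current reputation;
    # peers whose messages get accepted gain standing, annoying peers lose it.
    if random.random() < reputation[sender]:
        reputation[sender] = min(1.0, reputation[sender] + 0.01)
        return "forwarded: " + message
    reputation[sender] = max(0.0, reputation[sender] - 0.01)
    return "dropped: " + message

print(deliver("google", "query: cheapest flight to Boston"))
print(deliver("spam-bot", "BUY NOW!!!"))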

Google would be a peer on the network with a high reputation.  But Google
controls only 0.1% of the computing power on the Internet.  It will have to
compete with a system that allows updates to be searched instantly, where
queries are persistent, and where a query or message can initiate
conversations with other people in real time.

 Which are these areas of science, technology, arts, or indeed any area of 
 human activity, period, where the experts all agree and are NOT in deep 
 conflict?
 
 And if that's too hard a question, which are the areas of AI or AGI, where 
 the experts all agree and are not in deep conflict?

I don't expect the experts to agree.  It is better that they don't.  There are
hard problems remaining to be solved in language modeling, vision, and
robotics.  We need to try many approaches with powerful hardware.  The network
will decide who the winners are.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Eric B. Ramsay
Sure, but Matt is also suggesting that his path is the most viable, and so from 
the point of view of an investor, he/she is faced with very divergent opinions 
on the type of resources needed to get to AGI expeditiously. It's far 
easier to understand wide price swings in a spaceship to get from here to Mars 
(or wherever) depending on how extravagantly you want to travel, but if you 
define the problem as just getting there, I am confident the costs will not be 
different by a factor of 100 million.

Eric B. Ramsay

Ben Goertzel [EMAIL PROTECTED] wrote:
Well, Matt and I are talking about building totally different kinds of
systems...

I believe the system he wants to build would cost a huge amount ... but I
don't think it's the most interesting sorta thing to build ...

A decent analogue would be spaceships.  All sorts of designs exist, some
orders of magnitude more complex and expensive than others.  It's more
practical to build the cheaper ones, esp. when they're also more powerful ;-p

ben

On Tue, Apr 8, 2008 at 10:56 PM, Eric B. Ramsay  wrote:
 If I understand what I have read in this thread so far, there is Ben on the
 one hand suggesting $10 mil. with 10-30 people in 3 to 10 years and on the
 other there is Matt saying $1quadrillion, using a billion brains in 30
 years. I don't believe I have ever seen such a divergence of opinion before
 on what is required  for a technological breakthrough (unless people are not
 being serious and I am being naive). I suppose  this sort of non-consensus
 on such a scale could be part of investor reticence.

 Eric B. Ramsay

 Matt Mahoney  wrote:


 --- Mike Tintner wrote:

  Matt : a super-google will answer these questions by routing them to
  experts on these topics that will use natural language in their narrow
  domains of expertise.
 
  And Santa will answer every child's request, and we'll all live happily
 ever
  after. Amen.

 If you have a legitimate criticism of the technology or its funding plan, I
 would like to hear it. I understand there will be doubts about a system I
 expect to cost over $1 quadrillion and take 30 years to build.

 The protocol specifies natural language. This is not a hard problem in
 narrow
 domains. It dates back to the 1960's. Even in broad domains, most of the
 meaning of a message is independent of word order. Google works on this
 principle.

 But this is beside the point. The critical part of the design is an
 incentive
 for peers to provide useful services in exchange for resources. Peers that
 appear most intelligent and useful (and least annoying) are most likely to
 have their messages accepted and forwarded by other peers. People will
 develop domain experts and routers and put them on the net because they can
 make money through highly targeted advertising.

 Google would be a peer on the network with a high reputation. But Google
 controls only 0.1% of the computing power on the Internet. It will have to
 compete with a system that allows updates to be searched instantly, where
 queries are persistent, and where a query or message can initiate
 conversations with other people in real time.

  Which are these areas of science, technology, arts, or indeed any area of
  human activity, period, where the experts all agree and are NOT in deep
  conflict?
 
  And if that's too hard a question, which are the areas of AI or AGI, where
  the experts all agree and are not in deep conflict?

 I don't expect the experts to agree. It is better that they don't. There are
 hard problem remaining to be solved in language modeling, vision, and
 robotics. We need to try many approaches with powerful hardware. The network
 will decide who the winners are.


 -- Matt Mahoney, [EMAIL PROTECTED]



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [singularity] Vista/AGI

2008-03-16 Thread Eric B. Ramsay
Lol. Calm down fella. You are going to give yourself a stroke.

Eric B. Ramsay

J. Andrew Rogers [EMAIL PROTECTED] wrote:
Few people would define the development task as hiring hundreds of  
engineers to do things like write device drivers and apps for  
defective Chinese silicon so that little Billy's stuffed purple  
dinosaur with a USB cable coming out its ass can dance along with  
Hannah Montana music videos being streamed from YouTube with built-in  
DRM as a heroic last ditch effort to contain the spread of that  
insipid music while your email-client-and-dishwashing-machine  
forwards your porn collection to everyone in your address book in the  
background because a Russian hacker^H^H^H^H^H^H programmer might find  
that funny^H^H^H^H^H use



Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-03-05 Thread Eric B. Ramsay


Matt Mahoney [EMAIL PROTECTED] wrote:

[For those not familiar with Richard's style: once he disagrees with something
he will dispute it to the bitter end in long, drawn out arguments, because
nothing is more important than being right.]

What's the purpose of this comment? If the people here are intelligent enough 
to have meaningful discussions on a difficult topic, then they will be able to 
sort out for themselves the styles of others.

Eric B. Ramsay



[singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-22 Thread Eric B. Ramsay
I came across an old Discover magazine this morning with yet another article by 
Lanier on his rainstorm thought experiment. After reading the article, it 
occurred to me that what he is saying may be equivalent to this:

Imagine a sufficiently large computer that works according to the architecture 
of our ordinary PCs. In the space of operating systems (code interpreters), we 
can find an operating system that will run the input from the rainstorm so 
that it appears identical to a computer running a brain.

If this is true, then functionalism is not affected, since we must not forget to 
combine program + OS. Thus the rainstorm by itself has no emergent properties.
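
One way to make the point concrete (a toy Python sketch, not anything from
Lanier's article; the state names are invented for illustration): let the
"operating system" be a lookup table that maps whatever states the rainstorm
happens to produce onto the trace of a brain-like program. All of the
structure then lives in the interpreter, none of it in the rain.

# The rainstorm supplies arbitrary states; the contrived OS carries the meaning.
rainstorm_states = ["drip", "splash", "gust", "drizzle"]
brain_trace      = ["percept", "recall", "decide", "act"]

contrived_os = dict(zip(rainstorm_states, brain_trace))  # the "interpreter"

def run(os_table, inputs):
    # Interpret raw physical states through the OS table.
    return [os_table[s] for s in inputs]

print(run(contrived_os, rainstorm_states))  # ['percept', 'recall', 'decide', 'act']
# The program+OS pair reproduces the brain trace, but the rainstorm alone
# contributes nothing beyond an index into the table.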

Eric B. Ramsay




Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Eric B. Ramsay
During the late 70's, when I was at McGill, I attended a public talk given by 
Feynman on quantum physics. After the talk, and in answer to a question posed 
by a member of the audience, Feynman said something along the lines of: I 
have here in my pocket a prescription from my doctor that forbids me to answer 
questions from, or get into discussions with, philosophers - or something like 
that. After spending the last couple of days reading all the links on the 
outrageous proposition that rocks, rainstorms or plates of spaghetti implement 
the mind, I now understand Feynman's sentiment. What a waste of mental energy. 
A line of discussion as fruitless as solipsism. I am in full agreement 
with Richard Loosemore on this one. 
Eric B. Ramsay

Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 20/02/2008, Richard Loosemore wrote:

 I am aware of some of those other sources for the idea:  nevertheless,
 they are all nonsense for the same reason.  I especially single out
 Searle:  his writings on this subject are virtually worthless.  I have
 argued with Searle to his face, and I have talked with others
 (Hofstadter, for example) who have also done so, and the consensus among
 these people is that his arguments are built on confusion.

Just to be clear, this is *not* the same as Searle's Chinese Room
argument, which only he seems to find convincing.




-- 
Stathis Papaioannou



Re: [singularity] MindForth achieves True AI functionality

2008-02-02 Thread Eric B. Ramsay
I noticed that the members of the list have completely ignored this 
pronouncement by A.T. Murray. Is there a reason for this (for example, is this 
person considered fringe, or worse)?

Eric B. Ramsay

A. T. Murray [EMAIL PROTECTED] wrote:
  MindForth free open AI source code on-line at
http://mentifex.virtualentity.com/mind4th.html 
has become a True AI-Complete thinking mind 
after years of tweaking and debugging.

On 22 January 2008 the AI Forthmind began to 
think effortlessly and almost flawlessly in 
loops of meandering chains of thought. 

Users are invited to download the AI Mind 
and decide for themselves if what they see 
is machine intelligence and thinking. The 
http://mentifex.virtualentity.com/m4thuser.html 
User Manual explains all the steps involved. 

MindForth is the Model-T of True AI software, 
roughly comparable to the state of the art in 
automobiles one hundred years ago in 1908. 
As such, the AI in Forth will not blow you 
away with any advanced features, but will 
subtly show you the most primitive display 
of spreading activation among concepts.
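
For readers unfamiliar with the term, "spreading activation" usually means
something like the following minimal sketch (an illustrative Python toy with
invented concepts and weights; it is not code from MindForth, which is
written in Forth):

# Concepts are nodes in a graph; activating one concept leaks a fraction of
# its activation to its neighbours, decaying with each hop outward.
concept_links = {
    "dog":    ["animal", "bark"],
    "animal": ["dog", "cat"],
    "bark":   ["dog", "tree"],
    "cat":    ["animal"],
    "tree":   ["bark"],
}

def spread(start, decay=0.5, hops=2):
    activation = {start: 1.0}
    frontier = [start]
    for _ in range(hops):
        next_frontier = []
        for node in frontier:
            for neighbour in concept_links.get(node, []):
                gained = activation[node] * decay
                if gained > activation.get(neighbour, 0.0):
                    activation[neighbour] = gained
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return activation

print(spread("dog"))  # dog 1.0, animal/bark 0.5, cat/tree 0.25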

The world's first publicly available True AI 
achieves meandering chains of thought by 
detouring away from incomplete ideas 
lacking knowledge-base data and by asking 
questions of the human user when the AI is 
unable to complete a sentence of thought. 

The original MindForth program has spawned 
http://AIMind-I.com as the first offspring 
in the evolution of artificial intelligence.

ATM/Mentifex
-- 
http://www.kurzweilai.net/mindx/profile.php?id=26 
http://mentifex.virtualentity.com/mind4th.html 
http://mind.sourceforge.net/aisteps.html 
http://onsingularity.com/user/mentifex 


Re: [singularity] World as Simulation

2008-01-12 Thread Eric B. Ramsay
Apart from all this philosophy (non-ending as it seems), Table 1 of the paper 
referred to at the start of this thread gives several consequences of a 
simulation that offer to explain what's behind current physical observations, 
such as the upper speed limit of light, relativistic and quantum effects, etc. 
Without worrying about whether we are a simulation of a simulation of a 
simulation, etc., it would be interesting to work out all the 
qualitative/quantitative (?) implications of the idea and see if observations 
strongly or weakly support it. If the only thing we can do with the idea is 
discuss philosophy, then the idea is useless. 

Charles D Hixson [EMAIL PROTECTED] wrote:  Matt Mahoney wrote:
 --- John G. Rose wrote:
 
 In a sim world there are many variables that can overcome other motivators
 so a change in the rate of gene proliferation would be difficult to predict.
 The agents that correctly believe that it is a simulation could say OK this
 is all fake, I'm going for pure pleasure with total disregard for anything
 else. But still too many variables to predict. In humanity there have been
 times in the past where societies have given credence to simulation through
 religious beliefs and weighted more heavily on a disregard for other groups
 existence. A society would say that this is all fake, we all gotta die
 sometime anyway so we are going to take as much as we can from other tribes
 and decimate them for sport. Not saying this was always the reason for
 intertribal warfare but sometimes it was.
 

 The reason we have war is because the warlike tribes annihilated the peaceful
 ones. Evolution favors a brain structure where young males are predisposed to
 group loyalty (gangs or armies), and take an interest in competition and
 weapons technology (e.g. the difference in the types of video games played by
 boys and girls). It has nothing to do with belief in simulation. Cultures
 that believed the world was simulated probably killed themselves, not others. 
 That is why we believe the world is real.
 
Simulation is a new word. In this context, let's use an old word. 
Maya. Have the Buddhist countries and societies gone away?
And let's use an old word for reality. Heaven. Have the Christian 
countries and societies gone away?

Perhaps you need to rethink your suppositions.

 
 But the problem is in the question of what really is a simulation? For the
 agents constrained, it doesn't matter they still have to live in it - feel
 pain, fight for food, get along with other agents... Moving an agent from
 one simulation to the next though, that gives it some sort of extra
 properties...
 

 It is unlikely that any knowledge you now have would be useful in another
 simulation. Knowledge is only useful if it helps propagate your DNA.


 -- Matt Mahoney, [EMAIL PROTECTED]


Re: [singularity] World as Simulation

2008-01-12 Thread Eric B. Ramsay
Matt: I would prefer to analyse something simple such as the double slit 
experiment. If you do an experiment to see which slit the photon goes through, 
you get an accumulation of photons in equal numbers behind each slit. If you 
don't make an effort to see which slit the photons go through, you get an 
interference pattern. What, if this is all a simulation, is requiring the 
simulation to behave this way? I assume that this is a forced result based on 
the assumption of using only as much computation as needed to perform the 
simulation. A radioactive atom decays when it decays. All we can say with any 
certainty is what its probability distribution in time is for decay. Why is 
that? Why would a simulation not maintain local causality (EPR paradox)? I 
think it would be far more interesting (and meaningful) if the simulation 
hypothesis could provide a basis for these observations.

  Eric B. Ramsay
Matt Mahoney [EMAIL PROTECTED] wrote:
  --- Eric B. Ramsay wrote:

 Apart from all this philosophy (non-ending as it seems), Table 1. of the
 paper referred to at the start of this thread gives several consequences of
 a simulation that offer to explain what's behind current physical
 observations such as the upper speed limit of light, relativistic and
 quantum effects etc. Without worrying about whether we are a simulation of a
 sinmulation of a simulation etc, it would be interesting to work out all the
 qualitative/quantitative (?) implications of the idea and see if
 observations strongly or weakly support it. If the only thing we can do with
 the idea is discuss phiosophy then the idea is useless. 

There is plenty of physical evidence that the universe is simulated by a
finite state machine or a Turing machine.

1. The universe has finite size, mass, age, and resolution. Taken
together, the universe has a finite state, expressible in approximately
c^5T^2/hG = 1.55 x 10^122 bits ~ 2^406 bits (where h is Planck's constant, G
is the gravitational constant, c is the speed of light, and T is the age of
the universe). By coincidence, if the universe is divided into 2^406 regions,
each is the size of a proton or neutron. This is a coincidence because h, G,
c, and T don't depend on the properties of any particles.
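
As a rough back-of-the-envelope check on that order of magnitude (a sketch
only; the constants below, and the use of the reduced Planck constant, are
assumptions of the sketch, so the result lands near but not exactly on the
figure above):

import math

hbar = 1.055e-34   # reduced Planck constant, J*s
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m/s
T    = 4.35e17     # age of the universe, s (~13.8 billion years)

bits = c**5 * T**2 / (hbar * G)
print(f"{bits:.2e} bits ~ 2^{math.log2(bits):.0f} bits")  # ~6.5e121 bits ~ 2^405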

2. A finite state machine cannot model itself deterministically. This is
consistent with the probabilistic nature of quantum mechanics.

3. The observation that Occam's Razor works in practice is consistent with the
AIXI model of a computable environment.

4. The complexity of the universe is consistent with the simplest possible
algorithm: enumerate all Turing machines until a universe supporting
intelligent life is found. The fastest way to execute this algorithm is to
run each of the 2^n universes with complexity n bits for 2^n steps. The
complexity of the free parameters in many string theories plus general
relativity is a few hundred bits (maybe 406).
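
The "run each of the 2^n programs of complexity n for 2^n steps" recipe is
essentially a dovetailed enumeration. A toy sketch of just the scheduling
idea (the run_program stub below is a placeholder assumption; no actual
universe simulator is implied):

from itertools import product

def run_program(bits, max_steps):
    # Stand-in for executing one candidate program for a bounded number of steps.
    return "ran program %s for %d steps" % ("".join(map(str, bits)), max_steps)

def dovetail(max_n):
    # At complexity n there are 2^n programs, each given 2^n steps,
    # so the total work at level n is 4^n and the search stays finite.
    for n in range(1, max_n + 1):
        budget = 2 ** n
        for bits in product((0, 1), repeat=n):
            yield run_program(bits, budget)

for line in dovetail(3):
    print(line)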


-- Matt Mahoney, [EMAIL PROTECTED]


Re: [singularity] World as Simulation

2008-01-12 Thread Eric B. Ramsay
Matt: I understand your point #2, but it is a grand sweep without any detail. To 
give you an example of what I have in mind, let's consider the photon double 
slit experiment again. You have a photon emitter operating at very low 
intensity, such that photons come out singly. There is an average rate for the 
photons emitted, but the point in time of their emission is random - this 
introduces the non-deterministic feature of nature. At this point, why doesn't 
the emitted photon just go through one or the other slit? Instead, what we find 
is that the photon goes through a specific slit if someone is watching, but if 
no one is watching it somehow goes through both slits and interferes with 
itself, leading to the interference pattern observed. Now my question: can 
it be demonstrated that this scenario of two alternative behaviour strategies 
minimizes computational resources (or whatever Occam's razor requires) and so is 
a necessary feature of a simulation? We already have a probabilistic event at 
the very start, when the photon was emitted; how does the other behaviour fit 
with the simulation scheme? Wouldn't it be computationally simpler to just 
follow the photon like a billiard ball, instead of two variations in behaviour 
with observers thrown in?

Eric B. Ramsay

Matt Mahoney [EMAIL PROTECTED] wrote:
  
--- Eric B. Ramsay wrote:

 Matt: I would prefer to analyse something simple such as the double slit
 experiment. If you do an experiment to see which slit the photon goes
 through you get an accumulation of photons in equal numbers behind each
 slit. If you don't make an effort to see which slit the photons go through,
 you get an interference pattern. What, if this is all a simulation, is
 requiring the simulation to behave this way? I assume that this is a forced
 result based on the assumption of using only as much computation as needed
 to perform the simulation. A radioactive atom decays when it decays. All we
 can say with any certainty is what it's probability distribution in time is
 for decay. Why is that? Why would a simulation not maintain local causality
 (EPR paradox)? I think it would be far more interesting (and meaningful) if
 the simulation hypothesis could provide a basis for these observations.

This is what I addressed in point #2. A finite state simulation forces any
agents in the simulation to use a probabilistic model of their universe,
because an exact model would require as much memory as is used for the
simulation itself. Quantum mechanics is an example of a probabilistic model. 
The fact that the laws of physics prevent you from making certain predictions
is what suggests the universe is simulated, not the details of what you can't
predict.

If the universe were simulated by a computer with infinite memory (e.g. real
valued registers), then the laws of physics might have been deterministic,
allowing us to build infinite memory computers that could make exact
predictions even if the universe had infinite size, mass, age, and resolution.
However, this does not appear to be the case.

A finite simulation does not require any particular laws of physics. For all
you know, tomorrow gravity may cease to exist, or time will suddenly have 17
dimensions. However, the AIXI model makes this unlikely because unexpected
changes like this would require a simulation with greater algorithmic
complexity.

This is not a proof that the universe is a simulation, nor are any of my other
points. I don't believe that a proof is possible.

 
 Eric B. Ramsay
 Matt Mahoney wrote:
 --- Eric B. Ramsay wrote:
 
  Apart from all this philosophy (non-ending as it seems), Table 1. of the
  paper referred to at the start of this thread gives several consequences
 of
  a simulation that offer to explain what's behind current physical
  observations such as the upper speed limit of light, relativistic and
  quantum effects etc. Without worrying about whether we are a simulation of
 a
  sinmulation of a simulation etc, it would be interesting to work out all
 the
  qualitative/quantitative (?) implications of the idea and see if
  observations strongly or weakly support it. If the only thing we can do
 with
  the idea is discuss phiosophy then the idea is useless. 
 
 There is plenty of physical evidence that the universe is simulated by a
 finite state machine or a Turing machine.
 
 1. The universe has finite size, mass, and age, and resolution. Taken
 together, the universe has a finite state, expressible in approximately
 c^5T^2/hG = 1.55 x 10^122 bits ~ 2^406 bits (where h is Planck's constant, G
 is the gravitational constant, c is the speed of light, and T is the age of
 the universe. By coincidence, if the universe is divided into 2^406 regions,
 each is the size of a proton or neutron. This is a coincidence because h, G,
 c, and T don't depend on the properties of any particles).
 
 2. A finite state machine cannot model itself deterministically. This is
 consistent with the probabilistic

[singularity] World as Simulation

2008-01-03 Thread Eric B. Ramsay
Some of you may be interested in this link (if you haven't already seen the 
article):

http://arxiv.org/abs/0801.0337

Eric B. Ramsay


Re: Machine Motivation Gets Distorted Again [WAS Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page]

2007-05-15 Thread Eric B. Ramsay
I have a Ph.D. in Nuclear Physics and I don't understand half of what is said 
on this board (as well as the AGI board). I appreciate all simplifications that 
anyone cares to make.

Eric B. Ramsay

Benjamin Goertzel [EMAIL PROTECTED] wrote: 


Shane,

Thank you for being patronizing.

Some of us do understand the AIXI work in enough depth to make valid 
criticism.

The problem is that you do not understand the criticism well enough to
address it.


Richard Loosemore.

Richard,

While you do have the math background to understand the AIXI material, 
plenty of list members don't.  I think Shane's less-technical summary may
help those with less math background understand what
AIXI and related ideas are all about.

Having talked to Shane about AGI a fair bit, I venture to suggest he does 
understand your criticism, but just doesn't agree with all of it

(I note that I don't fully agree with either you or Shane, but I think I do
understand both of your positions reasonably well.)
 
-- Ben G


 