Re: relevant probability distribution

2002-06-15 Thread Matthieu Walraet

On 15 Jun 2002, at 14:27, Russell Standish wrote:
 
 
 No, the issue concerns any conscious program, rather than any
 particular one. The fact that there are vastly more amoebae than Homo
 sapiens tends to argue against amoebae being conscious.
 

This reminds me of Jack Vance's Alastor novels. One of the characters is
a king who rules over a vast area of the galaxy. He likes to travel
incognito among his subjects, and he often asks himself: there are
billions of men and only one king. How is it possible that I happen to
be the king?

Is your position, then, that subjects are not conscious, only kings?

From a third-person point of view (that of the reader of the novel), the
question is simple. There are billions of subjects, and they can all ask
themselves: why am I me and not someone else?

The problem is that we have only a first-person point of view on our
universe (or on the everything). We must use our imagination and do
thought experiments to get a third-person point of view.

Matthieu.
-- 
http://matthieu.walraet.free.fr




Re: relevant probability distribution

2002-06-14 Thread Saibal Mitra


Russell wrote:

 I take consciousness to be that property essential for the operation
 of the Anthropic Principle. The universe is the way it is because we
 are here observing it as conscious beings.

 The first problem this raises is: why does the anthropic principle
 work? One can conceive of being immersed in a virtual reality which
 is totally inconsistent with our existence as conscious observers, for
 example.

That must be explained by the unlikeliness of such a situation. Why would
anyone simulate me living happily on the surface of Venus? They could have
taken any possible person.

 However, let us accept the AP. After all, it has passed observational
 tests with flying colours. We should also expect to be an example of
 the most likely form of consciousness.

 The second problem it raises is that if amoebae are conscious, then
 why aren't we amoebae? There are many more amoebae on the planet than
 there are human beings. I can well accept that dolphins and
 chimpanzees (for instance) _could_ be conscious, since there are
 vastly greater numbers of humans around today than there are of these
 other species, but there is something special that we have that amoebae
 (or even ants, let's say) don't have.

 Not sure about ant nests (Hofstadter style). Anyone got a good
 estimate of the number of extant ant nests vis a vis human population?

 One possibility is that there is some kind of measure function that
 rates our consciousness as far more likely to be occupied than an
 amoeba's; however, I'm personally sceptical of this. Consciousness seems
 to be so much of an either/or thing...

I think that one should first define oneself as a particular program, and
then look at where and how often that program is actually running. Amoebas
are incapable of running me. Maybe artificially intelligent agents pose more
of a problem. Why am I not a robot that can copy itself as many times as
it pleases?
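
A toy sketch of this counting idea (purely illustrative, and every choice in
it is a stand-in: universes are modelled as plain bit strings weighted by a
simple length prior, their "output" is just a repetition of the bit string,
and an observer is "instantiated" whenever its own bit pattern occurs in
that output):

from itertools import product

def universe_output(program: str) -> str:
    # Stand-in "dynamics": repeat the bit string to get a longer history.
    return program * 4

def measure_of_observer(observer: str, max_len: int = 12) -> float:
    # Sum the prior weight of every universe program whose output contains
    # the observer's pattern.  The weight 2^(-2L) is a crude stand-in for a
    # prefix-free 2^(-length) prior (the extra factor keeps the sum finite).
    total = 0.0
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            program = "".join(bits)
            if observer in universe_output(program):
                total += 2.0 ** (-2 * length)
    return total

if __name__ == "__main__":
    # Shorter (simpler) observer patterns are instantiated by many more
    # universe programs, so they pick up more of the measure.
    for obs in ("01", "0110", "01101001"):
        print(obs, measure_of_observer(obs))

At least in this toy, the shorter pattern collects far more weight than the
longer one, which is in the spirit of the remark below that more complex
programs should have lower measure.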

By the way, Ken Olum made an interesting remark in his paper in which he
advocates the Self-Indication Assumption (SIA) (see arxiv.org). If
universes with more observers are more likely than universes with fewer
observers, then why don't we live in a universe in which the number was
pre-programmed to be some ridiculously large number, say N = 10^10?
(Ken gave a different example.) He concludes that apparently such universes
must be unlikelier by a factor of at least N, to compensate for the factor N
coming from the number of observers. This fits in nicely with the idea that
more complex programs should have lower measure. You can see that the
measure of a program must decrease faster than 2^(-p), where p is the length
of the program.
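
One way to spell out the bookkeeping behind that last step (a hedged reading
of the argument, not anything taken verbatim from Olum's paper): writing an
observer count N into a program costs only about log2(N) extra bits, while
the observer-weighting multiplies that universe's probability by N. If the
prior measure of a program of length p is \mu(p), the weighted contribution
of the padded program is

    N \cdot \mu(p + \log_2 N) \;=\; 2^{\Delta}\,\mu(p + \Delta),
    \qquad \Delta = \log_2 N ,

and unless \mu falls off faster than 2^(-p), these contributions do not die
away as N grows, so we would expect to find ourselves in a universe with an
enormous pre-programmed N.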

Saibal





Re: relevant probability distribution

2002-06-12 Thread Russell Standish

Saibal Mitra wrote:
 
 So, I am not saying that only certain programs are conscious and others not.
 I am really saying that the universe is running (in some approximation) a
 certain program in my head. That program defines me. If you run that program
 on a computer, that computer would have my consciousness, i.e. that computer
 would be me. Since most members of this list (except for Russell?) believe
 that our universe is itself a program, you could say that in some sense it
 is conscious. Most of us think, however, that our universe's program is very
 simple. A retarded amoeba would probably be more intelligent.
 

I take consciousness to be that property essential for the operation
of the Anthropic Principle. The universe is the way it is because we
are here observing it as conscious beings.

The first problem this raises is: why does the anthropic principle
work? One can conceive of being immersed in a virtual reality which
is totally inconsistent with our existence as conscious observers, for
example.

However, let us accept the AP. After all, it has passed observational
tests with flying colours. We should also expect to be an example of
the most likely form of consciousness.

The second problem it raises is that if amoebae are conscious, then
why aren't we amoebae? There are many more amoebae on the planet than
there are human beings. I can well accept that dolphins and
chimpanzees (for instance) _could_ be conscious, since there are
vastly greater numbers of humans around today than there are of these
other species, but there is something special that we have that amoebae
(or even ants, let's say) don't have.

Not sure about ant nests (Hofstadter style). Anyone got a good
estimate of the number of extant ant nests vis a vis human population?

One possibility is that there is some kind of measure function that
rates our consciousness as far more likely to be occupied than an
amoeba's; however, I'm personally sceptical of this. Consciousness seems
to be so much of an either/or thing...

Cheers



A/Prof Russell Standish                  Director
High Performance Computing Support Unit  Phone 9385 6967, 8308 3119 (mobile)
UNSW SYDNEY 2052                         Fax 9385 6965, 0425 253119
Australia                                [EMAIL PROTECTED]
Room 2075, Red Centre                    http://parallel.hpc.unsw.edu.au/rks
International prefix +612, Interstate prefix 02





Re: relevant probability distribution

2002-06-10 Thread joseph00

Hi Saibal, 

 > As I said I agree with you. But do you really mean a measure defined
 > on a set of computer programs, or a set of computer program *states*?

 I think that you can derive one from the other. I have thought about this
 before, and I now think that the observer should associate himself with a
 (to himself unknown) program, or better, a set of programs, that could
 generate him.
 
 E.g. there exists a program that only calculates me and nothing else. Such a
 program could, for example, compute me in an infinite dream. Many such (very
 complex) programs must exist. I think that these programs define our
 identities (or vice versa, but then not uniquely). Now, if conscious objects
 correspond to programs, then you don't have the paradox that any clock or
 lookup table has intelligence. The fact that I don't live in my own personal
 universe, but that my universe is generated by a simpler one, suggests that
 simpler programs have larger probabilities.
 
 If you now have an a priori probability over the set of all programs, you
 can compute (in principle) the probability that I will observe a certain
 outcome if I perform a certain experiment. At least you can formulate this
 question in a mathematically unambiguous way.
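
One way to write that last sentence down explicitly (a hedged formalisation,
with mu(q) standing for an assumed a priori measure over programs q):

    P(\text{outcome } o \mid \text{observer } O)
      = \frac{\sum_{q\,:\,q \text{ generates both } O \text{ and } o} \mu(q)}
             {\sum_{q\,:\,q \text{ generates } O} \mu(q)} ,

i.e. the total measure of the observer-generating programs in which the
experiment comes out o, relative to the total measure of all programs that
generate the observer.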

 I have difficulty with the concept of many distinct programs, each 
representing an individual conscious entity. My understanding of modern physics 
is that the concept of an isolated individual is essentially obsolete, in that 
nothing can be defined without relation to everything else. As a result, surely 
the underlying program for each must be similarly connected, so that in fact 
an individual physical object is simply a concentration of processes operating 
in one part of the program?
  The significance of this is that the paradox of intelligent objects 
doesn't arise at all. I work on the assumption that your program is synonymous 
with universal awareness (the abstract form of consciousness), and that 
intelligence would be the result of local information-processing systems. 
Partly because of the view of everything being inter-related, I'm uncomfortable 
with a sharp, intelligent/non-intelligent distinction, and have no problem with 
a mechanical object expressing a very low degree of intelligence. Indeed, 
anything which responds to stimuli could be seen in this way, including a rock 
undergoing thermal expansion. However, an object can only become self-aware 
once the processing centre is reasonably complex, and  based on sufficient 
local inputs to define a boundary to the region of the observer; this, I guess, 
would be the manifestation of a closed (or at least self-referent) processing 
loop within the program. 
As I understand your view, it by-passes the paradox by introducing 
arbitrariness, and any approach of this type seems to me to result in more 
problems. At what point in evolution did an organism first become intelligent? 
Do we then assume that a qualitatively different faculty was introduced? If so, 
how? These sorts of questions seem to be the result of over-reductionism, of 
separating gradations into artificial categories. (Of course, being a 
palaeontologist, I spend much of my time doing just that, but never mind!)
All the best,
Joe


-
Department of Earth Sciences
University of Cambridge
Downing Street
Cambridge CB2 3EQ
Phone: ( +44 ) 1223 333400
Fax: ( +44 ) 1223 333450