RE: computationalism and supervenience

2006-09-14 Thread Stathis Papaioannou

Brent Meeker writes:

  I don't have a clear idea in my mind of disembodied computation except in 
  rather simple cases, 
  like numbers and arithmetic. The number 5 exists as a Platonic ideal, and 
  it can also be implemented 
  so we can interact with it, as when there is a collection of 5 oranges, or 
  3 oranges and 2 apples, 
  or 3 pairs of oranges and 2 triplets of apples, and so on, in infinite 
  variety. The difficulty is that if we 
  say that 3+2=5 as exemplified by 3 oranges and 2 apples is conscious, 
  then should we also say 
  that the pairs+triplets of fruit are also conscious? If so, where do we 
  draw the line? 
 
 I'm not sure I understand your example.  Are you saying that by simply 
 existing, two 
 apples and 3 oranges compute 2+3=5?  If so I would disagree.  I would say it 
 is our 
 comprehending them as individual objects and also as a set that is the 
 computation. 
 Just hanging there on the trees they may be computing 'apple hanging on a 
 tree', 'apple hanging on a tree', ... but they're not computing 2+3=5.

What about my example in an earlier post of beads on an abacus? You can slide 2 
beads to the left, then another 
3 beads to the left, and count a total of 5 beads; or 2 pairs of beads and 3 
pairs of beads and count a total of 5 
pairs of beads, or any other variation. Perhaps it seems a silly example when 
discussing consciousness, but the most 
elaborate (and putatively conscious) computation can be reduced to a complex 
bead-sliding exercise. And if sliding 
beads computes 2+3=5, why not if 2 birds and then 3 birds happen to land on a 
tree, or a flock of birds of which 2 
are red lands on one tree and another flock of birds of which 3 are red lands 
on an adjacent tree? It is true that these 
birds and beads are not of much consequence computationally unless someone is 
there to observe them and interpret 
them, but what about the computer that is conscious, chug-chugging away all on 
its own? 
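
To make the point about arbitrary interpretation concrete, here is a minimal 
Python sketch (the event lists and the interpretation functions are invented 
purely for illustration): any sequence of physical events can be read as 
computing 2+3=5 simply by supplying the right mapping, while the events 
themselves are indifferent to which mapping is supplied.

  # A minimal sketch: the same physical events, under different
  # interpretation maps, "compute" 2+3=5 or nothing in particular.
  # The events and maps below are invented purely for illustration.

  bead_events = ["slide", "slide", "slide", "slide", "slide"]
  bird_events = ["red lands", "red lands", "grey lands",
                 "red lands", "red lands", "red lands"]

  def interpret(events, mapping):
      """Count the events the mapping designates as 'units' and
      read the total as the output of an addition."""
      units = [e for e in events if mapping(e)]
      return len(units)

  # Interpretation 1: every bead slide counts as a unit -> 2+3=5.
  print(interpret(bead_events, lambda e: e == "slide"))      # 5

  # Interpretation 2: only red birds count as units -> again 2+3=5.
  print(interpret(bird_events, lambda e: e == "red lands"))  # 5

  # The physical goings-on are identical in both runs of interpret();
  # all the arithmetic structure lives in the mapping we chose.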
 
 That is what I mean 
  when I say that any computation can map onto any physical system. 
 
 But as you've noted before the computation is almost all in the mapping.  And 
 not 
 just in the map, but in the application of the map - which is something we 
 do.  That 
 action can't be abstracted away.  You can't just say there's a physical 
 system and 
 there's a manual that would map it into some computation and stop there as 
 though the 
 computation has been done.  The mapping, an action, still needs to be 
 performed.

What if the computer is built according to some ridiculously complex plan, 
plugged in, and then all the engineers, manuals, etc. disappear? If it was 
conscious to begin with, does it suddenly cease being conscious because no-one 
is able to understand it? It could have been designed according to the 
radioactive decay patterns of a sacred stone, in which case, without the 
documentation, its internal states might appear completely random. With the 
documentation, it may be possible to understand what it is doing or even 
interact with it, and you have said previously that it is the potential for 
interaction that allows it to be conscious. But does that mean it gradually 
becomes less conscious as pages of the manual are ripped out one by one and 
destroyed, even though the computer itself does not change its activity as a 
result?

 The physical structure and activity 
  of computer A implementing program a may be completely different to that of 
  computer B implementing 
  program b, but program b may be an emulation of program a, which should 
  make the two machines 
  functionally equivalent and, under computationalism, equivalently 
  conscious. 
 
 I don't see any problem with supposing that A and B are equally conscious (or 
 unconscious).

But there is a mapping under which any machine B is emulating a machine A. 
Figuring out this mapping does not change the 
physical activity of either A or B. You can argue that therefore the physical 
activity of A or B is irrelevant and consciousness 
is implemented non-corporeally by virtue of its existence as a Platonic object; 
or you can argue that this is clearly nonsense and 
consciousness is implemented as a result of some special physical property of a 
particular machine.
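
The kind of mapping I have in mind can be written down mechanically. Here is a 
minimal Python sketch (the two toy state traces are invented, and for 
simplicity B is assumed never to repeat a state): given a recorded run of B, 
one can always construct a dictionary under which B's run reads exactly as A's 
run, without B's physical activity changing in any way.

  # Minimal sketch: build a post-hoc mapping under which machine B's
  # state trace "emulates" machine A's. The toy traces are invented.

  trace_A = ["a0", "a1", "a2", "a3"]   # e.g. a simple counter
  trace_B = ["x", "y", "z", "w"]       # any other machine's recorded run

  # Construct the mapping by pairing the two traces step by step.
  mapping = dict(zip(trace_B, trace_A))

  # Under this mapping, B's run reads exactly as A's run.
  assert [mapping[s] for s in trace_B] == trace_A

  # Note what did the work: recording trace_A and pairing it with
  # trace_B. The mapping is just a relabelled copy of A's run, so one
  # can argue it is the construction and application of the mapping,
  # rather than B's physics, that contains the computation.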

Stathis Papaioannou



Re: Russell's book

2006-09-14 Thread Russell Standish

On Wed, Sep 13, 2006 at 02:56:30PM -, David Nyman wrote:
 
 Russell Standish wrote:
 
  If you can demonstrate this as a theorem, or even as a moderately
  convincing argument why this should be so, I'd be most grateful for a
  presentation. I'm all for eliminating unnecessary hypotheses.
 
 'Fraid I don't have a theorem! However, as to 'moderately convincing
 arguments', I think the problem with thinking coherently about temporal
 experience lies in mentally flip-flopping between structural
 and implicitly dynamic mental models of 'time'. I had an exchange with
 Barbour about this because I was convinced that he just introduced
 'time' back into his static Platonia by what I called 'sleight of
 intuition' - i.e. the implicit temporality of our language. He didn't
 disagree, but just felt he wanted to de-emphasise this aspect within
 his project of taking the static function maximally seriously.
 
 However, I'm not so certain about the intuition now. It seems plausible
 that the content of 1st-person experience is represented structurally
 within time capsules - including those aspects that would appear as 'in
 relation to' the content of other capsules. This by itself would yield
 a 'picture' of time from the pov of any capsule (i.e. 'time' as
 information, and particularly as defined by information 'horizons') if
 only we could account for the experience of dynamism. Here I'm much
 less clear, but I have a sort of 'intuition pump'. It seems to me that
 we must consider who or what is the 'experiencer'.  For dynamism one
 needs contrast, and such contrast is to be found between the 0-person
 'pov' of the multiverse and individual 1st-person capsules. So if the
 multiverse is the experiencer, the dynamism of time may emerge simply
 from the global/local contrast of its 0-person/1st-person povs.
 
 Clear as mud.
 
 David
 

As you'll see in sect. 9.2 of my book, I am quite clear that time must
emerge from a timeless underlying reality somehow - whether by Barbour's
time capsules or by some completely different mechanism is not, I think,
all that pertinent.

That time is necessarily experienced by all conscious points of view is, to
my knowledge, not even addressed by other philosophers. Even Bruno seems to
skirt the issue, although there is an appearance of temporality in the S4Grz
logic.

So I've simply made a conjecture that experience of time is necessary
for consciousness, and tried to dilute the strength of that conjecture
as far as possible.

Hopefully some bright spark will either prove the conjecture (in some
form), or even more interestingly disprove it. But I won't hold my
breath.

Cheers

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type application/pgp-signature. Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish                  Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052                         [EMAIL PROTECTED]
Australia                                http://parallel.hpc.unsw.edu.au/rks
International prefix +612, Interstate prefix 02






RE: Russell's book

2006-09-14 Thread Stathis Papaioannou


Johnathan Corgan writes: 

 David Nyman wrote:
 
 [re: QTI]
  This has obvious
  implications for retirement planning in general and avoidance of the
  more egregious cul-de-sac situations. On the other hand, short of
  outright lunacy vis-a-vis personal safety, it also seems to imply that
  from the 1st-person pov we are likely to come through (albeit possibly
  in less-than-perfect shape) even apparently minimally survivable
  situations. This struck me particularly forcibly while watching the
  9/11 re-runs on TV last night.
 
 It's the cul-de-sac situations that interest me.  Are there truly any?
 Are there moments of consciousness which have no logically possible
 continuation (while remaining conscious?)
 
 It seems the canonical example is surviving a nearby nuclear detonation.
  One logical possibility is that all your constituent particles
 quantum-tunnel away from the blast in time.

Don't forget the Omega Point possibility, which sees you vapourised today 
but resurrected in simulation in the far future. Or perhaps at the moment of 
detonation it will be revealed to you that you are already living in a 
simulation, and the disaster is averted at the last moment by the programmers. 
It doesn't matter whether you are currently in the simulation or in the real 
world since the only thing that matters is where your *next moment* comes 
from. Your stream of consciousness would be the same if all the separate 
moments of your life were completely disconnected and mixed up in time, space 
or across separate real and simulated universes.
 
 This would be of extremely low measure in absolute terms, but what about
 the proportion of continuations that contain you as a conscious entity?
 
 This also touches on a recent thread about how being of low measure
 feels. If QTI is true, and I'm subject to a nuclear detonation, does it
 matter if my possible continuations are of such a low relative measure?
 Once I'm in them, would I feel any different and should I care?
 
 These questions may reduce to something like, "Is there a lower limit to
 the amplitude of the SWE?"
 
 If measure is infinitely divisible, then is there any natural scale to
 its absolute value?
 
 I raised a similar question on the list a few months ago when Tookie
 Williams was in the headlines and was eventually executed by the State of
 California.  What possible continuations exist in this situation?
 
  In effect, we are being presented with a kind of 'yes doctor' in
  everyday life. Do you find that these considerations affect your own
  behaviour in any way?
 
 A very interesting question.
 
 If my expectation is that QTI is true and I'll be living for a very long
 time, I may adjust my financial planning accordingly.  But QTI only
 applies to my own first-person view; I'll be constantly shedding
 branches where I did indeed die.  If I have any financial dependents, do
 I provide for their welfare, even if they'll exist only in branches forever
 outside my ability to interact with them?

Don't discount force of habit and social conditioning. Christians believe that 
when they die they will go to heaven, so logically they should be pleased, or 
at least only minimally upset, at the prospect of an asteroid instantly and 
painlessly wiping out all life on Earth. However, all but the craziest 
Christians would hope that such a thing does not happen, and essentially live 
their lives as if death is a bad thing for them and the people they care about. 
Maybe it's just a question of the strength of their faith: the September 11 
terrorists did not seem to have such qualms.

Stathis Papaioannou




RE: computationalism and supervenience

2006-09-14 Thread Stathis Papaioannou

Brent Meeker writes:

  We would understand it in a third person sense but not in a first person 
  sense, except by analogy with our 
  own first person experience. Consciousness is the difference between what 
  can be known by observing an 
  entity and what can be known by being the entity, or something like the 
  entity, yourself. 
  
  Stathis Papaioannou
 
 But you are simply positing that there is such a difference.  That's easy to 
 do 
 because we know so little about how brains work.  But consider the engine in 
 your 
 car.  Do you know what it's like to be the engine in your car?  You know a 
 lot about 
 it, but how do you know that you know all of it?  Does that mean your car 
 engine is 
 conscious?  I'd say yes it is (at a very low level) and you *can* know what 
 it's like.

No, I don't know what it's like to be the engine in my car. I would guess it 
isn't like anything, but I might be wrong. 
If I am wrong, then my car engine may indeed be conscious, but in a completely 
alien way, which I cannot 
understand no matter how much I learn about car mechanics, because I am not 
myself a car engine. I think 
the same would happen if we encountered an alien civilization. We would 
probably assume that they were 
conscious because we would observe that they exhibit intelligent behaviour, but 
only if by coincidence they 
had sensations, emotions etc. which reminded us of our own would we be able to 
guess what their conscious 
experience was actually like, and even then we would not be sure.

Stathis Papaioannou



Re: computationalism and supervenience

2006-09-14 Thread 1Z


Stathis Papaioannou wrote:
 Brent Meeker writes:


  I don't recall anything about all computations implementing consciousness?
 
  Brent Meeker

 OK, this is the basis of our disagreement. I understood computationalism as 
 the idea that it is the
 actual computation that gives rise to consciousness. For example, if you have 
 a conscious robot
 shovelling coal, you could take the computation going on in the robot's
 processor and run it on another similar computer with sham inputs, and the
 same conscious experience would result. And
 if the program runs on one computer, it can run on another computer with the 
 appropriate emulation
 software (the most general case of which is the UTM), which should also 
 result in the same conscious
 experience. I suppose it is possible that *actually shovelling the coal* is 
 essential for the coal-shovelling
 experience, and an emulation of that activity just wouldn't do it. However, 
 how can the robot tell the
 difference between the coal and the simulated coal, and how can it know if it 
 is running on Windows XP
 or Mac OS emulating Windows XP?


That has nothing to do with "all computations implementing consciousness".





Re: Russell's book

2006-09-14 Thread Tom Caylor

Stathis Papaioannou wrote:
 Tom Caylor writes:

  After many life-expectancy-spans worth of narrow escapes, after
  thousands or millions of years, wouldn't the probability be pretty high
  for my personality/memory etc. to change so much that I wouldn't
  recognize myself, or that I could be more like another person than my
  original self, and so for all practical purposes wouldn't I be another
  person?  How do I know this hasn't happened already?  If it has, what
  difference does it make?  Isn't it true that the only realities that
  matter are the ones that make any difference to my reality?  (almost a
  tautology)

 The only guarantee from QTI is that you will experience a next moment:
 that there exists an observer moment in the universe which considers your
 present moment to be its predecessor.

And this guarantee of a next experience is based on what?

Also, if an observer moment can "consider", this must be a very special
observer moment.

 This leads to difficulties with partial
 memory loss, which are not unique to QTI but might actually occur in real 
 life.
 For example, if you are in a car crash and end up in a vegetative state, this
 is usually taken as being effectively the same as ending up dead. If you wake
 up after the accident mentally intact except you have forgotten what you had
 for breakfast that morning then you have survived in much the same way you
 would have if you had never had the accident. If you consider that the world
 splits and there are only these two outcomes, or if you consider a 
 teleportation
 experiment in which you are reconstituted in these two states at separate
 receiving stations, the conclusion seems straightforward enough: you will 
 survive
 the ordeal having lost only your memory of what you had for breakfast.

 Now, consider a situation where there are 10 possible outcomes, or 10 possible
 teleportation destinations, ranging from #1 vegetative state (or headless 
 corpse)
 to #10 intact except for memory of breakfast. In this scheme, #8 might be 
 intact
 except you have forgotten 10% of what you have done in the past year, while
 #3 might be you have forgotten everything except what you learned before the
 age of two years. What is your expectation of survival in this situation?

 Stathis Papaioannou





Re: computationalism and supervenience

2006-09-14 Thread Brent Meeker

Stathis Papaioannou wrote:
 Brent Meeker writes:
 
 
I don't have a clear idea in my mind of disembodied computation except in 
rather simple cases, 
like numbers and arithmetic. The number 5 exists as a Platonic ideal, and it 
can also be implemented 
so we can interact with it, as when there is a collection of 5 oranges, or 3 
oranges and 2 apples, 
or 3 pairs of oranges and 2 triplets of apples, and so on, in infinite 
variety. The difficulty is that if we 
say that 3+2=5 as exemplified by 3 oranges and 2 apples is conscious, then 
should we also say 
that the pairs+triplets of fruit are also conscious? If so, where do we draw 
the line? 

I'm not sure I understand your example.  Are you saying that by simply 
existing, two 
apples and 3 oranges compute 2+3=5?  If so I would disagree.  I would say it 
is our 
comprehending them as individual objects and also as a set that is the 
computation. 
Just hanging there on the trees they may be computing 'apple hanging on a 
tree', 'apple hanging on a tree', ... but they're not computing 2+3=5.
 
 
 What about my example in an earlier post of beads on an abacus? You can slide 
 2 beads to the left, then another 
 3 beads to the left, and count a total of 5 beads; or 2 pairs of beads and 3 
 pairs of beads and count a total of 5 
 pairs of beads, or any other variation. Perhaps it seems a silly example when 
 discussing consciousness, but the most 
 elaborate (and putatively conscious) computation can be reduced to a complex 
 bead-sliding exercise. And if sliding 
 beads computes 2+3=5, why not if 2 birds and then 3 birds happen to land on a 
 tree, or a flock of birds of which 2 
 are red lands on one tree and another flock of birds of which 3 are red lands 
 on an adjacent tree? It is true that these 
 birds and beads are not of much consequence computationally unless someone is 
 there to observe them and interpret 
 them, but what about the computer that is conscious, chug-chugging away all on 
 its own? 

No, it's not a silly example; it's just that it seems you are hypothesizing that 
I am providing the computation by seeing the apples as a pair, and by seeing the 
beads as a triple and a pair and then as a quintuple.  Above, this exchange began 
with you posing this as an example of a disembodied computation - but then the 
examples seem to depend on some (embodied) person witnessing them in order that 
they *be* computations.  I guess I'm not convinced that it makes sense to say 
that anything can be a computation, other than in the trivial sense that it's a 
simulation of itself.  I agree that there is a mapping to a computation - but in 
most cases the mapping is such that it seems more reasonable to say the 
computation is in the application of the mapping.  And I don't mean that the 
mapping is complex - a mapping from my brain states to yours would no doubt be 
very complex.  I think the characteristic that would allow us to say the thinking 
was not in the mapping is something like whether it is static (like a look-up 
table) and not too large in some sense.
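
A rough way to picture the distinction, in a minimal Python sketch (both 
"mappings" below are invented purely for illustration): one mapping is a small 
static table that merely names configurations the system itself already 
distinguishes, while the other quietly does the computation inside the act of 
mapping.

  # Two ways to "map" a physical state onto the computation 2+3=5.
  # Both mappings below are invented purely for illustration.

  # 1. A static look-up table: the abacus already distinguishes the
  #    relevant configurations, and the table just names them.
  lookup = {"2 beads left": 2, "3 more beads left": 3, "5 beads left": 5}

  def read_abacus(state):
      return lookup[state]          # no arithmetic happens here

  # 2. A "mapping" that does the work itself: it takes an arbitrary,
  #    unstructured state and *computes* the answer while labelling it.
  def read_rock(state):
      return 2 + 3                  # the addition is in the mapping

  print(read_abacus("5 beads left"))  # 5, supplied by the system's structure
  print(read_rock("any old rock"))    # 5, supplied entirely by the map

  # The intuition: when the map is small and static, like `lookup`, it
  # seems fair to credit the mapped system with the computation; when
  # the map itself has to do the adding, the computation is in the
  # application of the map.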

 
That is what I mean 
when I say that any computation can map onto any physical system. 

But as you've noted before the computation is almost all in the mapping.  And 
not 
just in the map, but in the application of the map - which is something we 
do.  That 
action can't be abstracted away.  You can't just say there's a physical 
system and 
there's a manual that would map it into some computation and stop there as 
though the 
computation has been done.  The mapping, an action, still needs to be 
performed.
 
 
 What if the computer is built according to some ridiculously complex plan, 
 plugged in, and then all the engineers, manuals, etc. disappear? If it was 
 conscious to begin with, does it suddenly cease being conscious because no-one 
 is able to understand it? It could have been designed according to the 
 radioactive decay patterns of a sacred stone, in which case, without the 
 documentation, its internal states might appear completely random. With the 
 documentation, it may be possible to understand what it is doing or even 
 interact with it, and you have said previously that it is the potential for 
 interaction that allows it to be conscious. But does that mean it gradually 
 becomes less conscious as pages of the manual are ripped out one by one and 
 destroyed, even though the computer itself does not change its activity as a 
 result?
 
 
The physical structure and activity 
of computer A implementing program a may be completely different to that of 
computer B implementing 
program b, but program b may be an emulation of program a, which should make 
the two machines 
functionally equivalent and, under computationalism, equivalently conscious. 

I don't see any problem with supposing that A and B are equally conscious (or 
unconscious).
 
 
 But there is a mapping under which any machine B is emulating a machine A. 

But when is this mapping doing the computing?

Re: computationalism and supervenience

2006-09-14 Thread Brent Meeker

Stathis Papaioannou wrote:
 Brent Meeker writes:
 
 
We would understand it in a third person sense but not in a first person 
sense, except by analogy with our 
own first person experience. Consciousness is the difference between what 
can be known by observing an 
entity and what can be known by being the entity, or something like the 
entity, yourself. 

Stathis Papaioannou

But you are simply positing that there is such a difference.  That's easy to 
do 
because we know so little about how brains work.  But consider the engine in 
your 
car.  Do you know what it's like to be the engine in your car?  You know a 
lot about 
it, but how do you know that you know all of it?  Does that mean your car 
engine is 
conscious?  I'd say yes it is (at a very low level) and you *can* know what 
it's like.
 
 
 No, I don't know what it's like to be the engine in my car. I would guess it 
 isn't like anything, but I might be wrong. 
 If I am wrong, then my car engine may indeed be conscious, but in a 
 completely alien way, which I cannot 
 understand no matter how much I learn about car mechanics, because I am not 
 myself a car engine. 

Then doesn't the same apply to your hypothetical conscious but alien computer
whose interpretative manuals are all lost?

 I think the same would happen if we encountered an alien civilization. We would 
 probably assume that they were 
 conscious because we would observe that they exhibit intelligent behaviour, 
 but only if by coincidence they 
 had sensations, emotions etc. which reminded us of our own would we be able 
 to guess what their conscious 
 experience was actually like, and even then we would not be sure.

How could their inner experiences - sensations, emotions, etc - remind us of
anything?  We don't have access to them.  It would have to be their 
interactions with
the world and us that would cause us to infer their inner experiences; just as I
infer when my dog is happy or fearful.

Brent Meeker





Re: Proof that QTI is false

2006-09-14 Thread Saibal Mitra


- Original Message - 
From: Brent Meeker [EMAIL PROTECTED]
To: everything-list@googlegroups.com
Sent: Wednesday, September 13, 2006 5:47 AM
Subject: Re: Proof that QTI is false



 Saibal Mitra wrote:
  QTI in the way defined in this list contradicts quantum mechanics. The
  observable part of the universe can only be in a finite number of quantum
  states. So, it can only harbor a finite number of observer moments or
  experiences a person can have, see here for details:
 
  http://arxiv.org/abs/gr-qc/0102010
 
  If there can only be a finite number of observer moments you can only
  experience a finite amount of time.
 
  QED.

 So that would imply that when predicting states at some fixed finite time in
 the future there is a smallest, non-zero probability that is realizable.  So
 if our prediction, using continuum variables as an approximation, indicates a
 probability lower than this value we should set it to zero??

 Brent Meeker

Yes, but you don't have to set anything to zero by hand. What happens is
that if there are only a finite number of quantum states there is one which
has the smallest non-zero probability.
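
As a back-of-the-envelope illustration, here is a minimal Python sketch using 
commonly quoted order-of-magnitude figures for the de Sitter horizon (these 
numbers are assumptions for illustration and are not taken from the paper): the 
number of distinguishable states is finite but astronomically large, and with 
any finite set of states there is automatically a smallest non-zero probability 
below which a continuum-style prediction cannot correspond to anything.

  import math

  # Back-of-the-envelope horizon entropy, using commonly quoted values
  # (assumed for illustration, not taken from gr-qc/0102010):
  R = 1.6e26      # de Sitter horizon radius, metres
  l_p = 1.6e-35   # Planck length, metres
  S = 4 * math.pi * R**2 / (4 * l_p**2)    # entropy ~ area / 4 l_p^2 (nats)
  print(f"S ~ {S:.1e} nats")                               # ~ 3e122
  print(f"log10(number of states) ~ {S / math.log(10):.1e}")  # ~ 1e122

  # Finite state space -> there is a smallest non-zero probability.
  probs = [0.7, 0.2, 0.1 - 1e-9, 1e-9, 0.0]   # toy distribution over states
  p_min = min(p for p in probs if p > 0)
  print(f"smallest realisable non-zero probability: {p_min}")
  # Any continuum calculation yielding a probability below p_min is an
  # artefact of the approximation rather than a possible outcome.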

Saibal





Re: Proof that QTI is false

2006-09-14 Thread Saibal Mitra

Yes, I agree that you could still have some form of QTI if there are only a
finite number of states. I just don't believe in it, because I don't think
the use of the relative measure is justified in case the observer isn't
conserved. In all other cases the absolute measure and the relative measure
lead to the same predictions.
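
To illustrate the point, a toy Python sketch (the branch weights and survival 
flags are invented): the relative measure is the absolute measure conditioned 
on the observer still existing in the branch. If the observer is conserved in 
every branch the two coincide; they only come apart - which is where QTI-style 
reasoning enters - when some branches contain no successor observer.

  # Toy branching step: each branch has an absolute weight and a flag
  # saying whether the observer survives in it. Weights are invented.

  def relative(branches):
      """Absolute measure conditioned on the observer's survival."""
      alive = {b: w for b, (w, ok) in branches.items() if ok}
      total = sum(alive.values())
      return {b: w / total for b, w in alive.items()}

  # Observer conserved in all branches: relative == absolute.
  conserved = {"A": (0.5, True), "B": (0.5, True)}
  print(relative(conserved))          # {'A': 0.5, 'B': 0.5}

  # Observer destroyed in most branches: the relative measure
  # renormalises onto the rare surviving branch, which is what gives
  # QTI its force - and what one may or may not regard as justified.
  risky = {"survive": (1e-6, True), "die": (1.0 - 1e-6, False)}
  print(relative(risky))              # {'survive': 1.0}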




 Actually, in standard quantum mechanics, there is an infinity of
 observer moments, 2^{\aleph_0} of them in fact.

 What you are talking about are various quantum gravity theories, such
 as string theory, which appear to have a finite number of observer
 moments.

  However, even if as observers we are locked into a Nietzschean cycle at
 some point in time due to finiteness of the number of possible states,
 the number will be so large that the practical effects of QTI will
 still need to be considered.

 Cheers


- Original Message - 
From: Russell Standish [EMAIL PROTECTED]
To: everything-list@googlegroups.com
Sent: Wednesday, September 13, 2006 4:31 AM
Subject: Re: Proof that QTI is false

 On Tue, Sep 12, 2006 at 11:58:14PM +0200, Saibal Mitra wrote:
 
  QTI in the way defined in this list contradicts quantum mechanics. The
  observable part of the universe can only be in a finite number of quantum
  states. So, it can only harbor a finite number of observer moments or
  experiences a person can have, see here for details:
 
  http://arxiv.org/abs/gr-qc/0102010
 
  If there can only be a finite number of observer moments you can only
  experience a finite amount of time.
 
  QED.
 
 
 
Saibal

