Re: Many Pasts? Not according to QM...

2005-06-03 Thread Saibal Mitra

- Original Message - 
From: Hal Finney [EMAIL PROTECTED]
To: everything-list@eskimo.com
Sent: Friday, June 03, 2005 05:00 AM
Subject: Re: Many Pasts? Not according to QM...


 Stephen Paul King writes:
  I really do not want to be a stick-in-the-mud here, but what do we base
  the idea that copies could exist upon? What if I, or any one else's 1st
  person aspect, can not be copied? If the operation of copying is
  impossible, what is the status of all of these thought experiments?
  If, and this is a HUGE if, there is some thing irreducibly quantum
  mechanical to this 1st person aspect then it follows from QM that copying
  is not allowed. Neither a quantum state nor a qubit can be copied without
  destroying the original.

 According to the Bekenstein bound, which is a result from quantum gravity,
 any finite sized system can only hold a finite amount of information.
 That means that it can only be in a finite number of states.  If you
 made a large enough number of systems in every possible state, you would
 be guaranteed to have one that matched the state of your target system.
 However you could not in general know which one matched it.

 Nevertheless this shows that even if consciousness is a quantum
 phenomenon, it is possible to have copies of it, at the expense of
 some waste.


This is actually another argument against QTI. There are only a finite number
of different versions of observers. Suppose a 'subjective' time evolution on
the set of all possible observers exists that is always well defined.
Suppose we start with observer O1, and under time evolution it evolves to
O2, which then evolves to O3, etc. Eventually some On will be mapped back to
O1 (if this never happened, it would contradict the fact that there are only
a finite number of O's). But mapping back to the initial state doesn't
conserve memory. You can thus only subjectively experience yourself evolving
for a finite amount of time.
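A toy illustration of the pigeonhole step (a sketch only; the integer
states and the map f are stand-ins for observer states and the
'subjective' evolution):

    # Iterate a deterministic map on a finite state set until a state
    # repeats.  With finitely many states this must happen; if the map
    # is one-to-one, the orbit is a pure cycle returning to its start.
    def find_cycle(f, start):
        seen = {}                  # state -> step at which it first appeared
        state, step = start, 0
        while state not in seen:
            seen[state] = step
            state = f(state)
            step += 1
        return seen[state], step - seen[state]   # (cycle entry, period)

    N = 1000                       # finitely many observer states
    entry, period = find_cycle(lambda s: (s * s + 1) % N, start=1)
    print(entry, period)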


Saibal



Re: Many Pasts? Not according to QM...

2005-06-03 Thread Stephen Paul King

Dear Stathis,

- Original Message - 
From: Stathis Papaioannou [EMAIL PROTECTED]

To: [EMAIL PROTECTED]; everything-list@eskimo.com
Sent: Thursday, June 02, 2005 11:55 PM
Subject: Re: Many Pasts? Not according to QM...


snip

It is true that nature is quantum mechanical rather than classical, but I 
am not aware that anyone has proved that the brain is not a classical 
computer. If it is, then it should in theory be possible to get a 
functionally equivalent copy by copying the computational state, rather 
than exactly emulating the quantum state; rather as one can transfer the 
operating system and files from one electronic computer to another, 
without copying the original machine atom for atom.


   I would not be so hasty to swallow Tegmark's argument that the brain can 
not be anything other than a classical computer: 
http://physicsweb.org/articles/news/3/7/19 But that is really not the point 
I was trying to make. As you admit, Nature is quantum mechanical, and thus 
we have to be careful about what subset of Nature is or is not classical. 
The rules are different for these two realms. When we are musing about 
copying our 1st person experience and considering the implications, are we 
merely required to copy the informational content of those 1st person 
viewpoints, like some tape recording or MP3, or are we also tacitly 
requiring the means by which those particular information structures came 
to be ordered as they are?


   We can wax Scholastic about the properties of relationships between 
numbers forever and ever, but unless our theoretics make contact with the 
tangible world and faithfully represent those aspects that we have verified 
experimentally, are we not merely generating material for the next episode 
of Sliders?


Stephen 



Re: Functionalism and People as Programs

2005-06-03 Thread Stephen Paul King

Dear Lee,

- Original Message - 
From: Lee Corbin [EMAIL PROTECTED]

To: EverythingList everything-list@eskimo.com
Sent: Friday, June 03, 2005 12:20 AM
Subject: Functionalism and People as Programs



Stephen writes

I really do not want to be a stick-in-the-mud here, but what do we base
the idea that copies could exist upon?


It is a conjecture called functionalism (or one of its close variants).
I guess the strong AI view is that the mind can be emulated on a
computer. And yes, just because many people believe this---not
surprisingly many computer scientists---does not make it true.


[SPK]

   I am aware of those ideas and they seem, at least to me, to be supported 
by an article of Faith and not any kind of empirical evidence. Maybe that is 
why I have such an allergy to the conjecture. ;-)



[LC]
An aspect of this belief is that a robot could act indistinguishably
from humans. At first glance, this seems plausible enough; certainly
many early 20th century SF writers thought it reasonable. Even Searle
concedes that such a robot could at least appear intelligent and
thoughtful to Chinese speakers.

I suspect that Turing also believed it: after all, he proposed that
a program could one day behave indistinguishably from humans. And why not,
exactly?  After all, the robot undertakes actions, performs calculations,
has internal states, and should be able to execute a repertoire as fine
as that of any human.  Unless there is some devastating reason to the
contrary.


[SPK]

   What I seem to rest my skepticism upon is the fact that in all of these 
considerations there remains, tacitly or not, the assumption that these 
internal states have an entity to whom they have a particular valuation. 
I see this expressed in the MWI, more precisely, in the relative state way 
of thinking within an overall QM multiverse. Additionally, we are still 
embroiled in debate over the sufficiency of a Turing Test to give us 
reasonable certainty to claim that we can reduce 1st person aspects to 3rd 
person ones, Searle's Chinese Room being one example.



What if I, or any one else's 1st person aspect, can not be copied?
If the operation of copying is impossible, what is the status of all
of these thought experiments?


I notice that many people seek refuge in the no-copying theorem of
QM. Well, for them, I have news: automobile travel also precludes
survival.  I can prove that to enter an automobile, drive it somewhere,
and then exit the automobile invariably changes the quantum state of
the person so reckless as to do it.


[SPK]

   Come on, Lee, you're trying to evade the argument. ;-)


[LC]
If someone can teleport me back and forth from work to home, I'll
be happy to go along even if 1 atom in every thousand cells of mine
doesn't get copied. Moreover---I am not really picky about the exact
bound state of each atom, just so long as it is able to perform the
role approximately expected of it. (That is, go ahead and remove any
carbon atom you like, and replace it by another carbon atom in a
different state.)


[SPK]

   If you care to look into teleportation as it has been researched so 
far, it has been shown that the original - the system or state of a 
system being teleported - is not copied like some Xerox of an original 
document.


http://www.research.ibm.com/quantuminfo/teleportation/

   Such copying can not be done because *all* of the information about the 
system or state would have to be measured simultaneously, and that act 
itself destroys the original. If *all* of the information is not measured, 
then one is not copying or teleporting, one is just measuring. This is not 
overly complicated!



If, and this is a HUGE if, there is some thing irreducibly quantum
mechanical to this 1st person aspect then it follows from QM that copying
is not allowed. Neither a quantum state nor a qubit can be copied without
destroying the original.


This is being awfully picky about permissible transformations. I
have even survived mild blows to the head, which have enormously
changed my quantum state.


[SPK]

   Again, you are begging the point! The impact of air molecules changes 
one's quantum state! Perhaps we are stuck on this because we are assuming a 
still-frame-by-still-frame kind of representation of the situation. The 
quantum state of a system is continuously changing; that is why there is a 
variable t in the Schroedinger equation for a wavefunction! I am commenting 
on the absurdity of copying the quantum mechanical system itself, or some 
subset or trace of it, other than that implied by the rules of QM.




falsified, by the same experiments that unassailably imply that Nature is,
at its core, Quantum Mechanical and not Classical and thus one wonders: Why
do we persist in this state of denial?


Probably for the same reason that some people continue to be Libertarians.
It's a belief thing---the way you see the world.



[SPK]

   Sure, and I hope that even Liberals can admit to errors in their beliefs 

Re: Equivalence

2005-06-03 Thread Stephen Paul King

Dear R.,

   You make a very good point, one that I was hoping to communicate but 
failed to. The notion of making copies is only coherent if and when we can 
compare the copied products to each other. Failing to be able to do this, 
what remains? Your suggestion seems to imply that precognition, coincidence 
and synchronicity are some form of resonance between decohered QM systems. 
Could it be that decoherence is not an all-or-nothing process; could it be 
that some 'parts' of a QM system decohere with respect to each other while 
others do not, and/or that decoherence might occur at differing rates 
within a QM system?


Stephen

- Original Message - 
From: rmiller [EMAIL PROTECTED]
To: Stathis Papaioannou [EMAIL PROTECTED]; 
[EMAIL PROTECTED]; everything-list@eskimo.com

Sent: Friday, June 03, 2005 1:07 AM
Subject: Equivalence



Equivalence
If the individual exists simultaneously across a many-world manifold, then 
how can one even define a copy?  If the worlds match at some points and 
differ at others, then the personality would, at a maximum, do 
likewise---though this is not necessary---or, for some perhaps, not even 
likely.  It's been long established that the inner world we navigate is an 
abstraction of the real thing---even if the real world only consists of 
one version.  If it consists of several versions, blended into one 
another, then how can we differentiate between them?  From a mathematical 
POV, 200 worlds that are absolute copies of one another are equivalent to 
one world. If these worlds differ minutely in areas *not encountered or 
interacted with* by the percipient (individual), then again we have one 
percipient, one world-equivalent.   I suspect it's not as though we're all 
run through a Xerox and distributed to countless (infinite!) places that 
differ broadly from one another.  I rather think the various worlds we 
inhabit are equivalent--and those that differ from one another do so by 
small---though perceptible---degrees.  Some parts of the many-world 
spectrum are likely equivalent and others are not.  In essence, there are 
probably zones of equivalence (your room, where there are no outside 
interferences) and zones of difference.  Even if we did manage to make the 
copies, then there would still be areas on the various prints that would 
be equivalent, i.e. the same.   Those that are different we would notice, 
and possibly tag these differences with a term: decoherence.  Perhaps that 
is all there is to it.   If this is the case, it would certainly explain a 
few things: i.e. precognition, coincidence and synchronicity.


R. Miller





Re: Equivalence

2005-06-03 Thread rmiller

At 10:23 AM 6/3/2005, Stephen Paul King wrote:

Dear R.,

   You make a very good point, one that I was hoping to communicate but 
failed to. The notion of making copies is only coherent if and when we can 
compare the copied products to each other. Failing to be able to do this, 
what remains? Your suggestion seems to imply that precognition, coincidence 
and synchronicity are some form of resonance between decohered QM systems. 
Could it be that decoherence is not an all-or-nothing process; could it be 
that some 'parts' of a QM system decohere with respect to each other while 
others do not, and/or that decoherence might occur at differing rates 
within a QM system?


Stephen


Yes, that's what I am suggesting.  The rates may remain constant---i.e., 
less than a few milliseconds (as Patrick L. earlier noted)---however, I 
suspect there is a topology where regions of decoherence coexist with and 
border regions of coherence.  An optics experiment might be able to test 
this (if it hasn't been done already), and it might be experimentally 
testable as a psychology experiment.


RM







Re: Equivalence

2005-06-03 Thread rmiller

At 11:27 AM 6/3/2005, rmiller wrote:

At 10:23 AM 6/3/2005, Stephen Paul King wrote:

Dear R.,

   You make a very good point, one that I was hoping to communicate but 
failed to. The notion of making copies is only coherent if and when we can 
compare the copied products to each other. Failing to be able to do this, 
what remains? Your suggestion seems to imply that precognition, coincidence 
and synchronicity are some form of resonance between decohered QM systems. 
Could it be that decoherence is not an all-or-nothing process; could it be 
that some 'parts' of a QM system decohere with respect to each other while 
others do not, and/or that decoherence might occur at differing rates 
within a QM system?


Stephen


Yes, that's what I am suggesting.  The rates may remain constant---i.e., 
less than a few milliseconds (as Patrick L. earlier noted)---however, I 
suspect there is a topology where regions of decoherence coexist with and 
border regions of coherence.  An optics experiment might be able to test 
this (if it hasn't been done already), and it might be experimentally 
testable as a psychology experiment.


More to the point---optical experiments in QM often return counterintuitive 
results, but they support the QM math (of course).  No one has 
satisfactorily resolved the issue of measurement to everyone's liking, but 
most would agree that in some brands of QM consciousness plays a role.  On 
one side we have Fred Alan Wolf and Sarfatti, who seem to take the qualia 
approach, while on the other side we have those like Roger Penrose who (I 
think) take a mechanical view (microtubules in the brain harbor 
Bose-Einstein condensates).   All this model-building (and discussion) is 
fine, of course, but there are a number of psychological experiments out 
there that consistently return counterintuitive and heretofore 
unexplainable results.  Among them is Helmut Schmidt's retro-pk 
experiment, which consistently returns odd results.  The PEAR lab at 
Princeton has some startling remote viewing results, and of course there's 
Rupert Sheldrake's work.   As far as I know, Sheldrake is the only one who 
has tried to create a model (morphic resonance), and most QM folks 
typically avoid discussing the experiments--except to deride them as 
nonscientific.  I think it may be time to revisit some of these ESP 
experiments to see if the results are telling us something in terms of QM, 
i.e. decoherence.   Changing our assumptions about decoherence, then 
applying the model to those strange experiments, may clarify things.


RM











Re: Many Pasts? Not according to QM...

2005-06-03 Thread Hal Finney
Saibal Mitra writes:
 This is actually another argument against QTI. There are only a finite number
 of different versions of observers. Suppose a 'subjective' time evolution on
 the set of all possible observers exists that is always well defined.
 Suppose we start with observer O1, and under time evolution it evolves to
 O2, which then evolves to O3 etc. Eventually an On will be mapped back to O1
 (if this never happened that would contradict the fact that there are only a
 finite number of O's). But mapping back to the initial state doesn't
 conserve memory. You can thus only subjectively experience yourself evolving
 for a finite amount of time.

Unless... you constantly get bigger!  Then you could escape the
limitations of the Bekenstein bound.
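
For reference, one common statement of the bound, in LaTeX form (a sketch
from the standard literature; S is the entropy, E the total energy, and R
the radius of a sphere enclosing the system):

    S \le \frac{2 \pi k_B E R}{\hbar c}

The number of distinguishable states is then at most about e^{S/k_B}, so
letting R and E grow without limit keeps lifting the cap.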

Hal Finney



Re: Equivalence

2005-06-03 Thread Jesse Mazer

rmiller wrote:


At 11:27 AM 6/3/2005, rmiller wrote:

At 10:23 AM 6/3/2005, Stephen Paul King wrote:

Dear R.,

   You make a very good point, one that I was hoping to communicate but 
failed to. The notion of making copies is only coherent if and when we can 
compare the copied products to each other. Failing to be able to do this, 
what remains? Your suggestion seems to imply that precognition, coincidence 
and synchronicity are some form of resonance between decohered QM systems. 
Could it be that decoherence is not an all-or-nothing process; could it be 
that some 'parts' of a QM system decohere with respect to each other while 
others do not, and/or that decoherence might occur at differing rates 
within a QM system?


Stephen


Yes, that's what I am suggesting.  The rates may remain constant---i.e., 
less than a few milliseconds (as Patrick L. earlier noted)---however, I 
suspect there is a topology where regions of decoherence coexist with and 
border regions of coherence.  An optics experiment might be able to test 
this (if it hasn't been done already), and it might be experimentally 
testable as a psychology experiment.


More to the point---Optical experiments in QM often return counterintuitive 
results, but they support the QM math (of course).  No one has 
satisfactorily resolved the issue of measurement to everyone's liking, but 
most would agree that in some brands of QM consciousness plays a role.  On 
one side we have Fred Alan Wolf and Sarfatti who seem to take the qualia 
approach


What do you mean by the qualia approach? Do you mean a sort of dualistic 
view of the relationship between mind and matter? From the discussion at 
http://www.fourmilab.ch/rpkp/rhett.html it seems that Sarfatti suggests some 
combination of Bohm's interpretation of QM (where particles are guided by a 
'pilot wave') with the idea of adding a nonlinear term to the Schrodinger 
equation (contradicting the existing 'QM math', which is entirely linear), 
and he identifies the pilot wave with the mind and has some hand-wavey 
notion that life involves some kind of self-organizing feedback loop between 
the pilot wave and the configuration of particles (normally Bohm's 
interpretation says the configuration of particles has no effect on the 
pilot wave, but that's where the nonlinear term comes in I guess). Since 
Bohm's interpretation is wholly deterministic, I'd think Sarfatti's altered 
version would be too; the nonlinear term shouldn't change this.


while on the other
side we have those like Roger Penrose who (I think) take a mechanical view 
(microtubules in the brain harbor Bose-Einstein condensates.)


Penrose's proposal has nothing to do with consciousness collapsing the 
wavefunction; he just proposes that when a system in superposition crosses a 
certain threshold of *mass* (probably the Planck mass), then it collapses 
automatically. The microtubule idea is more speculative, but he's just 
suggesting that the brain somehow takes advantage of not-yet-understood 
quantum gravity effects to go beyond what computers can do, but the collapse 
of superposed states in the brain would still be gravitationally-induced.


  All this model-building (and discussion) is fine, of
course, but there are a number of psychological experiments out there that 
consistently return counterintuitive and heretofore unexplainable results.  
Among them, is Helmut Schmidt's retro pk experiment which consistently 
returns odd results.  The PEAR lab at Princeton has some startling remote 
viewing results, and of course, there's Rupert Sheldrake's work.   As far 
as I know, Sheldrake is the only one who has tried to create a model 
(morphic resonance), and most QM folks typically avoid discussing the 
experiments--except to deride them as nonscientific.  I think it may be 
time to revisit some of these ESP experiments to see if the results are 
telling us something in terms of QM, i.e. decoherence.   Changing our 
assumptions about decoherence, then applying the model to those strange 
experiments may clarify things.


RM


Here's a skeptical evaluation of some of the ESP experiments you mention:

http://web.archive.org/web/20040603153145/www.btinternet.com/~neuronaut/webtwo_features_psi_two.htm

Anyway, if it were possible for the mind to induce even a slight statistical 
bias in the probability of a bit flipping 1 or 0, then simply by picking a 
large enough number of trials it would be possible to very reliably ensure 
that the majority would be the number the person was focusing on. So by 
doing multiple sets with some sufficiently large number N of trials in each 
set, it would be possible to actually send something like a 10-digit bit 
string (for example, if the majority of digits in the first N trials came up 
1, you'd have the first digit of your 10-digit string be a 1), something 
which would not require a lot of tricky statistical analysis to see was very 
unlikely to occur by chance. If the retro-PK effect you mentioned was 
real, this could even be used to reliably send information into the past!
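
A quick simulation sketch of that amplification (all numbers here are
illustrative assumptions: p is the supposed psi-induced bias, n the number
of trials per set):

    import random

    def send_digit(d, p=0.51, n=20000, rng=random.Random(42)):
        # Each trial matches the focused-on digit d with probability p;
        # the receiver reads off the majority outcome of the set.
        hits = sum(rng.random() < p for _ in range(n))
        return d if hits > n - hits else 1 - d

    message = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]     # the 10-digit bit string
    received = [send_digit(d) for d in message]
    print(received == message)  # True with high probability (~2.8 sigma/digit)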

Do things constantly get bigger?

2005-06-03 Thread Norman Samish
Hal,
Your phrase . . . constantly get bigger reminds me of Mark 
McCutcheon's The Final Theory where he revives a notion that gravity is 
caused by the expansion of atoms.
Norman




Observer-Moment Measure from Universe Measure

2005-06-03 Thread Hal Finney
Some time back Lee Corbin posed the question of which was more
fundamental: observer-moments or universes?  I would say, with more
thought, that observer-moments are more fundamental in terms of explaining
the subjective appearance of what we see, and what we can expect.
An observer-moment is really all we have as our primary experience of
the world.  The world around us may be fake; we may be in the Matrix or
a brain in a vat.  Even our memories may be fake.  But the fact that we
are having particular experiences at a particular moment cannot be faked.

But the universe is fundamental, in my view, in terms of the ontology,
the physical reality of the world.  Universes create and contain observers
who experience observer-moments.  This is the Schmidhuber/Tegmark model.
(I think Bruno Marchal may invert this relationship.)

In terms of measure, Schmidhuber (and possibly Tegmark) provides a means
to estimate the measure of a universe.  Consider the fraction of all bit
strings that create that universe as its measure.  In practice this is
roughly 1/2^n where n is the size of the shortest program that outputs
that universe.  The Tegmark model may allow for similar reasoning,
applied to mathematical structures rather than computer programs.
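
In symbols (a sketch of the standard Solomonoff-style estimate, assuming
prefix-free programs p run on a universal machine T):

    \mu(U) \;=\; \sum_{p \,:\, T(p) = U} 2^{-|p|} \;\approx\; 2^{-K(U)}

where |p| is the length of program p and K(U) is the length of the shortest
program that outputs U; that shortest program dominates the sum.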

Now, how to get from universe measure to observer-moment (OM) measure?
This is what I want to write about.

First, the measure of an OM should be the sum of contributions from
each of the universes that instantiate that OM.  Generally there are
many possible universes that may create or contain a particular OM.
Some are variants of our own, where things are different that we have
not yet observed.  For example, a universe which is like ours except
for some minor change in a galaxy billions of light years away could
contain a copy of us experiencing the same OMs.  Even bigger changes
may not matter; for example if you flip a coin but haven't yet looked
at the result, this may not change your OM.  Then there are even more
drastic universes, like The Matrix where we are living in a simulation
created in some kind of future or alien world.

Perhaps the most extreme case is a universe which only creates that OM.
Think of it as a universe which only exists for a brief moment and
which only contains a brain, or a computer or some such system, which
contains the state associated with that OM.  This is the brain in a
vat model taken to the most extreme, where there isn't anything else,
and there isn't even a vat, there is just a brain.  We would hope,
if our multiverse models are going to amount to anything, that such
universes would only contribute a small measure to each of our OMs.
Otherwise the AUH can't explain what we see.

But all of these universes contribute to the measure of our OMs.
We are living in all of them.  The measure of the OM is the sum of the
contribution from each universe.
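
In the same spirit, the claim so far can be written as (the notation is
mine, not a formula from the list):

    m(\mathrm{OM}) \;=\; \sum_{U \text{ instantiating } \mathrm{OM}} c(U, \mathrm{OM})

where c(U, OM) is whatever contribution universe U makes. The question
below is precisely what c should be; the naive choice c(U, OM) = \mu(U)
is what leads to trouble.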

However, and here is the key point, the contribution to an OM from a
universe cannot just be taken as equal to the measure of that universe.
Otherwise we reach some paradoxical conclusions.  For one thing,
a universe may instantiate a particular OM more than once.  What do
we do in that case?  For another, intuitively it might seem that the
contribution of a universe to an OM should depend to some extent on how
much of the universe's resources are devoted to that OM.  An enormous
universe which managed to instantiate a particular OM in some little
corner might be said to contribute less of its measure to that OM than
if a smaller universe instantiates the same OM.

The most extreme case is a trivial universe (equivalently, a program,
in Schmidhuber terms) which simply counts.  It outputs 1, 2, 3, 4, ...
forever.  This is a small program and has large measure.  At some point
it will output a number corresponding to any given OM.  Should we count
the entire measure of this small program (one of the smallest programs
that can exist) to this OM?  If so, it will seem that for every OM we
should assume that we exist as part of such a counting program, which
is another variant on the brain-in-a-vat scenario.  This destroys the
AUH as a predictive model.
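
For concreteness, the counting universe is about as short as programs get
(a toy rendering; the integer encoding of OMs is an assumption):

    # Emits every integer in turn, so it eventually emits any number
    # that, under some assumed encoding, represents a given OM.
    i = 0
    while True:
        i += 1
        print(i)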

Years ago Wei Dai on this list suggested a better approach.  He proposed
a formula for determining how much of a universe's measure contributes to
an OM that it instantiates.  It is very specific and also illustrates
some problems in the rather loose discussion so far.  For example,
what does it really mean to instantiate an OM?  How would we know if a
universe is really instantiating a particular OM?  Aren't there fuzzy
cases where a universe is only sort of instantiating one?  What about
the longstanding problem that you can look at the atomic vibrations in
a crystal, select a subset of them to pay attention to, and have that
pattern match the pattern of any given OM?  Does this mean that every
crystal instantiates every OM?  (Hans Moravec sometimes seems to say yes!)

To apply Wei's method, first we need to get serious about what is an OM.
We 

Re: Equivalence

2005-06-03 Thread rmiller


At 01:46 PM 6/3/2005, rmiller wrote:

(snip)


What do you mean by the qualia approach? Do you mean a sort of 
dualistic view of the relationship between mind and matter? From the 
discussion at http://www.fourmilab.ch/rpkp/rhett.html it seems that 
Sarfatti suggests some combination of Bohm's interpretation of QM (where 
particles are guided by a 'pilot wave') with the idea of adding a 
nonlinear term to the Schrodinger equation (contradicting the existing 
'QM math', which is entirely linear), and he identifies the pilot wave 
with the mind and has some hand-wavey notion that life involves some 
kind of self-organizing feedback loop between the pilot wave and the 
configuration of particles (normally Bohm's interpretation says the 
configuration of particles has no effect on the pilot wave, but that's 
where the nonlinear term comes in I guess). Since Bohm's interpretation 
is wholly deterministic, I'd think Sarfatti's altered version would be 
too, the nonlinear term shouldn't change this.



Seems to me you've described the qualia approach pretty well.




while on the other
side we have those like Roger Penrose who (I think) take a mechanical 
view (microtubules in the brain harbor Bose-Einstein condensates.)


Penrose's proposal has nothing to do with consciousness collapsing the 
wavefunction, he just proposes that when a system in superposition 
crosses a certain threshold of *mass* (probably the Planck mass), then it 
collapses automatically. The microtubule idea is more speculative, but 
he's just suggesting that the brain somehow takes advantage of 
not-yet-understood quantum gravity effects to go beyond what computers 
can do, but the collapse of superposed states in the brain would still be 
gravitationally-induced.


Penrose has a *lot* of things to say about QM---and his new book has the 
best description of fibre bundles I've seen in quite a while---but no, I 
didn't mean to suggest his entire argument was based on BECs in the 
microtubules.  I suggested Penrose because his approach seems diametrically 
opposed to the qualia guys.





  All this model-building (and discussion) is fine, of
course, but there are a number of psychological experiments out there 
that consistently return counterintuitive and heretofore unexplainable 
results.
Among them, is Helmut Schmidt's retro pk experiment which consistently 
returns odd results.  The PEAR lab at Princeton has some startling 
remote viewing results, and of course, there's Rupert Sheldrake's 
work.   As far as I know, Sheldrake is the only one who has tried to 
create a model (morphic resonance), and most QM folks typically avoid 
discussing the experiments--except to deride them as nonscientific.  I 
think it may be time to revisit some of these ESP experiments to see 
if the results are telling us something in terms of QM, i.e. 
decoherence.   Changing our assumptions about decoherence, then applying 
the model to those strange experiments may clarify things.


RM


Here's a skeptical evaluation of some of the ESP experiments you mention:

http://web.archive.org/web/20040603153145/www.btinternet.com/~neuronaut/webtwo_features_psi_two.htm

Anyway, if it were possible for the mind to induce even a slight 
statistical bias in the probability of a bit flipping 1 or 0, then simply 
by picking a large enough number of trials it would be possible to very 
reliably insure that the majority would be the number the person was 
focusing on. So by doing multiple sets with some sufficiently large 
number N of trials in each set, it would be possible to actually send 
something like a 10-digit bit string (for example, if the majority of 
digits in the first N trials came up 1, you'd have the first digit of 
your 10-digit string be a 1), something which would not require a lot of 
tricky statistical analysis to see was very unlikely to occur by chance. 
If the retro-PK effect you mentioned was real, this could even be used 
to reliably send information into the past!


I spoke with Schmidt in '96.  He told me that it is very unlikely that 
causation can be reversed, but rather that the retropk results suggest many 
worlds.


When these ESP researchers are able to do a straightforward demonstration 
like this, that's when I'll start taking these claims seriously, until 
then extraordinary claims require extraordinary evidence.


The extraordinary-claims-require-extraordinary-evidence rule is good 
practical guidance, but it's crummy science.  Why should new results require 
an astronomical Z score, when proven results need only a Z of 1.96?  Think 
about the poor fellow who discovered that ulcers were caused by Helicobacter 
pylori---it took ten years for science to take him seriously, and then 
only after he drank a vial of H. pylori broth himself.   Then there's the 
fellow at U of I (Ames) who believed that Earth is being pummeled by 
snowballs--as big as houses--from space.  He was thoroughly derided (some 
demanded he be fired) for ten years or so---until a UV 

Re: Do things constantly get bigger?

2005-06-03 Thread rmiller

At 01:28 PM 6/3/2005, Norman Samish wrote:

Hal,
Your phrase . . . constantly get bigger reminds me of Mark
McCutcheon's The Final Theory where he revives a notion that gravity is
caused by the expansion of atoms.
Norman


That's the excuse I use.
RM









Re: Equivalence

2005-06-03 Thread Jesse Mazer


rmiller wrote:



At 01:46 PM 6/3/2005, rmiller wrote:

(snip)


What do you mean by the qualia approach? Do you mean a sort of 
dualistic view of the relationship between mind and matter? From the 
discussion at http://www.fourmilab.ch/rpkp/rhett.html it seems that 
Sarfatti suggests some combination of Bohm's interpretation of QM (where 
particles are guided by a 'pilot wave') with the idea of adding a 
nonlinear term to the Schrodinger equation (contradicting the existing 
'QM math', which is entirely linear), and he identifies the pilot wave 
with the mind and has some hand-wavey notion that life involves some 
kind of self-organizing feedback loop between the pilot wave and the 
configuration of particles (normally Bohm's interpretation says the 
configuration of particles has no effect on the pilot wave, but that's 
where the nonlinear term comes in I guess). Since Bohm's interpretation 
is wholly deterministic, I'd think Sarfatti's altered version would be 
too, the nonlinear term shouldn't change this.



Seems to me you've described the qualia approach pretty well.


But why do you call it that? It seems like it's just a philosophical add-on 
to interpret the pilot wave as mind and the particles guided by it as 
matter. Even if Sarfatti's nonlinear QM theory were correct, and the idea 
that life depends on a self-organizing feedback loop between the pilot wave 
and particles could get beyond the pure hand-wavey stage (both of which seem 
very unlikely), there'd be no obligation to interpret the pilot wave in 
terms of mind/qualia.







while on the other
side we have those like Roger Penrose who (I think) take a mechanical 
view (microtubules in the brain harbor Bose-Einstein condensates.)


Penrose's proposal has nothing to do with consciousness collapsing the 
wavefunction, he just proposes that when a system in superposition 
crosses a certain threshold of *mass* (probably the Planck mass), then it 
collapses automatically. The microtubule idea is more speculative, but 
he's just suggesting that the brain somehow takes advantage of 
not-yet-understood quantum gravity effects to go beyond what computers 
can do, but the collapse of superposed states in the brain would still be 
gravitationally-induced.


Penrose has a *lot* of things to say about QM---and his new book has the 
best description of fibre bundles I've seen in quite a while---but no, I 
didn't mean to suggest his entire argument was based on BECs in the 
microtubules.  I suggested Penrose because his approach seems diametrically 
opposed to the qualia guys.


But you brought him up in the context of the "consciousness plays a critical 
role in understanding QM" idea, when Penrose doesn't fall into this camp at 
all.







  All this model-building (and discussion) is fine, of
course, but there are a number of psychological experiments out there 
that consistently return counterintuitive and heretofore unexplainable 
results.
Among them, is Helmut Schmidt's retro pk experiment which consistently 
returns odd results.  The PEAR lab at Princeton has some startling 
remote viewing results, and of course, there's Rupert Sheldrake's 
work.   As far as I know, Sheldrake is the only one who has tried to 
create a model (morphic resonance), and most QM folks typically avoid 
discussing the experiments--except to deride them as nonscientific.  I 
think it may be time to revisit some of these ESP experiments to see 
if the results are telling us something in terms of QM, i.e. 
decoherence.   Changing our assumptions about decoherence, then applying 
the model to those strange experiments may clarify things.


RM


Here's a skeptical evaluation of some of the ESP experiments you mention:

http://web.archive.org/web/20040603153145/www.btinternet.com/~neuronaut/webtwo_features_psi_two.htm

Anyway, if it were possible for the mind to induce even a slight 
statistical bias in the probability of a bit flipping 1 or 0, then simply 
by picking a large enough number of trials it would be possible to very 
reliably insure that the majority would be the number the person was 
focusing on. So by doing multiple sets with some sufficiently large 
number N of trials in each set, it would be possible to actually send 
something like a 10-digit bit string (for example, if the majority of 
digits in the first N trials came up 1, you'd have the first digit of 
your 10-digit string be a 1), something which would not require a lot of 
tricky statistical analysis to see was very unlikely to occur by chance. 
If the retro-PK effect you mentioned was real, this could even be used 
to reliably send information into the past!


I spoke with Schmidt in '96.  He told me that it is very unlikely that 
causation can be reversed, but rather that the retropk results suggest many 
worlds.


But that is presumably just his personal intuition, not something that's 
based on any experimental data (like getting a message from a possible 
future or alternate world, for 

Re: Equivalence

2005-06-03 Thread rmiller

At 04:40 PM 6/3/2005, rmiller wrote:

At 03:25 PM 6/3/2005, you wrote:



(snip)
I spoke with Schmidt in '96.  He told me that it is very unlikely that 
causation can be reversed, but rather that the retropk results suggest 
many worlds.


But that is presumably just his personal intuition, not something that's 
based on any experimental data (like getting a message from a possible 
future or alternate world, for example).


Actually, he couldn't say why the result came out the way it did.  His 
primary detractor back then was Henry Stapp---whom Schmidt invited to take 
part in the experiment, after which Stapp modified his views somewhat.





When these ESP researchers are able to do a straightforward 
demonstration like this, that's when I'll start taking these claims 
seriously, until then extraordinary claims require extraordinary evidence.

(snip)


The issue is not the Z score in isolation, it's 1) whether we trust that 
the correct statistical analysis has been done on the data to obtain that 
Z score (whether reporting bias has been eliminated, for example)--that's 
why I suggested the test of trying to transmit a 10-digit number using 
ESP, which would be a lot more transparent--and 2) whether we trust that 
the possibility of cheating has been kept small enough, which as the 
article I linked to suggested, may not have been met in the PEAR results:



Suspicions have hardened as sceptics have looked more closely at the 
fine detail of Jahn's results. Attention has focused on the fact that one 
of the experimental subjects - believed actually to be a member of the 
PEAR lab staff - is almost single-handedly responsible for the 
significant results of the studies. It was noted as long ago as 1985, in 
a report to the US Army by a fellow parapsychologist, John Palmer of 
Durham University, North Carolina, that one subject - known as operator 
10 - was by far the best performer. This trend has continued. On the most 
recently available figures, operator 10 has been involved in 15 percent 
of the 14 million trials yet contributed a full half of the total excess 
hits. If this person's figures are taken out of the data pool, scoring in 
the low intention condition falls to chance while high intention 
scoring drops close to the .05 boundary considered weakly significant in 
scientific results.


First, you're right about that set of the PEAR results, but operator 10 was 
involved in the original anomalies experiments---she was not involved in 
the remote viewing (as I understand).  But p < 0.05 is weakly 
significant?  Hm. It was good enough for Fisher. . . it's good enough for 
the courts (Daubert).



Sceptics like James Alcock and Ray Hyman say naturally it is a serious 
concern that PEAR lab staff have been acting as guinea pigs in their own 
experiments. But it becomes positively alarming if one of the staff - 
with intimate knowledge of the data recording and processing procedures - 
is getting most of the hits.


I agree, but again, I don't think Operator 10 was involved in all the 
experiments. Have any of these skeptics tried to replicate?  I believe Ray 
Hyman is an Oregon State English Prof, so he probably couldn't replicate 
some of the PEAR lab work, but surely there are others who could.



Alcock says (snip) . . . distort Jahn's results. 


If Hyman and Alcock believe Jahn et al were cheating, then they shouldn't 
mince words; instead, they should file a complaint with Princeton.




Of course, both these concerns would be present in any statistical test, 
even one involving something like the causes of ulcers like in the quote 
you posted above, but here I would use a Bayesian approach and say that 
we should start out with some set of prior probabilities, then update 
them based on the data. Let's say that in both the tests for ulcer causes 
and the tests for ESP our estimate of the prior probability for either 
flawed statistical analysis or cheating on the part of the experimenters 
is about the same. But based on what we currently know about the way the 
world works, I'd say the prior probability of ESP existing should be far, 
far lower than the prior probability that ulcers are caused by bacteria. 
It would be extremely difficult to integrate ESP into what we currently 
know about the laws of physics and neurobiology. If someone can propose a 
reasonable theory of how it could work without throwing everything else 
we know out the window, then that could cause us to revise these priors 
and see ESP as less of an extraordinary claim, but I don't know of any 
good proposals (Sarfatti's seems totally vague on the precise nature of 
the feedback loop between the pilot wave and particles, for example, and 
on how this would relate to ESP phenomena...if he could provide a 
mathematical model or simulation showing how a simple brain-like system 
could influence the outcome of random quantum events in the context of 
his theory, then it'd be a different story).
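
A toy version of the Bayesian point above (every number here is a made-up
assumption, shown only to illustrate the mechanics): the same Bayes factor
moves a modest prior to near-certainty while leaving a tiny prior tiny.

    def posterior(prior, bayes_factor):
        # Convert the prior probability to odds, multiply by the
        # likelihood ratio of the evidence, convert back to a probability.
        odds = (prior / (1.0 - prior)) * bayes_factor
        return odds / (1.0 + odds)

    bf = 100.0                     # assumed strength of the reported data
    print(posterior(0.10, bf))     # ulcers-from-bacteria: 0.10 -> ~0.92
    print(posterior(1e-6, bf))     # ESP: 0.000001 -> ~0.0001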


A couple of 

Questions on Russell's Why Occam paper

2005-06-03 Thread Hal Finney
Russell Standish recently mentioned his paper Why Occam's Razor which
can be found at http://parallel.hpc.unsw.edu.au/rks/docs/occam/ .  Among
other things he aims to derive quantum mechanics from a Schmidhuber type
ensemble.  I have tried to read this paper but never really understood it.
Here I will try to ask some questions, taking it slowly.

On this page, http://parallel.hpc.unsw.edu.au/rks/docs/occam/node2.html ,
things get started.  Russell describes a set of infinite bit strings he
calls descriptions.  He writes:

By contrast to Schmidhuber, I assume a uniform measure over these
descriptions -- no particular string is more likely than any other.

This surprises me.  I thought that Schmidhuber assumed a uniform measure
over bit strings considered as programs for his universal computer.  So
what is the contrast to his work?

It seems that the greater contrast is that while Schmidhuber assumed that
the bit strings would be fed into a computer that would produce outputs,
Russell is taking the bit strings directly as raw data.

But I am confused about their role.

Since some of these descriptions describe self aware substructures...

Whoa!  This is a big leap for me.  First, I am not too happy that mere bit
strings have been elevated with the title descriptions.  A bit string on
its own doesn't seem to have the inherent meaning necessary for it to be
considered a description.  And now we find not only that the bit string is
a description, but it is a complex enough description to describe SAS's?
How does that work?

It's especially confusing to read the introductory word since as though
this is all quite obvious and need not be explained.  To me it is very
confusing.

The page goes on to identify these SAS's as observers.  Now they are mappings,
or equivalently Turing Machines, which map finite bit strings to integers.
These integers are the meanings of the bit strings.

I believe the idea here is that the bit strings are taken as prefixes
of the description bit strings in the ensemble.  It is as though the
observers are observing the descriptions a bit at a time, and mapping
them to a sequence of integer meanings.  Is that correct?

So here is another confusion about the role of the description bit
strings in the model.  Are they things that observer TM's observe and
map to integers?  Or are they places where observers live, as suggested
by the Since line quoted above?  Or both?

Now it gets a little more complicated: Under the mapping O(x), some
descriptions encode for identical meanings as other descriptions, so
one should equivalence class the descriptions.

The problem I have is, O takes only finite bit strings.  So technically
a description, which is an infinite bit string, does not encode a
meaning.  What I think is meant here, though, is that two descriptions
(i.e. infinite bit strings) will be considered equivalent if for every
finite prefix of the strings, the O() mapping is the same.  So if we
think of O as observing the description bit strings one by one,
it will go through precisely the same sequence of integer meanings
in each case.  Is that right?

In particular, strings where the bits after some bit number n are
``don't care'' bits, are in fact equivalence classes of all strings that
share the first n bits in common.

I think what this considers is a special O() and a special string prefix
such that if O sees that particular n-bit prefix, all extensions of
that prefix get mapped to the same meaning integer.  In that case the
condition described in my previous paragraph would be met, and all
strings with this n-bit prefix would be equivalent.

One can see that the size of the equivalence class drops off
exponentially with the amount of information encoded by the string.

That seems a little questionable because the size of the equivalence class
is infinite in all cases.  However I think Russell means to use a uniform
measure where the collection of all strings with a particular n-bit
prefix have a measure of 1/2^n.  It's not clear how well this measure
really works or whether it applies to all sets of infinite strings.

Under O(x), the amount of information is not necessarily equal to the
length of the string, as some of the bits may be redundant.

Now we have this new concept of the amount of information which has
not previously been defined.  This sentence is really hard for me.
What does it mean for bits to be redundant?  We just discussed strings
where all those after bit n are don't care, but this sentence seems
to be envisioning other kinds of redundancies.

The sum P_O(s) = [sum over p such that O(p)=s of] 2^(-|p|)
where |p| means the number of bits of p consumed by O in returning s,
gives the size of the equivalence class of all descriptions having
meaning s.
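
In LaTeX form, under the prefix reading just described:

    P_O(s) \;=\; \sum_{p \,:\, O(p) = s} 2^{-|p|}

with |p| the number of bits of p that O consumes before returning s.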

Boy, that's a tough one now.  We consider all bit strings p such that O(p)
= s.  Now, is this supposed to just be those cases described earlier where
the bits after |p| are don't care bits?  Or is it all strings p such
that 

Re: Many Pasts? Not according to QM...

2005-06-03 Thread Stathis Papaioannou

Saibal Mitra writes:


 Stephen Paul King writes:
  I really do not want to be a stick-in-the-mud here, but what do we base
  the idea that copies could exist upon? What if I, or any one else's 1st
  person aspect, can not be copied? If the operation of copying is
  impossible, what is the status of all of these thought experiments?
  If, and this is a HUGE if, there is some thing irreducibly quantum
  mechanical to this 1st person aspect then it follows from QM that copying
  is not allowed. Neither a quantum state nor a qubit can be copied without
  destroying the original.

 According to the Bekenstein bound, which is a result from quantum gravity,
 any finite sized system can only hold a finite amount of information.
 That means that it can only be in a finite number of states.  If you
 made a large enough number of systems in every possible state, you would
 be guaranteed to have one that matched the state of your target system.
 However you could not in general know which one matched it.

 Nevertheless this shows that even if consciousness is a quantum
 phenomenon, it is possible to have copies of it, at the expense of
 some waste.


This is actually another argument against QTI. There are only a finite number
of different versions of observers. Suppose a 'subjective' time evolution on
the set of all possible observers exists that is always well defined.
Suppose we start with observer O1, and under time evolution it evolves to
O2, which then evolves to O3, etc. Eventually some On will be mapped back to
O1 (if this never happened, it would contradict the fact that there are only
a finite number of O's). But mapping back to the initial state doesn't
conserve memory. You can thus only subjectively experience yourself evolving
for a finite amount of time.


This is Nietzsche's eternal return argument. One response is to note that 
most people would be more than satisfied with the prospect that they will 
experience everything a human being can possibly experience, even though 
this is not actually immortality; and the information processing limit of a 
human brain is far, far smaller than the theoretical limit imposed by the 
Bekenstein bound. Another response is that the universe may actually 
contain an infinite amount of matter which can therefore be used to process 
an infinite amount of information. If the universe we see is finite, then 
there will always be another parallel universe somewhere which is larger, 
and another one which is larger than that, and so on to infinity. Finally, 
if we don't actually need a physical computer to process information, the 
resources of Platonia are of course infinite.


Literal immortality, without repetition of mental states, at its limit would 
give us the cognitive capacity of God.


--Stathis Papaioannou





RE: Equivalence

2005-06-03 Thread Brent Meeker
-Original Message-
From: rmiller [mailto:[EMAIL PROTECTED]
Sent: Friday, June 03, 2005 4:59 PM
To: Stephen Paul King; everything-list@eskimo.com
Subject: Re: Equivalence


At 11:27 AM 6/3/2005, rmiller wrote:
At 10:23 AM 6/3/2005, Stephen Paul King wrote:
Dear R.,

You make a very good point, one that I was hoping to communicate but
 failed to. The notion of making copies is only coherent if and when we can
 compare the copied products to each other. Failing to be able to do this,
 what remains? Your suggestion seems to imply that precognition,
 coincidence and synchronicity are some form of resonance between
 decohered QM systems. Could it be that decoherence is not an all-or-nothing
 process; could it be that some 'parts' of a QM system decohere
 with respect to each other while others do not, and/or that decoherence
 might occur at differing rates within a QM system?

Stephen

Yes, that's what I am suggesting.  The rates may remain constant---i.e.,
less than a few milliseconds (as Patrick L. earlier noted)---however, I
suspect there is a topology where regions of decoherence coexist with and
border regions of coherence.

Coherence is assumed to always remain, since QM evolution is unitary.  It just
gets entangled with the environment so there is no practical way to detect it.
Just google "decoherence Zeh".

An optics experiment might be able to test
this (if it hasn't been done already), and it might be experimentally
testable as a psychology experiment.\\

More to the point---Optical experiments in QM often return counterintuitive
results, but they support the QM math (of course).  No one has
satisfactorily resolved the issue of measurement to everyone's liking, but
most would agree that in some brands of QM consciousness plays a role.  On
one side we have Fred Alan Wolf and Sarfatti who seem to take the qualia
approach, while on the other side we have those like Roger Penrose who (I
think) take a mechanical view (microtubules in the brain harbor
Bose-Einstein condensates.)   All this model-building (and discussion) is
fine, of course, but there are a number of psychological experiments out
there that consistently return counterintuitive and heretofore
unexplainable results.

And they consistently fail when someone tries to replicate them or have them
performed so as to eliminate fraud and self-deception.

Brent Meeker




Re: Functionalism and People as Programs

2005-06-03 Thread Stathis Papaioannou

R. Miller writes (quoting Lee Corbin):


If someone can teleport me back and forth from work to home, I'll
be happy to go along even if 1 atom in every thousand cells of mine
doesn't get copied.


Exposure to a nuclear detonation at 4000 yds typically kills about 1 in a 
million cells.  When that happens, you die.   I would suggest that is a bad 
metaphor.


Losing one atom in every thousand cells is not the same as losing the cell 
itself. Cells are a constant work in progress. Bits fall off, transcription 
errors occur in the process of making proteins, radiation or noxious 
chemicals damage subcellular components, and so on. The machinery of the 
cell is constantly at work repairing all this damage. It is like a building 
project where the builders only just manage to keep up with the wreckers. 
Eventually, errors accumulate or the blueprints are corrupted and the cell 
dies. Taking the organism as a whole, the effect of all this activity is 
like the ship of Theseus: over time, even though it looks like the same 
organism, almost all the matter in it has been replaced.


--Stathis Papaioannou





RE: Do things constantly get bigger?

2005-06-03 Thread Brent Meeker
You are constantly getting bigger.  Photons emitted from you, and hence
entangled with your atomic states, form an shell expanding at the speed of
light.  Eventually beings on other planets will be able to see you via these
photons.

Brent Meeker






Re: Functionalism and People as Programs

2005-06-03 Thread rmiller


At 10:58 PM 6/3/2005, you wrote:

R. Miller writes (quoting Lee Corbin):


If someone can teleport me back and forth from work to home, I'll
be happy to go along even if 1 atom in every thousand cells of mine
doesn't get copied.


Exposure to a nuclear detonation at 4000 yds typically kills about 1 in a 
million cells.  When that happens, you die.   I would suggest that is a 
bad metaphor.


Losing one atom in every thousand cells is not the same as losing the cell 
itself. Cells are a constant work in progress. Bits fall off, 
transcription errors occur in the process of making proteins, radiation or 
noxious chemicals damage subcellular components, and so on. The machinery 
of the cell is constantly at work repairing all this damage. It is like a 
building project where the builders only just manage to keep up with the 
wreckers. Eventually, errors accumulate or the blueprints are corrupted 
and the cell dies. Taking the organism as a whole, the effect of all this 
activity is like the ship of Theseus: over time, even though it looks like 
the same organism, almost all the matter in it has been replaced.


That's correct, of course.  I'm finishing up a book on nuclear fallout, 
and most of my selves were obviously immersed in radiation issues rather 
than simple mathematics.  Sorry.



RM