Re: Functionalism and People as Programs

2005-06-07 Thread Bruno Marchal


On 05-Jun-05, at 19:45, Lee Corbin wrote:


Bruno provides the exercise


I notice that many people seek refuge in the "no-copying" theorem of
QM.


Exercise: 1) Show by a qualitative informal reasoning that if we are
Turing emulable then a no-cloning theorem is a necessity.


My best guess right now?  Your challenge would be a futile exercise
in word play!  "no-cloning" involves quantum mechanics.




I don't think QM has a copyright on no-copy theorems!
I was careful enough to say "a" no-cloning theorem, and not "the"
Zurek-Wootters-Dieks no-cloning theorem. In France there is a law against
the cloning of embryos. What are you saying?






So far as
I know, computability theory (e.g. Rogers 1967) says ABSOLUTELY
NOTHING about quantum mechanics, and they are in two completely
different intellectual domains.




Why, then, does David Deutsch's FOR book mention comp/Turing as an important
thread of the book? Everett uses comp in his derivation of the appearance of
collapse. I'm glad you mention Rogers' classic introduction to computability
theory, which is excellent, but it is a book on pure computability theory,
not on applied computability theory. Since when is it forbidden to apply one
field to another? I really don't understand your remark.


All I am saying is that someone who understands the 8 UDA steps should
easily be able to convince him/herself that whatever matter could *be*,
it cannot be cloned. This can be shown in a short sentence; it is
almost trivial (once the UDA is thoroughly understood, which is probably
less trivial, and from your conversation with Stathis I can infer you
have trouble with step 3, in the SANE paper:
http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract.html ).







Let me know when the newspapers announce that you've derived
QM from computability theory.



I have only shown that if comp is true then QM is derivable from comp
theory.

And I have indeed derived a little bit of QM.




Or any blasted physics equation
whatsoever. (I'm sorry that I have neither the time nor the
expertise to digest your technical papers.)




Except for step 7, which needs a passive understanding of Church's
thesis, there is nothing technical in the UDA reasoning. I agree the
interview of the machine on the UDA is more difficult, because it
presupposes some background in logic and computability theory. But that
part is not necessary for understanding the UDA (Universal Dovetailer
Argument), which gives the main result.







In another thread, Bruno wrote


This is the central problem from those who are deeply concerned as
to *why* 1st person experiences exist.  Too bad that to me, it's
just obvious that they must.  I literally cannot conceive of how
it could be different!  (Poor me, I suppose---in some ways some
of us just have too little imagination, I truly guess.)



The problem is not so much "why" 1-person experiences exist, but how
they are related to 3-person descriptions, and which one.
How do *you* explain the relation?


If I were a great novelist, I might be able to convey certain 1st
person experiences to you (but that is possible *only* because the
two organisms Bruno and Lee are so similar).  But I'm not a great
novelist, and so I can't.




Here you make the category error which is made by so many physicalists
or materialists. I am not asking you to convey some of your first
person experience (well, actually, with your "aargh" and other humbug you
do succeed, but that's beside the point :).
But at least you accept the existence of those 1-experiences (unlike the
materialist eliminativists à la Churchland). My question is: how do you
relate them to third-person describable things?







Therefore, we can only talk about what is in the world, from tables
to trees to mountains and stars.



Ah? This is such an ambiguous statement that I cannot comment on it.



*People* occupy an infinitesimal
portion of what's out there. The maze of internal events which make
each one of them feel and think is interesting, but is a very difficult
physiological problem.



It is a physiological problem once you are both a computationalist and a
physicalist, but the UDA shows those two options are incompatible. Please
tell me where I am wrong. If you want, we can go step by step with
little posts. I am afraid you take for granted, perhaps unconsciously,
Aristotle's theory of mind and substance. It just does not work with
comp.





I claim that it has nothing to do with serious
philosophy,



You talk of serious philosophy, yet in some of your posts you are
explicitly against the use of definitions and postulates (axioms). How
could we progress? Besides, I don't believe in any clear boundary between
science and philosophy. Those are, imo, purely conventional constructs. In
France and Belgium philosophy is classed with literature.






and is just a hideous distraction, possibly stemming from
confusion at the semantic level and disturbed sr. Bad epistemology,
in a phrase.



I could agree if

RE: Functionalism and People as Programs

2005-06-05 Thread Lee Corbin
Bruno provides the exercise

> > I notice that many people seek refuge in the "no-copying" theorem of
> > QM.
> 
> Exercise: 1) Show by a qualitative informal reasoning that if we are 
> Turing emulable then a no-cloning theorem is a necessity.

My best guess right now?  Your challenge would be a futile exercise
in word play!  "no-cloning" involves quantum mechanics. So far as
I know, computability theory (e.g. Rogers 1967) says ABSOLUTELY
NOTHING about quantum mechanics, and they are in two completely
different intellectual domains.

Let me know when the newspapers announce that you've derived
QM from computability theory.  Or any blasted physics equation
whatsoever. (I'm sorry that I have neither the time nor the
expertise to digest your technical papers.)

In another thread, Bruno wrote

> > This is the central problem from those who are deeply concerned as
> > to *why* 1st person experiences exist.  Too bad that to me, it's
> > just obvious that they must.  I literally cannot conceive of how
> > it could be different!  (Poor me, I suppose---in some ways some
> > of us just have too little imagination, I truly guess.)

> The problem is not so much "why" 1-person experiences exist, but how
> they are related to 3-person descriptions, and which one.
> How do *you* explain the relation?

If I were a great novelist, I might be able to convey certain 1st
person experiences to you (but that is possible *only* because the
two organisms Bruno and Lee are so similar).  But I'm not a great
novelist, and so I can't.

Therefore, we can only talk about what is in the world, from tables
to trees to mountains and stars. *People* occupy an infinitesimal
portion of what's out there. The maze of internal events which make
each one of them feel and think is interesting, but is a very difficult
physiological problem.  I claim that it has nothing to do with serious
philosophy, and is just a hideous distraction, possibly stemming from
confusion at the semantic level and disturbed sr. Bad epistemology,
in a phrase.

THERE ISN'T A PROBLEM!  (Yes, okay, to computer scientists and
physiologists there is, but not to philosophers or others interested
in getting their ontology and epistemology straight.) The evolved
creatures all have their responses and their internal workings;
and that's *all* there is to it!

I am an evolved creature; and if I can't understand that the same
conclusions apply just as much to me as to the creatures I study,
then I'm yielding to nonsense.

Lee



Re: Existence of Copies (was RE: Functionalism and People as Programs)

2005-06-05 Thread Bruno Marchal


On 05-Jun-05, at 01:04, Lee Corbin wrote:


This is the central problem from those who are deeply concerned as
to *why* 1st person experiences exist.  Too bad that to me, it's
just obvious that they must.  I literally cannot conceive of how
it could be different!  (Poor me, I suppose---in some ways some
of us just have too little imagination, I truly guess.)


The problem is not so much "why" 1-person experiences exist, but how
they are related to 3-person descriptions, and to which one.
How do *you* explain the relation?

Bruno



http://iridia.ulb.ac.be/~marchal/




Re: Functionalism and People as Programs

2005-06-05 Thread Bruno Marchal


On 03-Jun-05, at 06:20, Lee Corbin wrote:



[Stephen:] What if "I", or any one else's 1st person aspect, can not 
be copied?

If the operation of copying is impossible, what is the status of all
of these thought experiments?


I notice that many people seek refuge in the "no-copying" theorem of
QM.



Exercise: 1) Show by a qualitative informal reasoning that if we are 
Turing emulable then a no-cloning theorem is a necessity. Show more 
precisely that IF I am duplicable at some description level THEN if I 
observe myself below that substitution level I will discover that I am 
made from "object" relying on an infinity (a continuum) of information 
states/histories (hardly duplicable "stuff"). Hint: (re)read the UDA.


Exercise 2) (For Stephen :) Show that the 1-person is not 1-duplicable, 
show that it is not even 1-nameable. (This too can be done by qualitative 
informal reasoning, but it is also beautifully obtainable with G and G*, 
S4Grz, ...)


Bruno


http://iridia.ulb.ac.be/~marchal/




Existence of Copies (was RE: Functionalism and People as Programs)

2005-06-04 Thread Lee Corbin
Stephen writes

> > Stephen writes
> >
> > > I really do not want to be a stick-in-the-mud here, but
> > > what do we base the idea that "copies" could exist upon?

Don't worry about not going along with someone's program ;-)
I think that you're just being polite by calling yourself
a stick-in-the-mud.  Why, if I had to interpose such a disclaimer
every time that I was stubborn and mule-headed, 90% of my posts
would consist of nothing but apologies!!  :-)

To prevent most of us from feeling inadequate, you should suppress
some of your southern politeness in these discussions  :-)

> > It is a conjecture called "functionalism" (or one of its close variants).
> > I guess the "strong AI" view is that the mind can be emulated on a
> > computer. And yes, just because many people believe this---not
> > surprisingly many computer scientists---does not [necessarily]
> > make it true.   [though I myself (Lee and his copies) believe it]
>
> [SPK]
>
> I am aware of those ideas and they seem, at least to me, to be supported
> by an article of Faith and not any kind of empirical evidence. Maybe that is
> why I have such an allergy to the conjecture. ;-)

Well for Pete's sake!  Of *course* there is some faith here---as
you wryly note, you yourself are hardly exempt from indulging in
a little (or a lot) of speculation. What you have written is not
even an argument. Whereas what Brent Meeker wrote

"I think there is considerable evidence to
support the view that human level intelligence
could be achieved by a (non-quantum) computer
and that human intelligence and consciousness
are dependent on brain processes; e.g. see the
many studies of brain damaged patients.  Also,
I think it is well established that consciousness
corresponds to only a small part of the information
processing in the brain."

definitely constitutes a strong argument, even if from your point
of view it does not constitute evidence.  (Thanks, Brent!)

> >[LC]
> > An aspect of this belief is that a robot could act indistinguishably
> > from humans. At first glance, this seems plausible enough; certainly
> > many early 20th century SF writers thought it reasonable. Even Searle
> > concedes that such a robot could at least appear intelligent and
> > thoughtful to Chinese speakers.
> >
> > I suspect that Turing also believed it: after all, he proposed that
> > a program one day behave indistinguishably from humans. And why not,
> > exactly?  After all, the robot undertakes actions, performs calculations,
> > has internal states, and should be able to execute a repertoire as fine
> > as that of any human.  Unless there is some devastating reason to the
> > contrary.
>
> [SPK]
>
> What I seem to rest my skepticism upon is the fact that in all of these
> considerations there remains, tacitly or not, the assumption that these
> "internal states" have an entity "to whom" they have a particular valuation.

This is the central problem from those who are deeply concerned as
to *why* 1st person experiences exist.  Too bad that to me, it's
just obvious that they must.  I literally cannot conceive of how
it could be different!  (Poor me, I suppose---in some ways some
of us just have too little imagination, I truly guess.)

> I see this expressed in the MWI, more precisely, in the "relative state" way
> of thinking within an overall QM multiverse.

Okay; On closer reading, I think that you are talking about
the way that many people cannot stand MWI because it seems
to require that they observe both outcomes of an experiment.

> Additionally, we are still embroiled in debate over the
> sufficiency of a Turing Test to give us reasonable certainty
> to claim that we can reduce 1st person aspects from 3rd
> person, Searle's Chinese Room being one example.
>
> >> What if "I", or any one else's 1st person aspect, can not be copied?
> >> If the operation of copying is impossible, what is the status of all
> >> of these thought experiments?
> >
> > I notice that many people seek refuge in the "no-copying" theorem of
> > QM. Well, for them, I have that automobile travel also precludes
> > survival.  I can prove that to enter an automobile, drive it somewhere,
> > and then exit the automobile invariably changes the quantum state of
> > the person so reckless as to do it.
>
> [SPK]
>
> Come on, Lee, your trying to evade the argument. ;-)

Am not!  If the shoe doesn't fit, then don't wear it. I thought
(mistakenly, it appears) that you were seeking refuge in the
no-clone QM theorem. Sorry for the misattribution. What you
are saying---PLEASE CORRECT ME IF I AM WRONG---is that copying
just might not be possible at all.  But I don't understand!

Surely you admit that it is conceivable that a machine might
scan your brain and body and create a duplicate. (As I say,
it doesn't have to be **exact**.)  But didn't you see it happen many
times on Star Trek?  Or were you in the other room (as I often
was) vi

RE: Functionalism and People as Programs

2005-06-04 Thread rmiller

At 12:36 PM 6/4/2005, Lee Corbin wrote:

R. Miller writes

> Lee Corbin wrote:

>
> Exposure to a nuclear detonation at 4000 yds typically kills about 1 in a
> million cells.  When that happens, you die.   I would suggest that is a bad
> metaphor.

Well, my numbers, above, are *entirely* different from yours. One in a million
cells is a *terrible* loss. But one atom?  There are 10^14 atoms per cell.
(And 10^14 cells in a typical human.)  I would stick with my numbers.
But in case you are somehow right, and that each cell would be wrecked
by the loss of a single atom, my point can be made by relaxing the
numbers:  replace what I've written by "I'll be happy to teleport even
if 100 trillion atoms are destroyed: a whole cell, gone".


Lee,
As I indicated earlier, I was out to lunch on that one-in-a-million 
cells/atoms deal.  As I understand it, one cell killed out of a million is 
lethal, however.



R.





RE: Functionalism and People as Programs

2005-06-04 Thread Lee Corbin
R. Miller writes

> Lee Corbin wrote:
> >Stephen writes
> >
> > > I really do not want to be a stick-in-the-mud here,
> > > but what do we base the idea that "copies" could
> > > exist upon?
> >
> > It is a conjecture called "functionalism" (or one of its close variants).
> 
> "Functionalism," at least, in the social sciences refers to the proposition 
> that everything exists because it has a function (use).

Well, that is *not* at all the meaning of the term in philosophy. To
put it simply, "if it behaves like a duck in every particular, it is
a duck".

> I notice that many people seek refuge in the "no-copying" theorem of
> QM. Well, for them, I have that automobile travel also precludes
> survival.  I can prove that to enter an automobile, drive it somewhere,
> and then exit the automobile invariably changes the quantum state of
> the person so reckless as to do it.
> 
> >If someone can teleport me back and forth from work to home, I'll
> >be happy to go along even if 1 atom in every thousand cells of mine
> >doesn't get copied.
> 
> Exposure to a nuclear detonation at 4000 yds typically kills about 1 in a 
> million cells.  When that happens, you die.   I would suggest that is a bad 
> metaphor.

Well, my numbers, above, are *entirely* different from yours. One in a million
cells is a *terrible* loss. But one atom?  There are 10^14 atoms per cell.
(And 10^14 cells in a typical human.)  I would stick with my numbers.
But in case you are somehow right, and that each cell would be wrecked
by the loss of a single atom, my point can be made by relaxing the
numbers:  replace what I've written by "I'll be happy to teleport even
if 100 trillion atoms are destroyed: a whole cell, gone".
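
As a quick sanity check on the round numbers above (taking 10^14 atoms per
cell and 10^14 cells per body as given, which they are only approximately),
here is a small Python sketch of the arithmetic; the figures are the post's,
the script is only an illustration:

atoms_per_cell = 1e14            # "There are 10^14 atoms per cell."
cells_per_human = 1e14           # "(And 10^14 cells in a typical human.)"
total_atoms = atoms_per_cell * cells_per_human     # ~1e28 atoms in a body

# "1 atom in every thousand cells of mine doesn't get copied"
atoms_lost = cells_per_human / 1_000               # ~1e11 atoms left behind
fraction_lost = atoms_lost / total_atoms           # ~1e-17 of the body

# Relaxed version: "100 trillion atoms destroyed: a whole cell, gone"
one_cell_in_atoms = 100e12                         # 100 trillion = 1e14 = one cell
print(f"total atoms ~{total_atoms:.0e}, fraction lost per trip ~{fraction_lost:.0e}")
print(one_cell_in_atoms == atoms_per_cell)         # True

On those figures the stricter offer amounts to losing about one part in 10^17
of one's atoms per trip, and the relaxed offer to exactly one cell's worth.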

Lee

P.S. Thanks for the interesting fact that death of 1/10^6 cells kills one.



Re: Functionalism and People as Programs

2005-06-03 Thread rmiller


At 10:58 PM 6/3/2005, you wrote:

R. Miller writes (quoting Lee Corbin):


If someone can teleport me back and forth from work to home, I'll
be happy to go along even if 1 atom in every thousand cells of mine
doesn't get copied.


Exposure to a nuclear detonation at 4000 yds typically kills about 1 in a 
million cells.  When that happens, you die.   I would suggest that is a 
bad metaphor.


Losing one atom in every thousand cells is not the same as losing the cell 
itself. Cells are a constant work in progress. Bits fall off, 
transcription errors occur in the process of making proteins, radiation or 
noxious chemicals damage subcellular components, and so on. The machinery 
of the cell is constantly at work repairing all this damage. It is like a 
building project where the builders only just manage to keep up with the 
wreckers. Eventually, errors accumulate or the blueprints are corrupted 
and the cell dies. Taking the organism as a whole, the effect of all this 
activity is like the ship of Theseus: over time, even though it looks like 
the same organism, almost all the matter in it has been replaced.


That's correct, of course.  I'm finishing up a book on nuclear fallout, 
and most of my selves were obviously immersed in radiation issues rather 
than simple mathematics.  Sorry.



RM





Re: Functionalism and People as Programs

2005-06-03 Thread Stathis Papaioannou

R. Miller writes (quoting Lee Corbin):


If someone can teleport me back and forth from work to home, I'll
be happy to go along even if 1 atom in every thousand cells of mine
doesn't get copied.


Exposure to a nuclear detonation at 4000 yds typically kills about 1 in a 
million cells.  When that happens, you die.   I would suggest that is a bad 
metaphor.


Losing one atom in every thousand cells is not the same as losing the cell 
itself. Cells are a constant work in progress. Bits fall off, transcription 
errors occur in the process of making proteins, radiation or noxious 
chemicals damage subcellular components, and so on. The machinery of the 
cell is constantly at work repairing all this damage. It is like a building 
project where the builders only just manage to keep up with the wreckers. 
Eventually, errors accumulate or the blueprints are corrupted and the cell 
dies. Taking the organism as a whole, the effect of all this activity is 
like the ship of Theseus: over time, even though it looks like the same 
organism, almost all the matter in it has been replaced.


--Stathis Papaioannou





RE: Functionalism and People as Programs

2005-06-03 Thread Brent Meeker


>-Original Message-
>From: Stephen Paul King [mailto:[EMAIL PROTECTED]
>Sent: Friday, June 03, 2005 3:16 PM
>To: everything-list@eskimo.com
>Subject: Re: Functionalism and People as Programs
>
>
>Dear Lee,
>
>- Original Message -
>From: "Lee Corbin" <[EMAIL PROTECTED]>
>To: "EverythingList" 
>Sent: Friday, June 03, 2005 12:20 AM
>Subject: Functionalism and People as Programs
>
>
>> Stephen writes
>>
>>> I really do not want to be a stick-in-the-mud here, but what do we
>>> base
>>> the idea that "copies" could exist upon?
>>
>> It is a conjecture called "functionalism" (or one of its close variants).
>> I guess the "strong AI" view is that the mind can be emulated on a
>> computer. And yes, just because many people believe this---not
>> surprisingly
>> many computer scientists---does not make it true.
>
>[SPK]
>
>I am aware of those ideas and they seem, at least to me, to be supported
>by an article of Faith and not any kind of empirical evidence. Maybe that is
>why I have such an allergy to the conjecture. ;-)

I think there is considerable evidence to support the view that human level
intelligence could be achieved by a (non-quantum) computer and that human
intelligence and consciousness are dependent on brain processes; e.g. see the
many studies of brain damaged patients.  Also, I think it is well established
that consciousness corresponds to only a small part of the information
processing in the brain.  That's something that bothers me about the
discussion of "observer moments", with the implication that only the conscious
"observation" matters.

>
>>[LC]
>> An aspect of this belief is that a robot could act indistinguishably
>> from humans. At first glance, this seems plausible enough; certainly
>> many early 20th century SF writers thought it reasonable. Even Searle
>> concedes that such a robot could at least appear intelligent and
>> thoughtful to Chinese speakers.
>>
>> I suspect that Turing also believed it: after all, he proposed that
>> a program one day behave indistinguishably from humans.

Interestingly, Turing's actual proposal was to test whether a computer could
do as well at posing as a woman as a man could.

>And why not,
>> exactly?  After all, the robot undertakes actions, performs calculations,
>> has internal states, and should be able to execute a repertoire as fine
>> as that of any human.  Unless there is some devastating reason to the
>> contrary.
>
>[SPK]
>
>What I seem to rest my skepticism upon is the fact that in all of these
>considerations there remains, tacitly or not, the assumption that these
>"internal states" have an entity "to whom" they have a particular valuation.
>I see this expressed in the MWI, more precisely, in the "relative state" way
>of thinking within an overall QM multiverse. Additionally, we are still
>embroiled in debate over the sufficiency of a Turing Test to give us
>reasonable certainty to claim that we can reduce 1st person aspects from 3rd
>person, Searle's Chinese Room being one example.
>
>>> What if "I", or any one else's 1st person aspect, can not be copied?
>>> If the operation of copying is impossible, what is the status of all
>>> of these thought experiments?

I agree that successful copying requires that what is copied is something
classical.  Tegmark makes more than an argument that brain processes are
classical; he makes a calculation, quant-ph/9907009.  So I don't think
that's an in-principle barrier to copying.  However, there might be other
limits, based on thermal noise etc., that forbid copying finer than some
crude level.
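
For orientation, the comparison Tegmark's calculation turns on can be sketched
numerically; the two timescales below are round figures quoted in that paper's
abstract, used here only as assumptions:

decoherence_time_s = 1e-13   # slowest decoherence estimate for neural superpositions (~1e-20 to 1e-13 s)
dynamical_time_s = 1e-3      # fastest relevant neural dynamics (~1e-3 to 1e-1 s)

ratio = dynamical_time_s / decoherence_time_s
print(f"dynamics are at least ~{ratio:.0e} times slower than decoherence")   # ~1e+10

Even in the most favorable case, coherence is lost some ten orders of
magnitude faster than the brain computes, which is the sense in which its
processes are effectively classical and, in principle, copyable.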

>>
>> I notice that many people seek refuge in the "no-copying" theorem of
>> QM. Well, for them, I have that automobile travel also precludes
>> survival.  I can prove that to enter an automobile, drive it somewhere,
>> and then exit the automobile invariably changes the quantum state of
>> the person so reckless as to do it.
>
>[SPK]
>
>Come on, Lee, your trying to evade the argument. ;-)
>
>> [LC]
>> If someone can teleport me back and forth from work to home, I'll
>> be happy to go along even if 1 atom in every thousand cells of mine
>> doesn't get copied. Moreover---I am not really picky about the exact
>> bound state of each atom, just so long as it is able to perform the
>> role approximately expected of it. (That is, go ahead and remove any
>> carbon atom you like, and replace it by another carbon atom in a
>> differen

Re: Functionalism and People as Programs

2005-06-03 Thread Stephen Paul King

Dear Lee,

- Original Message - 
From: "Lee Corbin" <[EMAIL PROTECTED]>

To: "EverythingList" 
Sent: Friday, June 03, 2005 12:20 AM
Subject: Functionalism and People as Programs



Stephen writes

I really do not want to be a stick-in-the-mud here, but what do we base
the idea that "copies" could exist upon?


It is a conjecture called "functionalism" (or one of its close variants).
I guess the "strong AI" view is that the mind can be emulated on a
computer. And yes, just because many people believe this---not surprisingly
many computer scientists---does not make it true.


[SPK]

   I am aware of those ideas and they seem, at least to me, to be supported 
by an article of Faith and not any kind of empirical evidence. Maybe that is 
why I have such an allergy to the conjecture. ;-)



[LC]
An aspect of this belief is that a robot could act indistinguishably
from humans. At first glance, this seems plausible enough; certainly
many early 20th century SF writers thought it reasonable. Even Searle
concedes that such a robot could at least appear intelligent and
thoughtful to Chinese speakers.

I suspect that Turing also believed it: after all, he proposed that
a program one day behave indistinguishably from humans. And why not,
exactly?  After all, the robot undertakes actions, performs calculations,
has internal states, and should be able to execute a repertoire as fine
as that of any human.  Unless there is some devastating reason to the
contrary.


[SPK]

   What I seem to rest my skepticism upon is the fact that in all of these 
considerations there remains, tacitly or not, the assumption that these 
"internal states" have an entity "to whom" they have a particular valuation. 
I see this expressed in the MWI, more precisely, in the "relative state" way 
of thinking within an overall QM multiverse. Additionally, we are still 
embroiled in debate over the sufficiency of a Turing Test to give us 
reasonable certainty to claim that we can reduce 1st person aspects from 3rd 
person, Searle's Chinese Room being one example.



What if "I", or any one else's 1st person aspect, can not be copied?
If the operation of copying is impossible, what is the status of all
of these thought experiments?


I notice that many people seek refuge in the "no-copying" theorem of
QM. Well, for them, I have that automobile travel also precludes
survival.  I can prove that to enter an automobile, drive it somewhere,
and then exit the automobile invariably changes the quantum state of
the person so reckless as to do it.


[SPK]

   Come on, Lee, you're trying to evade the argument. ;-)


[LC]
If someone can teleport me back and forth from work to home, I'll
be happy to go along even if 1 atom in every thousand cells of mine
doesn't get copied. Moreover---I am not really picky about the exact
bound state of each atom, just so long as it is able to perform the
role approximately expected of it. (That is, go ahead and remove any
carbon atom you like, and replace it by another carbon atom in a
different state.)


[SPK]

   If you care to look into teleportation as it has been researched so 
far, it has been shown that the "original" - the system or state of a 
system that is teleported - is not copied like some Xerox of an original 
document.


http://www.research.ibm.com/quantuminfo/teleportation/

   Such copying cannot be done because *all* of the information about the system 
or state must be simultaneously measured, and that act itself destroys the 
original. If *all* of the information is not measured, then one is not 
copying or teleporting, one is just measuring. This is not overly 
complicated!
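
The point generalizes beyond any particular teleportation scheme. Here is a
short numerical sketch of the standard linearity argument behind the
no-cloning theorem (the Python below is a textbook illustration, not anything
specific to this thread): a linear map that copies the two basis states
necessarily fails on their superposition.

import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# A would-be cloner is a linear map L fixed on the basis:
#   L(|0>|0>) = |0>|0>,   L(|1>|0>) = |1>|1>
# (its action on the other two basis inputs never enters the argument).
L = np.zeros((4, 4))
L[:, 0] = np.kron(ket0, ket0)   # input |00> -> |00>
L[:, 2] = np.kron(ket1, ket1)   # input |10> -> |11>
L[:, 1] = np.kron(ket0, ket1)   # arbitrary choice for |01>
L[:, 3] = np.kron(ket1, ket0)   # arbitrary choice for |11>

plus = (ket0 + ket1) / np.sqrt(2)              # the superposition |+>
forced_by_linearity = L @ np.kron(plus, ket0)  # what linearity dictates
true_clone = np.kron(plus, plus)               # what a real cloner would give

print(np.round(forced_by_linearity, 3))   # [0.707 0.    0.    0.707]  (an entangled Bell state)
print(np.round(true_clone, 3))            # [0.5 0.5 0.5 0.5]          (a product state)
print(np.allclose(forced_by_linearity, true_clone))   # False

The linear output is an entangled state, not two independent copies; that
mismatch is the whole content of the theorem.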



If, and this is a HUGE if, there is some thing irreducibly quantum
mechanical to this "1st person aspect" then it follows from QM that copying
is not allowed. Neither a quantum state nor a "qubit" can be copied without
destroying the "original".


This is being awfully picky about permissible transformations. I
have even survived mild blows to the head, which have enormously
changed my quantum state.


[SPK]

   Again, you are begging the question! The impact of air molecules changes 
one's quantum state! Perhaps we are stuck on this because we are assuming a 
"still frame by still frame" kind of representation of the situation. The 
quantum state of a system is continuously changing; that is why there is a 
variable "t" in the Schrödinger equation for a wavefunction! I am commenting 
on the absurdity of copying the quantum mechanical system itself, or some 
subset or trace of it, other than that implied by the rules of QM.




falsified, by the same experiments that unassailably imply that Nature is,
at its core, Quantum Mechanical and not Classical and thus one wonders: "Why
do we persist in this state of denial?"

Re: Functionalism and People as Programs

2005-06-02 Thread rmiller

At 11:20 PM 6/2/2005, Lee Corbin wrote:

Stephen writes

> I really do not want to be a stick-in-the-mud here, but what do we base
> the idea that "copies" could exist upon?

It is a conjecture called "functionalism" (or one of its close variants).


"Functionalism," at least, in the social sciences refers to the proposition 
that everything exists because it has a function (use).  When that notion 
came under attack in the 1960s, structural functionalists responded that 
some things have "latent functions"--uses that we have yet to 
divine.  Functionalism follows Scholasticism which follows teleology.  Not 
particularly good science---or at least, not *modern* science.




> What if "I", or any one else's 1st person aspect, can not be copied?
> If the operation of copying is impossible, what is the status of all
> of these thought experiments?


Still pretty robust.  If you accept that a chronon has a dimension equal to 
about 10^-43 seconds, then you'd have to concede that we exist as a "deck" 
of copies through time. No big deal, but we ARE copies of the individual we 
were 1 x 10^-43 seconds ago.  If not, where's the "glue"?
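
Taking the quoted chronon of about 10^-43 seconds at face value (roughly the
Planck time, ~5.4e-44 s), the "deck of copies" picture involves numbers like
these; a back-of-envelope sketch, purely illustrative:

chronon_s = 1e-43                    # the figure quoted above
seconds_per_year = 3.15e7
states_per_second = 1 / chronon_s                      # ~1e43 successive "frames"
states_per_life = states_per_second * seconds_per_year * 80
print(f"~{states_per_life:.0e} momentary configurations in an 80-year life")  # ~2.5e+52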




I notice that many people seek refuge in the "no-copying" theorem of
QM. Well, for them, I have that automobile travel also precludes
survival.  I can prove that to enter an automobile, drive it somewhere,
and then exit the automobile invariably changes the quantum state of
the person so reckless as to do it.

If someone can teleport me back and forth from work to home, I'll
be happy to go along even if 1 atom in every thousand cells of mine
doesn't get copied.


Exposure to a nuclear detonation at 4000 yds typically kills about 1 in a 
million cells.  When that happens, you die.   I would suggest that is a bad 
metaphor.



Moreover---I am not really picky about the exact
bound state of each atom, just so long as it is able to perform the
role approximately expected of it.


Structural functionalism.  When physicists converse at a bar, they talk the 
language of sociology.




(That is, go ahead and remove any
carbon atom you like, and replace it by another carbon atom in a
different state.)

> If, and this is a HUGE if, there is some thing irreducibly quantum
> mechanical to this "1st person aspect" then it follows from QM that copying
> is not allowed. Neither a quantum state nor a "qubit" can be copied without
> destroying the "original".


What if there is *no* original copy?  Those that are familiar with 
Photoshop would probably argue that each layer created is still an integral 
part of the image.  If you accept Cramer's transactional model, then what 
*will* take place in the future will affect the state of the past.   You 
don't suppose Julian Barbour is on to something?


R. Miller





Functionalism and People as Programs

2005-06-02 Thread Lee Corbin
Stephen writes

> I really do not want to be a stick-in-the-mud here, but what do we base 
> the idea that "copies" could exist upon?

It is a conjecture called "functionalism" (or one of its close variants).
I guess the "strong AI" view is that the mind can be emulated on a 
computer. And yes, just because many people believe this---not surprisingly
many computer scientists---does not make it true.

An aspect of this belief is that a robot could act indistinguishably
from humans. At first glance, this seems plausible enough; certainly
many early 20th century SF writers thought it reasonable. Even Searle
concedes that such a robot could at least appear intelligent and
thoughtful to Chinese speakers.

I suspect that Turing also believed it: after all, he proposed that
a program one day behave indistinguishably from humans. And why not,
exactly?  After all, the robot undertakes actions, performs calculations,
has internal states, and should be able to execute a repertoire as fine
as that of any human.  Unless there is some devastating reason to the
contrary.

> What if "I", or any one else's 1st person aspect, can not be copied?
> If the operation of copying is impossible, what is the status of all
> of these thought experiments?

I notice that many people seek refuge in the "no-copying" theorem of
QM. Well, for them, I have that automobile travel also precludes
survival.  I can prove that to enter an automobile, drive it somewhere,
and then exit the automobile invariably changes the quantum state of
the person so reckless as to do it.

If someone can teleport me back and forth from work to home, I'll
be happy to go along even if 1 atom in every thousand cells of mine
doesn't get copied. Moreover---I am not really picky about the exact
bound state of each atom, just so long as it is able to perform the
role approximately expected of it. (That is, go ahead and remove any
carbon atom you like, and replace it by another carbon atom in a
different state.)

> If, and this is a HUGE if, there is some thing irreducibly quantum 
> mechanical to this "1st person aspect" then it follows from QM that copying 
> is not allowed. Neither a quantum state nor a "qubit" can be copied without 
> destroying the "original".

This is being awfully picky about permissible transformations. I
have even survived mild blows to the head, which have enormously
changed my quantum state.

> falsified, by the same experiments that unassailably imply that Nature is, 
> at its core, Quantum Mechanical and not Classical and thus one wonders: "Why 
> do we persist in this state of denial?"

Probably for the same reason that some people continue to be Libertarians.
It's a belief thing---the way you see the world.

Lee