RE: Rép : Thought Experiment #269-G (Duplicates)

2005-07-10 Thread Stathis Papaioannou

Lee Corbin writes:

[quoting Bruno Marchal]

 Why not choose D, that is "I will see 0 on the wall OR I will see 1
 on the wall".

Okay, now you have switched back to the prior (prediction)
level.

Here is the reason not to say that.  As the person who is about
to be duplicated knows all the facts, he is aware (from a 3rd
person point of view) that scientifically there will be *two*
processes both of which are very, very similar.  It will be
false that one of them will be more him than the other.
Therefore he must identify equally with them.  Therefore,
it is wrong to imply that he will be one of them but not
the other of them.

But if you answer "I will see 0 on the wall OR I will see 1 on the wall"
then it makes it sound as though one of those cases will obtain but
not the other.  (This is usually how we talk when Bruno admits, for
example, that tonight he either will watch TV *or* he will not watch
TV.  But the case of duplicates is not like that.  In the case of
duplicates, it is a scientific fact that Bruno will watch TV (in one
room) and will not watch TV (in the other room).  In short, it will
be true that Bruno will watch TV and will not watch TV---simply because
there will be two instances of Bruno.)


Is there any way of asking the question such that the answer is "there is an 
even chance that I will see either a 1 or a 0"? For example, every time I 
flip a coin it *seems* that I get either heads or tails, and not both. The 
objective truth may well be that coin-tossing causes duplication and I do, 
in fact, experience both, but don't realise it. I am interested in asking 
and/or answering the question assuming this sort of ignorance. Can it be 
done, or is it linguistically as well as physically and logically 
impossible?


--Stathis Papaioannou





Re: where do copies come from?

2005-07-10 Thread Eugen Leitl
On Sun, Jul 10, 2005 at 11:49:53PM +1000, Stathis Papaioannou wrote:

 3) Combining General and Particular Architectures
 Fusing information to combine a priori knowledge of general architecture 
 brain functions, and particular architecture data obtained from in situ 
 functional measurements (e.g. fMRI), neurological and psychological 
 measurements, as well as self-analysis, it may be possible to reconstruct 
 a functional copy of the brain close enough to be indistinguishable 
 from the original by the owner. How does the owner know it is 
 indistinguishable? This is a whole topic. He could for example do a series 
 of  partial substitutions to find out if it feels the same or not. For 
 example, he could substitute in sequence the visual cortex, the auditory 
 cortex, some of the motor functions
 
 We may be closer to this goal than you think.
 
 OK, I agree it is possible, and I'm glad nobody is insisting that just the 
 arrangement of neurons and their connections, such as could in theory have 
 been determined by a 19th century histologist, is enough information to 

Exactly; it's a strawman position. Nobody claims a 5 m resolution satellite
photo shows you what brands of pizza that shop on the corner is selling.

 emulate a brain. I think we would need to have scanning resolution close to 
 the atomic level, and very detailed modelling of the behaviour of cellular 
 subsystems and components down to the same level. I don't know how long it 

You need this level of detail only initially, to obtain empirical system
parameters for an abstracted system level. You might want to reach down to the
ab initio level of theory to obtain missing parameters for an MD simulation,
which in turn yields the switching behaviour of an ion channel (depending on
its modification) and hence the computational behaviour of a piece of dendrite
(of course, you can also obtain that empirically, e.g. from voltage-sensitive
dyes/patch-clamping). Even then, the actual simulation unit could be a few
layers up, at abstract neocortex columns, or similar.

In the end, you have to destructively scan an animal to obtain your very
large set of numbers to enter into your simulation. Transiently, that
disassembly might involve sampling some voxels at a high level of resolution,
very possibly submolecular. That level of detail might be present in the voxel
buffer, transiently, before being processed by algorithms and distilled into
a much smaller set of small integers.

 would take to achieve this, but I know that we are nowhere near it now. For 
 example, consider our understanding of schizophrenia, an illness which 

If we had fully functional (discrete, fully introspective, traceable)
models of individuals having schizophrenia, and controls, finding structural
and functional deficits resulting in the phenotype would be effectively
trivial.

 drastically changes almost every aspect of cognition. For half a century we 
 have had drugs which ameliorate the psychotic symptoms patients with this 
 illness experience, and we have been able to determine which receptors 
 these drugs target. But despite decades of research, we still have no idea 
 what the basic defect in schizophrenia is, how the drugs work, or any 

We don't have methods with sufficient resolution, that's all.

 clinically useful investigation which helps with diagnosis. Although fMRI 
 and PET scans can show differences in cerebral blood flow compared to 

fMRI has voxel sizes at several mm^3, and temporal resolution of seconds. MRI
microscopy does much better, but only works on insect/mouse-sized samples.
Nondestructive methods do not scale into the volume.

 control subjects, this is a secondary effect. The brains of schizophrenia 
 sufferers, looked at with any tools available to us, are essentially the 
 same as normal brains. In other words, a very subtle, at present 
 undetectable, change in the brains of these patients can cause gross 
 cognitive and behavioural changes.

http://www.google.com/search?q=molecular+schizophrenia

would seem to disagree. 

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820            http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE




Probabilistic Thinking (was Thought Experiment #269-G)

2005-07-10 Thread Lee Corbin
Stathis writes

  But if you answer "I will see 0 on the wall OR I will see 1 on the wall"
  then it makes it sound as though one of those cases will obtain but
  not the other.  (This is usually how we talk when Bruno admits, for
  example, that tonight he either will watch TV *or* he will not watch
  TV.  But the case of duplicates is not like that.  In the case of
  duplicates, it is a scientific fact that Bruno will watch TV (in one
  room) and will not watch TV (in the other room).  In short, it will
  be true that Bruno will watch TV and will not watch TV---simply because
  there will be two instances of Bruno.)
 
 Is there any way of asking the question such that the answer is "there is an 
 even chance that I will see either a 1 or a 0"? For example, every time I 
 flip a coin it *seems* that I get either heads or tails, and not both. The 
 objective truth may well be that coin-tossing causes duplication and I do, 
 in fact, experience both, but don't realise it. I am interested in asking 
 and/or answering the question assuming this sort of ignorance. Can it be 
 done, or is it linguistically as well as physically and logically 
 impossible?

Great question!  I go so far as to agree with your sense here.

Here is an example: suppose that a million copies of you are to be
made every day, and for each of them, on the following day yet another
million copies are made. (Thus after N days there are (10^6)^N copies.)
Further, suppose that 1 of them will be 1000 feet under water, and the
others simply find themselves at STP.

Your choice every day is whether to don the very bulky and time-
consuming diving equipment or not. It takes about half an hour to put
it  on, and after you have been copied, about half an hour to take it off.

One day you are in a special hurry, and think, well, it's true that
in 1 case I will die a rather ghastly death which will be very uncomfortable
for about fifteen or twenty seconds, but in the other 999,999 cases I
can get about my important affairs and save an hour of fiddling with
the equipment.

So that day you decide not to go through the ritual of putting on
and taking off the equipment. And, sure enough, one of you finds
himself in an unpleasant situation... *and in some sense regrets
the decision*.  Now logically, during the fifteen or twenty seconds
it takes him to die he realizes that his duplicates live, and that
in some sense it really was a good decision. But he also cannot help
but feel that he was unlucky.   A part of him must ask, now what
was the chance of this???

At this point, he has relapsed into thinking of himself as an instance.
(Torture is another way that instance-thinking can be aroused even
in those with the broadest usage of "I".) You know intellectually
that you are doing just fine almost everywhere, but in this peculiar
case you're dead.

Yet it's probably *not* a good idea to put on the equipment everyday.
There are two reasons (that should be regarded as completely equivalent):
(1) in almost all cases I will save time (about a million hours all
together) and (2) the odds are very small that I'll need the equipment.

I contend that (2) should be taken as really meaning just (1) and that
literally, (2) is incorrect.
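
To spell out the arithmetic behind (1) and (2), a toy calculation (illustrative
numbers only, taken from the example above) shows they are one and the same
computation dressed differently:

    copies = 10 ** 6            # copies made on the day in question
    drowned = 1                 # the one copy 1000 feet under water
    hours_per_copy = 1.0        # time saved by skipping the diving gear

    fraction_lost = drowned / copies                    # reading (2): "the odds"
    hours_saved = (copies - drowned) * hours_per_copy   # reading (1): time saved
    print(fraction_lost, hours_saved)                   # 1e-06  999999.0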

We all make these same choices every day:  Should I drive to work this
morning even though one in a million of me is going to die in a traffic
accident?  But to MWI devotees, the answer should be clear: it only
*seems* unlucky that I'm *just* here with a broken body in a heap of
twisted metal. The reality is that in most universes, today was not
unlike all the other days.

To quote you again,

 Is there any way of asking the question such that the answer is
 "there is an even chance that I will see either a 1 or a 0"? 
 The objective truth may well be that coin-tossing causes duplication
 and I do, in fact, experience both, but don't realise it. I am
 interested in asking and/or answering the question assuming this
 sort of ignorance. Can it be done, or is it linguistically as well
 as physically and logically impossible?

So I'm translating your question as, "Is there any way of asking the
question such that the answer is 'there is only a tiny chance that
I'll be killed this morning on the way to work'?"

Chance seems to be overridden by MWI, and also in the cases of duplicates.
It's replaced by fractional thinking, I guess.  We might still use
the language of probability, but it should be just shorthand for the
better description of the situation in terms of fractions (of me).

Sound right?

Lee



RE: where do copies come from?

2005-07-10 Thread Jesse Mazer

Stathis Papaioannou wrote:

Nevertheless, I still think it would be *extremely* difficult to emulate a 
whole brain. Just about every physical parameter for each neuron would be 
relevant, down to the atomic level. If any of these parameters are slightly 
off, or if the mathematical model is slightly off, the behaviour of a 
single neuron may seem to be unaffected, but the error will be amplified 
enormously by the cascade as one neuron triggers another.


I don't think that follows. After all, we maintain the same personality 
despite the fact that these detailed parameters are constantly varying in 
our own neurons (and the neurons themselves are being completely replaced 
every few months or so); neural networks are not that brittle, they tend 
to be able to function in broadly the same way even when damaged in various 
ways, and slight imperfections in the simulated behavior of individual 
neurons could be seen as a type of damage. As long as the behavior of each 
simulated neuron is close enough to how the original neuron would have 
behaved in the same circumstances, I don't think occasional slight 
deviations would be fatal to the upload (but perhaps the first uploads will 
act like people who are slightly drunk or have a chemical imbalance or 
something, and they'll have to experiment with tweaking various high-level 
parameters--the equivalent of giving themselves simulated prozac or 
something--until they feel 'normal' again).


Jesse




Re: where do copies come from?

2005-07-10 Thread Bruno Marchal



I agree with Jesse. Nature (if that exists) builds on redundancies (as
does the UD). So if the substitution level is at the level of neurons,
"slight" changes don't matter.


Of course we don't really know our substitution level. It is consistent 
with comp that the level is far lower. But then at that level the same rule 
operates.


It probably converges to the linear.


Bruno   (PS: I will answer other posts asap).


On 10 July 2005, at 20:22, Jesse Mazer wrote:


Stathis Papaioannou wrote:

Nevertheless, I still think it would be *extremely* difficult to 
emulate a whole brain. Just about every physical parameter for each 
neuron would be relevant, down to the atomic level. If any of these 
parameters are slightly off, or if the mathematical model is slightly 
off, the behaviour of a single neuron may seem to be unaffected, but 
the error will be amplified enormously by the cascade as one neuron 
triggers another.


I don't think that follows. After all, we maintain the same 
personality despite the fact that these detailed parameters are 
constantly varying in our own neurons (and the neurons themselves are 
being completely replaced every few months or so); neural networks are 
not that brittle, they tend to be able to function in broadly the 
same way even when damaged in various ways, and slight imperfections 
in the simulated behavior of individual neurons could be seen as a 
type of damage. As long as the behavior of each simulated neuron is 
close enough to how the original neuron would have behaved in the 
same circumstances, I don't think occasional slight deviations would 
be fatal to the upload (but perhaps the first uploads will act like 
people who are slightly drunk or have a chemical imbalance or 
something, and they'll have to experiment with tweaking various 
high-level parameters--the equivalent of giving themselves simulated 
prozac or something--until they feel 'normal' again).


Jesse




http://iridia.ulb.ac.be/~marchal/




Re: where do copies come from?

2005-07-10 Thread Quentin Anciaux
Hi Stathis,

On Sunday 10 July 2005 at 13:22, Stathis Papaioannou wrote:
 Nevertheless, I still think
 it would be *extremely* difficult to emulate a whole brain.

while I agree with you about the difficulty of emulating a brain that already 
exists (such as emulating you or me, for example), I don't think it is as 
difficult to emulate a conscious being. I remember not so long ago the 
mindpixel project, which was about teaching common sense to a machine.

I do think that passing the Turing test is possible, and if it is one day 
successfully passed by a machine (and not once but several times), it will be a 
proof that we are indeed Turing emulable... if it is so, Bruno's theory will 
not be far from the truth ;)

Quentin



RE: The Time Deniers

2005-07-10 Thread Hal Finney
Again travel has forced me to take an absence from this list for a while,
but I think I will be home for several weeks so hopefully I will be able
to catch up at last.

One question I would ask with regard to the role of time is, is there
something about time (and perhaps causality) that goes over and above
the equations or natural laws that control and define a given universe?

Let us imagine a Cellular Automaton based universe; for simplicity, let it
be a 1-dimensional CA such as those studied in detail in Wolfram's book.
We have an x dimension and a t dimension, and some rules which are the
natural laws of that universe.  A sample rule might be
s[x,t+1] = s[x,t] XOR (s[x-1,t] OR s[x+1,t]).  This means that the
state at position x and time t+1 is the exclusive-or of the state at the
previous time (s[x,t]) and the OR of the left and right neighbor states.
In other words, a cell reverses its state if either of its neighbors is
on.
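
To make the rule concrete, here is a minimal Python sketch of one generation
(illustrative only; the wraparound boundary is an arbitrary choice, and by one
count the rule corresponds to Wolfram's rule 54, though nothing here depends
on that):

    def step(cells):
        # s[x,t+1] = s[x,t] XOR (s[x-1,t] OR s[x+1,t]), wrapping at the edges
        n = len(cells)
        return [cells[x] ^ (cells[(x - 1) % n] | cells[(x + 1) % n])
                for x in range(n)]

    # A single "on" cell, evolved for a few generations:
    row = [0, 0, 0, 1, 0, 0, 0]
    for t in range(4):
        print(row)
        row = step(row)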

Wolfram investigates all 256 possible rules which determine a cell's
next state from the previous state of the cell and its two neighbors.
Some lead to surprisingly complex patterns and it is conceivable that such
universes might even be complex enough to allow life and consciousness
to evolve.

So we have a notion of time, t, and space, x.  The question is this.
If we don't *call* it time, does that change things?  Suppose we have
a universe with 2 spatial dimensions, x and y.  But it is governed by
the same rule: s[x,y+1] = s[x,y] XOR (s[x-1,y] OR s[x+1,y]).  Here
I have replaced t in the rule above by y.

Does this make a difference?  I think we will agree that it does not.
Changing the letter t to the letter y does not change the fundamental
nature of this universe.  It only changes how we describe it.

Then we can ask, is this rather abstract description of the universe,
in terms of its natural laws, enough for us to know whether the
consciousnesses that exist in it are really conscious?  Or do we need
to know more?  Do we need to know details about how the universe was
created (whatever that means!)?  Do we need to know if there is a flow
of causality in this universe?

My answer is that the natural laws ought to be enough.  If we can find
a reasonable interpretation (defined rigorously as a mapping whose
information content is significantly smaller than the pattern itself) of
a pattern in the universe as something that we would consider a conscious
observer in our own universe, then we would be right to say that this
CA universe has consciousness.  (More precisely, that this CA universe
contributes measure to these instances and kinds of conscious observers.)

I don't think it makes sense to demand more information than the natural
laws (like, what kind of universal-computer is running to interpret
these laws, what algorithm it uses, how sequential is it, is it allowed
to backtrack and change things, etc.).  The laws themselves define
the universe.  The two are, in a sense, equivalent.  That is all the
information there is.  The laws should be, in fact they must be, enough
to answer the question about whether the consciousness which appears in
such a universe is real.  That's how it appears to me.

In our own universe, we too have natural laws that relate to space and
time.  One such law is from Newton: d2x/dt2 = Force/Mass (i.e. F=ma).
Relativity and QM have their own laws that refer to x, y, z, and t.
Generally, t is treated differently than the other coordinates, which
are all treated the same.  But obviously we could substitute some other
letter, say q, for t and it would not make a difference.  A universe
with quime instead of time would be the same.

So again, is it enough to look at the natural laws of our universe in
order to decide whether the consciousnesses within it are real?  Or do we
need more?  Can we imagine a universe like ours, which follows exactly the
same natural laws, but where time doesn't really exist (in some sense),
where there is no actual causality?  I have trouble with this idea, but
I'd be interested to hear from those who think that such a distinction
exists.

Hal Finney



RE: The Time Deniers

2005-07-10 Thread Lee Corbin
Hal Finney writes

 Can we imagine a universe like ours, which follows exactly the
 same natural laws, but where time doesn't really exist (in some
 sense), where there is no actual causality?

You yourself have already provided the key example in imagining
a two dimensional CA where the second dimension can be taken as
y instead of t.

 If we can find a reasonable interpretation... of a pattern in
 the [this CA] universe as something that we would consider a
 conscious observer in our own universe, then we would be right
 to say that this CA universe has consciousness.

I would be VERY HAPPY to abandon my belief that somehow time is
special. It's very annoying to suspect myself of simply having
a failure of imagination, in that I could not---as Einstein
perhaps did---see our 4-D block universe as just any old 4-D
continuum.  But I encounter a runaway reductio that smashes up
my attempt to *believe*.

Okay, so suppose we have a book one trillion times as large as
Wolfram's (or as big as we need to have), and we cut out all the
pages and line them up so that we have a two dimensional layout
that is recognizably a conscious entity. This now, as you know,
no longer exhibits any *time* at all; it is a succession of
frozen states, that is, each horizontal line of the CA is, as
you describe, connected to the next only by..., only by what?

Well, it seems that it is *we* who spot the connection. We guess
and then accept that there is a rule that associates each horizontal
line with the next one. Not so simple as the rule you give (i.e.,
s[x,t+1] = s[x,t] XOR (s[x-1,t] OR s[x+1,t]), of course, but
nonetheless entirely objective after we see it.

We can call it time---or not---, just as you also point out.

(I will later claim that what is missing is the underlying
continuous machinery, but to do so right now would be to miss
the point of your argument.)

So we have this sequence of horizontal lines which are connected
by a rule. The input to the rule is line N and the output is
line N+1. Indeed, I am tortured by the resemblance to quantum
states: we seem in our own comfy universe to have a succession
of states connected only by the Schrödinger equation.

One interesting point about this two dimensional consciousness
is that it's not clear (to me) whether it needs to persist in
our time. That is, would it make any difference if we destroy
this large two dimensional map?  On the one hand, since it
seems to be independent of time, the answer would be no,
but on the other, what if Hal Finney and Wei and whoever are
right about UDs and measure, and destruction of the 2-D layout
makes it harder to find when all the OMs are being counted
up by Heaven?  I don't know.

But anyway, for me, the horrid reductio always kicks in at this
point: what should it matter if these 1-D lines composing the
layout are scattered in space? What does it matter if they're
chopped up?

Is it really only the case that they're harder to find?  That
they're less manifest in Everything? It's too hard to believe.

Do we not need a *continuous* parameter?  Are we not back with
Zeno wondering how the arrow can move if it's just in a
succession of instants?  It seems to me that Zeno would have
been right for any *finite* number of locations (or instants),
and there would have been no such thing as true motion.

Lee



RE: The Time Deniers

2005-07-10 Thread Jesse Mazer

Hal Finney wrote:



So again, is it enough to look at the natural laws of our universe in
order to decide whether the consciousnesses within it are real?  Or do we
need more?  Can we imagine a universe like ours, which follows exactly the
same natural laws, but where time doesn't really exist (in some sense),
where there is no actual causality?  I have trouble with this idea, but
I'd be interested to hear from those who think that such a distinction
exists.



For me, it's not that I think it's meaningful to imagine a universe just 
like ours but without causality, rather it's that I think causality is 
probably important to deciding whether a particular system in our universe 
counts as a valid instantiation of some observer-moment, and thus 
contributes to the measure of that observer-moment (which in turn affects 
the likelihood that I will experience that observer-moment in the future). I 
think if you run a simulation of an observer, and record the output and 
write it down in a book which you then make thousands of copies of, the 
static description in all the books most likely would not have any effect on 
the measure of that observer, since these descriptions lack the necessary 
causal structure. I sort of vaguely imagine all of spacetime as an 
enormous graph showing the causal links between primitive events, with the 
number of instantiations basically being the number of spots you could find 
a particular sub-graph representing an observer-moment embedded in the 
entire graph; the graphs corresponding to the physical process that we label 
a book would not have the same structure as graphs corresponding to the 
physical process that we label as a simulation of a particular observer. Of 
course, as I've discussed with you earlier, I'd also speculate that the 
appearance of an objective physical universe (the graph representing all of 
spacetime) somehow emerges from a more basic theory that assigns both 
absolute and conditional measures to every possible observer-moment (each 
represented in my visual picture by a sub-graph).
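
A toy sketch of the counting step described above (illustrative only: the
example "spacetime" graph and the observer-moment sub-graph are invented, and
a brute-force count like this would never scale to real graphs):

    from itertools import permutations

    def count_embeddings(big_edges, big_nodes, pat_edges, pat_nodes):
        # Number of ways the pattern's nodes can be mapped one-to-one onto
        # nodes of the big graph so that every pattern edge lands on an edge.
        big = set(big_edges)
        return sum(
            all((m[a], m[b]) in big for a, b in pat_edges)
            for m in (dict(zip(pat_nodes, perm))
                      for perm in permutations(big_nodes, len(pat_nodes))))

    # Directed edges standing for causal links between primitive events.
    spacetime = [(0, 1), (1, 2), (2, 3), (1, 3), (3, 4)]
    chain = [("x", "y"), ("y", "z")]      # a three-event causal chain
    print(count_embeddings(spacetime, range(5), chain, ["x", "y", "z"]))  # 5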


Jesse




RE: where do copies come from?

2005-07-10 Thread Stathis Papaioannou

Jesse Mazer wrote:

[quoting Stathis Papaioannou]
Nevertheless, I still think it would be *extremely* difficult to emulate a 
whole brain. Just about every physical parameter for each neuron would be 
relevant, down to the atomic level. If any of these parameters are 
slightly off, or if the mathematical model is slightly off, the behaviour 
of a single neuron may seem to be unaffected, but the error will be 
amplified enormously by the cascade as one neuron triggers another.


I don't think that follows. After all, we maintain the same personality 
despite the fact that these detailed parameters are constantly varying in 
our own neurons (and the neurons themselves are being completely replaced 
every few months or so); neural networks are not that brittle, they tend 
to be able to function in broadly the same way even when damaged in various 
ways, and slight imperfections in the simulated behavior of individual 
neurons could be seen as a type of damage. As long as the behavior of each 
simulated neuron is close enough to how the original neuron would have 
behaved in the same circumstances, I don't think occasional slight 
deviations would be fatal to the upload...


Perhaps, perhaps not. For one thing, in the brain's case we are relying on 
the laws of chemistry and physics, which in the real world are invariable; 
we don't know what would happen if these laws were slightly off in a 
simulation. For another, we do know that tiny chemical changes, such as a 
few molecules of LSD, can make huge behavioural changes, suggesting that the 
brain is exquisitely sensitive to at least some parameters. It is likely 
that multiple error correction and negative feedback systems are in place to 
ensure that small changes are not chaotically amplified to cause gross 
mental changes after a few seconds, and all these systems would have to be 
simulated as well. The end result may be that none of the cellular machinery 
can be safely ignored in an emulation, which is very far from modelling the 
brain as a neural net. I may be wrong, and it may be simpler than I suggest, 
but as a general rule, if there were a simpler and more economical way to do 
things, evolution would have found it.


--Stathis Papaioannou





Re: where do copies come from?

2005-07-10 Thread Stathis Papaioannou

Quentin Anciaux writes:



 Nevertheless, I still think
 it would be *extremely* difficult to emulate a whole brain.

while I agree with you about the difficulty of emulating a brain that already
exists (such as emulating you or me, for example), I don't think it is as
difficult to emulate a conscious being. I remember not so long ago the
mindpixel project, which was about teaching common sense to a machine.

I do think that passing the Turing test is possible, and if it is one day
successfully passed by a machine (and not once but several times), it will be a
proof that we are indeed Turing emulable... if it is so, Bruno's theory will
not be far from the truth ;)


I agree: it will be *far* easier to build a conscious machine than to 
emulate a particular brain, just as it is far easier to build a pump than an 
exact, cell for cell analogue of a human heart. In the case of the heart the 
simpler artificial pump might be just as good, but in the case of a brain, 
the electrical activity of each and every neuron is intrinsically important 
in the final result.


--Stathis Papaioannou





Re: where do copies come from?

2005-07-10 Thread Johnathan Corgan

Stathis Papaioannou wrote:

It is likely that multiple error correction and negative 
feedback systems are in place to ensure that small changes are not 
chaotically amplified to cause gross mental changes after a few seconds, 


On the other hand, the above may be precisely how consciousness operates!

Picture a system that traverses through many different states as 
chaotic attractor cycles, and outside stimuli act to nudge the system 
between grossly different chaotic attractors.  You have a system that 
needs to be exquisitely tuned to subtle input changes, yet also robust 
in the face of other types of changes (damage, etc.)
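
A toy model makes this concrete (a bistable stand-in rather than a genuinely
chaotic attractor, so only a cartoon of the idea): a particle in a double-well
potential shrugs off small noise and stays near one attractor, yet a single
well-aimed input flips it to the other.

    import random

    def step(x, kick=0.0, dt=0.01, noise=0.05):
        # Overdamped motion in the double well V(x) = (x^2 - 1)^2, plus noise.
        drift = -4.0 * x * (x * x - 1.0)            # -dV/dx
        return x + (drift + kick) * dt + noise * random.gauss(0.0, dt ** 0.5)

    x = 1.0                                         # near the x = +1 attractor
    for t in range(3000):
        # ordinary small noise, except for one brief, targeted "stimulus"
        x = step(x, kick=-250.0 if t == 1500 else 0.0)
        if t % 500 == 0:
            print(t, round(x, 2))                   # near +1, then near -1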


In the brain, these state trajectories would be neuronal firing 
patterns and synaptic chemical gradients.  The chaotic attractors 
themselves would be determined by neuronal morphology and ion channel types 
and locations.


The short-term information about a brain might not need to be stored 
in order to reconstruct a brain.  That is, individual neuron on-off 
states and synaptic chemical gradients may be how you feel and what you 
are thinking this moment--but discarding (or not measuring) this info 
might only mean the reconstructed brain would start from some blank 
state.  Chaotic attractor dynamics would pull the system into one of 
the aforementioned chaotic cycles and the system as a whole would 
eventually recreate the short-term firing patterns and chemical 
gradients needed for normal functioning.


(The above might be wrong in particulars, but I strongly suspect the 
concept of small changes perturbing a chaotic system to shift between 
chaotic attractors will play a role in the ultimate explanation of how 
neuronal processes give rise to conscious experience.)


-Johnathan




UD + ASSA

2005-07-10 Thread Hal Finney
Bruno asked a while back for various people to try to encapsulate
their favorite theory or model of the everything exists concept,
so I will try to describe my current views here.

Basically it can be summed up very simply as: Universal Distribution
(UD) plus ASSA (absolute self selection assumption).

Traditional philosophy distinguished between ontology, the study of the
nature of reality, and epistemology, which examines our relation to and
understanding of the world.  I can adopt this distinction and say that
the UD is the ontology, and that the epistemology is roughly the ASSA.
As you will see, my ontology is stronger than my epistemology.

[Note, UD is often used to mean Universal Dovetailer, a different
concept.]

For the ontology, the UD is a probability distribution over information
objects (i.e. information patterns) which I assume is the fundamental
system of measure in the multiverse.  It is defined with respect to an
arbitrary Universal Turing Machine (UTM) and basically is defined as
the fraction of all possible input program strings that produce that
information pattern as output.
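
A toy illustration of "fraction of input programs that produce the pattern"
(with a deliberately trivial stand-in for the UTM, chosen only so that many
programs map to the same output and the tally is computable; a genuine
universal distribution is not, as noted further below):

    from collections import Counter
    from itertools import product

    def toy_machine(program):
        # Stand-in "machine": its output is just the number of 1 bits.
        return sum(program)

    def toy_measure(length=12):
        # Measure of each output = fraction of all length-12 programs
        # that produce it.
        outputs = Counter(toy_machine(p) for p in product([0, 1], repeat=length))
        return {out: n / 2 ** length for out, n in sorted(outputs.items())}

    print(toy_measure())   # mid-sized outputs get far more measure than 0 or 12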

I am therefore implicitly assuming that only information objects exist.
Among the information objects are integers, universes, computer programs,
program traces (records of executions), observers, and observer-moments.

The UD is an attractive choice because it is dominant, meaning that it
is asymptotically within a constant factor of any other distribution,
including UD's defined by other UTMs.  This is why it is called
universal.  It is often considered the default probability distribution
when no information is available.  This makes it a natural choice, perhaps
the only natural choice, for a distribution over information objects.

The UD defines a probability or measure for every information object.
This is the basic ontology which I assume exists.  It is the beginning
and ending of my ontology.

A few additional points are worth making.  Time does not play a
significant role in this model.  An information object may or may not
include a time element.  Time is merely a type of relationship which
can exist among the parts of the information object, just as space is
another type.  In relativity theory, time is different from space in
the sign (positive/negative) by which its effects are made known on the
metric.

Among universes, some may have a time dimension, some may not; some
may have more than one dimension of time.  Similarly, they could have
different dimensions of space, or perhaps fractal dimensions.

Observers are by definition information systems that are similar to us,
and since time is intimately bound up in our perception of the world,
observers will be information objects which do include a time element.

It is also worth noting that the UD measure is non-computable.  However
it can in practice be approximated, and that seems good enough for my
purposes.

Another point relates to the question of copies.  One way to interpret
the UD is to imagine infinite numbers of UTMs operating on all possible
programs.  The measure of an object is the fraction of the UTMs which
output that object.  This inherently requires that copies count, even
exact copies.  The more copies of an information object are created, the
more measure it has.

A final point: I strongly suspect that the biggest contribution to the
measure of observers (and observer-moments) like our own will arise from
programs which conceptually have two parts.  The first part creates a
universe similar to the one we see where the observers evolve, and the
second part selects the observer for output.  I argued before that each
part can be relatively small compared to a program which was hard-wired
to produce a specific observer and had all the information necessary to
do so.  Small programs have greater measure (occupy a greater fraction
of possible input strings) hence this would be the main source of measure
for observers like us.


For the epistemology, we need some way to relate this definition of
measure to our experience of the world.  This is necessary to give the
theory grounding and enable it to make predictions and explanations.
What we want is to be able to explain things by arguing that they
correspond to high-measure information patterns.  We also want to be
able to make predictions by saying that higher measure outcomes are
more likely than lower measure ones.  To achieve this I want to adopt
a relatively vague statement like:

You are more likely to be a high measure information object.

Obviously this statement raises many questions.  It seems to suggest
that you might be a table, or the number 3.  It also has problems
with the passage of time.  When are you a given information object?
Are you first one and then another?  If you start off as one, do you
stay the same?

I am not necessarily prepared to fully explain and answer all of these
problems.  At this point I am trying to keep to the big picture.  Objects
have measure, and 

Re: where do copies come from?

2005-07-10 Thread Stephen Paul King

Dear Johnathan,

   I find this idea to be very appealing! It seems to imply that 
consciousness per se has more to do with the attractor in state space 
than with any particular tableau of neuron firings.
   This, of course, would not sit well with the material eliminativists, who 
would be forced to extend the same ontological status that we extend to flesh 
and blood and hardware and electromagnetic fields to such entities as strange 
attractors! ;-)


http://www.newdualism.org/papers/M.Robertson/churchl.pdf

Kindest regards,

Stephen

- Original Message - 
From: Johnathan Corgan [EMAIL PROTECTED]

To: Stathis Papaioannou [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; everything-list@eskimo.com
Sent: Sunday, July 10, 2005 10:48 PM
Subject: Re: where do copies come from?



Stathis Papaioannou wrote:

It is likely that multiple error correction and negative feedback systems 
are in place to ensure that small changes are not chaotically amplified 
to cause gross mental changes after a few seconds,


On the other hand, the above may be precisely how consciousness operates!

Picture a system that traverses through many different states as chaotic 
attractor cycles, and outside stimuli act to nudge the system between 
grossly different chaotic attractors.  You have a system that needs to be 
exquisitely tuned to subtle input changes, yet also robust in the face of 
other types of changes (damage, etc.)


In the brain, these state trajectories would be neuronal firing patterns 
and synaptic chemical gradients.  Determining the chaotic attractors 
themselves would be neuronal morphology and ion channel types and 
locations.


The short-term information about a brain might not need to be stored in 
order to reconstruct a brain.  That is, individual neuron on-off states 
and synaptic chemical gradients may be how you feel and what you are 
thinking this moment--but discarding (or not measuring) this info might 
only mean the reconstructed brain would start from some blank state. 
Chaotic attractor dynamics would pull the system into one of the 
aforementioned chaotic cycles and the system as a whole would eventually 
recreate the short-term firing patterns and chemical gradients needed for 
normal functioning.


(The above might be wrong in particulars, but I strongly suspect the 
concept of small changes perturbing a chaotic system to shift between 
chaotic attractors will play a role in the ultimate explanation of how 
neuronal processes give rise to conscious experience.)


-Johnathan





RE: The Time Deniers

2005-07-10 Thread Stathis Papaioannou

I wasn't very clear in my last post. What I meant was this:

(a) A conscious program written in C is compiled on a computer. The C 
instructions are converted into binary code, and when this code is run, the 
program is self-aware.


(b) The same conscious program is written in some idiosyncratic programming 
language, created by a programmer who has since died. He has requested in 
his will that the program be compiled, then all copies of the compiler and 
all the programmer's notes be destroyed before the program is run. Once 
these instructions are carried out, the binary code is run, and the program 
is self-aware as before - although it is difficult or impossible for an 
outsider to work out what is going on.


(c) A random string of binary code is run on a computer. There exists a 
programming language such that, when the same program as in (a) and (b) is 
written in it and then compiled, the binary code so produced is identical to 
this random string.


Is this nonsense? Is (c) fundamentally different from (b)? If not, doesn't 
it mean that any random string implements any program? We might not know 
what it says, but if the program is self-aware, then by definition *it* 
knows.


--Stathis Papaioannou



From: Lee Corbin [EMAIL PROTECTED]
Reply-To: [EMAIL PROTECTED]
To: everything-list@eskimo.com
Subject: RE: The Time Deniers
Date: Fri, 8 Jul 2005 15:42:49 -0700

Stathis writes

 Lee Corbin writes:

  But it is *precisely* that I cannot imagine how this stack of
  Life gels could possibly be thinking or be conscious that forces
  me to admit that something like time must play a role.
 
  Here is why: let's suppose that your stack of Life boards does
  represent each generation of Conway's Life as it emulates a
  person. If a stack of gels like this amounts to the conscious
  experience of an entity, then it certainly wouldn't hurt to move
  them farther apart... Next, we alter the orientations of the gels...
 
  So, for me, since it is absurd to think that either vibrating
  bits of matter (an example Hal Finney quotes) or random patches
  of dust (Greg Egan's theory of Dust) can actually give runtime
  to entities, then I have to draw the line somewhere. Where I
  have always chosen is this: if states, no matter now represented,
  are not causally connected with each other, consciousness does
  not obtain.

 If you remember Egan's dust theory in Permutation City, you probably also
 remember that he did the same manipulations of a computation running in time
 as you suggest doing with the Life board stacks in space. Do you not think a
 computation would work if chopped up in this way?

If you are speaking of the earlier part of the Greg Egan novel
(which I claim to entirely understand) then no, he did not isolate
a person's experiences down to *instants*.  He would run a minute's
worth now, a minute's worth then, and mix them up in order.

But!  The only causal discontinuities were *between* the successive
sessions (each session at least a minute long---but I'd be happy
with a millisecond long).

 The idea that any computation can be implemented by any random process,
 given an appropriate programming language (which might be a giant lookup
 table, mapping [anything] -> [line of code]) is generally taken as being
 self-evidently absurd.

Not sure I understand. Since you are talking about a *process*,
then for my money we're already half-way there! (I.e., the
Time Deniers have not struck.)  Suppose that we have a trillion
by trillion Life Board and the program randomly assigns pixels
for each generation. Then, yes, I guess I agree with you: we
have achieved nothing: the random states are admittedly connected
by causal processes (your machine is an ordinary causal process
operating in *time*), but nothing intelligent is being implemented.
It's not even implementing a wild rain-storm.

(Of course, the Time Deniers, as I understand them, would be
perfectly happy to let this machine run for 10^10^200 years,
and then identify (pick out) a sequence of apparently related
states, in fact, a sequence that seemed to be you or me having
a conscious experience. They'd be quite happy (many of them
at least) to say that once again Stathis or Lee had been
implemented in the universe and had had some conscious
experience (i.e. OMs).)

 The argument goes that the information content
 of the programming language must contain all the information the random
 system is supposed to be producing, so this system is actually superfluous.

 This means we have won no computational benefit by setting up this odd
 machine.

I'm following so far.

 However, the programming language is only there so that the machine
 can interact with the environment. If there is no programming language
 and no I/O, the machine can be a complete solipsist.

You've lost me, sorry. Could you explain what you mean and
where you are going here?

 This might occur also if
 some future archaeologist finds an