RE: Pareto laws and expected income

2005-06-21 Thread Jonathan Colvin
Russell Standish wrote:
> I retract an earlier agreement with Jonathon that the 
> expected income argument is the same as the "Why I am not 
> Chinese argument". They are not, for the simple reason 
> that one's income does not affect one's chances of birth (if 
> there is any effect, it would be a negative one with your 
> parents' wealth). In the "Why am I not Chinese" argument, the 
> population of your country of birth is indexical.
> 
> My expected income is a complicated function of my life's 
> history; it may depend on things like my innate 
> intelligence and my parents' wealth, but the problem is so 
> multifactorial that it is inappropriate to use anthropic reasoning.

My consciousness (or degree of such) is a complicated function of my
evolutionary history, but the problem is so multifactorial it is
inappropriate to use anthropic reasoning.

> 
> What it does show is what an ass the ASSA is. It is 
> unreasonable to suppose that my current wealth is sampled 
> randomly from the distribution of all wealths (a Pareto 
> distribution like P(x)=x^a, for some a).

Why is it any more unreasonable than supposing that your birth rank is
sampled randomly from the distribution of all birth ranks?

> 
> A more interesting point that Jonathon Colvin could have made 
> was questioning why one's IQ is so high. It is a reasonable 
> speculation that the IQ of people on this list would usually 
> be far above average (IQ=100 by definition). Of course, that 
> is selective effects of this list. But one can also ask why 
> in anthropic reasoning is my IQ in the top part of the 
> distribution (I don't know my IQ, but I'm sure I'm in the 
> tail :). The answer is that if you consider all possible 
> congenital characteristics (eg country of birth, parents' 
> wealth, intelligence, skill at playing ball, etc.), there is 
> very likely one or two characteristics that are extreme. In my 
> case it happens to be intelligence. My family's income was 
> below average (for Australia that is, but probably more on a 
> par with world average, actually)

If you consider all possible personal characteristics, there is very likely
one or two characteristics that are extreme. In my case it happens to be
birth rank.

> 
> So that is not so strange really. However, when it comes to 
> sampling indexical quantities (eg birth rates or population 
> sizes), anthropic arguments take on a particular force, 
> compared with sampling non-indexical quantities.

Why do anthropic arguments suddenly take on such force when sampling
indexical quantities?

Jonathan Colvin



RE: Reference class (was dualism and the DA)

2005-06-21 Thread Jonathan Colvin
Russell Standish wrote:
> > I'd be interested to hear it. Here's something else you could look 
> > at...calculate the median annual income for all humans alive today 
> > (I believe it is around $4,000/year), compare it to your own, and 
> > see if you are anywhere near the median. I predict that the answer 
> > for you (and for anyone else reading this) is far from the median. 
> > This result is obviously related to the "why you are not Chinese" 
> > criticism, and is,
> 
> Yes, it is. Incomes follow a Pareto law, which is another one 
> of these power laws (although I remember a recent paper that 
> indicated the rich part of the curve had a different law). It 
> may even be exactly 1/x, in which case one's income could be 
> anything! However, I'd need to look up the relevant papers. 
> Comparing things to medians is _not_ relevant.

Ok, not the median then. I believe, if you plot a graph of worldwide income
levels, you get a semi-hockey-stick-like curve. My prediction is that you,
me, and anyone else reading this are on the far right of the stick (if the
handle is at the left). If we accept the DA, shouldn't we be randomly
distributed across it?
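For what it's worth, the shape described above is easy to see in a toy simulation. All the numbers below are illustrative assumptions of mine, not figures from the thread: the exponent, the $1,000 income floor, and the $40,000 "reader income".

```python
import random

# Sketch: sample "incomes" from a Pareto power law and see where a
# typical first-world income falls. alpha, scale and the $40,000 figure
# are assumptions for illustration, not real-world data.
random.seed(0)
alpha, scale = 1.5, 1000.0  # power-law exponent and minimum income
incomes = sorted(scale * random.paretovariate(alpha) for _ in range(100_000))

median = incomes[len(incomes) // 2]
rich = 40_000.0  # an assumed typical first-world income
percentile = sum(x < rich for x in incomes) / len(incomes)

print(f"median  ~ {median:,.0f}")
print(f"{rich:,.0f} sits around the {percentile:.1%} percentile")
```

With these assumptions a first-world income lands beyond the 99th percentile, i.e. on the far right of the hockey stick, while the median sits near the floor.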

Jonathan Colvin

> 
> > I believe, the reason the DA goes astray.

> No it's not! Working with actual distributions solves these 
> counter arguments (or at least seems to).





Re: Pareto laws and expected income

2005-06-21 Thread Russell Standish
On Tue, Jun 21, 2005 at 12:00:38AM -0700, Jonathan Colvin wrote:
> Russell Standish wrote:
> 
> My consciousness (or degree of such) is a complicated function of my
> evolutionary history, but the problem is so multifactorial it is
> inappropriate to use anthropic reasoning.

Nonsense. You are either conscious, in which case you will observe
something, or you are not, in which case you don't. This is a simple
two-state logic.

> 
> > 
> > What it does show is what an ass the ASSA is. It is 
> > unreasonable to suppose that my current wealth is sampled 
> > randomly from the distribution of all wealths (a Pareto 
> > distribution like P(x)=x^a, for some a).
> 
> Why is it any more unreasonable than supposing that your birth rank is
> sampled randomly from the distribution of all birth ranks?

Because current observer moments are dependent on previous observer
moments. Births are not.

For example, one's income tends to be positively correlated with age
(until age of retirement, that is, when the trend reverses).

> 
> If you consider all possible personal characteristics, there is very likely
> one or two characteristics that are extreme. In my case it happens to be
> birth rank.

What evidence do you advance for this? If true, this would be remarkable.

> 
> > 
> > So that is not so strange really. However, when it comes to 
> > sampling indexical quantities (eg birth rates or population 
> > sizes), anthropic arguments take on a particular force, 
> > compared with sampling non-indexical quantities.
> 
> Why do anthropic arguments suddenly take on such force when sampling
> indexical quantities?
> 
> Jonathan Colvin

Think about it! A 1/f law for country populations means anthropic
considerations do not constrain which country you might be born in,
yet a 1/f law in incomes implies you're likely to be born into a
poor family.

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type "application/pgp-signature". Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish          Phone 8308 3119 (mobile)
Mathematics                      0425 253119 (")
UNSW SYDNEY 2052                 [EMAIL PROTECTED]
Australia                        http://parallel.hpc.unsw.edu.au/rks
International prefix +612, Interstate prefix 02





Doomsday and computational irreducibility

2005-06-21 Thread Jonathan Colvin
A new (at least I think it is new) objection to the DA just occurred to me
(googling computational + irreducibility +doomsday came up blank).

This objection (unfortunately) requires a few assumptions:

1) No "block" universe (ie. the universe is a process).

2) Wolframian computational irreducibility ((2) may be a consequence of (1)
under certain other assumptions)

3) No backwards causation.

The key argument is that by 1) and 2), at time T, the state of the universe
at time T+x is in principle un-knowable, even to the universe itself.

Thus, at this time T (now), nothing, not even the universe itself, can know
whether the human race will stop tomorrow, or continue for another billion
years.

To accept the DA under these conditions requires accepting backwards
causation; that the probability for my existence must depend on a fact
determined in the future.

If we wish to accept the DA is possible, we must deny at least one of the
above three. 

The following thought experiment illustrates the argument:

Imagine I know that I am one of the first ten humans that God has created (I
know this, because God has told me). I also know that God is in the process
of tossing a coin (say the coin is tumbling end-over-end but has not landed
yet). Neither God (nor the universe, if the two are not equivalent) knows
what the result will be before it occurs (it is unknowable by (2)). If the
coin comes up heads, God will create a billion more people (so she says,
anyway). If the coin comes up tails, God will create no more people. Should
I be able to predict that the coin will come up heads?

I'd argue that given 1), 2) and 3), our situation is analogous to the above;
and that no, contra the DA, we should not be able to predict that the coin
will come up heads.
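For comparison, here is the update the DA would make in this story, sketched with Bayes' theorem under a self-sampling assumption. This is a toy calculation, not an endorsement; the equal prior on the two outcomes is my assumption.

```python
from fractions import Fraction

# The Doomsday-style Bayesian update: equal priors on "a billion more
# people" (heads) vs "no more people" (tails), and a self-sampling
# likelihood of finding yourself among the first 10 humans.
first = 10
n_many = 10 + 10**9  # heads: a billion more people are created
n_few = 10           # tails: no more people

prior = Fraction(1, 2)
like_many = Fraction(first, n_many)  # chance of a rank <= 10 under "many"
like_few = Fraction(first, n_few)    # = 1 under "few"

post_many = prior * like_many / (prior * like_many + prior * like_few)
print(float(post_many))  # the DA's posterior probability of heads
```

The posterior for heads comes out around 10^-8: the DA, taken at face value, says the coin will almost certainly land tails, which is exactly the kind of prediction the objection above says should be unavailable.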

It's not a slam-dunk against the DA, because 1), 2) and 3) are far from
uncontroversial; but it *seems* to be a hit. Then again, the DA is a
slippery weasel and I expect there's probably some counterargument
somewhere. Perhaps God can change her mind?

Jonathan Colvin






Re: death

2005-06-21 Thread Bruno Marchal


On 20 June 2005, at 18:16, Hal Finney wrote:


Bruno Marchal writes:

On 19 June 2005, at 15:52, Hal Finney wrote:


I guess I would say, I would survive death via anything that does not
reduce my measure.


But if the measure is absolute and is bearing on the OMs, and if that
is only determined by their (absolute) Kolmogorov complexity (modulo a
constant) associated to the OM ("how" is still a mystery for me(*)),

how could anything change the measure of an OM?


That's true, from the pure OM perspective "death" doesn't make sense
because OMs are timeless.  I was trying to phrase things in terms of
the observer model in my reply to Stathis.  An OM wants to preserve
the measure of the observer that it is part of, due to the effects of
evolution.  Decreases in that measure would be the meaning of death,
in the context of the multiverse.



I will keep reading your posts hoping to make sense of it. Still, I was 
about to ask you whether you were assuming the "multiverse context" or 
whether you were hoping to extract (like me) the multiverse itself from 
the OMs. In which case, the current answer seems still rather hard to 
follow. Then in another post you just say:



It's a bit hard for me to come up with a satisfactory answer to this 
problem, because I don't start from the assumption of a physical 
universe at all--like Bruno, I'm trying to start from a measure on 
observer-moments and hope that somehow the appearance of a physical 
universe can be recovered from the subjective probabilities 
experienced by observers



And this answers the question. I am glad of your  interest in the 
possibility to explain the universe from OMs, but then, as I said I 
don't understand how an OM could change its measure. What is clear for 
me is that an OM (or preferably a 1-person, an OM being some piece of 
the 1-person) can change its *relative* measure (by decision, choice, 
will, etc.) of its possible next OMs.


Bruno

http://iridia.ulb.ac.be/~marchal/




Re: Conscious descriptions

2005-06-21 Thread Russell Standish
On Mon, Jun 20, 2005 at 11:40:03AM +0200, Bruno Marchal wrote:
> 
> On 17 June 2005, at 07:19, Russell Standish wrote:
> 
> >Hmm - this is really a definition of a universal machine. That such a
> >machine exists is a theorem. Neither depend on the Church-Turing
> >thesis, which says that any "effective" computation can be done using
> >a Turing machine (or recursive function, or equivalent). Of course the
> >latter statement can be considered a definition, or a formalisation,
> >of the term "effective computation".
> 
> Hmm - I disagree. Once you give a definition of what a Turing machine 
> is, or of what a Fortran program is, then it is a theorem that a 
> universal Turing machine exists and that a universal Fortran program 
> exists. To say that a universal machine exists, computing by definition 
> *all* computable functions, without any "Turing" or "Fortran" 
> qualification, you need Church's thesis.
> 
> Bruno
> 
> 
> http://iridia.ulb.ac.be/~marchal/
> 

From Li & Vitanyi:

Church's thesis: The class of algorithmically computable numerical
functions (in the intuitive sense) coincides with the class of partial
recursive functions

Turing's thesis: Any process that can be naturally called an effective
procedure is realized by a Turing machine.

Both of these are really a definition of what it means to call an
algorithm "effective".

Theorem: the class of Turing machines corresponds to the class of
partial recursive functions. Consequently, both theses are equivalent.

Theorem: The class of Fortran machines corresponds to the class of
Turing machines. (I don't think this is proved in Li & Vitanyi, but
I'm sure it is proved somewhere. It is clearly not a consequence of
the Church-Turing thesis).

Theorem: There exist formalisable processes that aren't simulable by
Turing machines. Such processes are called hypermachines. See Toby
Ord, math.LO/0209332

Conjecture: All physical processes can be simulated by a Turing
machine. I suspect this is false, and point to beta decay as a
possible counter example.

Conjecture: All "harnessable" physical processes can be simulated by a
Turing machine. By harnessable, we mean exploited for performing some
computation. I suspect this is true. Machines with random oracles with
computable means only compute the same class of functions as do Turing
machines. (classic result by de Leeuw et al. in 1956)

So, no I don't think the Turing thesis is needed for a universal
machine.
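To illustrate the point that universality is a theorem once the machine model is fixed, here is a toy sketch of my own (not from Li & Vitanyi): one fixed Python function that runs any Turing-machine rule table handed to it.

```python
# A toy "universal machine": one fixed interpreter that runs any
# Turing-machine description it is given. That such an interpreter can
# be written is a theorem about the model; no appeal to the
# Church-Turing thesis is needed. The example machine (a unary
# incrementer) is purely illustrative.

def run_tm(rules, tape, state="start", steps=10_000):
    """rules: {(state, symbol): (new_state, write, move)}, move in {-1, 0, +1}."""
    cells = dict(enumerate(tape))  # sparse tape, blank symbol "_"
    pos = 0
    for _ in range(steps):
        if state == "halt":
            break
        sym = cells.get(pos, "_")
        state, write, move = rules[(state, sym)]
        cells[pos] = write
        pos += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Unary successor: scan right past the 1s, append one more 1, halt.
succ = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt", "1", 0),
}
print(run_tm(succ, "111"))  # -> 1111
```

The interpreter is "universal" for the toy model in the sense that the single function run_tm executes any rule table; nothing about what counts as "effective" in the intuitive sense is invoked.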

Cheers

-- 





Torture yet again

2005-06-21 Thread Jonathan Colvin
Sorry, I can't let go of this one. I'm trying to understand it
psychologically.

Here's another thought experiment which is roughly equivalent to our
original scenario.

You are sitting in a room, with a not very nice man.

He gives you two options.

1) He'll toss a coin. Heads he tortures you, tails he doesn't.

2) He's going to start torturing you a minute from now. In the meantime, he
shows you a button. If you press it, you will get scanned, and a copy of you
will be created in a distant town. You've got a minute to press that button
as often as you can, and then you are getting tortured.

What are you going to choose (Stathis and Bruno)? Are you *really* going to
choose (2), and start pressing that button frantically? Do you really think
it will make any difference? 

I'm just imagining having pressed that button a hundred times. Each time I
press it, nothing seems to happen. Meanwhile, the torturer is making his
knife nice and dull, and his smile grows ever wider.

Cr^%^p, I'm definitely choosing (1).

Ok, sure, each time I press it, I also step out of a booth in Moscow,
relieved to be pain-free (shortly to be followed by a second me, then a
third, each one successively more relieved.) But I'm still choosing (1). 

Now, the funny thing is, if you replace "torture" by "getting shot in the
head", then I will pick (2). That's interesting, isn't it?

Jonathan Colvin



Re: Torture yet again

2005-06-21 Thread Eugen Leitl
On Tue, Jun 21, 2005 at 04:05:02AM -0700, Jonathan Colvin wrote:

> Now, the funny thing is, if you replace "torture" by "getting shot in the
> head", then I will pick (2). That's interesting, isn't it?

Why is that interesting? It's indistinguishable from a teleportation
scenario.

-- 
Eugen* Leitl  http://leitl.org
__
ICBM: 48.07100, 11.36820  http://www.leitl.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE




Re: What is an observer moment?

2005-06-21 Thread Bruno Marchal

On 21 June 2005, at 05:33, George Levy wrote:

 An interesting thought is that a psychological first person can surf simultaneously through a large number of physical OMs

With comp, we should say that the first person MUST surf simultaneously through an INFINITY of third person OMs.

(I would not use the term "physical" at all, because at this stage it is not defined. But with the negation of comp + assumption of slightly incorrect QM what you say seems to me  plausible.)

Bruno


http://iridia.ulb.ac.be/~marchal/


Re: death

2005-06-21 Thread Jesse Mazer

Bruno Marchal wrote:


Then in another post you just say:


It's a bit hard for me to come up with a satisfactory answer to this 
problem, because I don't start from the assumption of a physical universe 
at all--like Bruno, I'm trying to start from a measure on observer-moments 
and hope that somehow the appearance of a physical universe can be 
recovered from the subjective probabilities experienced by observers



And this answers the question.


That was actually me who wrote that, not Hal Finney. But in reply to that 
comment, Hal wrote:



I have a similar perspective.  However I think it will turn out that the
simplest mathematical description of an observer-moment will involve a Big
Bang.  That is, describe a universe, describe natural laws, and let the
OM evolve.  This is the foundation for saying that the universe is real.


Jesse




Re: death

2005-06-21 Thread "Hal Finney"
Bruno Marchal writes:
> On 20 June 2005, at 18:16, Hal Finney wrote:
> > That's true, from the pure OM perspective "death" doesn't make sense
> > because OMs are timeless.  I was trying to phrase things in terms of
> > the observer model in my reply to Stathis.  An OM wants to preserve
> > the measure of the observer that it is part of, due to the effects of
> > evolution.  Decreases in that measure would be the meaning of death,
> > in the context of the multiverse.
>
> I will keep reading your posts hoping to make sense of it. Still, I was
> about to ask you whether you were assuming the "multiverse context" or
> whether you were hoping to extract (like me) the multiverse itself from
> the OMs. In which case, the current answer seems still rather hard to
> follow.

I was trying to use Stathis' terminology when I wrote about the
probability of dying.  Actually I am now trying to use the ASSA and I
don't have a very good idea about what it means to specify a subjective
next moment.  I think ultimately it is up to each OM as to what it views
as its predecessor moments, and perhaps which ones it might like to
consider its successor moments.

Among the problems: substantial, short-term mental changes might be
so great that the past OM would not consider the future OM to be the
same person.  This sometimes even happens with our biological bodies.
I can easily create thought experiments that bend the connections beyond
the breaking point.  There appears to be no bright line between the
degree to which a past and future OM can be said to be the same person,
even if we could query the OM's in question.

Another problem: increases in measure from a past OM to a future OM.
We can deal with decreases in measure by the traditional method of
expected probability.  But increases in measure appear to require
probability > 1.  That doesn't make sense, again causing me to question
the whole idea of a subjective probability distribution over possible
next moments.


> Then in another post you just say:
>
> > It's a bit hard for me to come up with a satisfactory answer to this
> > problem, because I don't start from the assumption of a physical
> > universe at all--like Bruno, I'm trying to start from a measure on
> > observer-moments and hope that somehow the appearance of a physical
> > universe can be recovered from the subjective probabilities
> > experienced by observers

Actually I didn't write this, Jesse Mazer did.  But I do largely agree
with this approach, and I wrote the reply:

I have a similar perspective.  However I think it will turn out that the
simplest mathematical description of an observer-moment will involve a Big
Bang.  That is, describe a universe, describe natural laws, and let the
OM evolve.  This is the foundation for saying that the universe is real.


> And this answers the question. I am glad of your interest in the
> possibility to explain the universe from OMs, but then, as I said I
> don't understand how an OM could change its measure. What is clear for
> me is that an OM (or preferably a 1-person, an OM being some piece of
> the 1-person) can change its *relative* measure (by decision, choice,
> will, etc.) of its possible next OMs.

The OM can change the universe, and this will include changing the measure
of many people's future OMs.  Wei Dai, in whose footsteps I largely
travel, finally decided that *any* philosophy for an OM was acceptable,
and its only task was to optimize the multiverse to suit its preferences.
This does not require that we introduce a subjective probability for
measure of next OM, but it can allow OMs to think that way.  If the
current OM has an interest in certain OMs, the ones it chooses to call its
"next OMs", and it wants to adjust the relative measure of those OMs to
suit its tastes, that can be accommodated in this very general model.

Hal Finney



Re: Torture yet again

2005-06-21 Thread "Hal Finney"
Jonathan Colvin writes:
> You are sitting in a room, with a not very nice man.
>
> He gives you two options.
>
> 1) He'll toss a coin. Heads he tortures you, tails he doesn't.
>
> 2) He's going to start torturing you a minute from now. In the meantime, he
> shows you a button. If you press it, you will get scanned, and a copy of you
> will be created in a distant town. You've got a minute to press that button
> as often as you can, and then you are getting tortured.

I understand that you are trying to challenge this notion of "subjective
probability" with copies.  I agree that it is problematic.  IMO it is
different to make a copy than to flip a coin -  different operationally,
and different philosophically.

What you need to do is to back down from subjective probabilities and
just ask it like this: which do you like better, a universe where there
is one of you who has a 50-50 chance of being tortured; or a universe
where there are a whole lot of you and one of them will be tortured?
Try not to think about which one "you" will be.  You will be all of them.
Think instead about the longer term: which universe will best serve your
needs and desires?

There is an inherent inconsistency in this kind of thought experiment
if it implicitly assumes that copying technology is cheap, easy and
widely available, and that copies have good lives.  If that were the
case, everyone would use it until there were so many copies that these
properties would no longer be true.

It is important in such experiments to set up the social background in
which the copies will exist.  What will their lives be like, good or
bad?  If copies have good lives, then copying is normally unavailable.
In that case, the chance to make copies in this experiment may be a
once-in-a-lifetime opportunity.  That might well make you be willing to
accept torture of a person you view as a future self, in exchange for
the opportunity to so greatly increase your measure.

OTOH if copying is common and most people don't do it because the future
copies will be penniless and starve to death, then making copies in this
experiment is of little value and you would not accept the greater chance
of torture.

This analysis is all based on the assumption that copies increase measure,
and that in such a world, observers will be trained that increasing
measure is good, just as our genes quickly learned that lesson in a
world where they can be copied.

Hal Finney



Re: another puzzzle

2005-06-21 Thread daddycaylor

Stathis wrote:>To summarise my position, it is this: the measure of an observer moment is relevant when a given observer is contemplating what will happen next...  Now, minimising acronym use, could you explain what your understanding is of how measure changes with number of copies of an OM which are instantiated, and if it doesn't, then how does it change, and when you use it in calculating how someone's life will go from OM to OM.  
Jesse wrote:> Well, see my last response to Hal Finney...  The measure on the set of all unique observer-moments is really the fundamental thing, physical notions like "number of copies" are secondary. But I have speculated on the "anticipatory" idea where multiple copies affect your conditional probabilities to the extent that the copies are likely to diverge in the future; so in your example, as long as those 10^100 copies are running in isolated virtual environments and following completely deterministic rules, they won't diverge, so my speculation is that the absolute and relative measures would not be affected in any way by this...  There is the question of what it is, exactly, that's supposed to be moving between OMs, and whether this introduces some sort of fundamental duality into my picture of reality...
 
So if the copies are completely synchronized, this puzzle is a no-brainer (easy).  But what about if one of the neurons in one of the copies does a little jig of its own for a second?
 
More in general, I'm doubting the legitimacy of the puzzle in the first place:  If, in your theory, measure really corresponds to the probability of having a next observer moment, and then you bring God into the picture and have him totally mess up the probabilities by doing what he wants, how are you going to conclude anything meaningful as a continuation of your definition of measure?  The flip side of the coin is that apparently the probability of having a next OM is 100% ("everything exists").  In this theory, no matter what God does with 10^100 copies, there are 10^100^n other identical next OMs out there to replace them. It seems like what I've seen so far on this list is an exercise in forgetting that "everything exists" for a moment to do a thought experiment to conclude more about "everything exists".
 
Tom Caylor
 


Re: Torture yet again

2005-06-21 Thread Bruno Marchal


On 21 June 2005, at 13:05, Jonathan Colvin wrote:



Sorry, I can't let go of this one. I'm trying to understand it
psychologically.

Here's another thought experiment which is roughly equivalent to our
original scenario.

You are sitting in a room, with a not very nice man.

He gives you two options.

1) He'll toss a coin. Heads he tortures you, tails he doesn't.

2) He's going to start torturing you a minute from now. In the 
meantime, he
shows you a button. If you press it, you will get scanned, and a copy 
of you
will be created in a distant town. You've got a minute to press that 
button

as often as you can, and then you are getting tortured.

What are you going to choose (Stathis and Bruno)? Are you *really* 
going to
choose (2), and start pressing that button frantically? Do you really 
think

it will make any difference?



I will choose 2, and most probably start pressing the button 
frantically.  Let us imagine that I press the button 64 times.
The one who will be tortured is rather unlucky; he has a 1/2^64 chance 
to "stay" in front of you.  He will probably even infer the falsity of 
comp, but then you will kill him!
The 63 other "brunos" will infer comp is true, and send 63 more 
arguments for it to the list, including the argument based on having 
survived your experiment!


OK with the number?
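The number can be sketched both ways (my arithmetic, not from the thread; which reading is right is exactly what is in dispute):

```python
from fractions import Fraction

presses = 64

# Reading 1: each press splits "you" in two, so the branch that never
# leaves the room has measure (1/2)^64.
measure_stay = Fraction(1, 2) ** presses

# Reading 2: each press makes one extra copy, so 64 presses leave 65
# people in total, exactly one of whom is still in the room.
headcount_stay = Fraction(1, presses + 1)

print(measure_stay)    # 1/2**64
print(headcount_stay)  # 1/65
```

On the first reading the stayer's measure is vanishingly small; on the second it is merely 1/65, which is part of why the list disagrees about how much the button helps.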

Bruno



http://iridia.ulb.ac.be/~marchal/




Re: Conscious descriptions

2005-06-21 Thread Bruno Marchal


On 21 June 2005, at 12:28, Russell Standish wrote:


On Mon, Jun 20, 2005 at 11:40:03AM +0200, Bruno Marchal wrote:


On 17 June 2005, at 07:19, Russell Standish wrote:


Hmm - this is really a definition of a universal machine. That such a
machine exists is a theorem. Neither depend on the Church-Turing
thesis, which says that any "effective" computation can be done using
a Turing machine (or recursive function, or equivalent). Of course the
latter statement can be considered a definition, or a formalisation,
of the term "effective computation".


Hmm - I disagree. Once you give a definition of what a Turing machine
is, or of what a Fortran program is, then it is a theorem that a
universal Turing machine exists and that a universal Fortran program
exists. To say that a universal machine exists, computing by definition
*all* computable functions, without any "Turing" or "Fortran"
qualification, you need Church's thesis.

Bruno


http://iridia.ulb.ac.be/~marchal/



From Li & Vitanyi:

Church's thesis: The class of algorithmically computable numerical
functions (in the intuitive sense) coincides with the class of partial
recursive functions


OK.



Turing's thesis: Any process that can be naturally called an effective
procedure is realized by a Turing machine.


Not OK. Please give me the page.



Both of these are really a definition of what it means to call an
algorithm "effective".



By who? Effective has more than one meaning in logic.




Theorem: the class of Turing machines corresponds to the class of
partial recursive functions. Consequently, both theses are equivalent.


OK.




Theorem: The class of Fortran machines corresponds to the class of
Turing machines.


OK.



(I don't think this is proved in Li & Vitanyi, but
I'm sure it is proved somewhere. It is clearly not a consequence of
the Church-Turing thesis).



It depends on the context, but you can prove it without Church's thesis.




Theorem: There exist formalisable processes that aren't simulable by
Turing machines. Such processes are called hypermachines. See Toby
Ord, math.LO/0209332



It is an "obvious" consequence of Church's thesis. See the 
"diagonalisation posts" in my url.






Conjecture: All physical processes can be simulated by a Turing
machine. I suspect this is false, and point to beta decay as a
possible counter example.


OK. I even claim it is provably false with comp. You cannot simulate 
a priori all 1-person continuations generated by the UD in one stroke, 
as the first person lives it from his point of view, given that the 
first person is unaware of the number of steps the UD computes to get 
to it, in its dovetailing way.





Conjecture: All "harnessable" physical processes can be simulated by a
Turing machine. By harnessable, we mean exploited for performing some
computation. I suspect this is true.


I don't understand.



Machines with random oracles with
computable means only compute the same class of functions as do Turing
machines. (classic result by de Leeuw et al. in 1956)


OK. Without computable means, random oracles make them more powerful
(Kurtz and Smith).




So, no I don't think the Turing thesis is needed for a universal
machine.



I still disagree. I will say more but I have a meeting now.

Bruno

http://iridia.ulb.ac.be/~marchal/




Re: Dualism and the DA

2005-06-21 Thread Pete Carlton
On Jun 20, 2005, at 10:44 AM, Hal Finney wrote:

> Pete Carlton writes:
> > -- we don't need to posit any kind of dualism to paper over it, we
> > just have to revise our concept of "I".
>
> Copies seem a little more problematic.  We're pretty cavalier about
> creating and destroying them in our thought experiments, but the social
> implications of copies are enormous and I suspect that people's views
> about the nature of copying would not be as simple as we sometimes
> assume.  I doubt that many people would be indifferent between the
> choice of having a 50-50 chance of being teleported to Moscow or
> Washington, vs having copies made which wake up in both cities.  The
> practical effects would be enormously different.  And as I wrote
> before, I suspect that these practical differences are not to be swept
> under the rug, but point to fundamental metaphysical differences
> between the two situations.

I think the practical differences are large, as you say, but I disagree
that it points to a fundamental metaphysical difference.  I think what
appears to be a metaphysical difference is just the breakdown of our folk
concept of "I".  Imagine a primitive person who didn't understand the
physics of fire, seeing two candles lit from a single one, then the first
one extinguished - they may be tempted to conclude that the first flame
has now become two flames.  Well, this is no problem because flames never
say things like "I would like to keep burning" or "I wonder what my next
experience would be".  We, however, do say these things.  But does this
bit of behavior (including the neural activity that causes it) make us
different in a relevant way?  And if so, how?

This breakdown of "I" is very interesting.  Since there's lots of talk
about torture here, let's take this extremely simple example: Smith is
going to torture someone, one hour from now.  You may try to take steps
to prevent it.  How much effort you are willing to put in depends, among
other things, on the identity of the person Smith is going to torture.
In particular, you will be very highly motivated if that person is you;
or rather, the person you will be one hour from now.  The reason for the
high motivation is that you have strong desires for that person to
continue their life unabated, and those desires hinge on the outcome of
the torture.  But my point is that your strong desires for your own
survival are just a special case of desires for a given person's
survival - in other words, you are already taking a third-person point
of view to your (future) self.  You know that if the person is killed
during torture, they will not continue their life; if they survive it,
their life will still be negatively impacted, and your desires for the
person's future are thwarted.

Now, if you introduce copies to this scenario, it does not seem to me
that anything changes fundamentally.  Your choice on what kind of
scenario to accept will still hinge on your desires for the future of
any persons involved.  The desires themselves may be very complicated,
and in fact will depend on lots of hitherto unspecified details such as
the legal status, ownership rights, etc., of copies.  Of course one copy
will say "I pushed the button and then I got tortured", and the other
copy will say "I pushed the button and woke up on the beach" - which is
exactly what we would expect these two people to say.  And they're both
right, insofar as they're giving an accurate report of their memories.
What is the metaphysical issue here?

Re: Measure, Doomsday argument

2005-06-21 Thread Quentin Anciaux
On Monday 20 June 2005 at 23:12, "Hal Finney" wrote:

>
> The empirical question presents itself like this.  Very simple universes
> (such as empty universes, or ones made up of simple repeating patterns)
> would have no life at all.  Perhaps sufficiently complex ones would be
> full of life.  So as we move up the scale from simple to complex, at
> some point we reach universes that just barely allow for advanced life
> to evolve, and even then it doesn't last very long.  The question is,
> as we move through this transition region from nonliving universes,
> to just-barely-living ones, to highly-living ones, how long is the
> transition region?
>
> That is, how much more complex is a universe that will be full of life,
> compared to one which just barely allows for life?  We don't know the
> answer to that, but in principle it can be learned, through study and
> perhaps experimental simulations.  If it takes only a bit more complexity
> to go from a just-barely-living universe to a highly-living one, then
> we have a puzzle.  Why aren't we in one of the super-living universes,
> when their complexity penalty is so low?

Besides this, something just occurred to me:

Why aren't we blind? :-)

If the "measure" of an OM comes from its information complexity, it seems
that the OM of a blind person needs less information content, because there
is no complex description of the outside world available to the blind
observer. Since they are less complex, they should have a higher
"measure"... yet I'm not blind, and neither are most people on Earth...

Quentin



Re: Conscious descriptions

2005-06-21 Thread Russell Standish
On Tue, Jun 21, 2005 at 07:43:49PM +0200, Bruno Marchal wrote:
> >
> >Turing's thesis: Any process that can be naturally called an effective
> >procedure is realized by a Turing machine.
> 
> Not OK. Please give me the page.
> 

2nd edition, page 24, about 1/3 of the way down the page.

> >
> >Both of these are really a definition of what it means call an
> >algorithm "effective".
> 
> 
> By who? Effective has more than one meaning in logic.
> 

I think it was by Turing, actually. It's been a while since I read the
original article, though, so I'm not certain.

> 
> >
> >Conjecture: All "harnessable" physical processes can be simulated by a
> >Turing machine. By harnessable, we mean exploited for performing some
> >computation. I suspect this is true.
> 
> I don't understand.
> 

Again, these are intuitive concepts. I would interpret this as saying
that we can perform the same computation as any physical process, even
if we cannot simulate the process itself (i.e. the process may do
something more than computation).

> 
> >Machines with random oracles with
> >computable means only compute the same class of functions as do Turing
> >machines. (classic result by de Leeuw et al. in 1956)
> 
> OK. Without computable means: a random oracle makes them more powerful.
> (Kurtz and Smith).
> 

Do you have a reference? Li & Vitanyi appear to be unaware of this result.

> 
> >
> >So, no I don't think the Turing thesis is needed for a universal
> >machine.
> 
> 
> I still disagree. I will say more but I have a meeting now.
> 

I look forward to that.

> Bruno
> 
> http://iridia.ulb.ac.be/~marchal/

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type "application/pgp-signature". Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish          Phone 8308 3119 (mobile)
Mathematics                      0425 253119 (")
UNSW SYDNEY 2052                 [EMAIL PROTECTED]
Australia                        http://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02





Re: Doomsday and computational irreducibility

2005-06-21 Thread Russell Standish
On Tue, Jun 21, 2005 at 03:25:21AM -0700, Jonathan Colvin wrote:
> A new (at least I think it is new) objection to the DA just occurred to me
> (googling computational + irreducibility +doomsday came up blank).
> 
> This objection (unfortunately) requires a few assumptions:
> 
> 1) No "block" universe (ie. the universe is a process).
> 
> 2) Wolframian computational irreducibility ((2) may be a consequence of (1)
> under certain other assumptions)

Actually, I think that 2) is incompatible with 1). A computational
process is deterministic, therefore can be replaced by a "block"
representation.

> 
> 3) No backwards causation.
> 
> The key argument is that by 1) and 2), at time T, the state of the universe
> at time T+x is in principle un-knowable, even to the universe itself.
> 
> Thus, at this time T (now), nothing, even the universe itself, can know
> whether the human race will stop tomorrow, or continue for another billion
> years.
> 
In any case, computational irreducibility does not imply that the
state of the universe at T+x is unknowable. In loose terms,
computational irreducibility says that no matter what
model of the universe you have that is simpler to compute than the
real thing, your predictions will ultimately fail to track the universe's
behaviour after a finite amount of time.

Of course up until that finite time, the universe is highly
predictable :)


The question is, can we patch up this criticism? What if the universe
were completely indeterministic, with no causal dependence from one
time step to the next? I think this will expose a few "hidden"
assumptions in the DA:

1) I think the DA requires that the population curve be "continuous"
   in some sense (given that it is a function from R->N, it cannot be
   strictly continuous). Perhaps the notion of "bounded variation"
   does the trick. My knowledge is a bit patchy here, as I never studied
   Lebesgue integration, but I think bounded variation is sufficient
   to guarantee the existence of the integral of the population curve.

2) The usual DA requires that the integral of the population curve
   from -\infty to \infty be finite. I believe this can be extended to
   certain cases where the integral is infinite; however, I haven't
   really given this too much thought. But I don't think anyone else
   has either...

3) I have reason to believe (hinted at in my "Why Occam's razor"
   paper) that the measure for the population curve is actually
   complex when you take the full Multiverse into account. If you
   thought the DA on unbounded populations was bad - just wait for the complex
   case. My brain has already short-circuited at the prospect :)

In any case, whatever the conditions really turn out to be, there has
to be some causal structure linking now with the future. Consequently,
this argument would appear to fail. (But interesting argument anyway,
if it helps to clarify the assumptions of the DA).




Re: Measure, Doomsday argument

2005-06-21 Thread Russell Standish
The answer is probably something along the lines of:

  OMs with lots of sighted observers (as well as the odd blind one) will
  have lower complexity than OMs containing only blind observers (since
  the latter do not seem all that probable from an evolutionary point of
  view).

  Given that there are so many sighted observers around, it is not
  surprising that we're sighted.

This argument is a variation of the argument for why we find so many
observers in our world, rather than being alone in the universe, and
is similar to why we expect the universe to be so big and old.

Of course this argument contains a whole raft of ill-formed
assumptions, so I'm expecting Jonathan Colvin to be warming up his
keyboard for a critical response!

Cheers.

On Tue, Jun 21, 2005 at 10:56:48PM +0200, Quentin Anciaux wrote:
> 
> Besides this, something just occurred to me:
> 
> Why aren't we blind? :-)
> 
> If the "measure" of an OM comes from its information complexity, it seems
> that the OM of a blind person needs less information content, because
> there is no complex description of the outside world available to the
> blind observer. Since they are less complex, they should have a higher
> "measure"... yet I'm not blind, and neither are most people on Earth...
> 
> Quentin



Re: Measure, Doomsday argument

2005-06-21 Thread "Hal Finney"
Quentin Anciaux writes:
> Why aren't we blind? :-)
>
> If the "measure" of an OM comes from its information complexity, it seems
> that the OM of a blind person needs less information content, because
> there is no complex description of the outside world available to the
> blind observer. Since they are less complex, they should have a higher
> "measure"... yet I'm not blind, and neither are most people on Earth...

There may be something of a puzzle there...

Although I think, specifically, that blind people don't necessarily have
a lower information content in their mental states.  It is said that
blind people's other senses become more acute to take over the
unused brain capacity (at least in people blind from birth).  So their
mental states may take just as much information as sighted people's.

Beyond that, the puzzle remains as to why we are as complex as we are,
why we are not simpler beings.  It would seem that one could imagine
conscious beings who would count as observers, as people we "might
have been", but who would have simpler minds and senses than ours.
Certainly the higher animals show signs of consciousness, and their
brains are generally smaller than humans', especially the cortex, hence
probably with lower information content.

Of course there are a lot more people than other reasonably large-brained
animals, so perhaps our sheer numbers cancel any penalty due to our
larger and more-complex brains.

Hal Finney



Re: Measure, Doomsday argument

2005-06-21 Thread Russell Standish
On Tue, Jun 21, 2005 at 06:13:53PM -0700, "Hal Finney" wrote:
> Quentin Anciaux writes:
> > Why aren't we blind? :-)
> >
> > If the "measure" of an OM comes from its information complexity, it
> > seems that the OM of a blind person needs less information content,
> > because there is no complex description of the outside world available
> > to the blind observer. Since they are less complex, they should have a
> > higher "measure"... yet I'm not blind, and neither are most people on
> > Earth...
> 
> There may be something of a puzzle there...
> 
> Although I think, specifically, that blind people don't necessarily have
> a lower information content in their mental states.  It is said that
> blind people's other senses become more acute to take over the
> unused brain capacity (at least in people blind from birth).  So their
> mental states may take just as much information as sighted people's.
> 
> Beyond that, the puzzle remains as to why we are as complex as we are,
> why we are not simpler beings.  It would seem that one could imagine
> conscious beings who would count as observers, as people we "might
> have been", but who would have simpler minds and senses than ours.
> Certainly the higher animals show signs of consciousness, and their
> brains are generally smaller than humans, especially the cortex, hence
> probably with lower information content.
> 
> Of course there are a lot more people than other reasonably large-brained
> animals, so perhaps our sheer numbers cancel any penalty due to our
> larger and more-complex brains.
> 
> Hal Finney

I take from this argument that the Anthropic Principle imposes a necessary
requirement on conscious experience. In other words, self-awareness
is a requirement. I cannot say why this should be so, as we do not
have an acceptable theory of consciousness, only that it must be so;
otherwise we would expect to live in a too-simple environment. And
this is an interesting constraint on acceptable theories of
consciousness.

Cheers

PS: only a few species have been shown to be self-aware: Homo sapiens
(older than 18 months), both chimpanzee species, one of the gibbons (IIRC)
and some species of dolphin. Naturally, I'd expect a few more to come
to light, but self-awareness does appear to be rare in the animal
kingdom. Of course Homo sapiens outnumbers all these species by many
orders of magnitude.




RE: Doomsday and computational irreducibility

2005-06-21 Thread Jonathan Colvin
Russell Standish wrote:
>> A new (at least I think it is new) objection to the DA just occurred 
>> to me (googling computational + irreducibility +doomsday 
>came up blank).
>> 
>> This objection (unfortunately) requires a few assumptions:
>> 
>> 1) No "block" universe (ie. the universe is a process).
>> 
>> 2) Wolframian computational irreducibility ((2) may be a consequence 
>> of (1) under certain other assumptions)
>
>Actually, I think that 2) is incompatible with 1). A 
>computational process is deterministic, therefore can be 
>replaced by a "block"
>representation.

Are you familiar with Wolframian CI systems? The idea of CI is that while
the system evolves deterministically, it is impossible (even in principle)
to determine or predict the outcome without actually performing the
iterations. I'm not at all sure that the idea of block representation works
in this case.
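
For concreteness, here is a minimal sketch (mine, not from the thread) of the kind of system Wolfram has in mind: the Rule 110 cellular automaton, which evolves by a trivial deterministic local rule yet, on the computational-irreducibility view, offers no general shortcut to its future states other than actually running it. The ring size and step count are arbitrary choices for illustration.

```python
# Rule 110, a one-dimensional cellular automaton that is deterministic yet
# (per Cook's universality proof) capable of arbitrary computation.
RULE110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
           (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def rule110_step(cells):
    """Advance one generation (periodic boundary conditions)."""
    n = len(cells)
    return [RULE110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Start from a single live cell on a 61-cell ring.
cells = [0] * 30 + [1] + [0] * 30
for _ in range(20):
    cells = rule110_step(cells)
# The evolution is fully deterministic, but (on the CI view) there is no
# general shortcut: to know generation t you must compute all t steps.
print(sum(cells))
```

Note that determinism is exactly what lets you save every generation and obtain a "block" record after the fact, which is Russell's point below; irreducibility only says you cannot get that record more cheaply than by running the process.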


>> 3) No backwards causation.
>> 
>> The key argument is that by 1) and 2), at time T, the state of the 
>> universe at time T+x is in principle un-knowable, even to 
>the universe itself.
>> 
>> Thus, at this time T (now), nothing, even the universe itself, can 
>> know whether the human race will stop tomorrow, or continue for 
>> another billion years.
>> 
>In any case, computational irreducibility does not imply that 
>the the state of the universe at T+x is unknowable. In loose 
>terms, computational irreducibility say that no matter what 
>model of the universe you have that is simpler to compute than 
>the real thing, your predictions will ultimately fail to track 
>the universe's behaviour after a finite amount of time.
>
>Of course up until that finite time, the universe is highly 
>predictable :)

I'm thinking of Wolframian CI. There seem to be no short-cuts under that
assumption (i.e. no simpler model is possible).

>
>
>The question is, can we patch up this criticism? What if the 
>universe were completely indeterministic, with no causal 
>dependence from one time step to the next? I think this will 
>expose a few "hidden"
>assumptions in the DA:
>
>1) I think the DA requires that the population curve is "continuous"
>   in some sense (given that it is a function from R->N, it cannot be
>   strictly continuous). Perhaps the notion of "bounded variation"
>   does the trick. My knowledge is bit patchy here, as I never studied
>   Lebesgue integration, but I think bounded variation is sufficient
>   to guarantee existence of the integral of the population curve.
>
>2) The usual DA requires that the integral of the population curve
>   from -\infty to \infty be finite. I believe this can be extended to
>   certain case where the integral is infinite, however I haven't
>   really given this too much thought. But I don't think anyone else
>   has either...
>
>3) I have reason to believe (hinted at in my "Why Occam's razor"
>   paper) that the measure for the population curve is actually
>   complex when you take the full Multiverse into account. If you
>   thought the DA on unbounded populations was bad - just wait 
>for the complex
>   case. My brain has already short-circuited at the prospect :)
>
>In any case, whatever the conditions really turn out to be, 
>there has to be some causal structure linking now with the 
>future. Consequently, this argument would appear to fail. (But 
>interesting argument anyway, if it helps to clarify the 
>assumptions of the DA).

I don't see that causal structure is key. My understanding of the standard
DA is that the system (universe) itself has knowledge of its future that the
observer lacks (sort of a bird's-eye vs. frog's-eye situation), which avoids
the reverse-causation problem. Wolframian CI seems like it might be
problematic for that account.

Jonathan Colvin




RE: Measure, Doomsday argument

2005-06-21 Thread Jonathan Colvin
Russell Standish wrote:

>This argument is a variation of the argument for why we find 
>so many observers in our world, rather than being alone in the 
>universe, and is similar to why we expect the universe to be 
>so big and old.
>
>Of course this argument contains a whole raft of ill-formed 
>assumptions, so I'm expecting Jonathin Colvin to be warming up 
>his keyboard for a critical response!

Ok, if you insist :)

I think the above are two disparate arguments. It is simpler by Occam to
assume that there should be many observers rather than only one (similar
argument to favouring the multiverse over only one big-bang). Once you admit
the possibility of one observer, it takes extra argument to say why there
should be *only* one.

But we expect the universe to be old for cosmological reasons (it takes
stars a long time to cook up the needed elements, and observers take a long
time to evolve). Simplicity does not seem to be a factor here. A big
universe does not seem much simpler either.

Jonathan Colvin




Re: Doomsday and computational irreducibility

2005-06-21 Thread Russell Standish
On Tue, Jun 21, 2005 at 09:06:22PM -0700, Jonathan Colvin wrote:
> Russell Standish wrote:
> 
> Are you familiar with Wolframian CI systems? 

Yes of course. Wolfram did not invent the term.

> The idea of CI is that while
> the system evolves deterministically, it is impossible (even in principle)
> to determine or predict the outcome without actually performing the
> iterations. I'm not at all sure that the idea of block representation works
> in this case.
> 

Easy. Run your computer simulation for an infinite time, and save the
output. There is your block representation. Computational
irreducibility has no bearing on whether the block representation
is possible - only indeterminism does. And even then, if you include all
the counterfactuals, a block representation is possible too.

> >In any case, computational irreducibility does not imply that 
> >the the state of the universe at T+x is unknowable. In loose 
> >terms, computational irreducibility say that no matter what 
> >model of the universe you have that is simpler to compute than 
> >the real thing, your predictions will ultimately fail to track 
> >the universe's behaviour after a finite amount of time.
> >
> >Of course up until that finite time, the universe is highly 
> >predictable :)
> 
> I'm thinking of Wolframian CI. There seem to be no short-cuts under that
> assumption (ie. No simpler model possible).

There are always simpler models. CI implies that no simpler model
remains accurate in the long run. But in the short term accurate
simpler models are entirely possible.

An example: Conway's Game of Life is a computationally irreducible
system. Yet you can predict the motion of a glider most accurately
while it is in free space. Only when it runs into some other
configuration of cells does it become unpredictable.
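
The glider example can be made concrete with a small sketch (my own, not from the thread; the coordinates and helper function are illustrative). In empty space the glider's future is predictable by a one-line rule, with no need to simulate.

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life; `live` is a set of (x, y)."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The standard glider, oriented to travel down-right (+x, +y).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = life_step(cells)
# In free space its motion is trivially predictable: after 4 generations
# the same shape reappears, shifted one cell diagonally.
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The shortcut (shift by (t // 4, t // 4)) is a simpler model that stays accurate exactly as long as the glider meets nothing, which is the point above.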

> >
> >In any case, whatever the conditions really turn out to be, 
> >there has to be some causal structure linking now with the 
> >future. Consequently, this argument would appear to fail. (But 
> >interesting argument anyway, if it helps to clarify the 
> >assumptions of the DA).
> 
> I don't see that causal structure is key. My understanding of the standard
> DA is that the system (universe) itself has knowledge of its future that the
> observer lacks (sort of bird's eye vs. frog's eye situation), which avoids
> the reverse-causation problem. Wolframian CI seems like it might be

I've never thought of the DA in that way, but it might be valid.

Analytic functions have the property that all the information about what
the function does everywhere is contained in its derivatives evaluated
at a single point.
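
Spelled out (a standard fact of analysis, my gloss rather than anything in the thread), this is the Taylor-series property: within its radius of convergence, an analytic function is determined entirely by its derivatives at the single point x_0:

```latex
f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!}\,(x - x_0)^n
```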

Whilst I don't expect population curves to be analytic, I am saying
the DA probably implicitly assumes some constraints, which act as
information storage about the future in the here and now.


> Wolframian CI seems like it might be
> problematic for that account.
> 
> Jonathan Colvin
> 

Even a computationally irreducible system contains information about
the future. It's just that much of it is inaccessible.



RE: Torture yet again

2005-06-21 Thread Jonathan Colvin
Eugen Leitl wrote:
>> (JC) Now, the funny thing is, if you replace "torture" by 
>"getting shot in 
>> the head", then I will pick (2). That's interesting, isn't it?
>
>Why is that interesting? It's indistinguishable from a 
>teleportation scenario.

Before thinking about it, I would have assumed that I would make the same
choices under the circumstances of torture or getting shot. I was surprised
that this is not the case; I choose 50/50 under torture, and the copies
under getting shot.

Stathis and Bruno choose the copies under both scenarios (I assume). So it
is interesting (to me, anyway) that I make different choices depending on
what the undesirable event is. Perhaps I'm simply being inconsistent. But I
think my reasoning is that so long as I have at least one copy that
survives, I don't care about getting shot. But however many copies I have, I
still don't want to get tortured.

Jonathan Colvin




Re: Measure, Doomsday argument

2005-06-21 Thread Russell Standish
On Tue, Jun 21, 2005 at 09:14:18PM -0700, Jonathan Colvin wrote:
> Russell Standish wrote:
> 
> >This argument is a variation of the argument for why we find 
> >so many observers in our world, rather than being alone in the 
> >universe, and is similar to why we expect the universe to be 
> >so big and old.
> >
> >Of course this argument contains a whole raft of ill-formed 
> >assumptions, so I'm expecting Jonathin Colvin to be warming up 
> >his keyboard for a critical response!
> 
> Ok, if you insist :)
> 
> I think the above are two disparate arguments. It is simpler by Occam to
> assume that there should be many observers rather than only one (similar
> argument to favouring the multiverse over only one big-bang). Once you admit
> the possibility of one observer, it takes extra argument to say why there
> should be *only* one.
> 
> But we expect the universe to be old for cosmological reasons (it takes
> stars a long time to cook up the needed elements, and observers take a
> long time to evolve). Simplicity does not seem to be a factor here. A big
> universe does not seem much simpler either.
> 
> Jonathan Colvin
> 

Sorry, I was being overly telegraphic. A big and old universe with
simple initial conditions is the simplest way of providing an
environment rich enough to support conscious life. The process of
evolution involved also implies a large number of observers, and a
panoply of other interim forms (non-conscious life). By contrast, a
universe that is just big enough (e.g. a few years old, and containing
just the planet Earth, or even just the room in which you're located)
requires a mind-bogglingly large array of initial conditions - really
needing a creative deity of some kind to bring it into existence. This
is what I mean by big & old universes being simpler.

Cheers


RE: Pareto laws and expected income

2005-06-21 Thread Jonathan Colvin
Russell Standish wrote:
>> 
>> (JC) My consciousness (or degree of such) is a complicated function of my
>> evolutionary history, but the problem is so multifactorial it is 
>> inappropriate to use anthropic reasoning.
>
>Nonsense. You are either conscious, in which case you will 
>observe something, or you are not, which case you don't. This 
>is a simple two state logic.

That seems a remarkable assertion. As I grow from a fetus to an adult, is
there one particular interval of Planck time at which I go from being an
unconscious object to a conscious observer? 


>> > What it does show is what an ass the ASSA is. It is 
>unreasonable to 
>> > suppose that my current wealth is sampled randomly from the 
> > distribution of all wealths (a Pareto distribution like 
>P(x)=x^a, for 
>> > some a).
>> 
>> Why is it any more unreasonable than supposing that your 
>birth rank is 
>> sampled randomly from the distribution of all birth ranks?
>
>Because current observer moments are dependent on previous 
>observer moments. Births are not.

Ok, forget observer moments / momentary income, and look at your total
income integrated over your entire life. Shouldn't you expect your lifetime
net worth to be sampled randomly from the distribution of all lifetime net
worths?
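
As a concrete illustration of the sampling assumption under discussion, here is a sketch of my own (not from the thread) that draws from a standard Pareto distribution; I use the usual parameterisation with tail index a and minimum value x_min rather than the thread's loose P(x)=x^a, and the parameter values are arbitrary.

```python
import random

def sample_pareto(a, x_min=1.0):
    """Inverse-transform sample from a Pareto(a, x_min) distribution.

    The CDF is F(x) = 1 - (x_min / x)**a, so inverting at a uniform u
    gives x = x_min / (1 - u)**(1 / a).
    """
    u = random.random()  # uniform on [0, 1)
    return x_min / (1.0 - u) ** (1.0 / a)

random.seed(1)
worths = [sample_pareto(a=2.0) for _ in range(100_000)]
# Heavy tail: a handful of draws dominate the total, which is what makes
# "my wealth is a typical random sample from all wealths" feel surprising.
print(max(worths) / (sum(worths) / len(worths)))
```

The ratio printed (largest draw over the mean) is typically enormous, whereas for a thin-tailed distribution it would be modest; that asymmetry is the crux of the ASSA complaint above.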

>
>For example, one's income tends to be positively correlated 
>with age (until age of retirement, that is, when the trend reverses).

As above, look at summed lifetime income rather than momentary income. No
more age correlation.

Jonathan Colvin



RE: Torture yet again

2005-06-21 Thread Jesse Mazer

Jonathan Colvin wrote:


Eugen Leitl wrote:
>> (JC) Now, the funny thing is, if you replace "torture" by
>"getting shot in
>> the head", then I will pick (2). That's interesting, isn't it?
>
>Why is that interesting? It's indistinguishable from a
>teleportation scenario.

Before thinking about it, I would have assumed that I would make the same
choices under the circumstances of torture or getting shot. I was surprised
that this is not the case; I choose 50/50 under torture, and the copies
under getting shot.

Stathis and Bruno choose the copies under both scenarios (I assume). So it
is interesting (to me, anyway) that I make different choices depending on
what the undesirable event is. Perhaps I'm simply being inconsistent. But I
think my reasoning is that so long as I have at least one copy that
survives, I don't care about getting shot. But however many copies I have, I
still don't want to get tortured.


Suppose there had already been a copy made, and the two of you were sitting 
side-by-side, with the torturer giving you the following options:


A. He will flip a coin, and one of you two will get tortured
B. He points to you and says "I'm definitely going to torture the guy 
sitting there, but while I'm sharpening my knives he can press a button that 
makes additional copies of him as many times as he can."


Would this change your decision in any way? What if you are the copy in this 
scenario, with a clear memory of having been the "original" earlier but then 
pressing a button and finding yourself suddenly standing in the copying 
chamber--would that make you more likely to choose B?


Jesse




RE: Torture yet again

2005-06-21 Thread Jonathan Colvin
Jesse Mazer wrote:

>Suppose there had already been a copy made, and the two of you 
>were sitting side-by-side, with the torturer giving you the 
>following options:
>
>A. He will flip a coin, and one of you two will get tortured 
>B. He points to you and says "I'm definitely going to torture 
>the guy sitting there, but while I'm sharpening my knives he 
>can press a button that makes additional copies of him as many 
>times as he can."
>
>Would this change your decision in any way? What if you are 
>the copy in this scenario, with a clear memory of having been 
>the "original" earlier but then pressing a button and finding 
>yourself suddenly standing in the copying chamber--would that 
>make you more likely to choose B?

That would be a more difficult decision. At this point, having experienced
escaping the torture, I might be more inclined to change my mind. That would
certainly be intellectually inconsistent, but psychologically
understandable. But I still think I'd choose (A).

Jonathan Colvin