Re: naturally selected ethics, and liking chocolate

2004-01-23 Thread Hal Finney
Eric Hawthorne writes:
 I think each form of emergent complex order which is capable of becoming 
 intelligent and forming goals in general contexts
 probably would have by default an ethical principle promoting the 
 continued existence of the most complex (high-level)
 emergent system in its vicinity of which it perceives itself to be a 
 part, and which it perceives to be beneficial to its own survival.

 I can say this because forms of emergent complex order that included 
 SAS's that didn't have this ethic would not survive long
 compared to other emergent complex orders whose SAS's did have this 
 ethic.

This is getting somewhat off-topic for this list, as it's not really
multiverse related (except insofar as everything is multiverse related,
since the multiverse includes everything).

However you should be aware that evolutionary theory prefers to avoid this
kind of reasoning.  At one time it was widely assumed that such behaviors
as altruism could be evolved and maintained for reasons similar to what
you describe, that they benefit the group, and so groups whose members
were altruistic would tend to survive better than groups whose members
were selfish.

Later analyses showed that this doesn't really work; that selfish
behaviors have strong selective advantage compared to the relatively
weak effects of group selection.  It would be very difficult for an
altruistic behavior to spread and persist within a group if it caused
disadvantage to the individuals who possessed it.

Instead, biologists eventually identified alternative explanations for
altruistic behavior, in terms of kin selection and similar factors.
Group selection is now discredited as an evolutionary force.

See http://www.utm.edu/~rirwin/391LevSel.htm for some class lecture
notes discussing group selection.

Hal Finney



Re: Is group selection discredited?

2004-01-23 Thread Eric Hawthorne
Unfortunately, disallowing notions of group selection also disallows 
notions of
emergent higher-level-order systems. You must allow for selection 
effects at all
significantly functioning layers/levels of the emergent system, to 
explain the emergence
of these systems adequately. For example, ant colonies (as an emerged 
system) live
for 15 years whereas the ants live for at most a year. Yet the colony 
(controlling for colony size) behaves differently when it is a young 
colony (say its first five years) compared to when it is in its old age. 
(Essentially, the colony's behaviours become more conservative, i.e. 
less amenable to change of tactics.)
It would be very difficult to explain this solely from the perspective 
of the direct benefit
to any individual ant's genes. For the benefit of ant-genes in general 
in the colony,
yes.

I think that it's just been too difficult to get adequate controlled 
studies to determine whether a group selection effect is happening, 
because the individuals tend not to live at all if removed from their 
group.

I think it is still an open debate. Group selection being discredited is 
just the favorite theory of Dawkins and some like-minded people right now.

Group selection is now discredited as an evolutionary force.

See http://www.utm.edu/~rirwin/391LevSel.htm for some class lecture
notes discussing group selection.
 





probabilities measures computable universes

2004-01-23 Thread Juergen Schmidhuber
I browsed through recent postings and hope
this delayed but self-contained message can clarify
a few things about probabilities and measures
and predictability etc.

What is the probability of an integer being, say,
a square? This question does not make sense without
a prior probability distribution on the integers.
This prior cannot be uniform. Try to find one!
Under _any_ distribution some integers must be
more likely than others.

Which prior is good?  Is there a `best' or
`universal' prior? Yes, there is. It assigns to
each integer n as much probability as any other
computable prior, save for a constant factor that
does not depend on n.  (A computable prior can be
encoded as a program that takes n as input and
outputs n's probability, e.g., a program that
implements Bernoulli's formula, etc.)

Given a set of priors, a universal prior is
essentially a weighted sum of all priors in the
set. For example, Solomonoff's famous weighted sum
of all enumerable priors will assign at least as
much probability to any square integer as any
other computable prior, save for a constant
machine-dependent factor that becomes less and
less relevant as the integers get larger and
larger.
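
A minimal sketch in Python of this mixture idea (the component priors
and weights below are arbitrary toy choices, not part of the
construction itself):

# Toy "universal" prior built as a weighted mixture of a few computable
# priors on the positive integers.  The mixture M assigns every n at least
# w_k * P_k(n), i.e. it dominates each component up to the constant factor
# 1/w_k, the property described above (Solomonoff mixes all enumerable
# priors; here we mix just three to keep the sketch runnable).

def geometric(p):
    """Computable prior P(n) = (1 - p) * p**(n - 1) for n = 1, 2, 3, ..."""
    return lambda n: (1.0 - p) * p ** (n - 1)

components = [geometric(0.5), geometric(0.9), geometric(0.99)]
weights = [0.5, 0.3, 0.2]          # positive, summing to 1

def mixture(n):
    """Weighted sum of the component priors: a (toy) universal prior."""
    return sum(w * P(n) for w, P in zip(weights, components))

# Dominance: M(n) >= w_k * P_k(n) for every component k and every n tested.
for n in range(1, 50):
    for w, P in zip(weights, components):
        assert mixture(n) >= w * P(n)

# Probability that an integer is a square, under the toy mixture.
p_square = sum(mixture(i * i) for i in range(1, 10000))
print("P(square) under the toy mixture:", p_square)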

Now let us talk about all computable universe
histories. Some are finite, some infinite. Each
has at least one program that computes it. Again
there is _no_ way of assigning equal probability
to all of them! Many are tempted to assume a
uniform distribution without thinking much about
it, but there is _no_ such thing as a uniform
distribution on all computable universes, or on
all axiomatic mathematical structures, or on
all logically possible worlds, etc!

(Side note: There is only a uniform _measure_ on
the finitely many possible history _beginnings_
of a given size, each standing for an uncountable
_set_ of possible futures. Probabilities
refer to single objects, measures to sets.)

It turns out that we can easily build universal
priors using Levin's important concept of
self-delimiting programs. Such programs may
occasionally execute the instruction `request new
input bit'; the bit is chosen randomly and will
remain fixed thereafter. Then the probability of
some universe history is the probability of guessing
a program for it. This probability is `universal'
as it does not depend much on the computer (whose
negligible influence can be buried in a constant
universe-independent factor). Some programs halt or
go into an infinite loop without ever requesting
additional input bits. Universes with at least one
such short self-delimiting program are more probable
than others.
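
A toy sketch of that guessing probability (this particular machine is a
made-up minimal example, not Levin's construction): its only programs are
strings of 1s terminated by a 0, the output is the number of 1s, and so
the program for output n is guessed from fair coin flips with
probability 2^-(n+1).

import random

def run_with_random_bits(rng):
    """Feed fair random bits to the toy self-delimiting machine."""
    n = 0
    while rng.random() < 0.5:   # the requested bit came up 1
        n += 1
    return n                    # halts after reading the first 0

rng = random.Random(0)
trials = 200000
counts = {}
for _ in range(trials):
    out = run_with_random_bits(rng)
    counts[out] = counts.get(out, 0) + 1

for n in range(5):
    est = counts.get(n, 0) / trials
    print("output", n, "estimated", round(est, 4), "exact", 2.0 ** -(n + 1))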

To make predictions about some universe, say,
ours, we need a prior as well. For instance,
most people would predict that next Tuesday it
won't rain on the moon, although there are
computable universes where it does. The anthropic
principle is an _insufficient_ prior that does not
explain the absence of rain on the moon - it does
assign cumulative probability 1.0 to the set of all
universes where we exist, and 0.0 to all the other
universes, but humans could still exist if it did
rain on the moon occasionally. Still, many tend to
consider the probability of such universes as small,
which actually says something about their prior.

We do not know yet the true prior from which
our universe is sampled - Schroedinger's wave
function may be an approximation thereof. But it
turns out that if the true prior is computable
at all, then we can in principle already predict
near-optimally, using the universal prior instead:
http://www.idsia.ch/~juergen/unilearn.html
Many really smart physicists do not know this
yet. Technical issues and limits of computable
universes are discussed in papers available at:
http://www.idsia.ch/~juergen/computeruniverse.html

Even stronger predictions using a prior based
on the fastest programs (not the shortest):
http://www.idsia.ch/~juergen/speedprior.html
-Juergen Schmidhuber



Re: probabilities measures computable universes

2004-01-23 Thread scerir
Are probabilities always and necessarily positive-definite?

I'm asking this because there is a thread, started by Dirac
and Feynman, saying the only difference between the classical 
and quantum cases is that in the former we assume the probabilities 
are positive-definite.

Thus, speaking of MWI, we could also ask: what is the joint 
probability of finding ourselves in a universe alpha and of 
finding ourselves in a universe beta, which is 180 degrees 
out of phase with the first one (whatever that could mean)?

s.



Re: Is the universe computable

2004-01-23 Thread Bruno Marchal
Dear Stephen,

At 12:39 21/01/04 -0500, Stephen Paul King wrote:
Dear Bruno and Kory,

Interleaving.

- Original Message -
From: Bruno Marchal [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, January 21, 2004 9:21 AM
Subject: Re: Is the universe computable
 At 02:50 21/01/04 -0500, Kory Heath wrote:
 At 1/19/04, Stephen Paul King wrote:
  Where and when is the consideration of the physical resources
required
 for the computation going to obtain? Is my question equivalent to the
old
 first cause question?
 [KH]
 The view that Mathematical Existence == Physical Existence implies that
 physical resources is a secondary concept, and that the ultimate ground
 of any physical universe is Mathspace, which doesn't require resources of
 any kind. Clearly, you don't think the idea that ME == PE makes sense.
 That's understandable, but here's a brief sketch of why I think it makes
 more sense than the alternative view (which I'll call
Instantiationism):
 
[SPK]

Again, the mere postulation of existence is insufficient: it does nothing
to inform us of how it is even possible for us, as mere finite humans, to
have experiences that change. We have to address why Time, even if it is
ultimately an illusion, and the distinction between past and future, are so
intimately intertwined in our world of experience.


Good question. But you know I do address this question in my thesis
(see url below). I cannot give you too many technical details, but here is
the main line. As you know, I showed that if we postulate the comp hyp
then time, space, energy and, in fact, all physicalities---including the
communicable (like 3-person results of experiments) as well as the
uncommunicable (like qualia or results of 1-person experiments)---appear
as modalities which are variants of the Godelian self-referential
provability predicates. As you know, Godel succeeded in defining formal
provability in the language of a consistent machine, and many years later
Solovay succeeded in formalising all theorems of provability logic in a
couple of modal logics, G and G*.
G formalizes the statements, provable by the machine, about its own
provability ability; G* extends G with all true statements about the
machine's provability ability (including those the machine cannot prove).
Now, independently, temporal logicians have defined some modal
systems capable of formalizing temporal statements. Also, Brouwer
developed a logic of the conscious subject, which has given rise to a whole
constructive philosophy of mathematics, formalized in the logic known as
intuitionist logic; and later, like temporal logic, intuitionist logic
has been captured formally by a modal extension of classical logic.
Actually it is Godel who first saw that intuitionist logic can be
formalised by the modal logic S4, and Grzegorczyk made this more precise
with the extended system S4Grz.
And it happens that S4Grz is by itself a very nice logic of subjective,
irreversible (anti-symmetric) time, and this also gives a nice account of
the relationship Brouwer described between time and consciousness.
Now, if you remember, I use the Theaetetus trick of defining
(machine) knowledge of p by provable p and p. Independently,
Boolos and Goldblatt, but also Kuznetsov and Muravitski in Russia, showed
that the formalization of that form of knowledge (i.e. provable p and p)
gives exactly the system S4Grz. That is the way subjective time arises
in the discourse of the self-referentially correct machine.
Physical discourses come from the modal variant of provability given
by provable p and consistent p (where consistent p = not provable not p):
this is justified by the thought experiment, and it gives the arithmetical
quantum logics which capture the probability-one case of the probability
measure on the computational histories as seen by the average consistent
machine. Physical time is then captured by provable p and consistent p and p.
Obviously people could think that for a consistent machine
the three modal variants, i.e.:

provable p
provable p and p
provable p and consistent p and p

are equivalent. Well, they are half right, in the sense that for G* they
are indeed equivalent (they all prove the same p), but G, that is the
self-referential machine, cannot prove those equivalences, and that explains
why, from the point of view of the machine, they give rise to such different
logics. To translate the comp hyp into the language of the machine, it is
necessary to restrict p to the \Sigma_1 arithmetical sentences (that is,
those which are accessible by the Universal Dovetailer; that step is needed
to make the physicalness described by a quantum logic).
The constraints are provably (with the comp hyp) enough to define all
the probabilities on the computational histories, and that is why, if a
quantum computer were never to appear in those logics, then (assuming QM is
true!) comp would definitely be refuted; 

Re: naturally selected ethics

2004-01-23 Thread CMR

 Later analyses showed that this doesn't really work; that selfish
 behaviors have strong selective advantage compared to the relatively
 weak effects of group selection.  It would be very difficult for an
 altruistic behavior to spread and persist within a group if it caused
 disadvantage to the individuals who possessed it.

 Instead, biologists eventually identified alternative explanations for
 altruistic behavior, in terms of kin selection and similar factors.
 Group selection is now discredited as an evolutionary force.


Agreed (both with your point and its tenuous relevance to the list - unless
it's all CAs and thus all intrinsically related..), but with a qualifier.
Our species is generally just a few millennia (ranging from the present to
10 or 12 thousand years ago depending on what group or region you pick) away
from a nomadic clan ecology. The probability of opportunities to act
altruistically towards someone in such an ecology would be skewed towards
that someone being a relation by blood or marriage.

Fast forward to the present where, for a great swath of humanity, Darwinian
natural selection has been turned on its head. By a strict reproductive
success measure, the meek (the poor anyway) have inherited the earth,
whereas from a resource-control aspect the rich hold sway. Selection can
be viewed as having all but been neutralized in the west on the former front,
in that potential reproductive success is only denied to the most severely
developmentally disabled. But biologically we remain for all practical
purposes identical to those clanspeople above, and to the extent that we are
hard-wired (EO Wilson vs SJ Gould), we operate in response to the same
nature as they did. I think Desmond Morris is not far wrong when he muses
that in the 1st and 2nd world (and rapidly the 3rd), our tribe now largely
consists of the contents of our (email) address books.

As an evolutionary biologist turned programmer, I have gradually shifted from
the hard Wilson camp (sociobiology) towards the soft Gould camp (emergent
spandrels) by way of Wolfram: natural selection tends to modify systems and
structures at the margins, whereas much of the complexity and organization
of those systems is the direct result of self-organization, only relatively
constrained by selective pressures from sources on the same and other
scales of hierarchical adaptation.

Given all the above, the admittedly rare but real phenomenon of
expensive (fitness-lowering) altruism (as opposed to the cheap kind;
aka: the rich never give more than they can afford) may not be
surprising or unexpected.



Re: probabilities measures computable universes

2004-01-23 Thread Hal Finney
Juergen Schmidhuber writes:

 What is the probability of an integer being, say,
 a square? This question does not make sense without
 a prior probability distribution on the integers.

 This prior cannot be uniform. Try to find one!
 Under _any_ distribution some integers must be
 more likely than others.

 Which prior is good?  Is there a `best' or
 `universal' prior? Yes, there is. It assigns to
 each integer n as much probability as any other
 computable prior, save for a constant factor that
 does not depend on n.  (A computable prior can be
 encoded as a program that takes n as input and
 outputs n's probability, e.g., a program that
 implements Bernoulli's formula, etc.)

What is the probability that an integer is even?  Suppose we use a
distribution where integer n has probability 1/2^n.  As is appropriate
for a probability distribution, this has the property that it sums to 1
as n goes from 1 to infinity.

The even integers would then have probability 1/2^2 + 1/2^4 + 1/2^6 ...
which works out to 1/3.  So under this distribution, the probability
that an integer is even is 1/3, and odd is 2/3.
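
A quick check of that geometric-series arithmetic (a small Python sketch):

# Under P(n) = 1/2^n on the positive integers:
p_even = sum(2.0 ** -n for n in range(2, 200, 2))   # 1/4 + 1/16 + ... -> 1/3
p_odd  = sum(2.0 ** -n for n in range(1, 200, 2))   # 1/2 + 1/8  + ... -> 2/3
print(p_even, p_odd, p_even + p_odd)                 # ~0.3333, ~0.6667, ~1.0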

Do you think it would come out differently with a universal distribution?

The more conventional interpretation would use the probability computed
over all numbers less than n, and take the limit as n approaches infinity.
This would say that the probability of being even is 1/2.  I think this
is how results such as the one mentioned earlier by Bruno are derived,
that the probability of two random integers being coprime is 6/pi^2.
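
And the conventional limiting-density figures can be estimated directly
(another toy Python sketch; the cutoff N is an arbitrary choice):

from math import gcd, pi

N = 2000
evens = sum(1 for n in range(1, N + 1) if n % 2 == 0) / N
coprime = sum(1 for a in range(1, N + 1) for b in range(1, N + 1)
              if gcd(a, b) == 1) / N ** 2
print("density of evens up to N:", evens)                    # -> 1/2
print("fraction of coprime pairs:", coprime, "vs 6/pi^2 =", 6 / pi ** 2)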

I'd imagine that this result would not hold using a universal
distribution.  Are these mathematical results fundamentally misguided,
or is this an example where the UD is not the best tool for the job?

Hal Finney