On 3/6/2020 5:07 PM, Bruce Kellett wrote:
On Fri, Mar 6, 2020 at 5:22 PM 'Brent Meeker' via Everything List <everything-list@googlegroups.com> wrote:

    On 3/5/2020 10:07 PM, Bruce Kellett wrote:
    In the full set of all 2^N branches there will, of course, be
    branches in which this is the case. But that is just because when
    every possible bit string is included, that possibility will also
    occur. The problem is that the proportion of branches for which
    this is the case becomes small as N increases.

    But not the proportion of branches which are within a fixed
    deviation from 2:1.  That proportion will increase with N.

    I can see that I'm going to have to write a program to produce an
    example for you.


I look forward to such a program -- my computer programming skills have abandoned me......

The trouble is that my intuition does not stretch to what happens in the branch multiplication situation -- I can convince myself either way....

Kent covers this scenario in his paper (arxiv:0905.0624). He writes:

"Consider a replicating multiverse, with a machine like the first one, in which branches arise as the result of technologically advanced beings running simulations. Whenever the red button is pressed in a simulated universe, that universe is deleted, and successor universes with outcomes 0 and 1 written on the tape are initiated. Suppose, in this case, that each time, the beings create three identical simulations with outcome 0, and just one with outcome 1. From the perspective of the inhabitants, there is no way to detect that outcomes 0 and 1 are being treated differently, and so they represent them in their theories with one branch each. In fact, though, given this representation, there is an at least arguably natural sense in which they ought to assign to the outcome 0 branch three times the importance of the outcome 1 branch: in other words, they ought to assign branch weights (3/4,1/4).

"They don't know this. But suppose that they believe that there are unknown weights attached to the branches. What happens now? After N runs of the experiment, there will actually be 4^N simulations, although in the inhabitants' theoretical representation, these are represented by 2^N branches. Of the 4^N simulations, almost all (for large N) will contain close to 3N/4 zeros and N/4 ones."
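Kent's "almost all" claim can be checked exactly for moderate N. The following is my own sketch (not from the thread): under the replication rule Kent describes, each run replaces every simulation with four successors, three showing 0 and one showing 1, so a bit string with k zeros occurs in C(N,k)*3^k of the 4^N simulations, and the zero-count across simulations is Binomial(N, 3/4).

```python
# Count exactly what fraction of the 4^N simulations have a zero-frequency
# within a small tolerance of 3/4.  A string with k zeros is realised by
# C(N,k) * 3^k simulations (each 0 outcome spawns 3 identical copies).
from math import comb

def fraction_near_three_quarters(N, tol=0.05):
    """Fraction of the 4^N simulations whose zero-frequency k/N is
    within tol of 3/4."""
    total = 4 ** N
    near = sum(comb(N, k) * 3 ** k
               for k in range(N + 1)
               if abs(k / N - 0.75) <= tol)
    return near / total

for N in (10, 100, 1000):
    print(N, fraction_near_three_quarters(N))
```

The fraction climbs toward 1 as N grows, which is just the law of large numbers applied to the counting measure on simulations.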

This is where my intuition breaks down -- this is by no means obvious to me, though I know that this is what you predicted for the 3:1 case we discussed before. My problem with this conclusion is that there are only 2^N distinct bit strings of length N. So the 4^N simulations must contain a lot of duplicates. In fact, 4^N is vastly larger than 2^N: 4^N/2^N = 2^N. So the number of replicates grows without bound as N --> oo. Why should those bit strings with a 3:1 ratio of zeros to ones be favoured in the duplication? Would not all strings be duplicated uniformly, so that the 4^N simulations contain exactly the same proportion of 3:1 bit strings as the original set of 2^N possible bit strings? My intuition is clearly different from Kent's and yours.
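The duplication question can be made concrete with a small enumeration (my sketch, under Kent's replication rule): the copies are not uniform, because every 0 outcome spawns three successors while every 1 spawns one, so a string with k zeros is replicated 3^k times.

```python
# Enumerate all 2^N bit strings for small N and count how many of the
# 4^N simulations realise each one: a string with k zeros gets 3^k copies.
from itertools import product

N = 4
counts = {}
for bits in product("01", repeat=N):
    s = "".join(bits)
    counts[s] = 3 ** s.count("0")

# The multiplicities exhaust all 4^N simulations: sum_k C(N,k)*3^k = 4^N.
assert sum(counts.values()) == 4 ** N
print(counts["0000"], counts["1111"])  # 81 vs. 1: zero-heavy strings dominate
```

So the duplication is maximally non-uniform: the all-zeros string gets 3^N copies and the all-ones string gets one, which is exactly why the counting measure on simulations concentrates near a 3:1 zero-to-one ratio rather than near 1:1.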

Now Kent goes on:
"Now, I think I can see how to run some, though not all, of an argument that supports this conclusion. The branch importance measure defined by inhabitants who find relative frequency 3/4 of zeros corresponds to the counting measure on simulations. If we could argue, for instance by appealing to symmetry, that each of the 4^N simulations is equally important, then this branch importance measure would indeed be justified. If we could also argue, perhaps using some form of anthropic reasoning, that there is an equal chance of finding oneself in any of the 4^N simulations, then the chance of finding oneself in a simulation in which one concludes that the branch weights are (very close to) (3/4,1/4) would be very close to one. ... There would indeed then seem to be a sense in which the branch weights define which subsets of the branches are important for theory confirmation.

"It seems hard to make this argument rigorous. In particular, the notion of 'chance of finding oneself' in a particular simulation doesn't seem easy to define properly. Still, we have an arguable natural measure on simulations, the counting measure, according to which most of the inhabitants will arrive at (close to) the right theory of branch weights. That might perhaps be progress."
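Kent's "chance of finding oneself" can at least be simulated (my sketch, taking the counting measure as given): choosing one of the 4^N simulations uniformly at random is equivalent to running N trials in which each trial yields 0 with probability 3/4, since three of the four successors at each step carry a 0.

```python
# Sample inhabitants uniformly under the counting measure and record the
# zero-frequency each one observes.  Nearly all see a frequency near 3/4.
import random

random.seed(1)

def sample_inhabitant(N):
    """Zero-frequency seen by a uniformly chosen one of the 4^N simulations."""
    zeros = sum(1 for _ in range(N) if random.random() < 0.75)
    return zeros / N

freqs = [sample_inhabitant(10_000) for _ in range(200)]
print(min(freqs), max(freqs))  # all tightly clustered around 0.75
```

This only illustrates the arithmetic of the counting measure; it does not, of course, settle the conceptual question of whether "chance of finding oneself" is well defined.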


It is clear that Kent is far from convinced by this. And I have indicated that I am far from convinced even of things that Kent seems to find intuitively obvious. This needs to be worked through more carefully -- I remain unconvinced that branch duplication provides a way of getting probabilities into the data.

What do you think about interpreting what one finds as an observer as the probability of being one particular leaf of the branching MWI tree, i.e. reading self-location uncertainty as probability?  I see no problem with treating those leaves as an ensemble and one's experience (a sequence of results) as a probabilistic sample from that ensemble.  The fact that no one can "see" the ensemble is like any probability example in which the ensemble is usually just hypothetical, i.e. what could have happened (or what Kastner calls "possibility space").

Brent

To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/9cbc068a-eaf8-80bf-eccf-6216d2854043%40verizon.net.
