Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-10 Thread Russell Standish
On Mon, Mar 09, 2020 at 08:41:01PM -0700, 'Brent Meeker' via Everything List wrote:
> It may seem counterintuitive, but as the sample length goes up, the
> probability of each possible proportion goes down, including that of the
> true value.  It goes down because there are more possible exact values.
> 
> Brent
>

Yes, but that is not really related either. When talking about all
proportions that lie within (say) 1% of 50/50, the number of such distinct
proportions grows faster than the probability of each individual proportion
shrinks, so the total proportion of strings in that band still grows towards one.
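
A minimal numerical sketch of this point (Python, standard library only;
math.comb needs Python 3.8+): the chance of landing exactly on 50/50 shrinks
like 1/sqrt(N), while the total mass within 1% of 50/50 grows towards one.

  # Sketch: exact 50/50 probability vs. total mass within 1% of 50/50.
  from math import comb

  for N in (100, 1000, 10000):
      exact = comb(N, N // 2) / 2**N           # P(exactly N/2 ones)
      lo, hi = int(0.49 * N), int(0.51 * N)    # ones-counts within 1% of 50/50
      band = sum(comb(N, k) for k in range(lo, hi + 1)) / 2**N
      print(N, exact, band)                    # exact falls, band rises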

-- 


Dr Russell Standish                      Phone 0425 253119 (mobile)
Principal, High Performance Coders hpco...@hpcoders.com.au
  http://www.hpcoders.com.au




Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-09 Thread 'Brent Meeker' via Everything List



On 3/9/2020 3:08 PM, Bruce Kellett wrote:
On Tue, Mar 10, 2020 at 8:54 AM Russell Standish <li...@hpcoders.com.au> wrote:

    On Sun, Mar 08, 2020 at 10:10:23PM +1100, Bruce Kellett wrote:
    > > In order to infer a probability of p = 0.5, your branch data must have
    > > approximately equal numbers of zeros and ones. The number of branches
    > > with equal numbers of zeros and ones is given by the binomial
    > > coefficient. For large even N = 2M trials, this coefficient is
    > > N!/(M!*M!). Using the Stirling approximation to the factorial for
    > > large N, this goes as 2^N/sqrt(N) (within factors of order one). Since
    > > there are 2^N sequences, the proportion with n_0 = n_1 vanishes as
    > > 1/sqrt(N) for N large.
    >
    > This is the nub of the proof you wanted.

    No - it is simply irrelevant. The statement I made was about the
    proportion of strings whose bit ratio lies within a certain percentage
    of the expected value.

    After all, when making a measurement, you are interested in the
    value and its error bounds, e.g. 10mm +/- 0.1%, or 10mm +/- 0.01mm. We
    can never know its exact value.



If you are using experimental data to estimate a quantity (and a p 
value is a quantity in the required sense), then you are interested in 
the confidence interval, not an absolute or percentage error. And the 
confidence interval for a given probability of including the true 
value decreases with the number of trials (since the standard error 
decreases with N).


Right.  So that's because the density of results concentrates around the 
true value as N->oo.  In statistician talk, the mean is a consistent 
estimator.
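
A minimal sketch of what consistency means here (Python; a fair Bernoulli
source is assumed purely for illustration): the sample mean tightens around
0.5 at the 1/sqrt(N) rate of the standard error.

  # Sketch: the sample mean of Bernoulli(0.5) trials concentrates on 0.5.
  import random
  random.seed(1)

  for N in (100, 10000, 1000000):
      ones = sum(random.random() < 0.5 for _ in range(N))
      se = 0.5 / N**0.5        # standard error of the mean, sqrt(p*(1-p)/N)
      print(N, ones / N, se)   # the estimate tightens as N grows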


Brent



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-09 Thread 'Brent Meeker' via Everything List




On 3/9/2020 2:53 PM, Russell Standish wrote:

On Sun, Mar 08, 2020 at 10:10:23PM +1100, Bruce Kellett wrote:

> > In order to infer a probability of p = 0.5, your branch data must have
> > approximately equal numbers of zeros and ones. The number of branches
> > with equal numbers of zeros and ones is given by the binomial
> > coefficient. For large even N = 2M trials, this coefficient is
> > N!/(M!*M!). Using the Stirling approximation to the factorial for large
> > N, this goes as 2^N/sqrt(N) (within factors of order one). Since there
> > are 2^N sequences, the proportion with n_0 = n_1 vanishes as 1/sqrt(N)
> > for N large.
>
> This is the nub of the proof you wanted.

No - it is simply irrelevant. The statement I made was about the
proportion of strings whose bit ratio lies within a certain percentage
of the expected value.

After all, when making a measurement, you are interested in the
value and its error bounds, e.g. 10mm +/- 0.1%, or 10mm +/- 0.01mm. We
can never know its exact value.


It may seem counterintuitive, but as the sample length goes up, the 
probability of each possible proportion goes down, including that of the 
true value.  It goes down because there are more possible exact values.
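
The standard Stirling estimate makes this precise (a sketch of the textbook
calculation, in LaTeX): even the most likely exact proportion has vanishing
probability, since

  P(n_0 = n_1 = M \mid N = 2M) = \binom{2M}{M}\, 2^{-2M}
    \approx \frac{4^M}{\sqrt{\pi M}}\, 2^{-2M}
    = \frac{1}{\sqrt{\pi M}} \xrightarrow{M \to \infty} 0,

and every other exact count is less probable still.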


Brent



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-09 Thread Bruce Kellett
On Tue, Mar 10, 2020 at 11:49 AM Alina Gutoreva wrote:

> I think this is the time when I would like to ACTUALLY understand what
> you are talking about...
>
> I think this is important, but you lost me on nimimi:
>
> N!/(M!*M!)
>
>
> Would appreciate any examples from a personal-life perspective too!
>


What we are talking about are subtle points in the interpretation of the
binomial distribution. Look it up on the web if you want to see where the
N-factorial over M-factorial-squared comes from: it counts the number of
ways you can get equal numbers of zeros and ones in an N(=2M)-bit string.
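
A small concrete case may be the quickest route: with N = 4 tosses (so M = 2),
N!/(M!*M!) = 24/(2*2) = 6, and indeed 6 of the 16 possible 4-bit strings are
balanced. A minimal sketch in Python:

  # Sketch: enumerate all 4-bit strings and count those with equal 0s and 1s.
  from itertools import product
  from math import factorial

  strings = ["".join(bits) for bits in product("01", repeat=4)]
  balanced = [s for s in strings if s.count("0") == s.count("1")]
  print(balanced)              # ['0011', '0101', '0110', '1001', '1010', '1100']
  print(len(balanced), 2**4)   # 6 of 16
  print(factorial(4) // (factorial(2) * factorial(2)))   # N!/(M!*M!) = 6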

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-09 Thread Alina Gutoreva
I think this is the time when I would like to ACTUALLY understand what you are 
talking about...

I think this is important, but you lost me on nimimi:
> N!/(M!*M!)

Would appreciate any examples from a personal-life perspective too!



> On 9 Mar 2020, at 22:08, Bruce Kellett wrote:
> 
> On Tue, Mar 10, 2020 at 8:54 AM Russell Standish <li...@hpcoders.com.au> wrote:
> On Sun, Mar 08, 2020 at 10:10:23PM +1100, Bruce Kellett wrote:
> > 
> > > > In order to infer a probability of p = 0.5, your branch data must have
> > > > approximately equal numbers of zeros and ones. The number of branches
> > > > with equal numbers of zeros and ones is given by the binomial
> > > > coefficient. For large even N = 2M trials, this coefficient is
> > > > N!/(M!*M!). Using the Stirling approximation to the factorial for
> > > > large N, this goes as 2^N/sqrt(N) (within factors of order one). Since
> > > > there are 2^N sequences, the proportion with n_0 = n_1 vanishes as
> > > > 1/sqrt(N) for N large.
> > 
> > This is the nub of the proof you wanted.
> 
> No - it is simply irrelevant. The statement I made was about the
> proportion of strings whose bit ratio lies within a certain percentage
> of the expected value.
> 
> After all, when making a measurement, you are interested in the
> value and its error bounds, e.g. 10mm +/- 0.1%, or 10mm +/- 0.01mm. We
> can never know its exact value.
> 
> 
> If you are using experimental data to estimate a quantity (and a p value is a
> quantity in the required sense), then you are interested in the confidence
> interval, not an absolute or percentage error. And the confidence interval
> for a given probability of including the true value decreases with the number
> of trials (since the standard error decreases with N).
> 
> Bruce
> 



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-09 Thread Bruce Kellett
On Tue, Mar 10, 2020 at 8:54 AM Russell Standish <li...@hpcoders.com.au> wrote:

> On Sun, Mar 08, 2020 at 10:10:23PM +1100, Bruce Kellett wrote:
> >
> > > > In order to infer a probability of p = 0.5, your branch data must have
> > > > approximately equal numbers of zeros and ones. The number of branches
> > > > with equal numbers of zeros and ones is given by the binomial
> > > > coefficient. For large even N = 2M trials, this coefficient is
> > > > N!/(M!*M!). Using the Stirling approximation to the factorial for
> > > > large N, this goes as 2^N/sqrt(N) (within factors of order one). Since
> > > > there are 2^N sequences, the proportion with n_0 = n_1 vanishes as
> > > > 1/sqrt(N) for N large.
> >
> >
> > This is the nub of the proof you wanted.
>
> No - it is simply irrelevant. The statement I made was about the
> proportion of strings whose bit ratio lies within a certain percentage
> of the expected value.
>
> After all, when making a measurement, you are interested in the
> value and its error bounds, e.g. 10mm +/- 0.1%, or 10mm +/- 0.01mm. We
> can never know its exact value.
>


If you are using experimental data to estimate a quantity (and a p value is
a quantity in the required sense), then you are interested in the
confidence interval, not an absolute or percentage error. And the
confidence interval for a given probability of including the true value
decreases with the number of trials (since the standard error decreases
with N).

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-09 Thread Russell Standish
On Sun, Mar 08, 2020 at 10:10:23PM +1100, Bruce Kellett wrote:
>
> > In order to infer a probability of p = 0.5, your branch data must have
> > approximately equal numbers of zeros and ones. The number of branches
> > with equal numbers of zeros and ones is given by the binomial
> > coefficient. For large even N = 2M trials, this coefficient is
> > N!/(M!*M!). Using the Stirling approximation to the factorial for large
> > N, this goes as 2^N/sqrt(N) (within factors of order one). Since there
> > are 2^N sequences, the proportion with n_0 = n_1 vanishes as 1/sqrt(N)
> > for N large.
>
> This is the nub of the proof you wanted.

No - it is simply irrelevant. The statement I made was about the
proportion of strings whose bit ratio lies within a certain percentage
of the expected value.

After all, when making a measurement, you are interested in the
value and its error bounds, e.g. 10mm +/- 0.1%, or 10mm +/- 0.01mm. We
can never know its exact value.


-- 


Dr Russell Standish                      Phone 0425 253119 (mobile)
Principal, High Performance Coders hpco...@hpcoders.com.au
  http://www.hpcoders.com.au




Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-09 Thread Bruno Marchal

> On 8 Mar 2020, at 11:56, Bruce Kellett wrote:
> 
> On Sun, Mar 8, 2020 at 7:46 PM Russell Standish <li...@hpcoders.com.au> wrote:
> On Sun, Mar 08, 2020 at 06:50:52PM +1100, Bruce Kellett wrote:
> > On Sun, Mar 8, 2020 at 5:32 PM Russell Standish wrote:
> > 
> > On Fri, Mar 06, 2020 at 10:44:37AM +1100, Bruce Kellett wrote:
> > 
> > > That is, in fact, false. It does not generate the same strings as
> > > flipping a coin in a single world. Sure, each of the strings in Everett
> > > could have been obtained from coin flips -- but then the probability of
> > > a sequence of 10,000 heads is very low, whereas in many-worlds you are
> > > guaranteed that one observer will obtain this sequence. There is a
> > > profound difference between the two cases.
> > 
> > You have made this statement multiple times, and it appears to be at
> > the heart of our disagreement. I don't see what the profound
> > difference is.
> > 
> > If I select a subset from the set of all strings of length N, for example
> > all strings with exactly N/3 1s, then I get a quite specific value for the
> > proportion of the whole that match it:
> > 
> >     C(N, N/3) * 2^{-N} = p.
> > 
> > Now this number p will also equal the probability of seeing exactly
> > N/3 coins land head up when N coins are tossed.
> > 
> > What is the profound difference?
> > 
> > 
> > 
> > Take a more extreme case. The probability of getting 1000 heads on 1000
> > coin tosses is 1/2^1000. If you measure the spin components of an ensemble
> > of identical spin-half particles, there will certainly be one observer who
> > sees 1000 spin-up results. That is the difference -- the difference between
> > a probability of 1/2^1000 and a probability of one.
> > 
> > In fact in a recent podcast by Sean Carroll (that has been discussed on the
> > list previously), he makes the statement that this rare event (with
> > probability p = 1/2^1000) certainly occurs. In other words, he is claiming
> > that the probability is both 1/2^1000 and one. That this is a flat
> > contradiction appears to escape him. The difference in probabilities
> > between coin tosses and Everettian measurements couldn't be more stark.
> 
> That is because you're talking about different things. The rare event
> that 1 in 2^1000 observers see certainly occurs. In this case
> certainty does not refer to probability 1, as no probabilities are
> applicable in that 3p picture. Probabilities in the MWI sense refer
> to what an observer will see next; it is a 1p concept.
> 
> And in that 1p context, I do not see any difference in how probabilities
> are interpreted, nor in their numerical values.
> 
> Perhaps Carroll is being sloppy. If so, I would think that could be forgiven.
> 
> 
> Yes, I think Carroll's comment was just sloppy. The trouble is that this 
> sort of sloppiness permeates all of these discussions. As you say, 
> probability really has meaning only in the 1p picture. So the guy who sees 
> 1000 spin-ups in the 1000 trials will conclude that the probability of 
> spin-up is very close to one. That is why it makes sense to say that the 
> probability is one. The fact that this one guy sees this is certain in 
> Many-worlds (this may be another meaning of probability, but an event that is 
> certain to happen is usually referred to as having probability one).
> 
> The trouble comes when you use the same term 'probability' to refer to the 
> fact that this guy is just one of the 2^N guys who are generated in this 
> experiment. The fact that he may be in the minority does not alter the fact 
> that he exists, and infers a probability close to one for spin-up. The 3p 
> picture here is to consider that this guy is just chosen at random from a 
> uniform distribution over all 2^N copies at the end of the experiment. And I 
> find it difficult to give any sensible meaning to that idea. No one is 
> selecting anything at random from the 2^N copies because that is not how 
> the copies come about -- it is all completely deterministic.
> 
> The guy who gets the 1000 spin-ups infers a probability close to one, so he 
> is entitled to think that the probability of getting an approximately even 
> number of ups and downs is very small: eps^1000*(1-eps)^1000 for eps very 
> close to zero. Similarly, guys who see approximately equal numbers of up and 
> down infer a probability close to 0.5. So they are entitled to conclude that 
> the probability of seeing all spin-up is vanishingly small, namely, 1/2^1000.
> 
> The main point I have been trying to make is that this is true whatever the 
> ratio of ups to downs is in the data that any individual observes. Everyone 
> concludes that their observed relative frequency is a good indicator of the 
> actual probability, and that other ratios of up:down are extremely unlikely.

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-09 Thread Bruno Marchal

> On 8 Mar 2020, at 08:50, Bruce Kellett wrote:
> 
> On Sun, Mar 8, 2020 at 5:32 PM Russell Standish <li...@hpcoders.com.au> wrote:
> On Fri, Mar 06, 2020 at 10:44:37AM +1100, Bruce Kellett wrote:
> 
> > That is, in fact, false. It does not generate the same strings as flipping a
> coin in a single world. Sure, each of the strings in Everett could have been
> > obtained from coin flips -- but then the probability of a sequence of 10,000
> > heads is very low, whereas in many-worlds you are guaranteed that one 
> > observer
> > will obtain this sequence. There is a profound difference between the two
> > cases.
> 
> You have made this statement multiple times, and it appears to be at
> the heart of our disagreement. I don't see what the profound
> difference is.
> 
> If I select a subset from the set of all strings of length N, for example all 
> strings with exactly N/3 1s, then I get a quite specific value for the 
> proportion of the whole that match it:
> 
>     C(N, N/3) * 2^{-N} = p.
> 
> Now this number p will also equal the probability of seeing exactly
> N/3 coins land head up when N coins are tossed.
> 
> What is the profound difference?
> 
> 
> Take a more extreme case. The probability of getting 1000 heads on 1000 coin 
> tosses is 1/2^1000.
> If you measure the spin components of an ensemble of identical spin-half 
> particles, there will certainly be one observer who sees 1000 spin-up 
> results. That is the difference -- the difference between a probability of 
> 1/2^1000 and a probability of one.

That is the 3-1p probability. You forget that the uncertainty is about the 
experience. You did accept that there is a 1p-uncertainty. 




> 
> In fact in a recent podcast by Sean Carroll (that has been discussed on the 
> list previously), he makes the statement that this rare event (with 
> probability p = 1/2^1000) certainly occurs. In other words, he is claiming  
> that the probability is both 1/2^1000 and one. That this is a flat 
> contradiction appears to escape him. The difference in probabilities between 
> coin tosses and Everettian measurements couldn't be more stark.

The probability that someone gets the sequence of only heads is one; that does 
not entail that the probability that I am that one is 1. The flat contradiction 
disappears when you keep in mind that the uncertainty, which we wish to quantify 
in one manner or another, concerns the particular 1p accessible experience.

Bruno




> 
> Bruce
> 



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-09 Thread Bruno Marchal

> On 5 Mar 2020, at 12:12, Bruce Kellett  wrote:
> 
> On Thu, Mar 5, 2020 at 10:05 PM Bruno Marchal wrote:
> On 5 Mar 2020, at 05:52, Bruce Kellett wrote:
>> On Thu, Mar 5, 2020 at 3:23 PM 'Brent Meeker' via Everything List wrote:
>> On 3/4/2020 7:54 PM, Bruce Kellett wrote:
>>> On Thu, Mar 5, 2020 at 2:02 PM 'Brent Meeker' via Everything List wrote:
>>> On 3/4/2020 6:45 PM, Bruce Kellett wrote:
>>>> On Thu, Mar 5, 2020 at 1:34 PM 'Brent Meeker' via Everything List wrote:
 On 3/4/2020 6:18 PM, Bruce Kellett wrote:
> 
> But one cannot just assume the Born rule in this case -- one has to use 
> the data to verify the probabilistic predictions. And the observers on 
> the majority of branches will get data that disconfirms the Born rule. 
> (For any value of the probability, the proportion of observers who get 
> data consistent with this value decreases as N becomes large.)
 
 No, that's where I was disagreeing with you.  If "consistent with" is 
 defined as being within some given fraction, the proportion increases as N 
 becomes large.  If the probability of an event is p and q = 1-p, then the 
 proportion of events in N trials within one std-deviation of p approaches 
 1/e as N->oo, and the width of the one std-deviation range goes down as 
 1/sqrt(N).  So the distribution of values over the ensemble of observers 
 becomes concentrated near the expected value, i.e. is consistent with that 
 value.
 
 
 But what is the expected value? Does that not depend on the inferred 
 probabilities? The probability p is not a given -- it can only be inferred 
 from the observed data. And different observers will infer different 
 values of p. Then certainly, each observer will think that the 
 distribution of values over the 2^N observers will be concentrated near 
 his inferred value of p. The trouble is that this is true whatever 
 value of p the observer infers -- i.e., for whatever branch of the 
 ensemble he is on.
>>> 
>>> Not if the branches are unequally weighted (or numbered), as Carroll seems 
>>> to assume, and those weights (or numbers) define the probability of the 
>>> branch in accordance with the Born rule.  I'm not arguing that this doesn't 
>>> have to be put in "by hand".  I'm arguing it is a way of assigning measures 
>>> to the multiple worlds so that even though all the results occur, almost 
>>> all observers will find results close to the Born rule, i.e. that 
>>> self-locating uncertainty will imply the right statistics.
>>> 
>>> But the trouble is that Everett assumes that all outcomes occur on every 
>>> trial. So all the branches occur with certainty -- there is no "weight" 
>>> that differentiates different branches. That is to assume that the branches 
>>> occur with the probabilities that they would have in a single-world 
>>> scenario. To assume that branches have different weights is in direct 
>>> contradiction to the basic postulates of the many-worlds approach. It is 
>>> not that one can "put in the weights by hand"; it is that any assignment of 
>>> such weights contradicts the basis of the interpretation, which is that 
>>> all branches occur with certainty.
>> 
>> All branches occur with certainty so long as their weight>0.  Yes, Everett 
>> simply assumed they all occur.  Take a simple branch counting model.  Assume 
>> that at each trial there are 100 branches and a of them are |0> and b 
>> are |1>, and the values are independent of the prior values in the sequence.  
>> So long as a and b > 0, every value, either |0> or |1>, will occur at every 
>> branching.  But almost all observers, seeing only one sequence thru the 
>> branches, will infer P(0)~|a|^2 and P(1)~|b|^2.
>> 
>> Do you really disagree that there is a way to assign weights or 
>> probabilities to the sequences that reproduces the same statistics as 
>> repeating the N trials many times in one world?  It's no more than saying 
>> that one-world is an ergodic process.
>> 
>>  
>> I am saying that assigning weights or probabilities in Everett, by hand 
>> according to the Born rule, is incoherent.
> 
> I think that it is incoherent with a preconception of the notion of “world”. 
> There are only consistent histories, and in fact “consistent histories 
> supported by a continuum of computations”. You take Everett too literally.
> 
> 
> I thought you were the one that claimed that Everett had essentially solved 
> all the problems……

On the contrary. Everett is going in the “right” (with respect to mechanism) 
direction, but he is using Mechanism more or less explicitly, and this requires 
taking into account *all* computations, not just the quantum ones.
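
As an aside, the branch-counting model quoted above is easy to simulate (a
minimal sketch; the counts a = 64, b = 36 are illustrative assumptions, not
from the thread). Picking an observer uniformly over final branches is
equivalent to Bernoulli(a/100) sampling at each trial, so a typical observer's
relative frequency lands near a/100.

  # Sketch of the branch-counting model: each trial splits into 100 branches,
  # a of them giving outcome 0 and b = 100 - a giving outcome 1.  A uniformly
  # chosen final branch is a Bernoulli(a/100) sequence.
  import random
  random.seed(2)

  a, trials = 64, 10000
  outcomes = [0 if random.randrange(100) < a else 1 for _ in range(trials)]
  print(outcomes.count(0) / trials)   # close to a/100 = 0.64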


> 
> But actually, 

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-09 Thread Bruno Marchal

> On 5 Mar 2020, at 12:07, Bruce Kellett wrote:
> 
> On Thu, Mar 5, 2020 at 9:59 PM Bruno Marchal wrote:
> On 5 Mar 2020, at 04:54, Bruce Kellett wrote:
>> On Thu, Mar 5, 2020 at 2:02 PM 'Brent Meeker' via Everything List wrote:
>> On 3/4/2020 6:45 PM, Bruce Kellett wrote:
>>> On Thu, Mar 5, 2020 at 1:34 PM 'Brent Meeker' via Everything List wrote:
>>> On 3/4/2020 6:18 PM, Bruce Kellett wrote:
 
 But one cannot just assume the Born rule in this case -- one has to use 
 the data to verify the probabilistic predictions. And the observers on the 
 majority of branches will get data that disconfirms the Born rule. (For 
 any value of the probability, the proportion of observers who get data 
 consistent with this value decreases as N becomes large.)
>>> 
>>> No, that's where I was disagreeing with you.  If "consistent with" is 
>>> defined as being within some given fraction, the proportion increases as N 
>>> becomes large.  If the probability of an event is p and q = 1-p, then the 
>>> proportion of events in N trials within one std-deviation of p approaches 
>>> 1/e as N->oo, and the width of the one std-deviation range goes down as 
>>> 1/sqrt(N).  So the distribution of values over the ensemble of observers 
>>> becomes concentrated near the expected value, i.e. is consistent with that 
>>> value.
>>> 
>>> 
>>> But what is the expected value? Does that not depend on the inferred 
>>> probabilities? The probability p is not a given -- it can only be inferred 
>>> from the observed data. And different observers will infer different values 
>>> of p. Then certainly, each observer will think that the distribution of 
>>> values over the 2^N observers will be concentrated near his inferred value 
>>> of p. The trouble is that this is true whatever value of p the 
>>> observer infers -- i.e., for whatever branch of the ensemble he is on.
>> 
>> Not if the branches are unequally weighted (or numbered), as Carroll seems 
>> to assume, and those weights (or numbers) define the probability of the 
>> branch in accordance with the Born rule.  I'm not arguing that this doesn't 
>> have to be put in "by hand".  I'm arguing it is a way of assigning measures 
>> to the multiple worlds so that even though all the results occur, almost all 
>> observers will find results close to the Born rule, i.e. that self-locating 
>> uncertainty will imply the right statistics.
>> 
>> But the trouble is that Everett assumes that all outcomes occur on every 
>> trial. So all the branches occur with certainty —
> 
> In the 3p view, but then the “self-locating” idea explains that QM predicts 
> that the observers obtained do not see the “other branches” (“they don’t 
> even feel the split”, as Everett argued correctly).
> 
> 
> But each individual can test the probability predictions from the 
> first-person data obtained on his branch. And most will find that the Born 
> rule is disconfirmed if Everett is true.

That is not in Everett, and Graham, like Hartle and Gell-Mann, explains why we 
must not compute or infer the probability. Either you add the Born Rule, or you 
use the Paulette Février calculus, or Gleason's theorem, to get the 
probabilities. The coefficients of the terms, when squared, give the relative 
probabilities on the consistent histories. That is rather clear also in the 
work of Griffiths and Omnès. 




> 
>> there is no "weight" that differentiates different branches.
> 
> Then the Born rule is false, and the whole of QM is false.
> 
> No, QM is not false. It is only Everett that is disconfirmed by experiment.

?



> 
> Everett + mechanism + Gleason do solve the core of the problem.
> 
> No. As discussed with Brent, the Born rule cannot be derived within the 
> framework of Everettian QM. Gleason's theorem is useful only if you have a 
> prior proof of the existence of a probability distribution.

The quantum formalism imposes such a distribution, although in a relative way 
(which threatens the notion of worlds, but this is no problem with 
mechanism, as worlds are pure phenomenological constructs made from the relative 
numbers in arithmetic). 




> And you cannot achieve that within the Everettian context. Even postulating 
> the Born rule ad hoc and imposing it by hand does not solve the problems with 
> Everettian QM.

Yes, you need to go beyond Everett, compute the probabilities on all 
computations, and abandon the notion of worlds; eventually this requires 
abandoning physicalism.


> 
> (Except that we can no longer use the universal wave, but then we do recover 
> it in arithmetic, as was necessary, so no problem at all, except 
> difficult mathematics …).
> 
> 
> 
> 
>> That is to assume that the branches occur with the probabilities that they 
>> would have in a single-world scenario. To assume

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-09 Thread Bruno Marchal

> On 5 Mar 2020, at 11:59, Bruce Kellett wrote:
> 
> On Thu, Mar 5, 2020 at 9:46 PM Bruno Marchal wrote:
> On 5 Mar 2020, at 01:40, Bruce Kellett wrote:
>> On Thu, Mar 5, 2020 at 10:39 AM Stathis Papaioannou wrote:
>> On Thu, 5 Mar 2020 at 09:46, Bruce Kellett wrote:
>> 
>> The greater problem is that any idea of probability founders when all 
>> outcomes occur for any measurement. Or have you not followed the arguments I 
>> have been making that shows this to be the case?
>> 
>> I think it worth noting that to some people it is obvious that if an entity 
>> is to be duplicated in two places it should have a 1/2 expectation of 
>> finding itself in one or other place while to other people it is obvious 
>> that there should be no such expectation.
>> 
>> 
>> Hence my point that intuition is usually faulty in such cases -- the 
>> straightforward testing of any intuition with repeated trials shows the 
>> unreliability of such intuitions.
> 
> It did not. You were confusing the first person account with the third person 
> account.
> 
> Bullshit. There is no such confusion. You are just using a rhetorical 
> flourish to avoid facing the real issues.
> 
>  
> QM predicts that all measurement outcomes are obtained, and by linearity, that 
> all observers obtained could not have predicted it, for the same reason 
> nobody can predict the outcome in the WM self)duplication experience. Those 
> who claim the contrary have to say at some point that the Helsinki guy has 
> died, but then Mechanism is refuted.
> 
> 
> Of course no one can predict the outcome of a quantum spin measurement on a 
> random spin-half particle. Just as no one can predict his 1p outcome in 
> WM-duplication.

OK. That was my point. That’s the very point John Clark disagrees with. 
If you agree that we cannot predict that 1p outcome, you agree with what I 
call the 1p-indeterminacy. 


> That  is the point I have been making -- there is no useful notion of 
> probability available in either case.

Once you agree that there is a 1p indeterminacy, it is reasonable to ask 
oneself if there is a probability or uncertainty calculus for it, and in the ideal 
cases of the thought experiment, the binomial distribution makes sense. But this 
is used only to illustrate the 1p indeterminacy. The mathematics here suggests a 
quantum credibility, whose “certainty” case, or “yes-no experience”, is described 
by the modal logic of self-reference. That is for later.

Bruno


> 
> Bruce
> 



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-09 Thread Bruno Marchal

> On 5 Mar 2020, at 11:54, Bruce Kellett wrote:
> 
> On Thu, Mar 5, 2020 at 9:39 PM Bruno Marchal wrote:
> On 5 Mar 2020, at 00:39, Stathis Papaioannou wrote:
>> 
>> I think it worth noting that to some people it is obvious that if an entity 
>> is to be duplicated in two places it should have a 1/2 expectation of 
>> finding itself in one or other place while to other people it is obvious 
>> that there should be no such expectation.
> 
> It is not just obvious. It is derivable from the simplest definition of 
> “first person” and “third person”.
> 
> This is simply false. It cannot be derived from anything. The truth is that 
> testing any such notion about  the probability by repeating the trial shows 
> that no single value of the probability is appropriate. Alternatively, for 
> most 1p observers, any particular theory about the probability will be 
> disconfirmed.

That P(W or M) = 1 is confirmed by all copies. That P(W and M) = 1 is refuted by 
all copies.




> The first person data is the particular bit string recorded by an individual.

Yes, but seen from that individual’s point of view.



> From the 3p perspective,

Note that the question is about the 1p perspective ….


> there are 2^N different 1p bit strings after N trials.


In Helsinki you know in advance that you will get only one of the results among 
the 2^N different bit strings.

Bruno



> 
> Bruce
> 
>  
> All arguments presented against the 1p-indeterminacy have always been 
> refuted, almost every time by pointing to a confusion between first person 
> and third person.  The first person is defined by the owner of the personal 
> memory taken with them in the box, and the third person is described by the 
> personal memory of those outside the box.
> 
> 
> 
> 
>> This seems to be an immediate judgement on considering the question, with 
>> attempts at rational justification perhaps following but not being the 
>> primary determinant of belief. A parallel is Newcomb’s paradox: on learning 
>> of it some people immediately feel it is obvious you should choose one box 
>> and others immediately feel you should choose both boxes.
> 
> 
> I think that the Newcomb situation is far more complex, or that the 
> self-duplication is far easier, at least for anyone who admits even a weak 
> form of Mechanism. To believe that there is no indeterminacy is like 
> believing that all amoebas have telepathic powers. 
> 
> The only reason I can see to refuse the first person indeterminacy is the 
> comprehension that it leads to the end of physicalism, which is a long-lasting 
> comfortable habit of thought. People tend to hate changes of paradigm.  
> 
> Bruno
> 



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread Bruce Kellett
On Sun, Mar 8, 2020 at 11:54 PM smitra  wrote:

> On 08-03-2020 11:56, Bruce Kellett wrote:
> >
> > Yes, I think Carroll's comment was just sloppy. The trouble is
> > that this sort of sloppiness permeates all of these discussions. As
> > you say, probability really has meaning only in the 1p picture. So the
> > guy who sees 1000 spin-ups in the 1000 trials will conclude that the
> > probability of spin-up is very close to one. That is why it makes
> > sense to say that the probability is one. The fact that this one guy
> > sees this is certain in Many-worlds (This may be another meaning of
> > probability, but an event that is certain to happen is usually
> > referred to as having probability one.).
> >
> > The trouble comes when you use the same term 'probability' to refer to
> > the fact that this guy is just one of the 2^N guys who are generated
> > in this experiment. The fact that he may be in the minority does not
> > alter the fact that he exists, and infers a probability close to one
> > for spin-up. The 3p picture here is to consider that this guy is just
> > chosen at random from a uniform distribution over all 2^N copies at
> > the end of the experiment. And I find it difficult to give any
> > sensible meaning to that idea. No one is selecting anything at random
> > from the 2^N copies because that is not how the copies come about
> > -- it is all completely deterministic.
> >
> > The guy who gets the 1000 spin-ups infers a probability close to one,
> > so he is entitled to think that the probability of getting an
> > approximately even number of ups and downs is very small:
> > eps^1000*(1-eps)^1000 for eps very close to zero. Similarly, guys who
> > see approximately equal numbers of up and down infer a probability
> > close to 0.5. So they are entitled to conclude that the probability of
> > seeing all spin-up is vanishingly small, namely, 1/2^1000.
> >
> > The main point I have been trying to make is that this is true
> > whatever the ratio of ups to downs is in the data that any individual
> > observes. Everyone concludes that their observed relative frequency is
> > a good indicator of the actual probability, and that other ratios of
> > up:down are extremely unlikely. This is a simple consequence of the
> > fact that probability is, as you say, a 1p notion, and can only be
> > estimated from the actual data that an individual obtains. Since
> > people get different data, they get different estimates of the
> > probability, covering the entire range [0,1]; no 3p notion of
> > probability is available -- probabilities do not make sense in the
> > Everettian case when all outcomes occur. This is the basic argument
> > that Kent makes in arxiv:0905.0624.
>
> It's not true that everyone concludes that their observed relative
> frequency is
> a good indicator of the actual probability. Precisely in cases where
> there is a large deviation of the statistics from the actual probability
> will this also be visible in the observed data.


You appear to assume that there is an actual probability in these
situations. There is no evidence for that in Everett.

> It's only when you
> consider the case where the statistical fluctuation has affected all the
> data in a self-consistent way that this becomes hidden. But, of course,
> nothing limits that freak observer from doing a few more measurements.
>

I think you are referring to the possibility that sub-sequences of data do
not reflect the overall probability. Yes, but that is always the case. Why
do you think that experimenters at the LHC see so many apparently
significant results that go away with more data? The experimenter does not
know from his data that it is 'freak'. If he does more trials, or repeats
the experiment, the data may converge to some result, or they may not. If
Everett is correct, and there is no true probability, then the fact that
the data appear to converge is just a miracle -- or Everett is wrong. I
think the latter is more likely.
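
The per-branch inference problem is easy to exhibit directly (a minimal
sketch; N = 10 keeps the enumeration small): when both outcomes occur on every
trial, the 2^N branches realize every inferred frequency from 0 to 1, whatever
the 'actual' probability is supposed to be.

  # Sketch: with both outcomes on every trial, the 2^N branches between them
  # realize every possible inferred probability n_1/N from 0 to 1.
  from itertools import product

  N = 10
  estimates = sorted({sum(bits) / N for bits in product((0, 1), repeat=N)})
  print(estimates)   # [0.0, 0.1, 0.2, ..., 1.0] -- the whole range is inferred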

Bruce


The laws of physics may make it inevitable that there are observers who
> will happen to observe such large statistical deviations that they'll
> draw the wrong conclusions about the laws of physics. That fact is not
> evidence for or against such laws of physics. Experiments can still
> settle the question if the laws of physics are correct. Pointing to
> freak observers is not a good argument, because all these freak
> observers need to do is do more experiments to demonstrate that their
> previous observations are a statistical fluke.
>
> One can then continue to select those observers who'll continue to see
> statistical flukes. But the problem is then that these observers need to
> stop at some point, being satisfied with their observations implying the
> wrong theory. This means that not just the spin experiment, but
> everything else must also have been a statistical fluke in such a way as
> to imply the wrong theory in a consistent way. So, for centuries a large
> number of

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread Bruce Kellett
On Mon, Mar 9, 2020 at 5:29 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/8/2020 3:56 AM, Bruce Kellett wrote:
>
On Sun, Mar 8, 2020 at 7:46 PM Russell Standish <li...@hpcoders.com.au> wrote:
>
>> On Sun, Mar 08, 2020 at 06:50:52PM +1100, Bruce Kellett wrote:
>> > On Sun, Mar 8, 2020 at 5:32 PM Russell Standish wrote:
>> >
>> > On Fri, Mar 06, 2020 at 10:44:37AM +1100, Bruce Kellett wrote:
>> >
>> > > That is, in fact, false. It does not generate the same strings as
>> > > flipping a coin in a single world. Sure, each of the strings in Everett
>> > > could have been obtained from coin flips -- but then the probability of
>> > > a sequence of 10,000 heads is very low, whereas in many-worlds you are
>> > > guaranteed that one observer will obtain this sequence. There is a
>> > > profound difference between the two cases.
>> >
>> > You have made this statement multiple times, and it appears to be at
>> > the heart of our disagreement. I don't see what the profound
>> > difference is.
>> >
>> > If I select a subset from the set of all strings of length N, for
>> > example all strings with exactly N/3 1s, then I get a quite specific
>> > value for the proportion of the whole that match it:
>> >
>> >     C(N, N/3) * 2^{-N} = p.
>> >
>> > Now this number p will also equal the probability of seeing exactly
>> > N/3 coins land head up when N coins are tossed.
>> >
>> > What is the profound difference?
>> >
>> >
>> > Take a more extreme case. The probability of getting 1000 heads on 1000
>> > coin tosses is 1/2^1000.
>> > If you measure the spin components of an ensemble of identical spin-half
>> > particles, there will certainly be one observer who sees 1000 spin-up
>> > results. That is the difference -- the difference between a probability
>> > of 1/2^1000 and a probability of one.
>> >
>> > In fact in a recent podcast by Sean Carroll (that has been discussed on
>> > the list previously), he makes the statement that this rare event (with
>> > probability p = 1/2^1000) certainly occurs. In other words, he is
>> > claiming that the probability is both 1/2^1000 and one. That this is a
>> > flat contradiction appears to escape him. The difference in
>> > probabilities between coin tosses and Everettian measurements couldn't
>> > be more stark.
>>
>> That is because you're talking about different things. The rare event
>> that 1 in 2^1000 observers see certainly occurs. In this case
>> certainty does not refer to probability 1, as no probabilities are
>> applicable in that 3p picture. Probabilities in the MWI sense refer
>> to what an observer will see next; it is a 1p concept.
>>
>> And in that 1p context, I do not see any difference in how probabilities
>> are interpreted, nor in their numerical values.
>>
>> Perhaps Carroll is being sloppy. If so, I would think that could be
>> forgiven.
>>
>
>
> Yes, I think Carroll's comment was just sloppy. The trouble is that
> this sort of sloppiness permeates all of these discussions. As you say,
> probability really has meaning only in the 1p picture. So the guy who sees
> 1000 spin-ups in the 1000 trials will conclude that the probability of
> spin-up is very close to one. That is why it makes sense to say that the
> probability is one. The fact that this one guy sees this is certain in
> Many-worlds (This may be another meaning of probability, but an event that
> is certain to happen is usually referred to as having probability one.).
>
> The trouble comes when you use the same term 'probability' to refer to the
> fact that this guy is just one of the 2^N guys who are generated in this
> experiment. The fact that he may be in the minority does not alter the fact
> that he exists, and infers a probability close to one for spin-up. The 3p
> picture here is to consider that this guy is just chosen at random from a
> uniform distribution over all 2^N copies at the end of the experiment. And
> I find it difficult to give any sensible meaning to that idea. No one is
> selecting anything at random from the 2^N copies because that is not how
> the copies come about -- it is all completely deterministic.
>
> The guy who gets the 1000 spin-ups infers a probability close to one, so
> he is entitled to think that the probability of getting an approximately
> even number of ups and downs is very small: eps^1000*(1-eps)^1000 for eps
> very close to zero. Similarly, guys who see approximately equal numbers of
> up and down infer a probability close to 0.5. So they are entitled to
> conclude that the probability of seeing all spin-up is vanishingly small,
> namely, 1/2^1000.
>
> The main point I have been trying to make is that this is true whatever
> the ratio of ups to downs is in the data that any individual observes.
> Everyone concludes that their observed relative frequency is a good
> indicator of the actual probability, and that other ratios of up:down are
> extremely unlikely.

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread 'Brent Meeker' via Everything List



On 3/8/2020 3:56 AM, Bruce Kellett wrote:
On Sun, Mar 8, 2020 at 7:46 PM Russell Standish <li...@hpcoders.com.au> wrote:


On Sun, Mar 08, 2020 at 06:50:52PM +1100, Bruce Kellett wrote:
> On Sun, Mar 8, 2020 at 5:32 PM Russell Standish <li...@hpcoders.com.au> wrote:
>
>     On Fri, Mar 06, 2020 at 10:44:37AM +1100, Bruce Kellett wrote:
>
>     > That is, in fact, false. It does not generate the same strings as
>     > flipping a coin in a single world. Sure, each of the strings in
>     > Everett could have been obtained from coin flips -- but then the
>     > probability of a sequence of 10,000 heads is very low, whereas in
>     > many-worlds you are guaranteed that one observer will obtain this
>     > sequence. There is a profound difference between the two cases.
>
>     You have made this statement multiple times, and it appears to be at
>     the heart of our disagreement. I don't see what the profound
>     difference is.
>
>     If I select a subset from the set of all strings of length N, for
>     example all strings with exactly N/3 1s, then I get a quite specific
>     value for the proportion of the whole that match it:
>
>         C(N, N/3) * 2^{-N} = p.
>
>     Now this number p will also equal the probability of seeing exactly
>     N/3 coins land head up when N coins are tossed.
>
>     What is the profound difference?
>
>
> Take a more extreme case. The probability of getting 1000 heads on 1000
> coin tosses is 1/2^1000.
> If you measure the spin components of an ensemble of identical spin-half
> particles, there will certainly be one observer who sees 1000 spin-up
> results. That is the difference -- the difference between a probability of
> 1/2^1000 and a probability of one.
>
> In fact in a recent podcast by Sean Carroll (that has been discussed on the
> list previously), he makes the statement that this rare event (with
> probability p = 1/2^1000) certainly occurs. In other words, he is claiming
> that the probability is both 1/2^1000 and one. That this is a flat
> contradiction appears to escape him. The difference in probabilities
> between coin tosses and Everettian measurements couldn't be more stark.

That is because you're talking about different things. The rare event
that 1 in 2^1000 observers see certainly occurs. In this case
certainty does not refer to probability 1, as no probabilities are
applicable in that 3p picture. Probabilities in the MWI sense refer
to what an observer will see next; it is a 1p concept.

And in that 1p context, I do not see any difference in how probabilities
are interpreted, nor in their numerical values.

Perhaps Carroll is being sloppy. If so, I would think that could be
forgiven.



Yes, I think Carroll's comment was just sloppy. The trouble is 
that this sort of sloppiness permeates all of these discussions. As 
you say, probability really has meaning only in the 1p picture. So the 
guy who sees 1000 spin-ups in the 1000 trials will conclude that the 
probability of spin-up is very close to one. That is why it makes 
sense to say that the probability is one. The fact that this one guy 
sees this is certain in Many-worlds (This may be another meaning of 
probability, but an event that is certain to happen is usually 
referred to as having probability one.).


The trouble comes when you use the same term 'probability' to refer to 
the fact that this guy is just one of the 2^N guys who are generated 
in this experiment. The fact that he may be in the minority does not 
alter the fact that he exists, and infers a probability close to one 
for spin-up. The 3p picture here is to consider that this guy is just 
chosen at random from a uniform distribution over all 2^N copies at 
the end of the experiment. And I find it difficult to give any 
sensible meaning to that idea. No one is selecting anything at random 
from the 2^N copies because that is not how the copies come about 
-- it is all completely deterministic.


The guy who gets the 1000 spin-ups infers a probability close to one, 
so he is entitled to think that the probability of getting an 
approximately even number of ups and downs is very small: 
eps^1000*(1-eps)^1000 for eps very close to zero. Similarly, guys who 
see approximately equal numbers of up and down infer a probability 
close to 0.5. So they are entitled to conclude that the probability of 
seeing all spin-up is vanishingly small, namely, 1/2^1000.


The main point I have been trying to make is that this is true 
whatever the ratio of ups to downs is in the data that any individual 
observes. Everyone concludes that their observed relative frequency is 
a good indicator of the actual probability, and that other ratios of 
up:down are extremely unlikely.

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread 'Brent Meeker' via Everything List



On 3/8/2020 12:08 AM, Bruce Kellett wrote:
On Sun, Mar 8, 2020 at 6:14 PM Russell Standish <li...@hpcoders.com.au> wrote:


On Thu, Mar 05, 2020 at 09:45:38PM +1100, Bruce Kellett wrote:
> On Thu, Mar 5, 2020 at 5:26 PM Russell Standish <li...@hpcoders.com.au> wrote:
>
>     But a very large proportion of them (→1 as N→∞) will report being
>     within ε (called a confidence interval) of 50% for any given ε>0
>     chosen at the outset of the experiment. This is simply the law of
>     large numbers theorem. You can't focus on the vanishingly small
>     population that lie outside the confidence interval.
>
>
> This is wrong.

Them's fighting words. Prove it!


I have, in other posts and below.

> In the binary situation where both outcomes occur for every trial, there
> are 2^N binary sequences for N repetitions of the experiment. This set of
> binary sequences exhausts the possibilities, so the same sequence is
> obtained for any two-component initial state -- regardless of the
> amplitudes.

> You appear to assume that the natural probability in this situation is
> p = 0.5 and, what is more, your appeal to the law of large numbers applies
> only for single-world probabilities, in which there is only one outcome on
> each trial.

I didn't mention probability once in the above paragraph, not even
implicitly. I used the term "proportion". That the proportion will be
equal to the probability in a single universe case is a frequentist
assumption, and should be uncontroversial, but goes beyond what I
stated above.


Sure. But the proportion of the 2^N sequences that exhibit any 
particular p value (proportion of 1's) decreases with N.
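
Russell's identity above -- the proportion of length-N strings with exactly
N/3 ones equals the single-world probability of exactly N/3 heads -- is a
one-line check (a sketch; N = 18 keeps the brute-force enumeration fast):

  # Sketch: proportion of strings with exactly N/3 ones == binomial probability.
  from math import comb

  N = 18
  count = sum(1 for i in range(2**N) if bin(i).count("1") == N // 3)
  print(count / 2**N, comb(N, N // 3) / 2**N)   # the two numbers agree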


> In order to infer a probability of p = 0.5, your branch data must have
> approximately equal numbers of zeros and ones. The number of branches with
> equal numbers of zeros and ones is given by the binomial coefficient. For
> large even N = 2M trials, this coefficient is N!/(M!*M!). Using the
> Stirling approximation to the factorial for large N, this goes as
> 2^N/sqrt(N) (within factors of order one). Since there are 2^N sequences,
> the proportion with n_0 = n_1 vanishes as 1/sqrt(N) for N large.

I wasn't talking about that. I was talking about the proportion of
sequences whose ratio of 0 bits to 1 bits lies within ε of 0.5, rather
than the proportion of sequences that have exactly equal numbers of 0 and
1 bits. That proportion grows as sqrt N.



No, it falls as 1/sqrt(N). Remember, the confidence interval depends 
on the standard deviation, and that falls as 1/sqrt(N). Consequently, 
deviations from equal numbers of zeros and ones, for p to be within the 
CI of 0.5, must decline as N becomes large.



> Now sequences with small departures from equal numbers will still give
> probabilities within the confidence interval of p = 0.5. But this
> confidence interval also shrinks as 1/sqrt(N) as N increases, so these
> additional sequences do not contribute a growing number of cases giving
> p ~ 0.5 as N increases.

The confidence interval ε is fixed.


No, it is not. The width of, say the 95% CI, decreases with N since 
the standard deviation falls as 1/sqrt(N).


Right.  But that's just a different way of saying the density of results 
concentrates around the expected value.  The CI is constructed 
to contain a certain fraction, but its width contracts as 1/sqrt(N).   
Or if you take a fixed deviation interval around the expected value, 
e.g. 0.333 +/- 0.01, then the proportion within that interval goes to 1 as 
N->oo.
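
Both halves of that can be checked numerically (a sketch using p = 1/3, as in
the 0.333 +/- 0.01 example; the log-space pmf is just to avoid float underflow
at large N):

  # Sketch: the 95% CI half-width contracts as 1/sqrt(N), while the binomial
  # mass inside the fixed interval p +/- 0.01 goes to 1 as N grows.
  from math import exp, lgamma, log, sqrt

  def binom_pmf(N, k, p):
      logc = lgamma(N + 1) - lgamma(k + 1) - lgamma(N - k + 1)
      return exp(logc + k * log(p) + (N - k) * log(1 - p))

  p = 1 / 3
  for N in (100, 1000, 10000, 100000):
      half_width = 1.96 * sqrt(p * (1 - p) / N)    # normal-approx 95% CI
      lo, hi = int((p - 0.01) * N), int((p + 0.01) * N)
      mass = sum(binom_pmf(N, k, p) for k in range(lo, hi + 1))
      print(N, half_width, mass)   # width falls, mass rises toward 1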


Brent



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread smitra

On 08-03-2020 11:56, Bruce Kellett wrote:

On Sun, Mar 8, 2020 at 7:46 PM Russell Standish
 wrote:


On Sun, Mar 08, 2020 at 06:50:52PM +1100, Bruce Kellett wrote:

On Sun, Mar 8, 2020 at 5:32 PM Russell Standish wrote:

On Fri, Mar 06, 2020 at 10:44:37AM +1100, Bruce Kellett wrote:

That is, in fact, false. It does not generate the same strings as
flipping a coin in single world. Sure, each of the strings in Everett
could have been obtained from coin flips -- but then the probability
of a sequence of 10,000 heads is very low, whereas in many-worlds you
are guaranteed that one observer will obtain this sequence. There is a
profound difference between the two cases.

You have made this statement multiple times, and it appears to be at
the heart of our disagreement. I don't see what the profound
difference is.

If I select a subset from the set of all strings of length N, for
example all strings with exactly N/3 1s, then I get a quite specific
value for the proportion of the whole that match it:

/ N \
|    | 2^{-N}  = p.
\N/3/

Now this number p will also equal the probability of seeing exactly
N/3 coins land head up when N coins are tossed.

What is the profound difference?

Take a more extreme case. The probability of getting 1000 heads on
1000 coin tosses is 1/2^1000. If you measure the spin components of an
ensemble of identical spin-half particles, there will certainly be one
observer who sees 1000 spin-up results. That is the difference -- the
difference between probability of 1/2^1000 and a probability of one.

In fact in a recent podcast by Sean Carroll (that has been discussed
on the list previously), he makes the statement that this rare event
(with probability p = 1/2^1000) certainly occurs. In other words, he
is claiming that the probability is both 1/2^1000 and one. That this
is a flat contradiction appears to escape him. The difference in
probabilities between coin tosses and Everettian measurements couldn't
be more stark.


That is because you're talking about different things. The rare event
that 1 in 2^1000 observers see certainly occurs. In this case
certainty does not refer to probability 1, as no probabilities are
applicable in that 3p picture. Probabilities in the MWI sense refer
to what an observer will see next; it is a 1p concept.

And in that 1p context, I do not see any difference in how
probabilities are interpreted, nor in their numerical values.

Perhaps Carroll is being sloppy. If so, I would think that could be
forgiven.


Yes, I think Carroll's comment was just sloppy. The trouble is
that this sort of sloppiness permeates all of these discussions. As
you say, probability really has meaning only in the 1p picture. So the
guy who sees 1000 spin-ups in the 1000 trials will conclude that the
probability of spin-up is very close to one. That is why it makes
sense to say that the probability is one. The fact that this one guy
sees this is certain in Many-worlds (this may be another meaning of
probability, but an event that is certain to happen is usually
referred to as having probability one).

The trouble comes when you use the same term 'probability' to refer to
the fact that this guy is just one of the 2^N guys who are generated
in this experiment. The fact that he may be in the minority does not
alter the fact that he exists, and infers a probability close to one
for spin-up. The 3p picture here is to consider that this guy is just
chosen at random from a uniform distribution over all 2^N copies at
the end of the experiment. And I find it difficult to give any
sensible meaning to that idea. No one is selecting anything at random
from the 2^N copies because that is not how the copies come about
-- it is all completely deterministic.

The guy who gets the 1000 spin-ups infers a probability close to one,
so he is entitled to think that the probability of getting an
approximately even number of ups and downs is very small: of order
C(1000,500)*eps^500*(1-eps)^500 for eps very close to zero. Similarly,
guys who see approximately equal numbers of up and down infer a
probability close to 0.5. So they are entitled to conclude that the
probability of seeing all spin-up is vanishingly small, namely,
1/2^1000.

The main point I have been trying to make is that this is true
whatever the ratio of ups to downs is in the data that any individual
observes. Everyone concludes that their observed relative frequency is
a good indicator of the actual probability, and that other ratios of
up:down are extremely unlikely. This is a simple consequence of the
fact that probability is, as you say, a 1p notion, and can only be
estimated from the actual data that an individual obtains. Since
people get different data, they get different estimates of the
probability, covering the entire range [0,1]; no 3p notion of
probability is available -- probabilities do not make sense in the
Everettian case when all outcomes occur. This is the basic argument
that Kent makes in arXiv:0905.0624.

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread Bruce Kellett
On Sun, Mar 8, 2020 at 7:59 PM Russell Standish 
wrote:

> On Sun, Mar 08, 2020 at 07:08:25PM +1100, Bruce Kellett wrote:
> > On Sun, Mar 8, 2020 at 6:14 PM Russell Standish 
> wrote:
> >
> > On Thu, Mar 05, 2020 at 09:45:38PM +1100, Bruce Kellett wrote:
> > > On Thu, Mar 5, 2020 at 5:26 PM Russell Standish <
> li...@hpcoders.com.au>
> > wrote:
> > >
> > > But a very large proportion of them (→1 as N→∞) will report
> being
> > > within ε (called a confidence interval) of 50% for any given
> ε>0
> > > chosen at the outset of the experiment. This is simply the law
> of
> > > large numbers theorem. You can't focus on the vanishingly small
> > > population that lie outside the confidence interval.
> > >
> > >
> > > This is wrong.
> >
> > Them's fighting words. Prove it!
> >
> >
> > I have, in other posts and below.
>
> You didn't do it below, that's why I said prove it. What you wrote
> below had little bearing on what I wrote.
>

I outlined the proof in my reply to your other post tonight. Besides, the
proof is in Kent's paper arXiv:0905.0624.

> > In the binary situation where both outcomes occur for every
> > > trial, there are 2^N binary sequences for N repetitions of the
> > experiment. This
> > > set of binary sequences exhausts the possibilities, so the same
> sequence
> > is
> > > obtained for any two-component initial state -- regardless of the
> > amplitudes.
> >
> > > You appear to assume that the natural probability in this
> situation is p
> > = 0.5
> > > and, what is more, your appeal to the law of large numbers applies
> only
> > for
> > > single-world probabilities, in which there is only one outcome on
> each
> > trial.
> >
> > I didn't mention probability once in the above paragraph, not even
> > implicitly. I used the term "proportion". That the proportion will be
> > equal to the probability in a single universe case is a frequentist
> > assumption, and should be uncontroversial, but goes beyond what I
> > stated above.
> >
> >
> > Sure. But the proportion of the 2^N sequences that exhibit any
> particular p
> > value (proportion of 1's) decreases with N.
> >
>
> So what?

You claim that the proportion  reporting p ~ 0.5 goes to one as N --> oo.
That is manifestly false. The absolute number increases with the number of
trials, but the proportion of the 2^N copies at the end of the N trials
decreases as 1/sqrt(N).
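
Both halves of this are easily verified by exact counting. A small
sketch (pure counting, no randomness, with the Stirling comparison from
the earlier post):

    from math import comb, pi, sqrt

    for m in (10, 100, 1_000, 10_000):
        n = 2 * m
        count = comb(n, m)              # strings with exactly equal 0s and 1s
        frac = count / 2 ** n           # their proportion among all 2^n strings
        stirling = sqrt(2 / (pi * n))   # C(n, n/2) / 2^n ~ sqrt(2 / (pi n))
        print(f"n={n:>6}  count has {len(str(count))} digits  "
              f"proportion = {frac:.3e}  Stirling = {stirling:.3e}")

The count itself explodes, but the proportion tracks 1/sqrt(N) closely.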


> > In order to infer a probability of p = 0.5, your branch data must
> have
> > > approximately equal numbers of zeros and ones. The number of
> branches
> > with
> > > equal numbers of zeros and ones is given by the binomial
> coefficient. For
> > large
> > > even N = 2M trials, this coefficient is N!/M!*M!. Using the
> Stirling
> > > approximation to the factorial for large N, this goes as
> 2^N/sqrt(N)
> > (within
> > > factors of order one). Since there are 2^N sequences, the
> proportion with
> > n_0 =
> > > n_1 vanishes as 1/sqrt(N) for N large.
>


This is the nub of the proof you wanted.

> I wasn't talking about that. I was talking about the proportion of
> > sequences whose ratio of 0 bits to 1 bits lie within ε of 0.5, rather
> > than the proportion of sequences that have exactly equal 0 or 1
> > bits. That proportion grows as sqrt N.
> >
> >
> >
> > No, it falls as 1/sqrt(N). Remember, the confidence interval depends on
> the
> > standard deviation, and that falls as 1/sqrt(N). Consequently deviations
> from
> > equal numbers of zeros and ones for p to be within the CI of 0.5 must
> decline
> > as N becomes large.
> >
>
> The value ε defined above is fixed at the outset. It is independent of
> N. Maybe I incorrectly called it a confidence interval, although it is
> surely related.
>

Calling it a confidence interval certainly threw me. If it is meant to be a
fixed interval independent of N, then OK. But that is not a useful concept.
For any fixed interval around p = 0.5, relative frequencies at the limits
of that interval will eventually estimate p values for which the CI does
not include 0.5. So they can no longer infer that the probability is 0.5.
Since the CI decreases with N, the proportion of the total who infer any
particular value for the probability decreases with N.
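
The two notions can be put side by side numerically. A sketch (the Wald
interval and the uniform counting measure over the 2^N strings are
assumptions of the illustration, not anyone's stated model):

    from math import comb, sqrt

    for n in (100, 1_000, 10_000):
        half = 1.96 * sqrt(0.25 / n)   # 95% half-width at p-hat = 0.5
        covered = sum(comb(n, k) for k in range(n + 1)
                      if abs(k / n - 0.5) <= half) / 2 ** n
        exact = comb(n, n // 2) / 2 ** n   # share inferring exactly p = 0.5
        print(f"n={n:>6}  share whose CI covers 0.5 = {covered:.4f}  "
              f"share with exactly equal counts = {exact:.4e}")

The share who infer any one exact value does shrink with N, but the
share whose confidence interval still covers 0.5 stays pinned near the
95% level at every N.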



> The number of bitstrings having a ratio of 0 to 1 within ε of 0.5
> grows as √N.
>
> IIRC, a confidence interval is the interval of a fixed proportion, ie we
> can be 95% confident that strings will have a ratio between 49.5% and
> 50.5%. That interval (49.5% to 50.5%) will decrease as 1/√N for fixed
> confidence level (95%).
>
> >
> >
> > > Now sequences with small departures from equal numbers will still
> give
> > > probabilities within the confidence interval of p = 0.5. But this
> > confidence
> > > interval also shrinks as 1/sqrt(N) as N increases, so these
> additional
> 

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread Bruce Kellett
On Sun, Mar 8, 2020 at 7:46 PM Russell Standish 
wrote:

> On Sun, Mar 08, 2020 at 06:50:52PM +1100, Bruce Kellett wrote:
> > On Sun, Mar 8, 2020 at 5:32 PM Russell Standish 
> wrote:
> >
> > On Fri, Mar 06, 2020 at 10:44:37AM +1100, Bruce Kellett wrote:
> >
> > > That is, in fact, false. It does not generate the same strings as
> > flipping a
> > > coin in single world. Sure, each of the strings in Everett could
> have
> > been
> > > obtained from coin flips -- but then the probability of a sequence
> of
> > 10,000
> > > heads is very low, whereas in many-worlds you are guaranteed that
> one
> > observer
> > > will obtain this sequence. There is a profound difference between
> the two
> > > cases.
> >
> > You have made this statement multiple times, and it appears to be at
> > the heart of our disagreement. I don't see what the profound
> > difference is.
> >
> > If I select a subset from the set of all strings of length N, for
> example
> > all strings with exactly N/3 1s, then I get a quite specific value
> for the
> > proportion of the whole that match it:
> >
> > / N \
> > || 2^{-N}  = p.
> > \N/3/
> >
> > Now this number p will also equal the probability of seeing exactly
> > N/3 coins land head up when N coins are tossed.
> >
> > What is the profound difference?
> >
> >
> >
> > Take a more extreme case. The probability of getting 1000 heads on 1000
> coin
> > tosses is 1/2^1000.
> > If you measure the spin components of an ensemble of identical spin-half
> > particles, there will certainly be one observer who sees 1000 spin-up
> results.
> > That is the difference -- the difference between probability of 1/2^1000
> and a
> > probability of one.
> >
> > In fact in a recent podcast by Sean Carroll (that has been discussed on
> the
> > list previously), he makes the statement that this rare event (with
> probability
> > p = 1/2^1000) certainly occurs. In other words, he is claiming  that the
> > probability is both 1/2^1000 and one. That this is a flat contradiction
> appears
> > to escape him. The difference in probabilities between coin tosses and
> > Everettian measurements couldn't be more stark.
>
> That is because you're talking about different things. The rare event
> that 1 in 2^1000 observers see certainly occurs. In this case
> certainty does not refer to probability 1, as no probabilities are
> applicable in that 3p picture. Probabilities in the MWI sense refers
> to what an observer will see next, it is a 1p concept.
>
> And that 1p context, I do not see any difference in how probabilities
> are interpreted, nor in their numerical values.
>
> Perhaps Carroll is being sloppy. If so, I would think that could be
> forgiven.
>


Yes, I think Carroll's comment was just sloppy. The trouble is that
this sort of sloppiness permeates all of these discussions. As you say,
probability really has meaning only in the 1p picture. So the guy who sees
1000 spin-ups in the 1000 trials will conclude that the probability of
spin-up is very close to one. That is why it makes sense to say that the
probability is one. The fact that this one guy sees this is certain in
Many-worlds (this may be another meaning of probability, but an event that
is certain to happen is usually referred to as having probability one).

The trouble comes when you use the same term 'probability' to refer to the
fact that this guy is just one of the 2^N guys who are generated in this
experiment. The fact that he may be in the minority does not alter the fact
that he exists, and infers a probability close to one for spin-up. The 3p
picture here is to consider that this guy is just chosen at random from a
uniform distribution over all 2^N copies at the end of the experiment. And
I find it difficult to give any sensible meaning to that idea. No one is
selecting anything at random from the 2^N copies because that is not how
the copies come about -- it is all completely deterministic.

The guy who gets the 1000 spin-ups infers a probability close to one, so he
is entitled to think that the probability of getting an approximately even
number of ups and downs is very small: of order
C(1000,500)*eps^500*(1-eps)^500 for eps very close to zero. Similarly, guys
who see approximately equal numbers of up and down infer a probability
close to 0.5. So they are entitled to
conclude that the probability of seeing all spin-up is vanishingly small,
namely, 1/2^1000.

The main point I have been trying to make is that this is true whatever the
ratio of ups to downs is in the data that any individual observes. Everyone
concludes that their observed relative frequency is a good indicator of the
actual probability, and that other ratios of up:down are extremely
unlikely. This is a simple consequence of the fact that probability is, as
you say, a 1p notion, and can only be estimated from the actual data that
an individual obtains. Since people get different data, they get different
estimates of the probability, covering the entire range [0,1].

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread Russell Standish
On Sun, Mar 08, 2020 at 07:08:25PM +1100, Bruce Kellett wrote:
> On Sun, Mar 8, 2020 at 6:14 PM Russell Standish  wrote:
> 
> On Thu, Mar 05, 2020 at 09:45:38PM +1100, Bruce Kellett wrote:
> > On Thu, Mar 5, 2020 at 5:26 PM Russell Standish 
> wrote:
> >
> >     But a very large proportion of them (→1 as N→∞) will report being
> >     within ε (called a confidence interval) of 50% for any given ε>0
> >     chosen at the outset of the experiment. This is simply the law of
> >     large numbers theorem. You can't focus on the vanishingly small
> >     population that lie outside the confidence interval.
> >
> >
> > This is wrong.
> 
> Them's fighting words. Prove it!
> 
> 
> I have, in other posts and below.

You didn't do it below, that's why I said prove it. What you wrote
below had little bearing on what I wrote.

> 
> 
> > In the binary situation where both outcomes occur for every
> > trial, there are 2^N binary sequences for N repetitions of the
> experiment. This
> > set of binary sequences exhausts the possibilities, so the same sequence
> is
> > obtained for any two-component initial state -- regardless of the
> amplitudes.
> 
> > You appear to assume that the natural probability in this situation is p
> = 0.5
> > and, what is more, your appeal to the law of large numbers applies only
> for
> > single-world probabilities, in which there is only one outcome on each
> trial.
> 
> I didn't mention probability once in the above paragraph, not even
> implicitly. I used the term "proportion". That the proportion will be
> equal to the probability in a single universe case is a frequentist
> assumption, and should be uncontroversial, but goes beyond what I
> stated above.
> 
> 
> Sure. But the proportion of the 2^N sequences that exhibit any particular p
> value (proportion of 1's) decreases with N.
> 

So what?

> 
> > In order to infer a probability of p = 0.5, your branch data must have
> > approximately equal numbers of zeros and ones. The number of branches
> with
> > equal numbers of zeros and ones is given by the binomial coefficient. 
> For
> large
> > even N = 2M trials, this coefficient is N!/M!*M!. Using the Stirling
> > approximation to the factorial for large N, this goes as 2^N/sqrt(N)
> (within
> > factors of order one). Since there are 2^N sequences, the proportion 
> with
> n_0 =
> > n_1 vanishes as 1/sqrt(N) for N large. 
> 
> I wasn't talking about that. I was talking about the proportion of
> sequences whose ratio of 0 bits to 1 bits lie within ε of 0.5, rather
> than the proportion of sequences that have exactly equal 0 or 1
> bits. That proportion grows as sqrt N.
> 
> 
> 
> No, it falls as 1/sqrt(N). Remember, the confidence interval depends on the
> standard deviation, and that falls as 1/sqrt(N). Consequently deviations from
> equal numbers of zeros and ones for p to be within the CI of 0.5 must decline
> as N becomes large.
>

The value ε defined above is fixed at the outset. It is independent of
N. Maybe I incorrectly called it a confidence interval, although it is
surely related. 

The number of bitstrings having a ratio of 0 to 1 within ε of 0.5
grows as √N.

IIRC, a confidence interval is the interval of a fixed proportion, i.e. we can be 
95% confident that strings will have a ratio between 49.5% and 50.5%. That 
interval (49.5% to 50.5%) will decrease as 1/√N for fixed confidence level 
(95%). 

> 
> 
> > Now sequences with small departures from equal numbers will still give
> > probabilities within the confidence interval of p = 0.5. But this
> confidence
> > interval also shrinks as 1/sqrt(N) as N increases, so these additional
> > sequences do not contribute a growing number of cases giving p ~ 0.5 as 
> N
> > increases.
> 
> The confidence interval ε is fixed.
> 
> 
> No, it is not. The width of, say the 95% CI, decreases with N since the
> standard deviation falls as 1/sqrt(N).

Which only demonstrates my point. An increasing number of strings will
lie in the fixed interval ε. I apologise if I used the term "confidence
interval" in a nonstandard way.


-- 


Dr Russell StandishPhone 0425 253119 (mobile)
Principal, High Performance Coders hpco...@hpcoders.com.au
  http://www.hpcoders.com.au


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/20200308085904.GE2903%40zen.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread Russell Standish
On Sun, Mar 08, 2020 at 06:50:52PM +1100, Bruce Kellett wrote:
> On Sun, Mar 8, 2020 at 5:32 PM Russell Standish  wrote:
> 
> On Fri, Mar 06, 2020 at 10:44:37AM +1100, Bruce Kellett wrote:
> 
> > That is, in fact, false. It does not generate the same strings as
> flipping a
> > coin in single world. Sure, each of the strings in Everett could have
> been
> > obtained from coin flips -- but then the probability of a sequence of
> 10,000
> > heads is very low, whereas in many-worlds you are guaranteed that one
> observer
> > will obtain this sequence. There is a profound difference between the 
> two
> > cases.
> 
> You have made this statement multiple times, and it appears to be at
> the heart of our disagreement. I don't see what the profound
> difference is.
> 
> If I select a subset from the set of all strings of length N, for example
> all strings with exactly N/3 1s, then I get a quite specific value for the
> proportion of the whole that match it:
> 
> / N \
> |    | 2^{-N}  = p.
> \N/3/
> 
> Now this number p will also equal the probability of seeing exactly
> N/3 coins land head up when N coins are tossed.
> 
> What is the profound difference?
> 
> 
> 
> Take a more extreme case. The probability of getting 1000 heads on 1000 coin
> tosses is 1/2^1000.
> If you measure the spin components of an ensemble of identical spin-half
> particles, there will certainly be one observer who sees 1000 spin-up results.
> That is the difference -- the difference between probability of 1/2^1000 and a
> probability of one.
> 
> In fact in a recent podcast by Sean Carroll (that has been discussed on the
> list previously), he makes the statement that this rare event (with 
> probability
> p = 1/2^1000) certainly occurs. In other words, he is claiming  that the
> probability is both 1/2^1000 and one. That this is a flat contradiction 
> appears
> to escape him. The difference in probabilities between coin tosses and
> Everettian measurements couldn't be more stark.

That is because you're talking about different things. The rare event
that 1 in 2^1000 observers see certainly occurs. In this case
certainty does not refer to probability 1, as no probabilities are
applicable in that 3p picture. Probabilities in the MWI sense refer
to what an observer will see next; it is a 1p concept.

And in that 1p context, I do not see any difference in how probabilities
are interpreted, nor in their numerical values.

Perhaps Carroll is being sloppy. If so, I would think that could be forgiven.


-- 


Dr Russell StandishPhone 0425 253119 (mobile)
Principal, High Performance Coders hpco...@hpcoders.com.au
  http://www.hpcoders.com.au


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/20200308084635.GD2903%40zen.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-08 Thread Bruce Kellett
On Sun, Mar 8, 2020 at 6:14 PM Russell Standish 
wrote:

> On Thu, Mar 05, 2020 at 09:45:38PM +1100, Bruce Kellett wrote:
> > On Thu, Mar 5, 2020 at 5:26 PM Russell Standish 
> wrote:
> >
> > But a very large proportion of them (→1 as N→∞) will report being
> > within ε (called a confidence interval) of 50% for any given ε>0
> > chosen at the outset of the experiment. This is simply the law of
> > large numbers theorem. You can't focus on the vanishingly small
> > population that lie outside the confidence interval.
> >
> >
> > This is wrong.
>
> Them's fighting words. Prove it!
>

I have, in other posts and below.

> In the binary situation where both outcomes occur for every
> > trial, there are 2^N binary sequences for N repetitions of the
> experiment. This
> > set of binary sequences exhausts the possibilities, so the same sequence
> is
> > obtained for any two-component initial state -- regardless of the
> amplitudes.
>
> > You appear to assume that the natural probability in this situation is p
> = 0.5
> > and, what is more, your appeal to the law of large numbers applies only
> for
> > single-world probabilities, in which there is only one outcome on each
> trial.
>
> I didn't mention probability once in the above paragraph, not even
> implicitly. I used the term "proportion". That the proportion will be
> equal to the probability in a single universe case is a frequentist
> assumption, and should be uncontroversial, but goes beyond what I
> stated above.
>

Sure. But the proportion of the 2^N sequences that exhibit any particular p
value (proportion of 1's) decreases with N.

> In order to infer a probability of p = 0.5, your branch data must have
> > approximately equal numbers of zeros and ones. The number of branches
> with
> > equal numbers of zeros and ones is given by the binomial coefficient.
> For large
> > even N = 2M trials, this coefficient is N!/M!*M!. Using the Stirling
> > approximation to the factorial for large N, this goes as 2^N/sqrt(N)
> (within
> > factors of order one). Since there are 2^N sequences, the proportion
> with n_0 =
> > n_1 vanishes as 1/sqrt(N) for N large.
>
> I wasn't talking about that. I was talking about the proportion of
> sequences whose ratio of 0 bits to 1 bits lie within ε of 0.5, rather
> than the proportion of sequences that have exactly equal 0 or 1
> bits. That proportion grows as sqrt N.
>


No, it falls as 1/sqrt(N). Remember, the confidence interval depends on the
standard deviation, and that falls as 1/sqrt(N). Consequently, the deviations
from equal numbers of zeros and ones that keep p within the CI of 0.5 must
decline as N becomes large.


> Now sequences with small departures from equal numbers will still give
> > probabilities within the confidence interval of p = 0.5. But this
> confidence
> > interval also shrinks as 1/sqrt(N) as N increases, so these additional
> > sequences do not contribute a growing number of cases giving p ~ 0.5 as N
> > increases.
>
> The confidence interval ε is fixed.
>

No, it is not. The width of, say the 95% CI, decreases with N since the
standard deviation falls as 1/sqrt(N).

Bruce

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAFxXSLTTikTege169WoO-yN-MpxWsT1JX5NY4VN3-0FH3b0ybg%40mail.gmail.com.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-07 Thread Bruce Kellett
On Sun, Mar 8, 2020 at 5:32 PM Russell Standish 
wrote:

> On Fri, Mar 06, 2020 at 10:44:37AM +1100, Bruce Kellett wrote:
>
> > That is, in fact, false. It does not generate the same strings as
> flipping a
> > coin in single world. Sure, each of the strings in Everett could have
> been
> > obtained from coin flips -- but then the probability of a sequence of
> 10,000
> > heads is very low, whereas in many-worlds you are guaranteed that one
> observer
> > will obtain this sequence. There is a profound difference between the two
> > cases.
>
> You have made this statement multiple times, and it appears to be at
> the heart of our disagreement. I don't see what the profound
> difference is.
>
> If I select a subset from the set of all strings of length N, for example
> all strings with exactly N/3 1s, then I get a quite specific value for the
> proportion of the whole that match it:
>
> / N \
> |    | 2^{-N}  = p.
> \N/3/
>
> Now this number p will also equal the probability of seeing exactly
> N/3 coins land head up when N coins are tossed.
>
> What is the profound difference?
>


Take a more extreme case. The probability of getting 1000 heads on 1000
coin tosses is 1/2^1000.
If you measure the spin components of an ensemble of identical spin-half
particles, there will certainly be one observer who sees 1000 spin-up
results. That is the difference -- the difference between probability of
1/2^1000 and a probability of one.

In fact in a recent podcast by Sean Carroll (that has been discussed on the
list previously), he makes the statement that this rare event (with
probability p = 1/2^1000) certainly occurs. In other words, he is claiming
 that the probability is both 1/2^1000 and one. That this is a flat
contradiction appears to escape him. The difference in probabilities
between coin tosses and Everettian measurements couldn't be more stark.

Bruce

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAFxXSLQC%3DCTYjUbZ4BHE78YuUrMTWkOHEV_%3DW6LB4Q4_pJ-SyA%40mail.gmail.com.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-07 Thread Russell Standish
On Thu, Mar 05, 2020 at 09:45:38PM +1100, Bruce Kellett wrote:
> On Thu, Mar 5, 2020 at 5:26 PM Russell Standish  wrote:
> 
> 
> But a very large proportion of them (→1 as N→∞) will report being
> within ε (called a confidence interval) of 50% for any given ε>0
> chosen at the outset of the experiment. This is simply the law of
> large numbers theorem. You can't focus on the vanishingly small
> population that lie outside the confidence interval.
> 
> 
> This is wrong.

Them's fighting words. Prove it!

> In the binary situation where both outcomes occur for every
> trial, there are 2^N binary sequences for N repetitions of the experiment. 
> This
> set of binary sequences exhausts the possibilities, so the same sequence is
> obtained for any two-component initial state -- regardless of the amplitudes.

> You appear to assume that the natural probability in this situation is p = 0.5
> and, what is more, your appeal to the law of large numbers applies only for
> single-world probabilities, in which there is only one outcome on each trial.

I didn't mention probability once in the above paragraph, not even
implicitly. I used the term "proportion". That the proportion will be
equal to the probability in a single universe case is a frequentist
assumption, and should be uncontroversial, but goes beyond what I
stated above.

> 
> In order to infer a probability of p = 0.5, your branch data must have
> approximately equal numbers of zeros and ones. The number of branches with
> equal numbers of zeros and ones is given by the binomial coefficient. For 
> large
> even N = 2M trials, this coefficient is N!/M!*M!. Using the Stirling
> approximation to the factorial for large N, this goes as 2^N/sqrt(N) (within
> factors of order one). Since there are 2^N sequences, the proportion with n_0 
> =
> n_1 vanishes as 1/sqrt(N) for N large. 

I wasn't talking about that. I was talking about the proportion of
sequences whose ratio of 0 bits to 1 bits lie within ε of 0.5, rather
than the proportion of sequences that have exactly equal 0 or 1
bits. That proportion grows as sqrt N.


> 
> Now sequences with small departures from equal numbers will still give
> probabilities within the confidence interval of p = 0.5. But this confidence
> interval also shrinks as 1/sqrt(N) as N increases, so these additional
> sequences do not contribute a growing number of cases giving p ~ 0.5 as N
> increases.

The confidence interval ε is fixed.

So, again within factors of order unity, the proportion of sequences
> consistent with p = 0.5 decreases without limit as N increases. So it is not
> the case that a very large proportion of the binary strings will report p =
> 0.5. The proportion lying outside the confidence interval of p = 0.5 is not
> vanishingly small -- it grows with N.
> 
> 
> 
> > The crux of the matter is that all branches are equivalent when both
> outcomes
> > occur on every trial, so all observers will infer that their observed
> relative
> > frequencies reflect the actual probabilities. Since there are observers
> for all
> > possibilities for p in the range [0,1], and not all can be correct, no
> sensible
> > probability value can be assigned to such duplication experiments.
> 
> I don't see why not. Faced with a coin flip toss, I would assume a
> 50/50 chance of seeing heads or tails. Faced with a history of 100
> heads, I might start to investigate the coin for bias, and perhaps by
> Bayesian arguments give the biased coin theory greater weight than the
> theory that I've just experience a 1 in 2^100 event, but in any case
> it is just statistics, and it is the same whether all oputcomes have
> been realised or not.
> 
> 
> The trouble with this analogy is that coin tosses are single-world events --
> there is only one outcome for each toss. Consequently, any intuitions about
> probabilities based on such comparisons are not relevant to the Everettian 
> case
> in which every outcome occurs for every toss. Your intuition that it is the
> same whether all outcomes are realised or not is simply mistaken.
> 
> 
> > The problem is even worse in quantum mechanics, where you measure a 
> state
> such
> > as
> >
> >      |psi> = a|0> + b|1>.
> >
> > When both outcomes occur on every trial, the result of a sequence of N
> trials
> > is all possible binary strings of length N, (all 2^N of them). You then
> notice
> > that this set of all possible strings is obtained whatever non-zero
> values of a
> > and b you assume. The assignment of some probability relation to the
> > coefficients is thus seen to be meaningless -- all probabilities occur
> equal
> > for any non-zero choices of a and b.
> >
> 
> For the outcome of any particular binary string, sure. But if we
> classify the outcome strings - say ones with a recognisable pattern,
> or when replayed through a CD pl

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-07 Thread Russell Standish
On Fri, Mar 06, 2020 at 10:44:37AM +1100, Bruce Kellett wrote:

> 
> 
> That is, in fact, false. It does not generate the same strings as flipping a
> coin in single world. Sure, each of the strings in Everett could have been
> obtained from coin flips -- but then the probability of a sequence of 10,000
> heads is very low, whereas in many-worlds you are guaranteed that one observer
> will obtain this sequence. There is a profound difference between the two
> cases.

You have made this statement multiple times, and it appears to be at
the heart of our disagreement. I don't see what the profound
difference is.

If I select a subset from the set of all strings of length N, for example all 
strings with exactly N/3 1s, then I get a quite specific value for the 
proportion of the whole that match it:

/ N \
|    | 2^{-N}  = p.
\N/3/

Now this number p will also equal the probability of seeing exactly
N/3 coins land head up when N coins are tossed.

What is the profound difference?

-- 


Dr Russell StandishPhone 0425 253119 (mobile)
Principal, High Performance Coders hpco...@hpcoders.com.au
  http://www.hpcoders.com.au


-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/20200308062905.GZ2903%40zen.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-06 Thread Bruce Kellett
On Sat, Mar 7, 2020 at 1:04 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

>
> What do you think about identifying what one finds as an observer as a
> probability of being one of the leaves of the branching MWI tree, i.e.
> interpreting self-location uncertainty by probability.  I see no problem
> with looking at those leaves as an ensemble and one's experience as an
> element (a sequence of results) as a probabilistic sample from this
> ensemble.  The fact that no one can "see" the ensemble is like any
> probability example in which the ensemble is usually just hypothetical,
> i.e. what could have happened (or what Kastner calls "possibility space").
>

This is Sean's self-locating uncertainty. The problem is that the ensemble
within which one is to self-locate has to be divided up according to the
Born rule. You can do this, as you suggested, by having multiple branches
in ratios according to the Born probabilities -- a possibility that I do
not think can be achieved because of the limitation on the number of
possible bit strings for binary outcomes. The other possibility is Sean's
idea of branch weights, or 'thicknesses'. But that does not appear to
multiply the number of members of the ensemble according to the Born
probabilities. Sean is essentially saying "Just assume the appropriate
probability distribution over the ensemble, then self-select." That does
not really solve any problem -- it is just begging the question.

So I can still see problems with these approaches, particularly when the
probability one infers from the data on the selected branch has to agree
with the probability distribution over branches -- can't see it, to tell
the truth.

Bruce

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAFxXSLRoA1_JQhDYFU12rSAQHwk6gN_9HyZHqRTbF-tRBBvQpw%40mail.gmail.com.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-06 Thread 'Brent Meeker' via Everything List



On 3/6/2020 5:07 PM, Bruce Kellett wrote:
On Fri, Mar 6, 2020 at 5:22 PM 'Brent Meeker' via Everything List 
> wrote:


On 3/5/2020 10:07 PM, Bruce Kellett wrote:

In the full set of all 2^N branches there will, of course, be
branches in which this is the case. But that is just because when
every possible bit string is included, that possibility will also
occur. The problem is that the proportion of branches for which
this is the case becomes small as N increases.


But not the proportion of branches which are within a fixed
deviation from 2:1.  That proportion will increase with N.

I can see that I'm going to have to write a program to produce an 
example for you.


I look forward to such a program -- my computer programming skills 
have abandoned me.


The trouble is that my intuition does not stretch to what happens in 
the branch multiplication situation -- I can convince myself either 
way.


Kent covers this scenario in his paper (arxiv:0905.0624). He writes:

"Consider a replicating multiverse, with a machine like the first one, 
in which branches arise as the result of technologically advanced 
beings running simulations. Whenever the red button is pressed in a 
simulated universe, that universe is deleted, and successor universes 
with outcomes 0 and 1 written on the tape are initiated. Suppose, in 
this case, that each time, the beings create three identical 
simulations with outcome 0, and just one with outcome 1. From the 
perspective of the inhabitants, there is no way to detect that 
outcomes 0 and 1 are being treated differently, and so they represent 
them in their theories with one branch each. In fact, though, given 
this representation, there is an at least arguably natural sense in 
which they ought to assign to the outcome 0 branch three times the 
importance of the outcome 1 branch: in other words, they ought to 
assign branch weights (3/4,1/4).


"They don't know this. But suppose that they believe that there are 
unknown weights attached to the branches. What happens now? After N 
runs of the experiment, there will actually be 4^N simulations, 
although in the inhabitants' theoretical representation, these are 
represented by 2^N branches. Of the 4^N simulations, almost all (for 
large N) will contain close to 3N/4 zeros and N/4 ones."


This is where my intuition breaks down -- this is by no means obvious 
to me, though I know that this is what you predicted for the 3:1 case 
we discussed before. My problem with this conclusion is that there are 
only 2^N distinct bit strings of length N. So the 4^N simulations must 
contain a lot of duplications. In fact, 4^N is immeasurably larger 
than 2^N: 4^N/2^N = 2^N. So there must be an infinite number of 
replicates as N --> oo. Why should those bit strings with the ratio 
3:1 of zeros to ones be favoured in the duplications? Would not all 
strings be duplicated uniformly, so that the 4^N simulations will 
contain exactly the same proportion of 3:1-ratio bit strings as the 
original set of 2^N possible bit strings does? My intuition is clearly 
different from Kent's and yours.


Now Kent goes on:
"Now, I think I can see how to run some, though not all, of an 
argument that supports this conclusion. The branch importance measure 
defined by inhabitants who find relative frequency 3/4 of zeros 
corresponds to the counting measure on simulations. If we could argue, 
for instance by appealing to symmetry, that each of the 4^N 
simulations is equally important, then this branch importance measure 
would indeed be justified. If we could also argue, perhaps using some 
form of anthropic reasoning, that there is an equal chance of finding 
oneself in any of the 4^N simulations, then the chance of finding 
oneself in a simulation in which one concludes that the branch weights 
are (very close to) (3/4,1/4) would be very close to one. ... There 
would indeed then seem to be a sense in which the branch weights 
define which subsets of the branches are important for theory 
confirmation.


"It seems hard to make this argument rigorous. In particular, the 
notion of 'chance of finding oneself' in a particular simulation 
doesn't seem easy to define properly. Still, we have an arguable 
natural measure on simulations, the counting measure, according to 
which most of the inhabitants will arrive at (close to) the right 
theory of branch weights. That might perhaps be progress."



It is clear that Kent is far from convinced by this. And I have 
indicated that I am far from convinced even of things that Kent seems 
to find intuitively obvious. This needs to be worked through more 
carefully -- I remain unconvinced that branch duplication provides a 
way of getting probabilities into the data.


What do you think about identifying what one finds as an observer as a 
probability of being one of the leaves of the branching MWI tree, i.e. 
interpreting self-location uncertainty by probability.

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-06 Thread Bruce Kellett
On Fri, Mar 6, 2020 at 5:22 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/5/2020 10:07 PM, Bruce Kellett wrote:
>
> In the full set of all 2^N branches there will, of course, be branches in
> which this is the case. But that is just because when every possible bit
> string is included, that possibility will also occur. The problem is that
> the proportion of branches for which this is the case becomes small as N
> increases.
>
>
> But not the proportion of branches which are within a fixed deviation from
> 2:1.  That proportion will increase with N.
>
> I can see that I'm going to have to write a program to produce an example
> for you.
>

I look forward to such a program -- my computer programming skills have
abandoned me.
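
In the meantime, here is a minimal sketch of what such a program might
show, assuming the a^2 = 2/3 case is modelled, as discussed, by two
branches recording 0 and one branch recording 1 at every trial (exact
counting rather than random sampling):

    from math import comb

    # With two 0-branches and one 1-branch per trial there are 3**n
    # observers after n trials; a string with k zeros is shared by 2**k
    # of them.
    def observer_share(n, lo, hi):
        return sum(comb(n, k) * 2 ** k for k in range(lo, hi + 1)) / 3 ** n

    for n in (10, 100, 1_000):
        lo, hi = int(n * (2 / 3 - 0.05)), int(n * (2 / 3 + 0.05))
        print(f"n={n:>5}  observers seeing 2/3 +/- 5% zeros: "
              f"{observer_share(n, lo, hi):.4f}")

Each observer still sees only a single bit string, but under the
counting measure over the 3^n observers the 2:1 ratio dominates ever
more strongly as n grows.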

The trouble is that my intuition does not stretch to what happens in the
branch multiplication situation -- I can convince myself either way.

Kent covers this scenario in his paper (arxiv:0905.0624). He writes:

"Consider a replicating multiverse, with a machine like the first one, in
which branches arise as the result of technologically advanced beings
running simulations. Whenever the red button is pressed in a simulated
universe, that universe is deleted, and successor universes with outcomes 0
and 1 written on the tape are initiated. Suppose, in this case, that each
time, the beings create three identical simulations with outcome 0, and
just one with outcome 1. From the perspective of the inhabitants, there is
no way to detect that outcomes 0 and 1 are being treated differently, and
so they represent them in their theories with one branch each. In fact,
though, given this representation, there is an at least arguably natural
sense in which they ought to assign to the outcome 0 branch three times the
importance of the outcome 1 branch: in other words, they ought to assign
branch weights (3/4,1/4).

"They don't know this. But suppose that they believe that there are unknown
weights attached to the branches. What happens now? After N runs of the
experiment, there will actually be 4^N simulations, although in the
inhabitants' theoretical representation, these are represented by 2^N
branches. Of the 4^N simulations, almost all (for large N) will contain
close to 3N/4 zeros and N/4 ones."

This is where my intuition breaks down -- this is by no means obvious to
me, though I know that this is what you predicted for the 3:1 case we
discussed before. My problem with this conclusion is that there are only 2^N
distinct bit strings of length N. So the 4^N simulations must contain a lot
of duplications. In fact, 4^N is immeasurably larger than 2^N: 4^N/2^N =
2^N. So there must be an infinite number of replicates as N --> oo. Why
should those bit strings with the ratio 3:1 of zeros to ones be favoured in
the duplications? Would not all strings be duplicated uniformly, so that
the 4^N simulations will contain exactly the same proportion of 3:1-ratio
bit strings as the original set of 2^N possible bit strings does? My
intuition is clearly different from Kent's and yours.
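
The duplication question can at least be settled by direct counting. A
sketch of Kent's three-copies-of-0 scenario (this says nothing about
which measure is the right one, only about what the counting measure
gives):

    from math import comb

    n = 20
    # A string with k zeros is produced by 3**k of the 4**n simulations,
    # so the duplication is not uniform across strings.
    assert sum(comb(n, k) * 3 ** k for k in range(n + 1)) == 4 ** n

    for desc, weight in (("each distinct string counted once", lambda k: 1),
                         ("each of the 4**n simulations counted",
                          lambda k: 3 ** k)):
        total = sum(comb(n, k) * weight(k) for k in range(n + 1))
        mean = sum(k * comb(n, k) * weight(k) for k in range(n + 1)) / total
        print(f"{desc}: average fraction of zeros = {mean / n:.3f}")

The first measure gives 0.5, the second 0.75: a string with k zeros is
realised 3^k times, and it is exactly this non-uniform duplication that
pushes the typical simulation towards 3N/4 zeros.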

Now Kent goes on:
"Now, I think I can see how to run some, though not all, of an argument
that supports this conclusion. The branch importance measure defined by
inhabitants who find relative frequency 3/4 of zeros corresponds to the
counting measure on simulations. If we could argue, for instance by
appealing to symmetry, that each of the 4^N simulations is equally
important, then this branch importance measure would indeed be justified.
If we could also argue, perhaps using some form of anthropic reasoning,
that there is an equal chance of finding oneself in any of the 4^N
simulations, then the chance of finding oneself in a simulation in which
one concludes that the branch weights are (very close to) (3/4,1/4) would
be very close to one. ... There would indeed then seem to be a sense in
which the branch weights define which subsets of the branches are important
for theory confirmation.

"It seems hard to make this argument rigorous. In particular, the notion of
'chance of finding oneself' in a particular simulation doesn't seem easy to
define properly. Still, we have an arguable natural measure on simulations,
the counting measure, according to which most of the inhabitants will
arrive at (close to) the right theory of branch weights. That might perhaps
be progress."


It is clear that Kent is far from convinced by this. And I have indicated
that I am far from convinced even of things that Kent seems to find
intuitively obvious. This needs to be worked through more carefully -- I
remain unconvinced that branch duplication provides a way of getting
probabilities into the data.

Bruce

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Fri, Mar 6, 2020 at 5:22 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/5/2020 10:07 PM, Bruce Kellett wrote:
>
> On Fri, Mar 6, 2020 at 11:33 AM Bruce Kellett 
> wrote:
>
>> On Fri, Mar 6, 2020 at 11:08 AM 'Brent Meeker' via Everything List <
>> everything-list@googlegroups.com> wrote:
>>
>>> On 3/5/2020 3:33 PM, Bruce Kellett wrote:
>>>
>>> No, it doesn't. Just think about what each observer sees from within
>>> his branch.
>>>
>>>
>>> It's what an observer has seen when he calculates the statistics after N
>>> trials.  If a>b there will be proportionately more observers who saw more
>>> 0s than those who saw more 1s.  Suppose that a^2 = 2/3 and b^2 = 1/3.  Then at
>>> each measurement split there will be two observers who see 0 and one who
>>> sees 1.  So after N trials there will be 3^N observers and most of them
>>> will have seen approximately twice as many 0s as 1s.
>>>
>>
>>
>> From within any branch the observer is unaware of other branches, so he
>> cannot see these weights. His statistics will depend only on the results on
>> his branch. In order for multiple branches to count as probabilities, you
>> have to appeal to some Self-Selecting-Assumption (SSA) in the 3p sense: you
>> have to consider that the observer self-selects at random from the set of
>> all observers. Then, since there are more branches according to the
>> weights, the probability that the randomly selected observer will see a
>> branch that is multiplied over the ensemble will depend on the number of
>> branches with that exact sequence. But this is not how it works in
>> practice, because each observer can only ever see data within his branch,
>> even if that observer is selected at random from among all observers, he
>> will calculate statistics that are independent of any branch weights.
>>
>> Bruce
>>
>
> To put this another way., if a=sqrt(2/3) and b=sqrt(1/3), then if an
> observer is to conclude, from his data, that 0 is twice as likely as 1, he
> must see approximately twice as many zeros as ones. This cannot be achieved
> by simply multiplying the number of branches on a zero result. Multiplying
> the number of branches does not change the data within each branch,
>
>
> Sure it does.  The observer is twice as likely to add on 0 branches to his
> sequence of observations as to add a 1 branch.  So more observers will see
> an excess of 0s over 1s.
>


The observer does not get to add branches to his sequence at will. Whether
more observers see an excess of zeros or not does not affect what each
individual observer sees.

so observers will obtain exactly the same statistics as they would for
> a=b=1/sqrt(2). As I have repeatedly said, the data on each (and every)
> branch is independent of the weights or coefficients. This is a trivial
> consequence of having every result occur on every trial. Even if zero has
> weight 0.99, and one has weight 0.01, at each fork there is still one
> branch corresponding to zero, and one branch corresponding to one.
>
>
> That was Everett's original idea.  But if at each trial there are 99 forks
> with |0> and 1 fork with |1>  then there will be many observers who have
> observed only |0>'s after say 20 trials and few or none who will have
> observed only |1>'s.
>

But it is not a question of how many observers see a particular string: the
issue is what each observer sees from his own data. Since this is a
deviation from Everett's relative state idea, you have departed from the
Schrodinger equation, and have not really replaced it with a viable
dynamical equation that will multiply branches in the required way.

Multiplying the number of zero branches at each fork does not change the
> statistics within individual branches.
>
>
> Yes it does.
>

Think again -- that is just an absurd comment. Every time a zero occurs in
a sequence, another identical sequence is added. That does not change
anything within the sequence. There are, after all, only 2^N possible
binary bit strings of length N.

Whatever the observed sequence up to a given trial, the observer is more
> likely to add a |0> to his sequence on the next trial if there are more
> zero branches.
>

As above -- the observer does not get to add anything to his sequence -- it
is data that he is given. Actually, in the 2:1 ratio of zero branches to
one branches, one will end up with 3^N branches in total (since each
duplicated zero could be coded as 2, giving 3 branches to be added at each
fork). And there is a separate observer for each branch. It is what these
observers can infer from their data that is important -- not how many of
them there are.

And it is the data from within his branch that the physicist must use to
> test the theory. Even if he is selected at random from some population
> where the number of branches is proportional to the weights, he still has
> only the data from within a single branch against which to test the theory.
> Multiplying branches is as irrelevant as imposing branch weights.

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread 'Brent Meeker' via Everything List



On 3/5/2020 10:07 PM, Bruce Kellett wrote:
On Fri, Mar 6, 2020 at 11:33 AM Bruce Kellett > wrote:


On Fri, Mar 6, 2020 at 11:08 AM 'Brent Meeker' via Everything List
> wrote:

On 3/5/2020 3:33 PM, Bruce Kellett wrote:

No, it doesn't. Just think about what each observer sees from
within his branch.


It's what an observer has seen when he calculates the
statistics after N trials.  If a>b there will be
proportionately more observers who saw more 0s than those who
saw more 1s.  Suppose that a^2 = 2/3 and b^2 = 1/3.  Then at each
measurement split there will be two observers who see 0 and
one who sees 1.  So after N trials there will be 3^N observers
and most of them will have seen approximately twice as many 0s
as 1s.



From within any branch the observer is unaware of other branches,
so he cannot see these weights. His statistics will depend only on
the results on his branch. In order for multiple branches to count
as probabilities, you have to appeal to some
Self-Selecting-Assumption (SSA) in the 3p sense: you have to
consider that the observer self-selects at random from the set of
all observers. Then, since there are more branches according to
the weights, the probability that the randomly selected observer
will see a branch that is multiplied over the ensemble will depend
on the number of branches with that exact sequence. But this is
not how it works in practice, because each observer can only ever
see data within his branch, even if that observer is selected at
random from among all observers, he will calculate statistics that
are independent of any branch weights.

Bruce


To put this another way., if a=sqrt(2/3) and b=sqrt(1/3), then if an 
observer is to conclude, from his data, that 0 is twice as likely as 
1, he must see approximately twice as many zeros as ones. This cannot 
be achieved by simply multiplying the number of branches on a zero 
result. Multiplying the number of branches does not change the data 
within each branch,


Sure it does.  The observer is twice as likely to add on 0 branches to 
his sequence of observations as to add a 1 branch.  So more observers 
will see an excess of 0s over 1s.


so observers will obtain exactly the same statistics as they would for 
a=b=1/sqrt(2). As I have repeatedly said, the data on each (and every) 
branch is independent of the weights or coefficients. This is a 
trivial consequence of having every result occur on every trial. Even 
if zero has weight 0.99, and one has weight 0.01, at each fork there 
is still one branch corresponding to zero, and one branch 
corresponding to one.


That was Everett's original idea.  But if at each trial there are 99 
forks with |0> and 1 fork with |1>  then there will be many observers 
who have observed only |0>'s after say 20 trials and few or none who 
will have observed only |1>'s.


Multiplying the number of zero branches at each fork does not change 
the statistics within individual branches.


Yes it does.  Whatever the observed sequence up to a given trial, the 
observer is more likely to add a |0> to his sequence on the next trial 
if there are more zero branches.


And it is the data from within his branch that the physicist must use 
to test the theory. Even if he is selected at random from some 
population where the number of branches is proportional to the 
weights, he still has only the data from within a single branch 
against which to test the theory. Multiplying branches is as 
irrelevant as imposing branch weights.


That is where I think the attempt to force the Born rule on to Everett 
must inevitably fail -- there is no way that one can arrange fork 
dynamics so that there will always be twice as many zeros as ones 
along each branch (for the case a^2=2/3, b^2=1/3).


Not along each observed sequence.  But there will be many more sequences 
with twice as many zeros than sequences with other proportions.




In the full set of all 2^N branches there will, of course, be branches 
in which this is the case. But that is just because when every 
possible bit string is included, that possibility will also occur. The 
problem is that the proportion of branches for which this is the case 
becomes small as N increases.


But not the proportion of branches which are within a fixed deviation 
from 2:1.  That proportion will increase with N.


I can see that I'm going to have to write a program to produce an 
example for you.


Brent

Consequently, the majority of observers will conclude that the Born 
rule is disconfirmed. This is not in accordance with observation, so 
Everett fails as a scientific theory -- it cannot account for our 
observation of probabilistic results.


Bruce
--
You received this message because you are subscribed to the Google 
Groups "Everything List" group.

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Fri, Mar 6, 2020 at 11:33 AM Bruce Kellett  wrote:

> On Fri, Mar 6, 2020 at 11:08 AM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 3/5/2020 3:33 PM, Bruce Kellett wrote:
>>
>> No, it doesn't. Just think about what each observer sees from within his
>> branch.
>>
>>
>> It's what an observer has seen when he calculates the statistics after N
>> trials.  If a>b there will be proportionately more observers who saw more
>> 0s than those who saw more 1s.  Suppose that a^2=2/3 and b^2=1/3.  Then at
>> each measurement split there will be two observers who see 0 and one who
>> sees 1.  So after N trials there will be 3^N observers and most of them
>> will have seen approximately twice as many 0s as 1s.
>>
>
>
> From within any branch the observer is unaware of other branches, so he
> cannot see these weights. His statistics will depend only on the results on
> his branch. In order for multiple branches to count as probabilities, you
> have to appeal to some Self-Selecting-Assumption (SSA) in the 3p sense: you
> have to consider that the observer self-selects at random from the set of
> all observers. Then, since there are more branches according to the
> weights, the probability that the randomly selected observer will see a
> branch that is multiplied over the ensemble will depend on the number of
> branches with that exact sequence. But this is not how it works in
> practice, because each observer only ever sees data within his branch:
> even if that observer is selected at random from among all observers, he
> will calculate statistics that are independent of any branch weights.
>
> Bruce
>

To put this another way: if a=sqrt(2/3) and b=sqrt(1/3), then if an
observer is to conclude, from his data, that 0 is twice as likely as 1, he
must see approximately twice as many zeros as ones. This cannot be achieved
by simply multiplying the number of branches on a zero result. Multiplying
the number of branches does not change the data within each branch, so
observers will obtain exactly the same statistics as they would for
a=b=1/sqrt(2). As I have repeatedly said, the data on each (and every)
branch is independent of the weights or coefficients. This is a trivial
consequence of having every result occur on every trial. Even if zero has
weight 0.99, and one has weight 0.01, at each fork there is still one
branch corresponding to zero, and one branch corresponding to one.
Multiplying the number of zero branches at each fork does not change the
statistics within individual branches. And it is the data from within his
branch that the physicist must use to test the theory. Even if he is
selected at random from some population where the number of branches is
proportional to the weights, he still has only the data from within a
single branch against which to test the theory. Multiplying branches is as
irrelevant as imposing branch weights.

That is where I think the attempt to force the Born rule onto Everett must
inevitably fail -- there is no way that one can arrange fork dynamics so
that there will always be twice as many zeros as ones along each branch
(for the case a^2=2/3, b^2=1/3).

In the full set of all 2^N branches there will, of course, be branches in
which this is the case. But that is just because when every possible bit
string is included, that possibility will also occur. The problem is that
the proportion of branches for which this is the case becomes small as N
increases. Consequently, the majority of observers will conclude that the
Born rule is disconfirmed. This is not in accordance with observation, so
Everett fails as a scientific theory -- it cannot account for our
observation of probabilistic results.

Bruce

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAFxXSLTgbtOztrq%3DWSqnTCsaLOm8gxmgJSMwYGcGYcho5uv8sw%40mail.gmail.com.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Fri, Mar 6, 2020 at 11:08 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/5/2020 3:33 PM, Bruce Kellett wrote:
>
> On Fri, Mar 6, 2020 at 10:18 AM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 3/5/2020 2:01 PM, Bruce Kellett wrote:
>>
>> On Fri, Mar 6, 2020 at 8:17 AM 'Brent Meeker' via Everything List <
>> everything-list@googlegroups.com> wrote:
>>
>>> On 3/5/2020 3:07 AM, Bruce Kellett wrote:
>>>
>>> there is no "weight" that differentiates different branches.


 Then the Born rule is false, and the whole of QM is false.

>>>
>>> No, QM is not false. It is only Everett that is disconfirmed by
>>> experiment.
>>>
>>> Everett + mechanism + Gleason do solve the core of the problem.

>>>
>>> No. As discussed with Brent, the Born rule cannot be derived within the
>>> framework of Everettian QM. Gleason's theorem is useful only if you have a
>>> prior proof of the existence of a probability distribution. And you cannot
>>> achieve that within the Everettian context. Even postulating the Born rule
>>> ad hoc and imposing it by hand does not solve the problems with Everettian
>>> QM.
>>>
>>> What needs to be derived or postulated is a probability measure on
>>> Everett's multiple worlds.  I agree that it can't be derived.  But I don't
>>> see that it can't be postulated that at each split the branches are given a
>>> weight (or a multiplicity) so that over the ensemble of branches the Born
>>> rule is statistically supported, i.e. almost all sequences will satisfy the
>>> Born rule in the limit of long sequences.
>>>
>>
>> Unfortunately, that does not work. Linearity means that any weight that
>> you assign to a particular result remains outside the strings, so data within
>> each string are independent of any such assigned weights. The weights would
>> not, therefore, show up in any experimental results. The weights can only
>> work in a single-world version of the model.
>>
>>
>> True.  But the multiplicity still works.
>>
>
> No, it doesn't. Just think about what each observer sees from within his
> branch.
>
>
> It's what an observer has seen when he calculates the statistics after N
> trials.  If a>b there will be proportionately more observers who saw more
> 0s than those who saw more 1s.  Suppose that a^2=2/3 and b^2=1/3.  Then at
> each measurement split there will be two observers who see 0 and one who
> sees 1.  So after N trials there will be 3^N observers and most of them
> will have seen approximately twice as many 0s as 1s.
>


From within any branch the observer is unaware of other branches, so he
cannot see these weights. His statistics will depend only on the results on
his branch. In order for multiple branches to count as probabilities, you
have to appeal to some Self-Selecting-Assumption (SSA) in the 3p sense: you
have to consider that the observer self-selects at random from the set of
all observers. Then, since there are more branches according to the
weights, the probability that the randomly selected observer will see a
branch that is multiplied over the ensemble will depend on the number of
branches with that exact sequence. But this is not how it works in
practice, because each observer only ever sees data within his branch:
even if that observer is selected at random from among all observers, he
will calculate statistics that are independent of any branch weights.

Bruce

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAFxXSLSRNgiqwhz9bBBHtBa0U_%3Do-s5MNs6aVjguXRNByqunyw%40mail.gmail.com.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Fri, Mar 6, 2020 at 11:14 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/5/2020 3:44 PM, Bruce Kellett wrote:
>
> OR postulate that the splits are into many copies so that the branch count
>> gives the Born statistics.
>>
>
>
> That has possibilities, but I think it cannot work either. After all, each
> observer just sees a sequence of results -- he is unaware of other branches
> or sequences, so does not know how many branches are the same as his. The
> 1p/3p distinction comes into play again. Any attempt to make multiple
> branches reproduce probabilities necessarily confuses this distinction. You
> have to think in terms of what data an observer actually obtains. Thinking
> about what happens in the "other worlds" is illegitimate.
>
>
> Consider the many copies case as an ensemble and it will reproduce the
>> Born statistics even though it is deterministic.  This is easy to see
>> because every sequence a single observer has seen is the result of a random
>> choice at the split of which path you call "that observer".
>>
>
>
> But the weights do not influence that split, so the observer cannot see
> the weights.
>
>
> Not weights, multiple branches.
>

The observer cannot see multiple branches from within a branch, either.

Bruce

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAFxXSLT%2BL5kG%2BTWJu3p1qEs_dpD0EphWH_q-siGyztbXiedpdw%40mail.gmail.com.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread 'Brent Meeker' via Everything List



On 3/5/2020 3:44 PM, Bruce Kellett wrote:


OR postulate that the splits are into many copies so that the
branch count gives the Born statistics.



That has possibilities, but I think it cannot work either. After all, 
each observer just sees a sequence of results -- he is unaware of 
other branches or sequences, so does not know how many branches are 
the same as his. The 1p/3p distinction comes into play again. Any 
attempt to make multiple branches reproduce probabilities necessarily 
confuses this distinction. You have to think in terms of what data an 
observer actually obtains. Thinking about what happens in the "other 
worlds" is illegitimate.



Consider the many copies case as an ensemble and it will reproduce
the Born statistics even though it is deterministic.  This is easy
to see because every sequence a single observer has seen is the
result of a random choice at the split of which path you call
"that observer".



But the weights do not influence that split, so the observer cannot 
see the weights.


Not weights, multiple branches.

Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/c0b51f3b-d106-0f21-1822-df6d5088658c%40verizon.net.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread 'Brent Meeker' via Everything List



On 3/5/2020 3:33 PM, Bruce Kellett wrote:
On Fri, Mar 6, 2020 at 10:18 AM 'Brent Meeker' via Everything List 
<everything-list@googlegroups.com> wrote:


On 3/5/2020 2:01 PM, Bruce Kellett wrote:

On Fri, Mar 6, 2020 at 8:17 AM 'Brent Meeker' via Everything List 
<everything-list@googlegroups.com> wrote:

On 3/5/2020 3:07 AM, Bruce Kellett wrote:



there is no "weight" that differentiates different
branches.


Then the Born rule is false, and the whole of QM is false.


No, QM is not false. It is only Everett that is disconfirmed
by experiment.

Everett + mechanism + Gleason do solve the core of the
problem.


No. As discussed with Brent, the Born rule cannot be derived
within the framework of Everettian QM. Gleason's theorem is
useful only if you have a prior proof of the existence of a
probability distribution. And you cannot achieve that within
the Everettian context. Even postulating the Born rule ad
hoc and imposing it by hand does not solve the problems with
Everettian QM.


What needs to be derived or postulated is a probability
measure on Everett's multiple worlds.  I agree that it can't
be derived.  But I don't see that it can't be postulated that
at each split the branches are given a weight (or a
multiplicity) so that over the ensemble of branches the Born
rule is statistically supported, i.e. almost all sequences
will satisfy the Born rule in the limit of long sequences.


Unfortunately, that does not work. Linearity means that any
weight that you assign to a particular result remains outside the
strings, so data within each string are independent of any such
assigned weights. The weights would not, therefore, show up in
any experimental results. The weights can only work in a
single-world version of the model.


True.  But the multiplicity still works.


NO, it doesn't. Just think about what each observer sees from within 
his branch.


It's what an observer has seen when he calculates the statistics after N 
trials.  If a>b there will be proportionately more observers who saw 
more 0s than those who saw more 1s.  Suppose that a^2=2/3 and b^2=1/3.  
Then at each measurement split there will be two observers who see 0 and 
one who sees 1.  So after N trials there will be 3^N observers and most 
of them will have seen approximately twice as many 0s as 1s.
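
The same claim can be checked exactly rather than by sampling: in this 
two-0-branches, one-1-branch counting model the number of observer-paths 
with k ones among N trials is C(N,k)*2^(N-k), out of 3^N paths in all.  
A short Python sketch under that assumed model:

from math import comb

def fraction_near(n, target=1/3, tol=0.05):
    # Exact fraction of the 3^n observer-paths whose proportion of ones
    # lies within tol of target (2 zero-branches, 1 one-branch per fork).
    near = sum(comb(n, k) * 2 ** (n - k)
               for k in range(n + 1) if abs(k / n - target) <= tol)
    return near / 3 ** n

for n in (10, 100, 1000):
    print(n, fraction_near(n))  # grows toward 1 with n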


Brent



Bruce
--
You received this message because you are subscribed to the Google 
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send 
an email to everything-list+unsubscr...@googlegroups.com 
.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAFxXSLSnDmp3r2KkOg6%2BxJk937Ni5Tn2zx%3DPMYL8ZAp-D7yrHg%40mail.gmail.com 
.


--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/e1a39f03-1d3d-1510-3e19-214409feb88f%40verizon.net.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Fri, Mar 6, 2020 at 10:15 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/5/2020 1:57 PM, Bruce Kellett wrote:
>
> On Fri, Mar 6, 2020 at 8:08 AM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 3/5/2020 2:45 AM, Bruce Kellett wrote:
>>
>>
>> Now sequences with small departures from equal numbers will still give
>> probabilities within the confidence interval of p = 0.5. But this
>> confidence interval also shrinks as 1/sqrt(N) as N increases, so these
>> additional sequences do not contribute a growing number of cases giving p ~
>> 0.5 as N increases. So, again within factors of order unity, the proportion
>> of sequences consistent with p = 0.5 decreases without limit as N
>> increases. So it is not the case that a very large proportion of the binary
>> strings will report p = 0.5. The proportion lying outside the confidence
>> interval of p = 0.5 is not vanishingly small -- it grows with N.
>>
>>
>> I agree with your argument about unequal probabilities, in which all the
>> binomial sequences occur anyway, leading to inference of p=0.5.  But in the
>> above paragraph you are wrong about how the probability density
>> function of the observed value changes as N->oo.  For any given interval
>> around the true value, p=0.5, the fraction of observed values within that
>> interval increases as N->oo.  For example in N=100 trials, the proportion
>> of observers who calculate an estimate of p in the interval (0.45 0.55) is
>> 0.68.  For N=500 it's 0.975.  For N=1000 it's 0.998.
>>
>> Confidence intervals are constructed to include the true value with some
>> fixed probability.  But that interval becomes narrower as 1/sqrt(N).
>> So the proportion lying inside and outside the interval is relatively
>> constant, but the interval gets narrower.
>>
>
>
> I think I am beginning to see why we are disagreeing on this. You are
> using the normal approximation to the binomial distribution for a large
> sequence of trials with some fixed probability of success on each trial. In
> other words, it is as though you consider the 2^N binary strings of length
> N to have been generated by some random process, such as coin tosses or the
> like, with some prior fixed probability value. Each string is then
> constructed as though the random process takes place in a single world, so
> that there is only one outcome for each toss.
>
> Given such an ensemble, the statistics you cite are undoubtedly correct:
> as the length of the string increases, the proportion of each string within
> some interval of the given probability increases -- that is what the normal
> approximation to the binomial gives you. And as N increases, the confidence
> interval shrinks, so the proportion within a confidence interval is
> approximately constant. But note these are the proportions within each
> string as generated with some fixed probability value. If you take an
> ensemble of such strings, the result is even more apparent, and the
> proportion of strings in which the probability deviates significantly from
> the prior fixed value decreases without limit.
>
> That is all very fine. The problem is that this is not the ensemble of
> strings that I am considering!
>
> The set of all possible bit strings of length N is not generated by some
> random process with some fixed probability. The set is generated entirely
> deterministically, with no mention whatsoever of any probability. Just
> think about where these strings come from. You measure the spin of a
> spin-half particle. The result is 0 in one branch and 1 in the other. Then
> the process is repeated, independently in each branch, so the 1-branch
> splits into a 11-branch and a 10-branch; and the 0-branch splits into a
> 01-branch and a 00-branch. This process goes on for N repetitions,
> generating all possible bit strings of length N in an entirely
> deterministic fashion. The process is illustrated by Sean Carroll on page
> 134 of his book.
>
> Given the nature of the ensemble of bit strings that I am considering, the
> statistical results I quote are correct, and your statistics are completely
> inappropriate. This may be why we have been talking at cross purposes. I
> suspect that Russell has a similar misconception about the nature of the
> bit strings under consideration, since he talked about statistical results
> that could only have been obtained from an ensemble of randomly generated
> strings.
>
>
> Yes, I understand that.  And I understand that you have been talking about
> Everett's original idea in which at each split both results obtain, one in
> each branch...with no attribute of weight or probability or other measure.
> It's just 0 and 1.  Which generates all strings of zeros and ones.  This
> ensemble of sequences has the same statistics as random coin flipping
> sequences, even though it's deterministic.
>


That is, in fact, false. It does not generate the same strings as flipping
a coin

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Fri, Mar 6, 2020 at 10:18 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/5/2020 2:01 PM, Bruce Kellett wrote:
>
> On Fri, Mar 6, 2020 at 8:17 AM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 3/5/2020 3:07 AM, Bruce Kellett wrote:
>>
>> there is no "weight" that differentiates different branches.
>>>
>>>
>>> Then the Born rule is false, and the whole of QM is false.
>>>
>>
>> No, QM is not false. It is only Everett that is disconfirmed by
>> experiment.
>>
>> Everett + mechanism + Gleason do solve the core of the problem.
>>>
>>
>> No. As discussed with Brent, the Born rule cannot be derived within the
>> framework of Everettian QM. Gleason's theorem is useful only if you have a
>> prior proof of the existence of a probability distribution. And you cannot
>> achieve that within the Everettian context. Even postulating the Born rule
>> ad hoc and imposing it by hand does not solve the problems with Everettian
>> QM.
>>
>> What needs to be derived or postulated is a probability measure on
>> Everett's multiple worlds.  I agree that it can't be derived.  But I don't
>> see that it can't be postulated that at each split the branches are given a
>> weight (or a multiplicity) so that over the ensemble of branches the Born
>> rule is statistically supported, i.e. almost all sequences will satisfy the
>> Born rule in the limit of long sequences.
>>
>
> Unfortunately, that does not work. Linearity means that any weight that
> you assign to a particular result remains outside the strings, so data within
> each string are independent of any such assigned weights. The weights would
> not, therefore, show up in any experimental results. The weights can only
> work in a single-world version of the model.
>
>
> True.  But the multiplicity still works.
>

NO, it doesn't. Just think about what each observer sees from within his
branch.

Bruce

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAFxXSLSnDmp3r2KkOg6%2BxJk937Ni5Tn2zx%3DPMYL8ZAp-D7yrHg%40mail.gmail.com.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread 'Brent Meeker' via Everything List



On 3/5/2020 2:01 PM, Bruce Kellett wrote:
On Fri, Mar 6, 2020 at 8:17 AM 'Brent Meeker' via Everything List 
<everything-list@googlegroups.com> wrote:


On 3/5/2020 3:07 AM, Bruce Kellett wrote:



there is no "weight" that differentiates different branches.


Then the Born rule is false, and the whole of QM is false.


No, QM is not false. It is only Everett that is disconfirmed by
experiment.

Everett + mechanism + Gleason do solve the core of the problem.


No. As discussed with Brent, the Born rule cannot be derived
within the framework of Everettian QM. Gleason's theorem is
useful only if you have a prior proof of the existence of a
probability distribution. And you cannot achieve that within the
Everettian context. Even postulating the Born rule ad hoc and
imposing it by hand does not solve the problems with Everettian QM.


What needs to be derived or postulated is a probability measure on
Everett's multiple worlds.  I agree that it can't be derived.  But
I don't see that it can't be postulated that at each split the
branches are given a weight (or a multiplicity) so that over the
ensemble of branches the Born rule is statistically supported,
i.e. almost all sequences will satisfy the Born rule in the limit
of long sequences.


Unfortunately, that does not work. Linearity means that any weight 
that you assign to a particular result remains outside the strings, so 
data within each string are independent of any such assigned weights. 
The weights would not, therefore, show up in any experimental results. 
The weights can only work in a single-world version of the model.


True.  But the multiplicity still works.

Brent



Bruce
--
You received this message because you are subscribed to the Google 
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send 
an email to everything-list+unsubscr...@googlegroups.com 
.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAFxXSLRDXKknUAs7yGbgVsdmhaD9-yY9S8ixzZw3u%2BghqEMqPw%40mail.gmail.com 
.


--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/83ba3e02-3175-1243-bccc-5276e350cd4e%40verizon.net.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread 'Brent Meeker' via Everything List



On 3/5/2020 1:57 PM, Bruce Kellett wrote:
On Fri, Mar 6, 2020 at 8:08 AM 'Brent Meeker' via Everything List 
<everything-list@googlegroups.com> wrote:


On 3/5/2020 2:45 AM, Bruce Kellett wrote:


Now sequences with small departures from equal numbers will still
give probabilities within the confidence interval of p = 0.5. But
this confidence interval also shrinks as 1/sqrt(N) as N
increases, so these additional sequences do not contribute a
growing number of cases giving p ~ 0.5 as N increases. So, again
within factors of order unity, the proportion of sequences
consistent with p = 0.5 decreases without limit as N increases.
So it is not the case that a very large proportion of the binary
strings will report p = 0.5. The proportion lying outside the
confidence interval of p = 0.5 is not vanishingly small -- it
grows with N.


I agree with your argument about unequal probabilities, in which
all the binomial sequences occur anyway, leading to inference of
p=0.5.  But in the above paragraph you are wrong about how the
probability density function of the observed value changes as
N->oo.  For any given interval around the true value, p=0.5, the
fraction of observed values within that interval increases as
N->oo.  For example in N=100 trials, the proportion of observers
who calculate an estimate of p in the interval (0.45 0.55) is
0.68.  For N=500 it's 0.975.  For N=1000 it's 0.998.

Confidence intervals are constructed to include the true value
with some fixed probability.  But that interval becomes narrower
as 1/sqrt(N).
So the proportion lying inside and outside the interval is
relatively constant, but the interval gets narrower.



I think I am beginning to see why we are disagreeing on this. You are 
using the normal approximation to the binomial distribution for a 
large sequence of trials with some fixed probability of success on 
each trial. In other words, it is as though you consider the 2^N 
binary strings of length N to have been generated by some random 
process, such as coin tosses or the like, with some prior fixed 
probability value. Each string is then constructed as though the 
random process takes place in a single world, so that there is only one 
outcome for each toss.


Given such an ensemble, the statistics you cite are undoubtedly 
correct: as the length of the string increases, the proportion of each 
string within some interval of the given probability increases -- that 
is what the normal approximation to the binomial gives you. And as N 
increases, the confidence interval shrinks, so the proportion within a 
confidence interval is approximately constant. But note these are the 
proportions within each string as generated with some fixed 
probability value. If you take an ensemble of such strings, the 
result is even more apparent, and the proportion of strings in which 
the probability deviates significantly from the prior fixed value 
decreases without limit.


That is all very fine. The problem is that this is not the ensemble of 
strings that I am considering!


The set of all possible bit strings of length N is not generated by 
some random process with some fixed probability. The set is generated 
entirely deterministically, with no mention whatsoever of any 
probability. Just think about where these strings come from. You 
measure the spin of a spin-half particle. The result is 0 in one 
branch and 1 in the other. Then the process is repeated, independently 
in each branch, so the 1-branch splits into a 11-branch and a 
10-branch; and the 0-branch splits into a 01-branch and a 00-branch. 
This process goes on for N repetitions, generating all possible bit 
strings of length N in an entirely deterministic fashion. The process 
is illustrated by Sean Carroll on page 134 of his book.


Given the nature of the ensemble of bit strings that I am considering, 
the statistical results I quote are correct, and your statistics are 
completely inappropriate. This may be why we have been talking at 
cross purposes. I suspect that Russell has a similar misconception 
about the nature of the bit strings under consideration, since he 
talked about statistical results that could only have been obtained 
from an ensemble of randomly generated strings.


Yes, I understand that.  And I understand that you have been talking 
about Everett's original idea in which at each split both results 
obtain, one in each branch...with no attribute of weight or probability 
or other measure.  It's just 0 and 1.  Which generates all strings of 
zeros and ones.  This ensemble of sequences has the same statistics as 
random coin flipping sequences, even though it's deterministic.  But it 
doesn't have the same statistics as flipping an unfair coin, i.e. when 
a=/=b.  So to have a multiple world interpretation that produces 
statistics agreeing with the Born rule one has to either assign weights

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Fri, Mar 6, 2020 at 8:17 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/5/2020 3:07 AM, Bruce Kellett wrote:
>
> there is no "weight" that differentiates different branches.
>>
>>
>> Then the Born rule is false, and the whole of QM is false.
>>
>
> No, QM is not false. It is only Everett that is disconfirmed by experiment.
>
> Everett + mechanism + Gleason do solve the core of the problem.
>>
>
> No. As discussed with Brent, the Born rule cannot be derived within the
> framework of Everettian QM. Gleason's theorem is useful only if you have a
> prior proof of the existence of a probability distribution. And you cannot
> achieve that within the Everettian context. Even postulating the Born rule
> ad hoc and imposing it by hand does not solve the problems with Everettian
> QM.
>
> What needs to be derived or postulated is a probability measure on
> Everett's multiple worlds.  I agree that it can't be derived.  But I don't
> see that it can't be postulated that at each split the branches are given a
> weight (or a multiplicity) so that over the ensemble of branches the Born
> rule is statistically supported, i.e. almost all sequences will satisfy the
> Born rule in the limit of long sequences.
>

Unfortunately, that does not work. Linearity means that any weight that you
assign to a particular result remains outside the strings, so data within
each string are independent of any such assigned weights. The weights would
not, therefore, show up in any experimental results. The weights can only
work in a single-world version of the model.

Bruce

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAFxXSLRDXKknUAs7yGbgVsdmhaD9-yY9S8ixzZw3u%2BghqEMqPw%40mail.gmail.com.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Fri, Mar 6, 2020 at 8:08 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/5/2020 2:45 AM, Bruce Kellett wrote:
>
>
> Now sequences with small departures from equal numbers will still give
> probabilities within the confidence interval of p = 0.5. But this
> confidence interval also shrinks as 1/sqrt(N) as N increases, so these
> additional sequences do not contribute a growing number of cases giving p ~
> 0.5 as N increases. So, again within factors of order unity, the proportion
> of sequences consistent with p = 0.5 decreases without limit as N
> increases. So it is not the case that a very large proportion of the binary
> strings will report p = 0.5. The proportion lying outside the confidence
> interval of p = 0.5 is not vanishingly small -- it grows with N.
>
>
> I agree with your argument about unequal probabilities, in which all the
> binomial sequences occur anyway, leading to inference of p=0.5.  But in the
> above paragraph you are wrong about how the probability density
> function of the observed value changes as N->oo.  For any given interval
> around the true value, p=0.5, the fraction of observed values within that
> interval increases as N->oo.  For example in N=100 trials, the proportion
> of observers who calculate an estimate of p in the interval (0.45 0.55) is
> 0.68.  For N=500 it's 0.975.  For N=1000 it's 0.998.
>
> Confidence intervals are constructed to include the true value with some
> fixed probability.  But that interval becomes narrower as 1/sqrt(N).
> So the proportion lying inside and outside the interval is relatively
> constant, but the interval gets narrower.
>


I think I am beginning to see why we are disagreeing on this. You are using
the normal approximation to the binomial distribution for a large sequence
of trials with some fixed probability of success on each trial. In other
words, it is as though you consider the 2^N binary strings of length N to
have been generated by some random process, such as coin tosses or the
like, with some prior fixed probability value. Each string is then
constructed as though the random process takes place in a single world, so
that there is only one outcome for each toss.

Given such an ensemble, the statistics you cite are undoubtedly correct: as
the length of the string increases, the proportion of each string within
some interval of the given probability increases -- that is what the normal
approximation to the binomial gives you. And as N increases, the confidence
interval shrinks, so the proportion within a confidence interval is
approximately constant. But note these are the proportions within each
string as generated with some fixed probability value. If you take an
ensemble of such strings, the result is even more apparent, and the
proportion of strings in which the probability deviates significantly from
the prior fixed value decreases without limit.

That is all very fine. The problem is that this is not the ensemble of
strings that I am considering!

The set of all possible bit strings of length N is not generated by some
random process with some fixed probability. The set is generated entirely
deterministically, with no mention whatsoever of any probability. Just
think about where these strings come from. You measure the spin of a
spin-half particle. The result is 0 in one branch and 1 in the other. Then
the process is repeated, independently in each branch, so the 1-branch
splits into a 11-branch and a 10-branch; and the 0-branch splits into a
01-branch and a 00-branch. This process goes on for N repetitions,
generating all possible bit strings of length N in an entirely
deterministic fashion. The process is illustrated by Sean Carroll on page
134 of his book.
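
That deterministic generation is easy to make concrete (a small Python
sketch; N kept tiny so the whole set can be printed):

from collections import Counter
from itertools import product

N = 4
# Each trial splits every existing branch in two, so after N trials the
# ensemble is simply all 2^N bit strings; nothing random is involved.
branches = ["".join(bits) for bits in product("01", repeat=N)]
print(len(branches))  # 16 == 2**N
# Counting branches by their number of zeros gives the binomial
# coefficients 1, 4, 6, 4, 1 -- the same profile as fair-coin statistics.
print(sorted(Counter(b.count("0") for b in branches).items()))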

Given the nature of the ensemble of bit strings that I am considering, the
statistical results I quote are correct, and your statistics are completely
inappropriate. This may be why we have been talking at cross purposes. I
suspect that Russell has a similar misconception about the nature of the
bit strings under consideration, since he talked about statistical results
that could only have been obtained from an ensemble of randomly generated
strings.

Bruce

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CAFxXSLQeHcBgfa_SPc2AE02VwFFhKzmFbbmhW92%3DZhXOftUKBw%40mail.gmail.com.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread 'Brent Meeker' via Everything List



On 3/5/2020 3:07 AM, Bruce Kellett wrote:



there is no "weight" that differentiates different branches.


Then the Born rule is false, and the whole of QM is false.


No, QM is not false. It is only Everett that is disconfirmed by 
experiment.


Everett + mechanism + Gleason do solve the core of the problem.


No. As discussed with Brent, the Born rule cannot be derived within 
the framework of Everettian QM. Gleason's theorem is useful only if 
you have a prior proof of the existence of a probability distribution. 
And you cannot achieve that within the Everettian context. Even 
postulating the Born rule ad hoc and imposing it by hand does not 
solve the problems with Everettian QM.


What needs to be derived or postulated is a probability measure on 
Everett's multiple worlds.  I agree that it can't be derived.  But I 
don't see that it can't be postulated that at each split the branches 
are given a weight (or a multiplicity) so that over the ensemble of 
branches the Born rule is statistically supported, i.e. almost all 
sequences will satisfy the Born rule in the limit of long sequences.
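
One way to make the multiplicity version concrete: if a^2 = m/(m+k) for 
integers m and k, let each split produce m branches carrying |0> and k 
branches carrying |1>. The number of length-N observer-paths with j ones 
is then C(N,j)*m^(N-j)*k^j out of (m+k)^N, so the path counts follow 
Binomial(N, k/(m+k)) -- exactly the Born statistics for that state.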


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/d58c69f9-0469-62c2-2132-6a5cabd540e1%40verizon.net.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread 'Brent Meeker' via Everything List



On 3/5/2020 2:45 AM, Bruce Kellett wrote:


Now sequences with small departures from equal numbers will still give 
probabilities within the confidence interval of p = 0.5. But this 
confidence interval also shrinks as 1/sqrt(N) as N increases, so these 
additional sequences do not contribute a growing number of cases 
giving p ~ 0.5 as N increases. So, again within factors of order 
unity, the proportion of sequences consistent with p = 0.5 decreases 
without limit as N increases. So it is not the case that a very large 
proportion of the binary strings will report p = 0.5. The proportion 
lying outside the confidence interval of p = 0.5 is not vanishingly 
small -- it grows with N.


I agree with your argument about unequal probabilities, in which all the 
binomial sequences occur anyway, leading to inference of p=0.5. But in 
the above paragraph you are wrong about how the probability density 
function of the observed value changes as N->oo.  For any given interval 
around the true value, p=0.5, the fraction of observed values within 
that interval increases as N->oo.  For example in N=100 trials, the 
proportion of observers who calculate an estimate of p in the interval 
(0.45 0.55) is 0.68. For N=500 it's 0.975.  For N=1000 it's 0.998.


Confidence intervals are constructed to include the true value with some 
fixed probability.  But that interval becomes narrower as 1/sqrt(N).
So the proportion lying inside and outside the interval is relatively 
constant, but the interval gets narrower.
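
Those figures come from the normal approximation; an exact binomial check 
(assuming a fair coin, p = 0.5, and counting estimates k/N that land in 
the closed interval) gives slightly larger numbers with the same growth 
toward 1:

from math import comb

def prob_within(n, lo=0.45, hi=0.55):
    # Exact probability that the estimate k/n from n fair-coin tosses
    # lands in [lo, hi].
    return sum(comb(n, k) for k in range(n + 1) if lo <= k / n <= hi) / 2 ** n

for n in (100, 500, 1000):
    print(n, prob_within(n))  # rises toward 1 as n grows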


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/f0a35d39-6287-2955-bcf5-3f6bc01c1317%40verizon.net.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 10:05 PM Bruno Marchal  wrote:

> On 5 Mar 2020, at 05:52, Bruce Kellett  wrote:
>
> On Thu, Mar 5, 2020 at 3:23 PM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 3/4/2020 7:54 PM, Bruce Kellett wrote:
>>
>> On Thu, Mar 5, 2020 at 2:02 PM 'Brent Meeker' via Everything List <
>> everything-list@googlegroups.com> wrote:
>>
>>> On 3/4/2020 6:45 PM, Bruce Kellett wrote:
>>>
>>> On Thu, Mar 5, 2020 at 1:34 PM 'Brent Meeker' via Everything List <
>>> everything-list@googlegroups.com> wrote:
>>>
 On 3/4/2020 6:18 PM, Bruce Kellett wrote:


 But one cannot just assume the Born rule in this case -- one has to use
 the data to verify the probabilistic predictions. And the observers on the
 majority of branches will get data that disconfirms the Born rule. (For any
 value of the probability, the proportion of observers who get data
 consistent with this value decreases as N becomes large.)


 No, that's where I was disagreeing with you.  If "consistent with" is
 defined as being within some given fraction, the proportion increases as N
 becomes large.  If the probability of an event is p and q=1-p then the
 proportion of events in N trials within one std-deviation of p approaches
 1/e as N->oo, and the width of the one std-deviation range goes down as
 1/sqrt(N).  So the distribution of values over the ensemble of observers
 becomes concentrated near the expected value, i.e. is consistent with that
 value.

>>>
>>>
>>> But what is the expected value? Does that not depend on the inferred
>>> probabilities? The probability p is not a given -- it can only be inferred
>>> from the observed data. And different observers will infer different values
>>> of p. Then certainly, each observer will think that the distribution of
>>> values over the 2^N observers will be concentrated near his inferred value
>>> of p. The trouble is that this is true whatever value of p the
>>> observer infers -- i.e., for whatever branch of the ensemble he is on.
>>>
>>>
>>> Not if the branches are unequally weighted (or numbered), as Carroll
>>> seems to assume, and those weights (or numbers) define the probability of
>>> the branch in accordance with the Born rule.  I'm not arguing that this
>>> doesn't have to be put in "by hand".  I'm arguing it is a way of assigning
>>> measures to the multiple worlds so that even though all the results occur,
>>> almost all observers will find results close to the Born rule, i.e. that
>>> self-locating uncertainty will imply the right statistics.
>>>
>>
>> But the trouble is that Everett assumes that all outcomes occur on every
>> trial. So all the branches occur with certainty -- there is no "weight"
>> that differentiates different branches. That is to assume that the branches
>> occur with the probabilities that they would have in a single-world
>> scenario. To assume that branches have different weights is in direct
>> contradiction to the basic postulates of the many-worlds approach. It is
>> not that one can "put in the weights by hand"; it is that any assignment of
>> such weights contradicts the basis of the interpretation, which is that
>> all branches occur with certainty.
>>
>>
>> All branches occur with certainty so long as their weight>0.  Yes,
>> Everett simply assumed they all occur.  Take a simple branch counting
>> model.  Assume that at each trial there are 100 branches and a of them
>> are |0> and b are |1> and the values are independent of the prior values in
>> the sequence.  So long as a and b > 0.1 every value, either |0> or |1> will
>> occur at every branching.  But almost all observers, seeing only one
>> sequence thru the branches, will infer P(0)~|a|^2 and P(1)~|b|^2.
>>
>> Do you really disagree that there is a way to assign weights or
>> probabilities to the sequences that reproduces the same statistics as
>> repeating the N trials many times in one world?  It's no more than saying
>> that one-world is an ergodic process.
>>
>
>
> I am saying that assigning weights or probabilities in Everett, by hand
> according to the Born rule, is incoherent.
>
>
> I think that it is incoherent with a preconception of the notion of
> “world”. There are only consistent histories, and in fact "consistent
> histories supported by a continuum of computations”. You take Everett too
> literally.
>


I thought you were the one that claimed that Everett had essentially solved
all the problems...

But actually, all I need for my proof is that every outcome occurs on every
trial, which is a very slim version of Everett. The proof of the
impossibility of a sensible notion of probability works just as well for
the classical deterministic case, such as your WM-duplication scenario.

Bruce

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 9:59 PM Bruno Marchal  wrote:

> On 5 Mar 2020, at 04:54, Bruce Kellett  wrote:
>
> On Thu, Mar 5, 2020 at 2:02 PM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 3/4/2020 6:45 PM, Bruce Kellett wrote:
>>
>> On Thu, Mar 5, 2020 at 1:34 PM 'Brent Meeker' via Everything List <
>> everything-list@googlegroups.com> wrote:
>>
>>> On 3/4/2020 6:18 PM, Bruce Kellett wrote:
>>>
>>>
>>> But one cannot just assume the Born rule in this case -- one has to use
>>> the data to verify the probabilistic predictions. And the observers on the
>>> majority of branches will get data that disconfirms the Born rule. (For any
>>> value of the probability, the proportion of observers who get data
>>> consistent with this value decreases as N becomes large.)
>>>
>>>
>>> No, that's where I was disagreeing with you.  If "consistent with" is
>>> defined as being within some given fraction, the proportion increases as N
>>> becomes large.  If the probability of an event is p and q=1-p then the
>>> proportion of events in N trials within one std-deviation of p approaches
>>> 1/e as N->oo, and the width of the one std-deviation range goes down as
>>> 1/sqrt(N).  So the distribution of values over the ensemble of observers
>>> becomes concentrated near the expected value, i.e. is consistent with that
>>> value.
>>>
>>
>>
>> But what is the expected value? Does that not depend on the inferred
>> probabilities? The probability p is not a given -- it can only be inferred
>> from the observed data. And different observers will infer different values
>> of p. Then certainly, each observer will think that the distribution of
>> values over the 2^N observers will be concentrated near his inferred value
>> of p. The trouble is that this is true whatever value of p the
>> observer infers -- i.e., for whatever branch of the ensemble he is on.
>>
>>
>> Not if the branches are unequally weighted (or numbered), as Carroll
>> seems to assume, and those weights (or numbers) define the probability of
>> the branch in accordance with the Born rule.  I'm not arguing that this
>> doesn't have to be put in "by hand".  I'm arguing it is a way of assigning
>> measures to the multiple worlds so that even though all the results occur,
>> almost all observers will find results close to the Born rule, i.e. that
>> self-locating uncertainty will imply the right statistics.
>>
>
> But the trouble is that Everett assumes that all outcomes occur on every
> trial. So all the branches occur with certainty —
>
>
> In the 3p view, but then the “self-locating” idea explains that QM
> predicts that the observers obtained do not see the “other branches”
> (“they don’t even feel the split”, as Everett argued correctly).
>


But each individual can test the probability predictions from the
first-person data obtained on his branch. And most will find that the Born
rule is disconfirmed if Everett is true.

there is no "weight" that differentiates different branches.
>
>
> Then the Born rule is false, and the whole of QM is false.
>

No, QM is not false. It is only Everett that is disconfirmed by experiment.

Everett + mechanism + Gleason do solve the core of the problem.
>

No. As discussed with Brent, the Born rule cannot be derived within the
framework of Everettian QM. Gleason's theorem is useful only if you have a
prior proof of the existence of a probability distribution. And you cannot
achieve that within the Everettian context. Even postulating the Born rule
ad hoc and imposing it by hand does not solve the problems with Everettian
QM.

> (Except that we can’t use the universal wave any more, but then we do
> recover it in arithmetic, like it was necessary, so no problem at all,
> except difficult mathematics …).
>
>
>
>
> That is to assume that the branches occur with the probabilities that they
> would have in a single-world scenario. To assume that branches have
> different weights is in direct contradiction to the basic postulates of
> the many-worlds approach.
>
>
> Since the paper by Graham, nobody counts the worlds by the distinguishable
> outcomes, but uses Gleason or Kochen, or some other manner to attribute a
> weighting.
>

And that is contradicted by the data.

> It is not that one can "put in the weights by hand"; it is that any
> assignment of such weights contradicts the basis of the interpretation,
> which is that all branches occur with certainty.
>
>
>
> They all occur with certainty, but the formalism explains why, from the
> first person perspective, they all occur with relative weighted
> uncertainties.
>


That is false. How many times do I have to prove to you that this does not
work?

Bruce

> There are only “relative states”, some sharable, some non-sharable.
>
> Bruno
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@goog

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruno Marchal

> On 5 Mar 2020, at 05:52, Bruce Kellett  wrote:
> 
> On Thu, Mar 5, 2020 at 3:23 PM 'Brent Meeker' via Everything List
> <everything-list@googlegroups.com> wrote:
> On 3/4/2020 7:54 PM, Bruce Kellett wrote:
>> On Thu, Mar 5, 2020 at 2:02 PM 'Brent Meeker' via Everything List
>> <everything-list@googlegroups.com> wrote:
>> On 3/4/2020 6:45 PM, Bruce Kellett wrote:
>>> On Thu, Mar 5, 2020 at 1:34 PM 'Brent Meeker' via Everything List
>>> <everything-list@googlegroups.com> wrote:
>>> On 3/4/2020 6:18 PM, Bruce Kellett wrote:
 
 But one cannot just assume the Born rule in this case -- one has to use 
 the data to verify the probabilistic predictions. And the observers on the 
 majority of branches will get data that disconfirms the Born rule. (For 
 any value of the probability, the proportion of observers who get data 
 consistent with this value decreases as N becomes large.)
>>> 
>>> No, that's where I was disagreeing with you.  If "consistent with" is 
>>> defined as being within some given fraction, the proportion increases as N 
>>> becomes large.  If the probability of an event is p and q=1-p then the 
>>> proportion of events in N trials within one std-deviation of p approaches 
>>> 1/e as N->oo, and the width of the one std-deviation range goes down as 
>>> 1/sqrt(N).  So the distribution of values over the ensemble of observers 
>>> becomes concentrated near the expected value, i.e. is consistent with that 
>>> value.
>>> 
>>> 
>>> But what is the expected value? Does that not depend on the inferred 
>>> probabilities? The probability p is not a given -- it can only be inferred 
>>> from the observed data. And different observers will infer different values 
>>> of p. Then certainly, each observer will think that the distribution of 
>>> values over the 2^N observers will be concentrated near his inferred value 
>>> of p. The trouble is that this is true whatever value of p the 
>>> observer infers -- i.e., for whatever branch of the ensemble he is on.
>> 
>> Not if the branches are unequally weighted (or numbered), as Carroll seems 
>> to assume, and those weights (or numbers) define the probability of the 
>> branch in accordance with the Born rule.  I'm not arguing that this doesn't 
>> have to be put in "by hand".  I'm arguing it is a way of assigning measures 
>> to the multiple worlds so that even though all the results occur, almost all 
>> observers will find results close to the Born rule, i.e. that self-locating 
>> uncertainty will imply the right statistics.
>> 
>> But the trouble is that Everett assumes that all outcomes occur on every 
>> trial. So all the branches occur with certainty -- there is no "weight" that 
>> differentiates different branches. That is to assume that the branches occur 
>> with the probabilities that they would have in a single-world scenario. To 
>> assume that branches have different weights is in direct contradiction to 
>> the basic postulates of the many-worlds approach. It is not that one can 
>> "put in the weights by hand"; it is that any assignment of such weights 
>> contradicts the basis of the interpretation, which is that all branches 
>> occur with certainty.
> 
> All branches occur with certainty so long as their weight>0.  Yes, Everett 
> simply assumed they all occur.  Take a simple branch counting model.  Assume 
> that at each trial there are 100 branches and a of them are |0> and b are 
> |1> and the values are independent of the prior values in the sequence.  So 
> long as a and b > 0.1 every value, either |0> or |1> will occur at every 
> branching.  But almost all observers, seeing only one sequence thru the 
> branches, will infer P(0)~|a|^2 and P(1)~|b|^2.
> 
> Do you really disagree that there is a way to assign weights or probabilities 
> to the sequences that reproduces the same statistics as repeating the N 
> trials many times in one world?  It's no more than saying that one-world is 
> an ergodic process.
> 
>  
> I am saying that assigning weights or probabilities in Everett, by hand 
> according to the Born rule, is incoherent.

I think that it is incoherent with a preconception of the notion of “world”. 
There are only consistent histories, and in fact "consistent histories 
supported by a continuum of computations”. You take Everett too literally.

Bruno



> 
> Consider a state, |psi> = a|0> + b|1>, and a branch such that the 
> single-world probability by the Born rule is p = 0.001. (Such a branch can 
> trivially be constructed, for example, with a^2 = 0.9 and b^2 = 0.1: the 
> all-ones branch after three trials has weight 0.1^3 = 0.001.) Then 
> according to Everett, this branch is one of the 2^N branches that must occur 
> in N repeats of the experiment. But, by construction, the single world 
> probability of this branch is p = 0.001. So if MWI is to reproduce the 
> single-world probabilities, we have with certainty a branch with weight p = 
> 0.001. Now this is not to say that we certainly have a branch with 

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 9:46 PM Bruno Marchal  wrote:

> On 5 Mar 2020, at 01:40, Bruce Kellett  wrote:
>
> On Thu, Mar 5, 2020 at 10:39 AM Stathis Papaioannou 
> wrote:
>
>> On Thu, 5 Mar 2020 at 09:46, Bruce Kellett  wrote:
>>
>>>
>>> The greater problem is that any idea of probability founders when all
>>> outcomes occur for any measurement. Or have you not followed the arguments
>>> I have been making that shows this to be the case?
>>>
>>
>> I think it worth noting that to some people it is obvious that if an
>> entity is to be duplicated in two places it should have a 1/2 expectation
>> of finding itself in one or other place, while to other people it is obvious
>> that there should be no such expectation.
>>
>
>
> Hence my point that intuition is usually faulty in such cases -- the
> straightforward testing of any intuition with repeated trials shows the
> unreliability of such intuitions.
>
>
> It did not. You were confusing the first person account with the third
> person account.
>

Bullshit. There is no such confusion. You are just using a rhetorical
flourish to avoid facing the real issues.



> QM predicts that all measurement outcomes are obtained, and by linearity,
> that all the observers obtained could not have predicted it, for the same
> reason nobody can predict the outcome in the WM self-duplication
> experience. Those who claim the contrary have to say at some point that the
> Helsinki guy has died, but then Mechanism is refuted.
>


Of course no one can predict the outcome of a quantum spin measurement on a
random spin-half particle. Just as no one can predict his 1p outcome in
WM-duplication. That  is the point I have been making -- there is no useful
notion of probability available in either case.

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruno Marchal

> On 5 Mar 2020, at 04:54, Bruce Kellett  wrote:
> 
> On Thu, Mar 5, 2020 at 2:02 PM 'Brent Meeker' via Everything List 
> <everything-list@googlegroups.com> wrote:
> On 3/4/2020 6:45 PM, Bruce Kellett wrote:
>> On Thu, Mar 5, 2020 at 1:34 PM 'Brent Meeker' via Everything List 
>> <everything-list@googlegroups.com> wrote:
>> On 3/4/2020 6:18 PM, Bruce Kellett wrote:
>>> 
>>> But one cannot just assume the Born rule in this case -- one has to use the 
>>> data to verify the probabilistic predictions. And the observers on the 
>>> majority of branches will get data that disconfirms the Born rule. (For any 
>>> value of the probability, the proportion of observers who get data 
>>> consistent with this value decreases as N becomes large.)
>> 
>> No, that's where I was disagreeing with you.  If "consistent with" is 
>> defined as being within some given fraction, the proportion increases as N 
>> becomes large.  If the probability of the an even is p and q=1-p then the 
>> proportion of events in N trials within one std-deviation of p approaches 
>> 1/e and N->oo and the width of the one std-deviation range goes down at 
>> 1/sqrt(N).  So the distribution of values over the ensemble of observers 
>> becomes concentrated near the expected value, i.e. is consistent with that 
>> value.
>> 
>> 
>> But what is the expected value? Does that not depend on the inferred 
>> probabilities? The probability p is not a given -- it can only be inferred 
>> from the observed data. And different observers will infer different values 
>> of p. Then certainly, each observer will think that the distribution of 
>> values over the 2^N observers will be concentrated near his inferred value 
>> of p. The trouble is that this is true whatever value of p the observer 
>> infers -- i.e., for whatever branch of the ensemble he is on.
> 
> Not if the branches are unequally weighted (or numbered), as Carroll seems to 
> assume, and those weights (or numbers) define the probability of the branch 
> in accordance with the Born rule.  I'm not arguing that this doesn't have to 
> be put in "by hand".  I'm arguing it is a way of assigning measures to the 
> multiple worlds so that even though all the results occur, almost all 
> observers will find results close to the Born rule, i.e. that self-locating 
> uncertainty will imply the right statistics.
> 
> But the trouble is that Everett assumes that all outcomes occur on every 
> trial. So all the branches occur with certainty —

In the 3p view, but then the “self-locating” idea explains that QM predicts 
that the observers obtained do not see the “other branches” (“they don’t even 
feel the split”, as Everett argued correctly).




> there is no "weight" that differentiates different branches.

Then the Born rule is false, and the whole of QM is false. Everett + mechanism 
+ Gleason do solve the core of the problem. (Except that we can no longer use 
the universal wave function, but then we recover it in arithmetic, as was 
necessary, so no problem at all, except difficult mathematics …).




> That is to assume that the branches occur with the probabilities that they 
> would have in a single-world scenario. To assume that branches have different 
> weights is in direct contradiction to the basic postulates of the 
> many-worlds approach.

Since the paper by Graham, nobody counts the worlds by distinguishable 
outcomes; one uses Gleason, or Kochen, or some other method to attribute a weighting. 



> It is not that one can "put in the weights by hand"; it is that any 
> assignment of such weights contradicts the basis of the interpretation, 
> which is that all branches occur with certainty.


They all occur with certainty, but the formalism explains why, from the first 
person perspective, they all occur with relative weighted uncertainties. There 
are only “relative states”, some sharable, some non sharable. 

Bruno


> 
> Bruce
> 



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 9:39 PM Bruno Marchal  wrote:

> On 5 Mar 2020, at 00:39, Stathis Papaioannou  wrote:
>
>
> I think it worth noting that to some people it is obvious that if an
> entity is to be duplicated in two places it should have a 1/2 expectation
> of finding itself in one or other place while to other people it is obvious
> that there should be no such expectation.
>
>
> It is not just obvious. It is derivable from the simplest definition of
> “first person” and “third person”.
>

This is simply false. It cannot be derived from anything. The truth is that
testing any such notion about  the probability by repeating the trial shows
that no single value of the probability is appropriate. Alternatively, for
most 1p observers, any particular theory about the probability will be
disconfirmed. The first person data is the particular bit string recorded
by an individual. From the 3p perspective, there are 2^N different 1p bit
strings after N trials.

Bruce



> All arguments presented against the 1p-indeterminacy have always been
> refuted, almost always by pointing to a confusion between first
> person and third person.  The first person is defined by the owner of the
> personal memory taken with them in the box, and the third person is
> described by the personal memory of those outside the box.
>
>
>
>
> This seems to be an immediate judgement on considering the question, with
> attempts at rational justification perhaps following but not being the
> primary determinant of belief. A parallel is Newcomb’s paradox: on learning
> of it some people immediately feel it is obvious you should choose one box
> and others immediately feel you should choose both boxes.
>
>
>
> I think that the Newcomb situation is far more complex, or that the
> self-duplication is far simpler, at least for anyone who admits even a weak
> form of Mechanism. To believe that there is no indeterminacy is like
> believing that all amoebas have telepathic power.
>
> The only reason I can see to reject the first person indeterminacy is the
> realisation that it leads to the end of physicalism, which is a long
> lasting comfortable habit of thought. People tend to hate changes of
> paradigm.
>
> Bruno
>



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruno Marchal

> On 5 Mar 2020, at 01:40, Bruce Kellett  wrote:
> 
> On Thu, Mar 5, 2020 at 10:39 AM Stathis Papaioannou wrote:
> On Thu, 5 Mar 2020 at 09:46, Bruce Kellett wrote:
> 
> The greater problem is that any idea of probability founders when all 
> outcomes occur for any measurement. Or have you not followed the arguments I 
> have been making that show this to be the case?
> 
> I think it worth noting that to some people it is obvious that if an entity 
> is to be duplicated in two places it should have a 1/2 expectation of finding 
> itself in one or other place while to other people it is obvious that there 
> should be no such expectation.
> 
> 
> Hence my point that intuition is usually faulty in such cases -- the 
> straightforward testing of any intuition with repeated trials shows the 
> unreliability of such intuitions.

It did not. You were confusing the first person account with the third person 
account. QM predicts that all measurement outcomes are obtained and, by 
linearity, that none of the resulting observers could have predicted it, for the 
same reason nobody can predict the outcome in the WM self-duplication 
experience. Those who claim the contrary have to say at some point that the 
Helsinki guy has died, but then Mechanism is refuted.

Bruno





> 
> This seems to be an immediate judgement on considering the question, with 
> attempts at rational justification perhaps following but not being the 
> primary determinant of belief. A parallel is Newcomb’s paradox: on learning 
> of it some people immediately feel it is obvious you should choose one box 
> and others immediately feel you should choose both boxes.
> 
> 
> Newcomb's 'paradox' seems to be just another illustration of the 
> unreliability of intuition in these situations. Except that Newcomb's paradox 
> relies on the unrealistic assumption of a perfect predictor. No such problems 
> beset the argument against intuition in the case of classical duplication, or 
> the case of binary quantum measurements. (See my simple outline of the 
> arguments in my reply to Russell.)
> 
> Bruce
> 



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 5:26 PM Russell Standish wrote:

> On Thu, Mar 05, 2020 at 11:34:55AM +1100, Bruce Kellett wrote:
> > On Thu, Mar 5, 2020 at 10:39 AM Russell Standish wrote:
> >
> > ISTM - probability is all about what an observer observes. Since the
> > observer cannot see all outcomes, an objection based on all outcomes
> > occurring seems moot to me.
> >
> >
> > The fact that the observer cannot see all outcomes is actually central to the
> > argument. If, in the person-duplication scenario, the participant naively
> > assumes a probability p = 0.5 for each outcome, such an intuition can only be
> > tested by repeating the duplication a number of times and inferring a
> > probability value from the observed outcomes. Since each observer can see only
> > the outcomes along his or her particular branch (and, ipso facto, is unaware of
> > the outcomes on other branches), as the number of trials N becomes very large,
> > only a vanishingly small proportion of observers will confirm their 50/50
> > prediction. This is a trivial calculation involving only the binomial
> > coefficient -- Brent and I discussed this a while ago, and Brent could not
> > fault the maths.
>
> But a very large proportion of them (→1 as N→∞) will report being
> within ε (called a confidence interval) of 50% for any given ε>0
> chosen at the outset of the experiment. This is simply the law of
> large numbers theorem. You can't focus on the vanishingly small
> population that lie outside the confidence interval.
>

This is wrong. In the binary situation where both outcomes occur for every
trial, there are 2^N binary sequences for N repetitions of the experiment.
This set of binary sequences exhausts the possibilities, so the same
set of sequences is obtained for any two-component initial state -- regardless of
the amplitudes. You appear to assume that the natural probability in this
situation is p = 0.5 and, what is more, your appeal to the law of large
numbers applies only for single-world probabilities, in which there is only
one outcome on each trial.

In order to infer a probability of p = 0.5, your branch data must have
approximately equal numbers of zeros and ones. The number of branches with
equal numbers of zeros and ones is given by the binomial coefficient. For
large even N = 2M trials, this coefficient is N!/(M!M!). Using the Stirling
approximation to the factorial for large N, this goes as 2^N/sqrt(N)
(within factors of order one). Since there are 2^N sequences, the
proportion with n_0 = n_1 vanishes as 1/sqrt(N) for N large.
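
Spelling out the Stirling step (a standard asymptotic computation, using
n! ~ sqrt(2 pi n) (n/e)^n):

    \binom{N}{M} = \frac{N!}{M!\,M!}
      \sim \frac{\sqrt{2\pi N}\,(N/e)^N}{\bigl(\sqrt{2\pi M}\,(M/e)^M\bigr)^2}
      = \frac{\sqrt{2\pi N}}{\pi N}\,2^N
      = \sqrt{\frac{2}{\pi N}}\,2^N, \qquad N = 2M,

so the fraction of the 2^N strings with exactly n_0 = n_1 is
sqrt(2/(pi N)), which is the 1/sqrt(N) behaviour quoted above.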

Now sequences with small departures from equal numbers will still give
probabilities within the confidence interval of p = 0.5. But this
confidence interval also shrinks as 1/sqrt(N) as N increases, so these
additional sequences do not contribute a growing number of cases giving p ~
0.5 as N increases. So, again within factors of order unity, the proportion
of sequences consistent with p = 0.5 decreases without limit as N
increases. So it is not the case that a very large proportion of the binary
strings will report p = 0.5. The proportion lying outside the confidence
interval of p = 0.5 is not vanishingly small -- it grows with N.
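
Both counting claims can be checked numerically. A minimal Python sketch
(plain counting over the 2^N equally weighted strings; the function name is
only illustrative):

    from math import comb, sqrt

    def fraction_within(N, eps):
        # Fraction of the 2^N binary strings whose proportion of ones
        # lies within eps of 1/2, each string counted equally.
        lo, hi = N * (0.5 - eps), N * (0.5 + eps)
        return sum(comb(N, k) for k in range(N + 1) if lo <= k <= hi) / 2 ** N

    for N in (100, 1000, 10000):
        fixed = fraction_within(N, 0.01)               # fixed-width window
        one_sigma = fraction_within(N, 0.5 / sqrt(N))  # window ~ 1/sqrt(N)
        exact = comb(N, N // 2) / 2 ** N               # exactly n_0 = n_1
        print(N, fixed, one_sigma, exact)

As N grows, the fixed-window fraction climbs towards 1 (Russell's point),
the shrinking-window fraction stays near a constant of about 0.68, and the
exact-equality fraction falls like 1/sqrt(N) (the point made above).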


> > The crux of the matter is that all branches are equivalent when both outcomes
> > occur on every trial, so all observers will infer that their observed relative
> > frequencies reflect the actual probabilities. Since there are observers for all
> > possibilities for p in the range [0,1], and not all can be correct, no sensible
> > probability value can be assigned to such duplication experiments.
>
> I don't see why not. Faced with a coin toss, I would assume a
> 50/50 chance of seeing heads or tails. Faced with a history of 100
> heads, I might start to investigate the coin for bias, and perhaps by
> Bayesian arguments give the biased coin theory greater weight than the
> theory that I've just experienced a 1 in 2^100 event, but in any case
> it is just statistics, and it is the same whether all outcomes have
> been realised or not.
>

The trouble with this analogy is that coin tosses are single-world events
-- there is only one outcome for each toss. Consequently, any intuitions
about probabilities based on such comparisons are not relevant to the
Everettian case in which every outcome occurs for every toss. Your
intuition that it is the same whether all outcomes are realised or not is
simply mistaken.

> > The problem is even worse in quantum mechanics, where you measure a state
> > such as
> >
> >  |psi> = a|0> + b|1>.
> >
> > When both outcomes occur on every trial, the result of a sequence of N trials
> > is all possible binary strings of length N, (all 2^N of them). You then notice
> > that this set of all possible strings is obtained whatever non-zero values of a
> > and b you assume. The assignment of some probability relation to the
> > coefficients is thus seen to be meaningless -- all probabilities occur equal
> > for any non-zero choices of a and b.
>

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-05 Thread Bruno Marchal

> On 5 Mar 2020, at 00:39, Stathis Papaioannou  wrote:
> 
> 
> 
> On Thu, 5 Mar 2020 at 09:46, Bruce Kellett wrote:
> On Thu, Mar 5, 2020 at 9:31 AM Stathis Papaioannou wrote:
> On Thu, 5 Mar 2020 at 08:54, Bruce Kellett wrote:
> On Wed, Mar 4, 2020 at 11:01 PM Stathis Papaioannou wrote:
> On Fri, 28 Feb 2020 at 08:40, Bruce Kellett wrote:
> On Fri, Feb 28, 2020 at 4:21 AM 'Brent Meeker' via Everything List 
> <everything-list@googlegroups.com> wrote:
> On 2/27/2020 3:45 AM, Bruce Kellett wrote:
>> 
>> That is probably what all this argument is actually about -- the maths show 
>> that there are no probabilities. Because there are no unique probabilities 
>> in the classical duplication case, the concept of probability has been shown 
>> to be inadmissible in the deterministic (Everettian) quantum case. The 
>> appeal by people like Deutsch and Wallace to betting quotients, or quantum 
>> credibility measures, are just ways of forcing a probabilistic 
>> interpretation on to quantum mechanics by hand -- they are not derivations 
>> of probability from within the deterministic theory. There are no 
>> probabilities in the deterministic theory, even from the 1p perspective, 
>> because the data are consistent with any prior assignment of a probability 
>> measure.
> 
> The probability enters from the self-location uncertainty, which in other 
> terms is saying: Assume each branch has the same probability (or some 
> weighting) for you being in that branch.  Then that is the probability that 
> you have observed the sequence of events that define that branch.
> 
> I think that is Sean Carroll's approach. I am uncertain as to whether this 
> really works or not. The concept of a 'weight' or 'thickness' for each branch 
> is difficult to reconcile with the first-person experience of probability: 
> which is obtained within the branch, so is independent of any overall 
> 'weight'. But that aside, self-locating uncertainty is just another idea 
> imposed on quantum mechanics and, like decision-theoretic ideas, it is 
> without theoretical foundation -- it is just imposed by fiat on a 
> deterministic theory. It makes  probability a subjective notion imposed on a 
> theory that is supposedly objective: there is an objective probability that a 
> radioactive nucleus will decay in a certain time period -- independent of our 
> subjective impressions, or self-location. (I can develop this thought 
> further, if required, but I think it shows Sean's approach to fail.)
> 
> Probability derived from self-locating uncertainty is an idea independent of 
> any particular physics. It is also independent of any theory of 
> consciousness, since we can imagine a non-conscious observer reasoning in the 
> same way. To some people it seems trivially obvious, to others it seems very 
> strange. I don’t know whether which group one falls into correlates with any other 
> beliefs or attitudes.
> 
> As I said, self-locating uncertainty is just another idea imposed on the 
> quantum formalism without any real theoretical foundation -- "it is just 
> imposed by fiat on a deterministic theory." If nothing else, this shows that 
> Carroll's claim that Everett is just "plain-vanilla" quantum mechanics, 
> without any additional assumptions, is a load of self-deluded hogwash.
> 
> And as I said, probabilities derived from self-locating uncertainty is, for 
> many people, trivially obvious, just a special case of frequentist inference.
> 
> That is not a particularly solid basis on which to base a scientific theory. 
> The trivially obvious is seldom useful.
> 
> The greater problem is that any idea of probability founders when all 
> outcomes occur for any measurement. Or have you not followed the arguments I 
> have been making that show this to be the case?
> 
> I think it worth noting that to some people it is obvious that if an entity 
> is to be duplicated in two places it should have a 1/2 expectation of finding 
> itself in one or other place while to other people it is obvious that there 
> should be no such expectation.

It is not just obvious. It is derivable from the simplest definition of “first 
person” and “third person”. All arguments presented against the 
1p-indeterminacy have always been refuted, almost always by pointing to a 
confusion between first person and third person.  The first person is defined 
by the owner of the personal memory taken with them in the box, and the third 
person is described by the personal memory of those outside the box.




> This seems to be an immediate judgement on considering the question, with 
> attempts at rational justification perhaps following but not being the 
> primary determinant of belief. A parallel is Newcomb’s paradox: on learning 
> of it some people immediately feel it is obvious you should choose one box 
> and others immediately feel you should choose both boxes.

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Russell Standish
On Thu, Mar 05, 2020 at 11:34:55AM +1100, Bruce Kellett wrote:
> On Thu, Mar 5, 2020 at 10:39 AM Russell Standish wrote:
> 
> On Thu, Mar 05, 2020 at 09:46:34AM +1100, Bruce Kellett wrote:
> 
> > The greater problem is that any idea of probability founders when all
> outcomes
> > occur for any measurement. Or have you not followed the arguments I have
> been
> > making that shows this to be the case?
> >
> 
> I must admit I haven't followed the arguments either - admittedly, I
> haven't read your cited material.
> 
> ISTM - probability is all about what an observer observes. Since the
> observer cannot see all outcomes, an objection based on all outcomes
> occurring seems moot to me.
> 
> 
> The fact that the observer cannot see all outcomes is actually central to the
> argument. If, in the person-duplication scenario, the participant naively
> assumes a probability p = 0.5 for each outcome, such an intuition can only be
> tested by repeating the duplication a number of times and inferring a
> probability value from the observed outcomes. Since each observer can see only
> the outcomes along his or her particular branch (and, ipso facto, is unaware of
> the outcomes on other branches), as the number of trials N becomes very large,
> only a vanishingly small proportion of observers will confirm their 50/50
> prediction. This is a trivial calculation involving only the binomial
> coefficient -- Brent and I discussed this a while ago, and Brent could not
> fault the maths.

But a very large proportion of them (→1 as N→∞) will report being
within ε (called a confidence interval) of 50% for any given ε>0
chosen at the outset of the experiment. This is simply the law of
large numbers theorem. You can't focus on the vanishingly small
population that lie outside the confidence interval.


> 
> The crux of the matter is that all branches are equivalent when both outcomes
> occur on every trial, so all observers will infer that their observed relative
> frequencies reflect the actual probabilities. Since there are observers for 
> all
> possibilities for p in the range [0,1], and not all can be correct, no 
> sensible
> probability value can be assigned to such duplication experiments.

I don't see why not. Faced with a coin toss, I would assume a
50/50 chance of seeing heads or tails. Faced with a history of 100
heads, I might start to investigate the coin for bias, and perhaps by
Bayesian arguments give the biased coin theory greater weight than the
theory that I've just experienced a 1 in 2^100 event, but in any case
it is just statistics, and it is the same whether all outcomes have
been realised or not.
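
That Bayesian weighing is easy to make concrete. A sketch, assuming (purely
for illustration) a uniform prior over the bias of the suspect coin:

    # Bayes factor for "biased coin" vs "fair coin" after 100 straight heads.
    # The uniform prior over the unknown bias theta is an illustrative
    # choice, not something fixed by the argument above.
    p_fair = 0.5 ** 100       # P(100 heads | fair coin)
    p_biased = 1 / 101        # P(100 heads | biased) = integral of theta^100
    print(p_biased / p_fair)  # ~1.3e28 in favour of the biased-coin theory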

> 
> The problem is even worse in quantum mechanics, where you measure a state such
> as
> 
>      |psi> = a|0> + b|1>.
> 
> When both outcomes occur on every trial, the result of a sequence of N trials
> is all possible binary strings of length N, (all 2^N of them). You then notice
> that this set of all possible strings is obtained whatever non-zero values of a
> and b you assume. The assignment of some probability relation to the
> coefficients is thus seen to be meaningless -- all probabilities occur equal
> for any non-zero choices of a and b.
> 

For the outcome of any particular binary string, sure. But if we
classify the outcome strings - say ones with a recognisable pattern,
or when replayed through a CD player reproduce the sounds of
Beethoven's ninth, we find that the overwhelming majority are simply
gobbledegook, random data. And the overwhelming majority of those will
have a roughly equal number of 0s and 1s. For each of these
categories, there will be a definite probability value, and not all
will be 2^-N. For instance, with Beethoven's ninth, that the tenor has
a cold in the 4th movement doesn't render the music not the ninth. So
there will be set of bitstrings that are recognisably the ninth
symphony, and a quite definite probability value.
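
Under Born weights p = |a|^2 and q = |b|^2, such a category does carry a
definite probability: the summed weight of its member strings. A minimal
sketch (the helper name is hypothetical; brute-force enumeration is only
feasible for small N):

    from itertools import product

    def category_probability(N, p, in_category):
        # Summed Born weight of all length-N outcome strings satisfying
        # a predicate; enumerates all 2^N strings.
        q = 1 - p
        return sum(p ** s.count(0) * q ** s.count(1)
                   for s in product((0, 1), repeat=N)
                   if in_category(s))

    # e.g. the category "roughly equal numbers of 0s and 1s", N = 10, p = 0.9:
    print(category_probability(10, 0.9,
                               lambda s: abs(s.count(0) - s.count(1)) <= 2))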


> 
>  
> 
> You may counter that the assumption that an observer cannot see all
> outcomes is an extra thing "put in by hand", and you would be right,
> of course. It is not part of the Schroedinger equation. But I would
> strongly suspect that this assumption will be a natural outcome of a
> proper theory of consciousness, if/when we have one. Indeed, I
> highlight it in my book with the name "PROJECTION postulate".
> 
> This is, of course, at the heart of the 1p/3p distinction - and of
> course the classic taunts and misunderstandings between BM and JC
> (1p-3p confusion).
> 
> 
> I know that it is a factor of the 1p/3p distinction. My complaint has
> frequently been that advocates of the "p = 0.5 is obvious" school are often
> guilty of this confusion.
> 
> 
> Incidentally, I've started reading Colin Hales's "Revolution of
> Scientific Structure", a fellow Melburnian and member of this
> list. The interesting proposition about this is Colin is p

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 3:23 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/4/2020 7:54 PM, Bruce Kellett wrote:
>
> On Thu, Mar 5, 2020 at 2:02 PM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 3/4/2020 6:45 PM, Bruce Kellett wrote:
>>
>> On Thu, Mar 5, 2020 at 1:34 PM 'Brent Meeker' via Everything List <
>> everything-list@googlegroups.com> wrote:
>>
>>> On 3/4/2020 6:18 PM, Bruce Kellett wrote:
>>>
>>>
>>> But one cannot just assume the Born rule in this case -- one has to use
>>> the data to verify the probabilistic predictions. And the observers on the
>>> majority of branches will get data that disconfirms the Born rule. (For any
>>> value of the probability, the proportion of observers who get data
>>> consistent with this value decreases as N becomes large.)
>>>
>>>
>>> No, that's where I was disagreeing with you.  If "consistent with" is
>>> defined as being within some given fraction, the proportion increases as N
>> becomes large.  If the probability of an event is p and q=1-p then the
>> proportion of events in N trials within one std-deviation of p approaches
>> a constant (about 0.68) as N->oo, and the width of the one std-deviation range goes down at
>>> 1/sqrt(N).  So the distribution of values over the ensemble of observers
>>> becomes concentrated near the expected value, i.e. is consistent with that
>>> value.
>>>
>>
>>
>> But what is the expected value? Does that not depend on the inferred
>> probabilities? The probability p is not a given -- it can only be inferred
>> from the observed data. And different observers will infer different values
>> of p. Then certainly, each observer will think that the distribution of
>> values over the 2^N observers will be concentrated near his inferred value
>> of p. The trouble is that this is true whatever value of p the
>> observer infers -- i.e., for whatever branch of the ensemble he is on.
>>
>>
>> Not if the branches are unequally weighted (or numbered), as Carroll
>> seems to assume, and those weights (or numbers) define the probability of
>> the branch in accordance with the Born rule.  I'm not arguing that this
>> doesn't have to be put in "by hand".  I'm arguing it is a way of assigning
>> measures to the multiple worlds so that even though all the results occur,
>> almost all observers will find results close to the Born rule, i.e. that
>> self-locating uncertainty will imply the right statistics.
>>
>
> But the trouble is that Everett assumes that all outcomes occur on every
> trial. So all the branches occur with certainty -- there is no "weight"
> that differentiates different branches. That is to assume that the branches
> occur with the probabilities that they would have in a single-world
> scenario. To assume that branches have different weights is in direct
> contradiction to the basic postulates the the many-worlds approach. It is
> not that one can "put in the weights by hand"; it is that any assignment of
> such weights contradicts that basis of the interpretation, which is that
> all branches occur with certainty.
>
>
> All branches occur with certainty so long as their weight>0.  Yes, Everett
> simply assumed they all occur.  Take a simple branch counting model.
> Assume that at each trial there are 100 branches, a of them are |0>
> and b are |1>, and the values are independent of the prior values in the
> sequence.  So long as a and b > 0.1, every value, either |0> or |1>, will
> occur at every branching.  But almost all observers, seeing only one
> sequence thru the branches, will infer P(0)~|a|^2 and P(1)~|b|^2.
>
> Do you really disagree that there is a way to assign weights or
> probabilities to the sequences that reproduces the same statistics as
> repeating the N trials many times in one world?  It's no more than saying
> that one-world is an ergodic process.
>


I am saying that assigning weights or probabilities in Everett, by hand
according to the Born rule, is incoherent.

Consider a state, |psi> = a|0> + b|1>, and a branch such that the
single-world probability by the Born rule is p = 0.001. (Such a branch can
trivially be constructed, for example, with a^2 = 0.9 and b^2 = 0.1: the
all-|1> branch over N = 3 trials has weight 0.1^3 = 0.001). Then
according to Everett, this branch is one of the 2^N branches that must
occur in N repeats of the experiment. But, by construction, the single
world probability of this branch is p = 0.001. So if MWI is to reproduce
the single-world probabilities, we have with certainty a branch with weight
p = 0.001. Now this is not to say that we certainly have a branch with p =
0.001; it is, rather, the conjunction of two statements: (a) the branch
probability is p = 0.001, and (b) the branch probability is p = 1.0. These
two statements are incompatible, so any assignment of weights to Everettian
branches is incoherent.

Bruce
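
The construction can be written out explicitly for N = 3 (a sketch using the
numbers of the example above, a^2 = 0.9 and b^2 = 0.1):

    from itertools import product

    p0, p1 = 0.9, 0.1   # |a|^2 and |b|^2 from the example above
    for branch in product("01", repeat=3):
        weight = p0 ** branch.count("0") * p1 ** branch.count("1")
        print("".join(branch), round(weight, 4))
    # All 2^3 branches appear (each occurs with certainty in the Everett
    # picture), yet branch 111 carries Born weight 0.1^3 = 0.001 -- the two
    # statements whose conjunction is at issue above.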


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread 'Brent Meeker' via Everything List



On 3/4/2020 7:54 PM, Bruce Kellett wrote:
On Thu, Mar 5, 2020 at 2:02 PM 'Brent Meeker' via Everything List 
<everything-list@googlegroups.com> wrote:


On 3/4/2020 6:45 PM, Bruce Kellett wrote:

On Thu, Mar 5, 2020 at 1:34 PM 'Brent Meeker' via Everything List
<everything-list@googlegroups.com> wrote:

On 3/4/2020 6:18 PM, Bruce Kellett wrote:


But one cannot just assume the Born rule in this case -- one
has to use the data to verify the probabilistic predictions.
And the observers on the majority of branches will get data
that disconfirms the Born rule. (For any value of the
probability, the proportion of observers who get data
consistent with this value decreases as N becomes large.)


No, that's where I was disagreeing with you.  If "consistent
with" is defined as being within some given fraction, the
proportion increases as N becomes large.  If the probability
of an event is p and q=1-p then the proportion of events
in N trials within one std-deviation of p approaches a constant
(about 0.68) as N->oo, and the width of the one std-deviation range goes down
at 1/sqrt(N). So the distribution of values over the ensemble
of observers becomes concentrated near the expected value,
i.e. is consistent with that value.



But what is the expected value? Does that not depend on the
inferred probabilities? The probability p is not a given -- it
can only be inferred from the observed data. And different
observers will infer different values of p. Then certainly, each
observer will think that the distribution of values over the 2^N
observers will be concentrated near his inferred value of p. The
trouble is that this is true whatever value of p the
observer infers -- i.e., for whatever branch of the ensemble he
is on.


Not if the branches are unequally weighted (or numbered), as
Carroll seems to assume, and those weights (or numbers) define the
probability of the branch in accordance with the Born rule.  I'm
not arguing that this doesn't have to be put in "by hand".  I'm
arguing it is a way of assigning measures to the multiple worlds
so that even though all the results occur, almost all observers
will find results close to the Born rule, i.e. that self-locating
uncertainty will imply the right statistics.


But the trouble is that Everett assumes that all outcomes occur on 
every trial. So all the branches occur with certainty -- there is no 
"weight" that differentiates different branches. That is to assume 
that the branches occur with the probabilities that they would have in 
a single-world scenario. To assume that branches have different 
weights is in direct contradiction to the basic postulates the the 
many-worlds approach. It is not that one can "put in the weights by 
hand"; it is that any assignment of such weights contradicts that 
basis of the interpretation, which is that all branches occur with 
certainty.


All branches occur with certainty so long as their weight>0. Yes, 
Everett simply assumed they all occur.  Take a simple branch counting 
model.  Assume that at each trial there are 100 branches, a of 
them are |0> and b are |1>, and the values are independent of the prior 
values in the sequence.  So long as a and b > 0.1, every value, either 
|0> or |1>, will occur at every branching.  But almost all observers, 
seeing only one sequence thru the branches, will infer P(0)~|a|^2 and 
P(1)~|b|^2.


Do you really disagree that there is a way to assign weights or 
probabilities to the sequences that reproduces the same statistics as 
repeating the N trials many times in one world?  It's no more than 
saying that one-world is an ergodic process.


Brent
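
A minimal simulation of this branch-counting picture (a sketch; it reads a
as the number of |0> branches out of 100, so a/100 plays the role that
|a|^2 plays in the notation above):

    import random

    # One observer path through the branching tree: at each trial, a of the
    # 100 equally counted branches show |0>.  Picking a path uniformly over
    # the 100^N paths is the same as drawing i.i.d. bits with P(0) = a/100.
    def observer_estimate(a, n_trials=10000):
        zeros = sum(random.random() < a / 100 for _ in range(n_trials))
        return zeros / n_trials  # the probability this observer infers

    random.seed(1)
    print([round(observer_estimate(30), 3) for _ in range(5)])  # each near 0.30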




Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 2:02 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/4/2020 6:45 PM, Bruce Kellett wrote:
>
> On Thu, Mar 5, 2020 at 1:34 PM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 3/4/2020 6:18 PM, Bruce Kellett wrote:
>>
>>
>> But one cannot just assume the Born rule in this case -- one has to use
>> the data to verify the probabilistic predictions. And the observers on the
>> majority of branches will get data that disconfirms the Born rule. (For any
>> value of the probability, the proportion of observers who get data
>> consistent with this value decreases as N becomes large.)
>>
>>
>> No, that's where I was disagreeing with you.  If "consistent with" is
>> defined as being within some given fraction, the proportion increases as N
>> becomes large.  If the probability of an event is p and q=1-p then the
>> proportion of events in N trials within one std-deviation of p approaches
>> a constant (about 0.68) as N->oo, and the width of the one std-deviation range goes down at
>> 1/sqrt(N).  So the distribution of values over the ensemble of observers
>> becomes concentrated near the expected value, i.e. is consistent with that
>> value.
>>
>
>
> But what is the expected value? Does that not depend on the inferred
> probabilities? The probability p is not a given -- it can only be inferred
> from the observed data. And different observers will infer different values
> of p. Then certainly, each observer will think that the distribution of
> values over the 2^N observers will be concentrated near his inferred value
> of p. The trouble is that this is true whatever value of p the
> observer infers -- i.e., for whatever branch of the ensemble he is on.
>
>
> Not if the branches are unequally weighted (or numbered), as Carroll seems
> to assume, and those weights (or numbers) define the probability of the
> branch in accordance with the Born rule.  I'm not arguing that this doesn't
> have to be put in "by hand".  I'm arguing it is a way of assigning measures
> to the multiple worlds so that even though all the results occur, almost
> all observers will find results close to the Born rule, i.e. that
> self-locating uncertainty will imply the right statistics.
>

But the trouble is that Everett assumes that all outcomes occur on every
trial. So all the branches occur with certainty -- there is no "weight"
that differentiates different branches. That is to assume that the branches
occur with the probabilities that they would have in a single-world
scenario. To assume that branches have different weights is in direct
contradiction to the basic postulates of the many-worlds approach. It is
not that one can "put in the weights by hand"; it is that any assignment of
such weights contradicts the basis of the interpretation, which is that
all branches occur with certainty.

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread 'Brent Meeker' via Everything List



On 3/4/2020 6:45 PM, Bruce Kellett wrote:
On Thu, Mar 5, 2020 at 1:34 PM 'Brent Meeker' via Everything List 
<everything-list@googlegroups.com> wrote:


On 3/4/2020 6:18 PM, Bruce Kellett wrote:


But one cannot just assume the Born rule in this case -- one has
to use the data to verify the probabilistic predictions. And the
observers on the majority of branches will get data that
disconfirms the Born rule. (For any value of the probability, the
proportion of observers who get data consistent with this value
decreases as N becomes large.)


No, that's where I was disagreeing with you.  If "consistent with"
is defined as being within some given fraction, the proportion
increases as N becomes large.  If the probability of an event
is p and q=1-p then the proportion of events in N trials within
one std-deviation of p approaches a constant (about 0.68) as N->oo,
and the width of the one std-deviation range goes down at 1/sqrt(N).  So the
distribution of values over the ensemble of observers becomes
concentrated near the expected value, i.e. is consistent with that
value.



But what is the expected value? Does that not depend on the inferred 
probabilities? The probability p is not a given -- it can only be 
inferred from the observed data. And different observers will infer 
different values of p. Then certainly, each observer will think that 
the distribution of values over the 2^N observers will be concentrated 
near his inferred value of p. The trouble is that this is true
whatever value of p the observer infers -- i.e., for whatever branch 
of the ensemble he is on.


Not if the branches are unequally weighted (or numbered), as Carroll 
seems to assume, and those weights (or numbers) define the probability 
of the branch in accordance with the Born rule.  I'm not arguing that 
this doesn't have to be put in "by hand".  I'm arguing it is a way of 
assigning measures to the multiple worlds so that even though all the 
results occur, almost all observers will find results close to the Born 
rule, i.e. that self-locating uncertainty will imply the right statistics.


Brent



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 1:34 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/4/2020 6:18 PM, Bruce Kellett wrote:
>
>
> But one cannot just assume the Born rule in this case -- one has to use
> the data to verify the probabilistic predictions. And the observers on the
> majority of branches will get data that disconfirms the Born rule. (For any
> value of the probability, the proportion of observers who get data
> consistent with this value decreases as N becomes large.)
>
>
> No, that's where I was disagreeing with you.  If "consistent with" is
> defined as being within some given fraction, the proportion increases as N
> becomes large.  If the probability of an event is p and q=1-p then the
> proportion of events in N trials within one std-deviation of p approaches
> a constant (about 0.68) as N->oo, and the width of the one std-deviation range goes down at
> 1/sqrt(N).  So the distribution of values over the ensemble of observers
> becomes concentrated near the expected value, i.e. is consistent with that
> value.
>


But what is the expected value? Does that not depend on the inferred
probabilities? The probability p is not a given -- it can only be inferred
from the observed data. And different observers will infer different values
of p. Then certainly, each observer will think that the distribution of
values over the 2^N observers will be concentrated near his inferred value
of p. The trouble is that this is true whatever value of p the
observer infers -- i.e., for whatever branch of the ensemble he is on.

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread 'Brent Meeker' via Everything List



On 3/4/2020 6:18 PM, Bruce Kellett wrote:


But one cannot just assume the Born rule in this case -- one has to 
use the data to verify the probabilistic predictions. And the 
observers on the majority of branches will get data that disconfirms 
the Born rule. (For any value of the probability, the proportion of 
observers who get data consistent with this value decreases as N 
becomes large.)


No, that's where I was disagreeing with you.  If "consistent with" is 
defined as being within some given fraction, the proportion increases as 
N becomes large.  If the probability of an event is p and q=1-p then 
the proportion of events in N trials within one std-deviation of p 
approaches a constant (about 0.68) as N->oo, and the width of the one std-deviation range 
goes down at 1/sqrt(N).  So the distribution of values over the ensemble 
of observers becomes concentrated near the expected value, i.e. is 
consistent with that value.


Brent
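
In symbols, for a single-world sequence of N independent trials with
P(event) = p (a standard computation restating the claim):

    E[k] = Np, \quad \mathrm{SD}(k) = \sqrt{Npq}
    \quad\Longrightarrow\quad
    E[k/N] = p, \quad \mathrm{SD}(k/N) = \sqrt{pq/N},

and by the central limit theorem

    P\bigl( |k/N - p| \le \sqrt{pq/N} \bigr) \to 2\Phi(1) - 1 \approx 0.68,

a fixed fraction of observers inside a window whose width shrinks like
1/sqrt(N).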



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 12:41 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/4/2020 4:48 PM, Bruce Kellett wrote:
>
> On Thu, Mar 5, 2020 at 10:50 AM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>>  For example, if you take Zurek's quantum Darwinism to provide an
>> objective pointer basis then you can say, in this basis, off-diagonal terms
>> in the reduced density matrix that are so small they will never be observed
>> can be set to zero and then the diagonal terms are just the probability of
>> the (one) world that will be actual.
>>
>
> That is still a probabilistic assertion. And no derivation of
> probabilities for cases in which all outcomes occur is going to be
> successful.
>
>
> That's not what I'm arguing.  I am arguing that there is a way to make sense
> of a and b as "weights" whose square magnitudes give probabilities.  The
> significance of Zurek's quantum Darwinism is that it provides an objective
> pointer basis.
>

That is not how Zurek derives a preferred pointer basis. Originally, his
derivation relied on the concept of einselection -- robustness against
environmental decoherence. But he later modified this to eliminate the
implied dependence of einselection on the Born rule. His later derivation
of a pointer basis relied on the idea that a measurement must leave a
distinct impression on the environment. Quantum Darwinism then comes into
play from the fact that the robustness of the pointer basis means that
multiple impressions of the result can be imprinted on the environment
without changing the pointer reading, leading to the emergence of an
intersubjectively realized classical world.



>   Without that one can always say, no matter how small the off diagonal
> terms are, there's another equally valid basis in which they aren't small.
>
>
> The arguments for probability assignments given by Zurek (and Carroll and
> Wallace, among others) all rely, at some point, on the "intuition" that
> equal amplitudes equate to equal probabilities. It is that assumption that
> I have shown to be false.
>
>
> If you make it an axiom it ain't false.
>


But an axiom is useless if it does not give results in agreement with
experiment. And the assumption of equal probabilities from equal amplitudes
is disconfirmed by the majority of observers when there are repeated trials
with binary outcomes when both outcomes occur.

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 12:51 PM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/4/2020 5:25 PM, Bruce Kellett wrote:
>
> On Thu, Mar 5, 2020 at 11:59 AM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 3/4/2020 4:34 PM, Bruce Kellett wrote:
>>
>>
>> The crux of the matter is that all branches are equivalent when both
>> outcomes occur on every trial, so all observers will infer that their
>> observed relative frequencies reflect the actual probabilities. Since there
>> are observers for all possibilities for p in the range [0,1], and not all
>> can be correct, no sensible probability value can be assigned to such
>> duplication experiments.
>>
>> The problem is even worse in quantum mechanics, where you measure a state
>> such as
>>
>>  |psi> = a|0> + b|1>.
>>
>> When both outcomes occur on every trial, the result of a sequence of N
>> trials is all possible binary strings of length N, (all 2^N of them). You
>> then notice that this set of all possible strings is obtained whatever
>> non-zero values of a and b you assume. The assignment of some probability
>> relation to the coefficients is thus seen to be meaningless -- all
>> probabilities occur equal for any non-zero choices of a and b.
>>
>>
>> But  E(number|0>) = aN
>>
>
> Where does this come from? The weight of each branch is a^x*b^y for a
> branch with x zeros and y ones.
>
> But this weight is external to the branch, and the 1p probability
> estimates from within the branch are necessarily independent of the overall
> coefficient. The expectation for the number of zeros within any branch
> depends on the branch, but is independent of both a and b.
>
>
> Sorry, I see I didn't make it clear I was assuming the Born rule.  I was
> just pointing out that this makes an assignment of probabilities to the
> multiple worlds which is the same as looking at a single world as a member
> of an ensemble.
>


So you are taking the probability of each branch as (a^x*b^y)^2. (Note that
a and b are amplitudes, not probabilities, so your expectation above should
presumably be E(#0) = a^2*N.) For a = b = 1/sqrt(2), this just means that
the expected number of zeros equals the expected number of ones, namely,
N/2. Which is rather trivial, given that there is exactly one zero and one
one on each trial -- independent of the amplitudes!

But one cannot just assume the Born rule in this case -- one has to use the
data to verify the probabilistic predictions. And the observers on the
majority of branches will get data that disconfirms the Born rule. (For any
value of the probability, the proportion of observers who get data
consistent with this value decreases as N becomes large.)

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread 'Brent Meeker' via Everything List



On 3/4/2020 5:25 PM, Bruce Kellett wrote:
On Thu, Mar 5, 2020 at 11:59 AM 'Brent Meeker' via Everything List 
<everything-list@googlegroups.com> wrote:


On 3/4/2020 4:34 PM, Bruce Kellett wrote:


The crux of the matter is that all branches are equivalent when
both outcomes occur on every trial, so all observers will infer
that their observed relative frequencies reflect the actual
probabilities. Since there are observers for all possibilities
for p in the range [0,1], and not all can be correct, no sensible
probability value can be assigned to such duplication experiments.

The problem is even worse in quantum mechanics, where you measure
a state such as

     |psi> = a|0> + b|1>.

When both outcomes occur on every trial, the result of a sequence
of N trials is all possible binary strings of length N, (all 2^N
of them). You then notice that this set of all possible strings
is obtained whatever non-zero values of a and b you assume. The
> assignment of some probability relation to the coefficients is
thus seen to be meaningless -- all probabilities occur equal for
any non-zero choices of a and b.


But  E(number|0>) = aN


Where does this come from? The weight of each branch is a^x*b^y for a 
branch with x zeros and y ones.
But this weight is external to the branch, and the 1p probability 
estimates from within the branch are necessarily independent of the 
overall coefficient. The expectation for the number of zeros within 
any branch depends on the branch, but is independent of both a and b.


Sorry, I see I didn't make it clear I was assuming the Born rule.  I was 
just pointing out that this makes an assignment of probabilities to the 
multiple worlds which is the same as looking at a single world as a 
member of an ensemble.


Brent

I suspect that you are mixing the 1p and 3p viewpoints. Or else you 
are using the expectation for a single outcome per trial (not that for 
which both outcomes occur on every trial.)


Bruce


>   and Var(number|0>) = abN.  The fraction x within one
> std-deviation of the expected number is a constant
> 
>     F( a-sqrt[ab/N] < x < a+sqrt[ab/N] ),
> 
> so that fraction becomes more and more sharply confined around a as N->oo.

Brent



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread 'Brent Meeker' via Everything List



On 3/4/2020 4:48 PM, Bruce Kellett wrote:
On Thu, Mar 5, 2020 at 10:50 AM 'Brent Meeker' via Everything List 
<everything-list@googlegroups.com> wrote:


On 3/4/2020 2:43 PM, Bruce Kellett wrote:

On Thu, Mar 5, 2020 at 9:15 AM 'Brent Meeker' via Everything List
<everything-list@googlegroups.com> wrote:


Whether MWI is a satisfactory interpretation or not; do you
have a preferred proposal for getting rid of the unobserved
macroscopic states that are predicted by the formalism with a
collapse postulate, e.g. gravitationally induced collapse,
transactional interpretation, or what?


I do not think the problem is solved at the moment. Penrose's
gravitationally induced collapse still lacks a dynamical mechanism
for the collapse when the gravitational superposition becomes
unwieldy.


Have you looked at Laloe's paper which fills this in using some
Bohmian ideas?  arXiv:1905.12047v3 [quant-ph] 6 Sep 2019


Cramer's (Kastner's) transactional interpretation introduces a
whole new "possibility world", and relies on the failed absorber
theory of radiation.


I think the function of the possibility space is to avoid the
problems of the absorber theory.  The absorption is "transacted"
in possibility space.  I'm not sure how it handles free radiation
(e.g. the CMB) since nothing happens except by an exchange of
energy/information between an emitter and absorber.



No-go there. Bohm is the preferred option of many philosophers of
QM, but I think Flash-GRW is growing in plausibility. At least it
does give an underlying stochastic dynamics, so doesn't suffer
the problems of introducing probability that other approaches have.

It is still an open question, as far as I can see. The clear
thing is that Everett plainly fails to make any sense of
probability when all outcomes occur for any measurement.


I don't see that as particularly damning.  It just means you need
another postulate of the form "And /this/ is a probability measure."



But that does not get around the problem that the set of possible 
results from N trials on the state


 |psi> = a|0> + b|1>

for non-zero coefficients a and b, is independent of the coefficients 
a and b. So any experimental test of any probability idea, whether 
imposed by hand or not, is going to show that the probabilities are 
not related to the coefficients or branch weights.


 For example, if you take Zurek's quantum Darwinism to provide an
objective pointer basis then you can say, in this basis,
off-diagonal terms in the reduced density matrix that are so small
they will never be observed can be set to zero and then the
diagonal terms are just the probability of the (one) world that
will be actual.


That is still a probabilistic assertion. And no derivation of 
probabilities for cases in which all outcomes occur is going to be 
successful.


That's not what I'm arguing.  I'm arguing that there is a way to make 
sense of a and b as "weights" whose square magnitudes give 
probabilities.  The significance of Zurek's quantum Darwinism is that it 
provides an objective pointer basis.  Without that one can always say, 
no matter how small the off-diagonal terms are, there's another equally 
valid basis in which they aren't small.




The arguments for probability assignments given by Zurek (and Carroll 
and Wallace, among others) all rely, at some point, on the "intuition" 
that equal amplitudes equate to equal probabilities. It is that 
assumption that I have shown to be false.


If you make it an axiom it ain't false.

Brent



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 11:59 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/4/2020 4:34 PM, Bruce Kellett wrote:
>
>
> The crux of the matter is that all branches are equivalent when both
> outcomes occur on every trial, so all observers will infer that their
> observed relative frequencies reflect the actual probabilities. Since there
> are observers for all possibilities for p in the range [0,1], and not all
> can be correct, no sensible probability value can be assigned to such
> duplication experiments.
>
> The problem is even worse in quantum mechanics, where you measure a state
> such as
>
>  |psi> = a|0> + b|1>.
>
> When both outcomes occur on every trial, the result of a sequence of N
> trials is all possible binary strings of length N, (all 2^N of them). You
> then notice that this set of all possible strings is obtained whatever
> non-zero values of a and b you assume. The assignment of some probability
> relation to the coefficients is thus seen to be meaningless -- all
> probability values are equally consistent with the data for any non-zero
> choices of a and b.
>
>
> But  E(number|0>) = aN
>

Where does this come from? The weight of each branch is a^x*b^y for a
branch with x zeros and y ones. But this weight is external to the branch,
and the 1p probability estimates from within the branch are necessarily
independent of the overall coefficient. The expectation for the number of
zeros within any branch depends on the branch, but is independent of both a
and b. I suspect that you are mixing the 1p and 3p viewpoints. Or else you
are using the expectation for a single outcome per trial (not that for
which both outcomes occur on every trial.)

Bruce
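
A minimal sketch of this 1p independence (an illustration, assuming simple
branch counting): enumerating all branches for a small N shows that the set of
within-branch frequency estimates makes no reference to a or b.

    from itertools import product
    from fractions import Fraction

    N = 6
    # 1p view: within a branch, the observer's estimate of p is just the
    # relative frequency of 0s; the set of possible estimates is fixed by
    # N alone, with no reference to the coefficients a and b
    estimates = sorted({Fraction(bits.count("0"), N)
                        for bits in ("".join(t) for t in product("01", repeat=N))})
    print(estimates)   # 0, 1/6, 1/3, 1/2, 2/3, 5/6, 1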



>   and Var(number|0>) = abN.  The fraction x within one std-deviation of
> the expected number is a constant
>
> F( a-sqrt[ab/N] < x < a+sqrt[ab/N] ).
> So that fraction becomes more and more sharply confined around a as N->oo.
>
> Brent
>



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread 'Brent Meeker' via Everything List



On 3/4/2020 4:34 PM, Bruce Kellett wrote:
On Thu, Mar 5, 2020 at 10:39 AM Russell Standish
<li...@hpcoders.com.au> wrote:


On Thu, Mar 05, 2020 at 09:46:34AM +1100, Bruce Kellett wrote:

> The greater problem is that any idea of probability founders
when all outcomes
> occur for any measurement. Or have you not followed the
arguments I have been
> making that shows this to be the case?
>

I must admit I haven't followed the arguments either - admittedly, I
haven't read your cited material.

ISTM - probability is all about what an observer observes. Since the
observer cannot see all outcomes, an objection based on all outcomes
occurring seems moot to me.


The fact that the observer cannot see all outcomes is actually central 
to the argument. If, in the person-duplication scenario, the 
participant naively assumes a probability p = 0.5 for each outcome, 
such an intuition can only be tested by repeating the duplication a 
number of times and inferring a probability value from the observed 
outcomes. Since each observer can see only the outcomes along his or 
her particular branch (and, ipso facto, is unaware of the outcomes on 
other branches), as the number of trials N becomes very large, only a 
vanishingly small proportion of observers will confirm their 50/50 
prediction. This is a trivial calculation involving only the binomial 
coefficient -- Brent and I discussed this a while ago, and Brent could 
not fault the maths.


The crux of the matter is that all branches are equivalent when both 
outcomes occur on every trial, so all observers will infer that their 
observed relative frequencies reflect the actual probabilities. Since 
there are observers for all possibilities for p in the range [0,1], 
and not all can be correct, no sensible probability value can be 
assigned to such duplication experiments.


The problem is even worse in quantum mechanics, where you measure a 
state such as


 |psi> = a|0> + b|1>.

When both outcomes occur on every trial, the result of a sequence of N 
trials is all possible binary strings of length N, (all 2^N of them). 
You then notice that this set of all possible strings is obtained 
whatever non-zero values of a and b you assume. The assignment of some 
probability relation to the coefficients is thus seen to be 
meaningless -- all probability values are equally consistent with the 
data for any non-zero choices of a and b.


But  E(number|0>) = aN  and Var(number|0>) = abN.  The fraction x within 
one std-deviation of the expected number is a constant


    F( a-sqrt[ab/N] < x < a+sqrt[ab/N] ). So that fraction becomes more and more sharply confined around a as N->oo.

Brent
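
A quick numerical check of this concentration claim (a sketch, reading a and b
as the two branch weights with a + b = 1, as the notation above suggests):

    from math import exp, lgamma, log, sqrt

    def binom_pmf(N, k, p):
        # binomial pmf computed in log space so it stays finite for large N
        logc = lgamma(N + 1) - lgamma(k + 1) - lgamma(N - k + 1)
        return exp(logc + k * log(p) + (N - k) * log(1 - p))

    a = 0.7                            # weight of |0>; b = 1 - a
    for N in (100, 1000, 10000):
        sd = sqrt(a * (1 - a) / N)     # std-deviation of the fraction x
        mass = sum(binom_pmf(N, k, a)
                   for k in range(N + 1) if a - sd <= k / N <= a + sd)
        print(N, round(sd, 4), round(mass, 3))
    # the captured mass stays near 0.68 while the interval around a narrows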




You may counter that the assumption that an observer cannot see all
outcomes is an extra thing "put in by hand", and you would be right,
of course. It is not part of the Schroedinger equation. But I would
strongly suspect that this assumption will be a natural outcome of a
proper theory of consciousness, if/when we have one. Indeed, I
highlight it in my book with the name "PROJECTION postulate".

This is, of course, at the heart of the 1p/3p distinction - and of
course the classic taunts and misunderstandings between BM and JC
(1p-3p confusion).


I know that it is a factor of the 1p/3p distinction. My complaint has 
frequently been that advocates of the "p = 0.5 is obvious" school are 
often guilty of this confusion.


Incidentally, I've started reading Colin Hales's "Revolution of
Scientific Structure", a fellow Melburnian and member of this
list. The interesting proposition is that Colin is proposing
we're on the verge of a Kuhnian paradigm shift in relation to the role
of the observer in science, and that this sort of misunderstanding
is a classic symptom of such a shift.



Elimination of the observer from physics was one of the prime 
motivations for Everett's 'relative state' idea. Given that 
'measurement' and 'the observer' play central roles in variants of the 
'Copenhagen' interpretation.


Bruce


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 10:50 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/4/2020 2:43 PM, Bruce Kellett wrote:
>
> On Thu, Mar 5, 2020 at 9:15 AM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>>
>> Whether MWI is a satisfactory interpretation or not; do you have a
>> preferred proposal for getting rid of the unobserved macroscopic states
>> that are predicted by the formalism with a collapse postulate, e.g.
>> gravitationally induced collapse, transactional interpretation, or what?
>>
>
> I do not think the problem is solved at the moment. Penrose's
> gravitationally induced collapse still lacks a dynamical mechanism for the
> collapse when the gravitational superpositions become unwieldy.
>
>
> Have you looked at Laloe's paper which fills this in using some Bohmian
> ideas?  arXiv:1905.12047v3 [quant-ph] 6 Sep 2019
>
> Cramer's (Kastner's) transactional interpretation introduces a whole new
> "possibility world", and relies on the failed absorber theory of radiation.
>
>
> I think the function of the possibility space is to avoid the problems of
> the absorber theory.  The absorption is "transacted" in possibility space.
> I'm not sure how it handles free radiation (e.g. the CMB) since nothing
> happens except by an exchange of energy/information between an emitter and
> absorber.
>
>
> No-go there. Bohm is the preferred option of many philosophers of QM, but
> I think Flash-GRW is growing in plausibility. At least it does give an
> underlying stochastic dynamics, so doesn't suffer the problems of
> introducing probability that other approaches have.
>
> It is still an open question, as far as I can see. The clear thing is that
> Everett plainly fails to make any sense of probability when all outcomes
> occur for any measurement.
>
>
> I don't see that as particularly damning.  It just means you need another
> postulate of the form "And *this* is a probability measure."
>


But that does not get around the problem that the set of possible results
from N trials on the state

 |psi> = a|0> + b|1>

for non-zero coefficients a and b, is independent of the coefficients a and
b. So any experimental test of any probability idea, whether imposed by
hand or not, is going to show that the probabilities are not related to the
coefficients or branch weights.



>  For example, if you take Zurek's quantum Darwinism to provide an
> objective pointer basis then you can say, in this basis, off-diagonal terms
> in the reduced density matrix that are so small they will never be observed
> can be set to zero and then the diagonal terms are just the probability of
> the (one) world that will be actual.
>

That is still a probabilistic assertion. And no derivation of probabilities
for cases in which all outcomes occur is going to be successful.

The arguments for probability assignments given by Zurek (and Carroll and
Wallace, among others) all rely, at some point, on the "intuition" that
equal amplitudes equate to equal probabilities. It is that assumption that
I have shown to be false.

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 10:39 AM Stathis Papaioannou 
wrote:

> On Thu, 5 Mar 2020 at 09:46, Bruce Kellett  wrote:
>
>>
>> The greater problem is that any idea of probability founders when all
>> outcomes occur for any measurement. Or have you not followed the arguments
>> I have been making that shows this to be the case?
>>
>
> I think it worth noting that to some people it is obvious that if an
> entity is to be duplicated in two places it should have a 1/2 expectation
> of finding itself in one or other place while to other people it is obvious
> that there should be no such expectation.
>


Hence my point that intuition is usually faulty in such cases -- the
straightforward testing of any intuition with repeated trials shows the
unreliability of such intuitions.

This seems to be an immediate judgement on considering the question, with
> attempts at rational justification perhaps following but not being the
> primary determinant of belief. A parallel is Newcomb’s paradox: on learning
> of it some people immediately feel it is obvious you should choose one box
> and others immediately feel you should choose both boxes.
>


Newcomb's 'paradox' seems to be just another illustration of the
unreliability of intuition in these situations. Except that Newcomb's
paradox relies on the unrealistic assumption of a perfect predictor. No
such problems beset the argument against intuition in the case of classical
duplication, or the case of binary quantum measurements. (See my simple
outline of the arguments in my reply to Russell.)

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 10:39 AM Russell Standish 
wrote:

> On Thu, Mar 05, 2020 at 09:46:34AM +1100, Bruce Kellett wrote:
>
> > The greater problem is that any idea of probability founders when all
> outcomes
> > occur for any measurement. Or have you not followed the arguments I have
> been
> > making that shows this to be the case?
> >
>
> I must admit I haven't followed the arguments either - admittedly, I
> haven't read your cited material.
>
> ISTM - probability is all about what an observer observes. Since the
> observer cannot see all outcomes, an objection based on all outcomes
> occurring seems moot to me.
>

The fact that the observer cannot see all outcomes is actually central to
the argument. If, in the person-duplication scenario, the participant
naively assumes a probability p = 0.5 for each outcome, such an intuition
can only be tested by repeating the duplication a number of times and
inferring a probability value from the observed outcomes. Since each
observer can see only the outcomes along his or her particular branch (and,
ipso facto, is unaware of the outcomes on other branches), as the number of
trials N becomes very large, only a vanishingly small proportion of
observers will confirm their 50/50 prediction. This is a trivial
calculation involving only the binomial coefficient -- Brent and I
discussed this a while ago, and Brent could not fault the maths.
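
That calculation is easy to reproduce numerically (a minimal sketch, assuming
simple branch counting, i.e. all 2^N branches counted with equal weight):

    from math import comb

    # fraction of the 2^N equally weighted branches whose relative frequency
    # of 0s exactly confirms the naive 50/50 prediction
    for N in (10, 100, 1000, 10000):
        print(N, comb(N, N // 2) / 2 ** N)
    # 0.246..., 0.0796..., 0.0252..., 0.00797...: the confirming proportion
    # shrinks steadily as N grows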

The crux of the matter is that all branches are equivalent when both
outcomes occur on every trial, so all observers will infer that their
observed relative frequencies reflect the actual probabilities. Since there
are observers for all possibilities for p in the range [0,1], and not all
can be correct, no sensible probability value can be assigned to such
duplication experiments.

The problem is even worse in quantum mechanics, where you measure a state
such as

 |psi> = a|0> + b|1>.

When both outcomes occur on every trial, the result of a sequence of N
trials is all possible binary strings of length N, (all 2^N of them). You
then notice that this set of all possible strings is obtained whatever
non-zero values of a and b you assume. The assignment of some probability
relation to the coefficients is thus seen to be meaningless -- all
probability values are equally consistent with the data for any non-zero
choices of a and b.
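
The observation is easy to exhibit for small N (a toy sketch; the amplitude
pairs below are chosen so that a^2 + b^2 = 1, with weights read via the Born
rule):

    from itertools import product

    N = 4
    # every binary string of length N occurs as a branch, for any non-zero a, b
    branch_set = ["".join(bits) for bits in product("01", repeat=N)]
    assert len(branch_set) == 2 ** N

    def born_weight(s, a, b):
        # external (3p) weight of a branch: a^2 per '0' and b^2 per '1'
        w = 1.0
        for bit in s:
            w *= a * a if bit == "0" else b * b
        return w

    # the same 2^N strings appear whatever the coefficients;
    # only their external weights change
    for s in branch_set[:3]:
        print(s, round(born_weight(s, 0.6, 0.8), 4),
              round(born_weight(s, 0.8, 0.6), 4))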




> You may counter that the assumption that an observer cannot see all
> outcomes is an extra thing "put in by hand", and you would be right,
> of course. It is not part of the Schroedinger equation. But I would
> strongly suspect that this assumption will be a natural outcome of a
> proper theory of consciousness, if/when we have one. Indeed, I
> highlight it in my book with the name "PROJECTION postulate".
>
> This is, of course, at the heart of the 1p/3p distinction - and of
> course the classic taunts and misunderstandings between BM and JC
> (1p-3p confusion).
>

I know that it is a factor of the 1p/3p distinction. My complaint has
frequently been that advocates of the "p = 0.5 is obvious" school are often
guilty of this confusion.

Incidentally, I've started reading Colin Hales's "Revolution of
> Scientific Structure", a fellow Melburnian and member of this
> list. The interesting proposition is that Colin is proposing
> we're on the verge of a Kuhnian paradigm shift in relation to the role
> of the observer in science, and that this sort of misunderstanding
> is a classic symptom of such a shift.
>


Elimination of the observer from physics was one of the prime motivations
for Everett's 'relative state' idea. Given that 'measurement' and 'the
observer' play central roles in variants of the 'Copenhagen' interpretation.

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread 'Brent Meeker' via Everything List



On 3/4/2020 2:43 PM, Bruce Kellett wrote:
On Thu, Mar 5, 2020 at 9:15 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:


On 3/4/2020 1:54 PM, Bruce Kellett wrote:

On Wed, Mar 4, 2020 at 11:01 PM Stathis Papaioannou <stath...@gmail.com>
wrote:


Probability derived from self-locating uncertainty is an idea
independent of any particular physics. It is also independent
of any theory of consciousness, since we can imagine a
non-conscious observer reasoning in the same way. To some
people it seems trivially obvious, to others it seems very
strange. I don’t know if which group one falls into
correlates with any other beliefs or attitudes.


As I said, self-locating uncertainty is just another idea imposed
on the quantum formalism without any real theoretical foundation
-- "it is just imposed by fiat on a deterministic theory." If
nothing else, this shows that Carroll's claim that Everett is
just "plain-vanilla" quantum mechanics, without any additional
assumptions, is a load of self-deluded hogwash.


Whether MWI is a satisfactory interpretation or not; do you have a
preferred proposal for getting rid of the unobserved macroscopic
states that are predicted by the formalism with a collapse
postulate, e.g. gravitationally induced collapse, transactional
interpretation, or what?


I do not think the problem is solved at the moment. Penrose's 
gravitationally induced collapse still lacks a dynamical mechanism for 
the collapse when the gravitational superpositions become unwieldy.


Have you looked at Laloe's paper which fills this in using some Bohmian 
ideas?  arXiv:1905.12047v3 [quant-ph] 6 Sep 2019


Cramer's (Kastner's) transactional interpretation introduces a whole 
new "possibility world", and relies on the failed absorber theory of 
radiation.


I think the function of the possibility space is to avoid the problems 
of the absorber theory.  The absorption is "transacted" in possibility 
space.  I'm not sure how it handles free radiation (e.g. the CMB) since 
nothing happens except by an exchange of energy/information between an 
emitter and absorber.



No-go there. Bohm is the preferred option of many philosophers of QM, 
but I think Flash-GRW is growing in plausibility. At least it does 
give an underlying stochastic dynamics, so doesn't suffer the problems 
of introducing probability that other approaches have.


It is still an open question, as far as I can see. The clear thing is 
that Everett plainly fails to make any sense of probability when all 
outcomes occur for any measurement.


I don't see that as particularly damning.  It just means you need another 
postulate of the form "And /this/ is a probability measure."  For 
example, if you take Zurek's quantum Darwinism to provide an objective 
pointer basis then you can say, in this basis, off-diagonal terms in the 
reduced density matrix that are so small they will never be observed can 
be set to zero and then the diagonal terms are just the probability of 
the (one) world that will be actual.


Brent
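
For concreteness, a toy version of that prescription (a sketch, assuming real
coefficients a, b with a^2 + b^2 = 1 and an already-selected pointer basis):

    # reduced density matrix for a|0> + b|1> after decoherence
    a, b = 0.6, 0.8
    eps = 1e-12                  # tiny residual overlap left by the environment

    rho = [[a * a, a * b * eps],
           [a * b * eps, b * b]]

    # in the pointer basis, drop the unobservably small off-diagonal terms...
    rho_diag = [[rho[0][0], 0.0],
                [0.0, rho[1][1]]]

    # ...and read the diagonal as the outcome probabilities
    print(rho_diag[0][0], rho_diag[1][1])   # 0.36 0.64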



Bruce


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Russell Standish
On Thu, Mar 05, 2020 at 09:46:34AM +1100, Bruce Kellett wrote:

> 
> The greater problem is that any idea of probability founders when all outcomes
> occur for any measurement. Or have you not followed the arguments I have been
> making that shows this to be the case?
> 

I must admit I haven't followed the arguments either - admittedly, I
haven't read your cited material.

ISTM - probability is all about what an observer observes. Since the
observer cannot see all outcomes, an objection based on all outcomes
occurring seems moot to me.

You may counter that the assumption that an observer cannot see all
outcomes is an extra thing "put in by hand", and you would be right,
of course. It is not part of the Schroedinger equation. But I would
strongly suspect that this assumption will be a natural outcome of a
proper theory of consciousness, if/when we have one. Indeed, I
highlight it in my book with the name "PROJECTION postulate".

This is, of course, at the heart of the 1p/3p distinction - and of
course the classic taunts and misunderstandings between BM and JC
(1p-3p confusion).

Incidentally, I've started reading Colin Hales's "Revolution of
Scientific Structure", a fellow Melburnian and member of this
list. The interesting proposition is that Colin is proposing
we're on the verge of a Kuhnian paradigm shift in relation to the role
of the observer in science, and that this sort of misunderstanding
is a classic symptom of such a shift.

Cheers
-- 


Dr Russell StandishPhone 0425 253119 (mobile)
Principal, High Performance Coders hpco...@hpcoders.com.au
  http://www.hpcoders.com.au




Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Stathis Papaioannou
On Thu, 5 Mar 2020 at 09:46, Bruce Kellett  wrote:

> On Thu, Mar 5, 2020 at 9:31 AM Stathis Papaioannou 
> wrote:
>
>> On Thu, 5 Mar 2020 at 08:54, Bruce Kellett  wrote:
>>
>>> On Wed, Mar 4, 2020 at 11:01 PM Stathis Papaioannou 
>>> wrote:
>>>
 On Fri, 28 Feb 2020 at 08:40, Bruce Kellett 
 wrote:

> On Fri, Feb 28, 2020 at 4:21 AM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 2/27/2020 3:45 AM, Bruce Kellett wrote:
>>
>>
>> That is probably what all this argument is actually about -- the
>> maths show that there are no probabilities. Because there are no unique
>> probabilities in the classical duplication case, the concept of 
>> probability
>> has been shown to be inadmissible in the deterministic (Everettian) 
>> quantum
>> case. The appeal by people like Deutsch and Wallace to betting quotients,
>> or quantum credibility measures, are just ways of forcing a probabilistic
>> interpretation on to quantum mechanics by hand -- they are not 
>> derivations
>> of probability from within the deterministic theory. There are no
>> probabilities in the deterministic theory, even from the 1p perspective,
>> because the data are consistent with any prior assignment of a 
>> probability
>> measure.
>>
>>
>> The probability enters from the self-location uncertainty; which in
>> other terms is saying: Assume each branch has the same probability (or 
>> some
>> weighting) for you being in that branch.  Then that is the probability 
>> that
>> you have observed the sequence of events that define that branch.
>>
>
> I think that is Sean Carroll's approach. I am uncertain as to whether
> this really works or not. The concept of a 'weight' or 'thickness' for 
> each
> branch is difficult to reconcile with the first-person experience of
> probability: which is obtained within the branch, so is independent of any
> overall 'weight'. But that aside, self-locating uncertainty is just 
> another
> idea imposed on quantum mechanics and, like decision-theoretic ideas, it 
> is
> without theoretical foundation -- it is just imposed by fiat on a
> deterministic theory. It makes  probability a subjective notion imposed on
> a theory that is supposedly objective: there is an objective probability
> that a radioactive nucleus will decay in a certain time period --
> independent of our subjective impressions, or self-location. (I can 
> develop
> this thought further, if required, but I think it shows Sean's approach to
> fail.)
>

 Probability derived from self-locating uncertainty is an idea
 independent of any particular physics. It is also independent of any theory
 of consciousness, since we can imagine a non-conscious observer reasoning
 in the same way. To some people it seems trivially obvious, to others it
 seems very strange. I don’t know if which group one falls into correlates
 with any other beliefs or attitudes.

>>>
>>> As I said, self-locating uncertainty is just another idea imposed on the
>>> quantum formalism without any real theoretical foundation -- "it is just
>>> imposed by fiat on a deterministic theory." If nothing else, this shows
>>> that Carroll's claim that Everett is just "plain-vanilla" quantum
>>> mechanics, without any additional assumptions, is a load of self-deluded
>>> hogwash.
>>>
>>
>> And as I said, probabilities derived from self-locating uncertainty are,
>> for many people, trivially obvious, just a special case of frequentist
>> inference.
>>
>
> That is not a particularly solid foundation on which to base a scientific
> theory. The trivially obvious is seldom useful.
>
> The greater problem is that any idea of probability founders when all
> outcomes occur for any measurement. Or have you not followed the arguments
> I have been making that shows this to be the case?
>

I think it worth noting that to some people it is obvious that if an entity
is to be duplicated in two places it should have a 1/2 expectation of
finding itself in one or other place while to other people it is obvious
that there should be no such expectation. This seems to be an immediate
judgement on considering the question, with attempts at rational
justification perhaps following but not being the primary determinant of
belief. A parallel is Newcomb’s paradox: on learning of it some people
immediately feel it is obvious you should choose one box and others
immediately feel you should choose both boxes.

> --
Stathis Papaioannou


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 9:31 AM Stathis Papaioannou 
wrote:

> On Thu, 5 Mar 2020 at 08:54, Bruce Kellett  wrote:
>
>> On Wed, Mar 4, 2020 at 11:01 PM Stathis Papaioannou 
>> wrote:
>>
>>> On Fri, 28 Feb 2020 at 08:40, Bruce Kellett 
>>> wrote:
>>>
 On Fri, Feb 28, 2020 at 4:21 AM 'Brent Meeker' via Everything List <
 everything-list@googlegroups.com> wrote:

> On 2/27/2020 3:45 AM, Bruce Kellett wrote:
>
>
> That is probably what all this argument is actually about -- the maths
> show that there are no probabilities. Because there are no unique
> probabilities in the classical duplication case, the concept of 
> probability
> has been shown to be inadmissible in the deterministic (Everettian) 
> quantum
> case. The appeal by people like Deutsch and Wallace to betting quotients,
> or quantum credibility measures, are just ways of forcing a probabilistic
> interpretation on to quantum mechanics by hand -- they are not derivations
> of probability from within the deterministic theory. There are no
> probabilities in the deterministic theory, even from the 1p perspective,
> because the data are consistent with any prior assignment of a probability
> measure.
>
>
> The probability enters from the self-location uncertainty; which in
> other terms is saying: Assume each branch has the same probability (or 
> some
> weighting) for you being in that branch.  Then that is the probability 
> that
> you have observed the sequence of events that define that branch.
>

 I think that is Sean Carroll's approach. I am uncertain as to whether
 this really works or not. The concept of a 'weight' or 'thickness' for each
 branch is difficult to reconcile with the first-person experience of
 probability: which is obtained within the branch, so is independent of any
 overall 'weight'. But that aside, self-locating uncertainty is just another
 idea imposed on quantum mechanics and, like decision-theoretic ideas, it is
 without theoretical foundation -- it is just imposed by fiat on a
 deterministic theory. It makes  probability a subjective notion imposed on
 a theory that is supposedly objective: there is an objective probability
 that a radioactive nucleus will decay in a certain time period --
 independent of our subjective impressions, or self-location. (I can develop
 this thought further, if required, but I think it shows Sean's approach to
 fail.)

>>>
>>> Probability derived from self-locating uncertainty is an idea
>>> independent of any particular physics. It is also independent of any theory
>>> of consciousness, since we can imagine a non-conscious observer reasoning
>>> in the same way. To some people it seems trivially obvious, to others it
>>> seems very strange. I don’t know if which group one falls into correlates
>>> with any other beliefs or attitudes.
>>>
>>
>> As I said, self-locating uncertainty is just another idea imposed on the
>> quantum formalism without any real theoretical foundation -- "it is just
>> imposed by fiat on a deterministic theory." If nothing else, this shows
>> that Carroll's claim that Everett is just "plain-vanilla" quantum
>> mechanics, without any additional assumptions, is a load of self-deluded
>> hogwash.
>>
>
> And as I said, probabilities derived from self-locating uncertainty are,
> for many people, trivially obvious, just a special case of frequentist
> inference.
>

That is not a particularly solid foundation on which to base a scientific
theory. The trivially obvious is seldom useful.

The greater problem is that any idea of probability founders when all
outcomes occur for any measurement. Or have you not followed the arguments
I have been making that shows this to be the case?

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Bruce Kellett
On Thu, Mar 5, 2020 at 9:15 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 3/4/2020 1:54 PM, Bruce Kellett wrote:
>
> On Wed, Mar 4, 2020 at 11:01 PM Stathis Papaioannou 
> wrote:
>
>>
>> Probability derived from self-locating uncertainty is an idea independent
>> of any particular physics. It is also independent of any theory of
>> consciousness, since we can imagine a non-conscious observer reasoning in
>> the same way. To some people it seems trivially obvious, to others it seems
>> very strange. I don’t know if which group one falls into correlates with
>> any other beliefs or attitudes.
>>
>
> As I said, self-locating uncertainty is just another idea imposed on the
> quantum formalism without any real theoretical foundation -- "it is just
> imposed by fiat on a deterministic theory." If nothing else, this shows
> that Carroll's claim that Everett is just "plain-vanilla" quantum
> mechanics, without any additional assumptions, is a load of self-deluded
> hogwash.
>
>
> Whether MWI is a satisfactory interpretation or not; do you have a
> preferred proposal for getting rid of the unobserved macroscopic states
> that are predicted by the formalism with a collapse postulate, e.g.
> gravitationally induced collapse, transactional interpretation, or what?
>

I do not think the problem is solved at the moment. Penrose's gravitationally
induced collapse still lacks a dynamical mechanism for the collapse when
the gravitational superpositions become unwieldy. Cramer's (Kastner's)
transactional interpretation introduces a whole new "possibility world",
and relies on the failed absorber theory of radiation. No-go there. Bohm is
the preferred option of many philosophers of QM, but I think Flash-GRW is
growing in plausibility. At least it does give an underlying stochastic
dynamics, so doesn't suffer the problems of introducing probability that
other approaches have.

It is still an open question, as far as I can see. The clear thing is that
Everett plainly fails to make any sense of probability when all outcomes
occur for any measurement.

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Stathis Papaioannou
On Thu, 5 Mar 2020 at 08:54, Bruce Kellett  wrote:

> On Wed, Mar 4, 2020 at 11:01 PM Stathis Papaioannou 
> wrote:
>
>> On Fri, 28 Feb 2020 at 08:40, Bruce Kellett 
>> wrote:
>>
>>> On Fri, Feb 28, 2020 at 4:21 AM 'Brent Meeker' via Everything List <
>>> everything-list@googlegroups.com> wrote:
>>>
 On 2/27/2020 3:45 AM, Bruce Kellett wrote:


 That is probably what all this argument is actually about -- the maths
 show that there are no probabilities. Because there are no unique
 probabilities in the classical duplication case, the concept of probability
 has been shown to be inadmissible in the deterministic (Everettian) quantum
 case. The appeal by people like Deutsch and Wallace to betting quotients,
 or quantum credibility measures, are just ways of forcing a probabilistic
 interpretation on to quantum mechanics by hand -- they are not derivations
 of probability from within the deterministic theory. There are no
 probabilities in the deterministic theory, even from the 1p perspective,
 because the data are consistent with any prior assignment of a probability
 measure.


 The probability enters from the self-location uncertainty; which in
 other terms is saying: Assume each branch has the same probability (or some
 weighting) for you being in that branch.  Then that is the probability that
 you have observed the sequence of events that define that branch.

>>>
>>> I think that is Sean Carroll's approach. I am uncertain as to whether
>>> this really works or not. The concept of a 'weight' or 'thickness' for each
>>> branch is difficult to reconcile with the first-person experience of
>>> probability: which is obtained within the branch, so is independent of any
>>> overall 'weight'. But that aside, self-locating uncertainty is just another
>>> idea imposed on quantum mechanics and, like decision-theoretic ideas, it is
>>> without theoretical foundation -- it is just imposed by fiat on a
>>> deterministic theory. It makes  probability a subjective notion imposed on
>>> a theory that is supposedly objective: there is an objective probability
>>> that a radioactive nucleus will decay in a certain time period --
>>> independent of our subjective impressions, or self-location. (I can develop
>>> this thought further, if required, but I think it shows Sean's approach to
>>> fail.)
>>>
>>
>> Probability derived from self-locating uncertainty is an idea independent
>> of any particular physics. It is also independent of any theory of
>> consciousness, since we can imagine a non-conscious observer reasoning in
>> the same way. To some people it seems trivially obvious, to others it seems
>> very strange. I don’t know if which group one falls into correlates with
>> any other beliefs or attitudes.
>>
>
> As I said, self-locating uncertainty is just another idea imposed on the
> quantum formalism without any real theoretical foundation -- "it is just
> imposed by fiat on a deterministic theory." If nothing else, this shows
> that Carroll's claim that Everett is just "plain-vanilla" quantum
> mechanics, without any additional assumptions, is a load of self-deluded
> hogwash.
>

And as I said, probabilities derived from self-locating uncertainty are, for
many people, trivially obvious, just a special case of frequentist
inference.

> --
Stathis Papaioannou



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread 'Brent Meeker' via Everything List



On 3/4/2020 1:54 PM, Bruce Kellett wrote:
On Wed, Mar 4, 2020 at 11:01 PM Stathis Papaioannou <stath...@gmail.com>
wrote:


On Fri, 28 Feb 2020 at 08:40, Bruce Kellett <bhkellet...@gmail.com> wrote:

On Fri, Feb 28, 2020 at 4:21 AM 'Brent Meeker' via Everything
List <everything-list@googlegroups.com> wrote:

On 2/27/2020 3:45 AM, Bruce Kellett wrote:


That is probably what all this argument is actually about
-- the maths show that there are no probabilities.
Because there are no unique probabilities in the
classical duplication case, the concept of probability
has been shown to be inadmissible in the deterministic
(Everettian) quantum case. The appeal by people like
Deutsch and Wallace to betting quotients, or quantum
credibility measures, are just ways of forcing a
probabilistic interpretation on to quantum mechanics by
hand -- they are not derivations of probability from
within the deterministic theory. There are no
probabilities in the deterministic theory, even from the
1p perspective, because the data are consistent with any
prior assignment of a probability measure.


The probability enters from the self-location uncertainty;
which in other terms is saying: Assume each branch has the
same probability (or some weighting) for you being in that
branch.  Then that is the probability that you have
observed the sequence of events that define that branch.


I think that is Sean Carroll's approach. I am uncertain as to
whether this really works or not. The concept of a 'weight' or
'thickness' for each branch is difficult to reconcile with the
first-person experience of probability: which is obtained
within the branch, so is independent of any overall 'weight'.
But that aside, self-locating uncertainty is just another idea
imposed on quantum mechanics and, like decision-theoretic
ideas, it is without theoretical foundation -- it is just
imposed by fiat on a deterministic theory. It makes
 probability a subjective notion imposed on a theory that is
supposedly objective: there is an objective probability that a
radioactive nucleus will decay in a certain time period --
independent of our subjective impressions, or self-location.
(I can develop this thought further, if required, but I think
it shows Sean's approach to fail.)


Probability derived from self-locating uncertainty is an idea
independent of any particular physics. It is also independent of
any theory of consciousness, since we can imagine a non-conscious
observer reasoning in the same way. To some people it seems
trivially obvious, to others it seems very strange. I don’t know
if which group one falls into correlates with any other beliefs or
attitudes.


As I said, self-locating uncertainty is just another idea imposed on 
the quantum formalism without any real theoretical foundation -- "it 
is just imposed by fiat on a deterministic theory." If nothing else, 
this shows that Carroll's claim that Everett is just "plain-vanilla" 
quantum mechanics, without any additional assumptions, is a load of 
self-deluded hogwash.


Whether MWI is a satisfactory interpretation or not; do you have a 
preferred proposal for getting rid of the unobserved macroscopic states 
that are predicted by the formalism with a collapse postulate, e.g. 
gravitationally induced collapse, transactional interpretation, or what?


Brent



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Bruce Kellett
On Wed, Mar 4, 2020 at 11:01 PM Stathis Papaioannou 
wrote:

> On Fri, 28 Feb 2020 at 08:40, Bruce Kellett  wrote:
>
>> On Fri, Feb 28, 2020 at 4:21 AM 'Brent Meeker' via Everything List <
>> everything-list@googlegroups.com> wrote:
>>
>>> On 2/27/2020 3:45 AM, Bruce Kellett wrote:
>>>
>>>
>>> That is probably what all this argument is actually about -- the maths
>>> show that there are no probabilities. Because there are no unique
>>> probabilities in the classical duplication case, the concept of probability
>>> has been shown to be inadmissible in the deterministic (Everettian) quantum
>>> case. The appeal by people like Deutsch and Wallace to betting quotients,
>>> or quantum credibility measures, are just ways of forcing a probabilistic
>>> interpretation on to quantum mechanics by hand -- they are not derivations
>>> of probability from within the deterministic theory. There are no
>>> probabilities in the deterministic theory, even from the 1p perspective,
>>> because the data are consistent with any prior assignment of a probability
>>> measure.
>>>
>>>
>>> The probability enters from the self-location uncertainty; which in
>>> other terms is saying: Assume each branch has the same probability (or some
>>> weighting) for you being in that branch.  Then that is the probability that
>>> you have observed the sequence of events that define that branch.
>>>
>>
>> I think that is Sean Carroll's approach. I am uncertain as to whether
>> this really works or not. The concept of a 'weight' or 'thickness' for each
>> branch is difficult to reconcile with the first-person experience of
>> probability: which is obtained within the branch, so is independent of any
>> overall 'weight'. But that aside, self-locating uncertainty is just another
>> idea imposed on quantum mechanics and, like decision-theoretic ideas, it is
>> without theoretical foundation -- it is just imposed by fiat on a
>> deterministic theory. It makes  probability a subjective notion imposed on
>> a theory that is supposedly objective: there is an objective probability
>> that a radioactive nucleus will decay in a certain time period --
>> independent of our subjective impressions, or self-location. (I can develop
>> this thought further, if required, but I think it shows Sean's approach to
>> fail.)
>>
>
> Probability derived from self-locating uncertainty is an idea independent
> of any particular physics. It is also independent of any theory of
> consciousness, since we can imagine a non-conscious observer reasoning in
> the same way. To some people it seems trivially obvious, to others it seems
> very strange. I don’t know if which group one falls into correlates with
> any other beliefs or attitudes.
>

As I said, self-locating uncertainty is just another idea imposed on the
quantum formalism without any real theoretical foundation -- "it is just
imposed by fiat on a deterministic theory." If nothing else, this shows
that Carroll's claim that Everett is just "plain-vanilla" quantum
mechanics, without any additional assumptions, is a load of self-deluded
hogwash.

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Bruno Marchal

> On 4 Mar 2020, at 13:00, Stathis Papaioannou  wrote:
> 
> 
> 
> On Fri, 28 Feb 2020 at 08:40, Bruce Kellett <bhkellet...@gmail.com> wrote:
> On Fri, Feb 28, 2020 at 4:21 AM 'Brent Meeker' via Everything List 
> <everything-list@googlegroups.com> wrote:
> On 2/27/2020 3:45 AM, Bruce Kellett wrote:
>> 
>> That is probably what all this argument is actually about -- the maths show 
>> that there are no probabilities. Because there are no unique probabilities 
>> in the classical duplication case, the concept of probability has been shown 
>> to be inadmissible in the deterministic (Everettian) quantum case. The 
>> appeal by people like Deutsch and Wallace to betting quotients, or quantum 
>> credibility measures, are just ways of forcing a probabilistic 
>> interpretation on to quantum mechanics by hand -- they are not derivations 
>> of probability from within the deterministic theory. There are no 
>> probabilities in the deterministic theory, even from the 1p perspective, 
>> because the data are consistent with any prior assignment of a probability 
>> measure.
> 
> The probability enters from the self-location uncertainty; which in other 
> terms is saying: Assume each branch has the same probability (or some 
> weighting) for you being in that branch.  Then that is the probability that 
> you have observed the sequence of events that define that branch.
> 
> I think that is Sean Carroll's approach. I am uncertain as to whether this 
> really works or not. The concept of a 'weight' or 'thickness' for each branch 
> is difficult to reconcile with the first-person experience of probability: 
> which is obtained within the branch, so is independent of any overall 
> 'weight'. But that aside, self-locating uncertainty is just another idea 
> imposed on quantum mechanics and, like decision-theoretic ideas, it is 
> without theoretical foundation -- it is just imposed by fiat on a 
> deterministic theory. It makes  probability a subjective notion imposed on a 
> theory that is supposedly objective: there is an objective probability that a 
> radioactive nucleus will decay in a certain time period -- independent of our 
> subjective impressions, or self-location. (I can develop this thought 
> further, if required, but I think it shows Sean's approach to fail.)
> 
> Probability derived from self-locating uncertainty is an idea independent of 
> any particular physics. It is also independent of any theory of consciousness,

I agree. It is a priori independent. Now, if we accept a theory of 
consciousness based on Mechanism, the self-location in arithmetic is 
unavoidable. It needs some non-mechanist thesis to associate the mind with the 
appearance of matter.




> since we can imagine a non-conscious observer reasoning in the same way.

I agree. Non-conscious but duplicable beings are led to the same statistics. That 
is clear in our context, if only because it uses only a third-person notion of 
first person, the personal memory. QM without collapse even gives a model where 
“duplication” can operate on a continuum. Everett uses more “duplicability” than 
“Mechanism”, although in his long text he does use some mechanism to describe 
the discrete memory of the observer when doing a sequence of measurements.



> To some people it seems trivially obvious, to others it seems very strange.

And some people find it obvious, until they hear the boss saying it is very 
strange. The contrary happens too, and is just as annoying. Sometimes people 
seem to fear their own thinking abilities.




> I don’t know if which group one falls into correlates with any other beliefs 
> or attitudes.

Many people dislike the idea that they are not unique. Some people believe 
wrongly that “many-worlds”  or “all computations” makes everything trivial, but 
computationalism shows that this is not the case (but that requires some 
knowledge in mathematical logic).

As long as theology is not back to science (where it was born), people will 
continue to believe that they can believe in whatever they want.

The separation of theology or metaphysics from science leaves the domain to the 
charlatans, who can then exploit fear and hope, to steal money and control 
people. It leads to the separation of the human sciences and the exact 
sciences, which makes not only the human sciences inexact and the exact 
sciences inhuman, but also the exact sciences inexact and the human 
sciences inhuman. 

Bruno







> -- 
> Stathis Papaioannou
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to everything-list+unsubscr...@googlegroups.com 
> .
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/everything-list/CAH%3D2ypWb-n24FgchakZWNBw9ifk7HYdtz5LQnDENstYM_0xVaw%40mail.gmail.com
>  
> 

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Stathis Papaioannou
On Fri, 28 Feb 2020 at 08:40, Bruce Kellett  wrote:

> On Fri, Feb 28, 2020 at 4:21 AM 'Brent Meeker' via Everything List <
> everything-list@googlegroups.com> wrote:
>
>> On 2/27/2020 3:45 AM, Bruce Kellett wrote:
>>
>>
>> That is probably what all this argument is actually about -- the maths
>> show that there are no probabilities. Because there are no unique
>> probabilities in the classical duplication case, the concept of probability
>> has been shown to be inadmissible in the deterministic (Everettian) quantum
>> case. The appeal by people like Deutsch and Wallace to betting quotients,
>> or quantum credibility measures, are just ways of forcing a probabilistic
>> interpretation on to quantum mechanics by hand -- they are not derivations
>> of probability from within the deterministic theory. There are no
>> probabilities in the deterministic theory, even from the 1p perspective,
>> because the data are consistent with any prior assignment of a probability
>> measure.
>>
>>
>> The probability enters from the self-location uncertainty; which in other
>> terms is saying: Assume each branch has the same probability (or some
>> weighting) for you being in that branch.  Then that is the probability that
>> you have observed the sequence of events that define that branch.
>>
>
> I think that is Sean Carroll's approach. I am uncertain as to whether this
> really works or not. The concept of a 'weight' or 'thickness' for each
> branch is difficult to reconcile with the first-person experience of
> probability: which is obtained within the branch, so is independent of any
> overall 'weight'. But that aside, self-locating uncertainty is just another
> idea imposed on quantum mechanics and, like decision-theoretic ideas, it is
> without theoretical foundation -- it is just imposed by fiat on a
> deterministic theory. It makes  probability a subjective notion imposed on
> a theory that is supposedly objective: there is an objective probability
> that a radioactive nucleus will decay in a certain time period --
> independent of our subjective impressions, or self-location. (I can develop
> this thought further, if required, but I think it shows Sean's approach to
> fail.)
>

Probability derived from self-locating uncertainty is an idea independent
of any particular physics. It is also independent of any theory of
consciousness, since we can imagine a non-conscious observer reasoning in
the same way. To some people it seems trivially obvious, to others it seems
very strange. I don’t know if which group one falls into correlates with
any other beliefs or attitudes.

> --
Stathis Papaioannou



Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-04 Thread Bruno Marchal

> On 3 Mar 2020, at 14:12, Alan Grayson  wrote:
> 
> 
> 
> On Tuesday, March 3, 2020 at 5:23:48 AM UTC-7, Bruno Marchal wrote:
> 
> On 2 Mar 2020, at 14:19, Alan Grayson wrote:
> 
> 
> Here is where I think you've tried to answer my gravity problem posed on 
> another thread. You say there is an infinity of calculations, but what is 
> doing the calculations? And why among those multitudes is one set chosen, 
> namely our "illusion"? AG
> 
> 
> OK. I have begun the explanation. But here you are again already at step 
> 7 (if you take a look at my paper sane04 general public presentation(*))
> 
> What is doing the computations? When you implement the computation in the 
> physical reality, the computations are done by the relevant digital 
> information encoded into physical relations, between physical objects. 
> What is doing the computation, when there is no physical universe, is any 
> relation in a model of a Turing complete theory. Elementary arithmetic is 
> Turing complete, so if you are OK with 2+2=4, or with statements like “there 
> is no largest prime number”, the computations are implemented naturally by the 
> representation of those computations in terms of true natural number relations. 
> It is a bit like a block-universe. Time will be accounted for by the notion of 
> “number of steps of a computation”.
> 
> Example. The reality of, say, the computer science statement that "the 
> register (a, b, c) contains a" is realised by the physical fact that 
> something encoding “a” is put “physically” in a series of physical memories 
> (flip-flops, or magnetic disk, etc.).
> 
> That same reality is implemented, all by itself, in the arithmetical 
> statement that the number 
> 
> 2^(“a”) * 3^(“b”) * 5^(“c”) 
> 
>  admits only one decomposition into a product of powers of primes (by the 
> so-called fundamental theorem of arithmetic, due to Euclid), and saying that “a” 
> (the representation of ‘a’ by some number, which I note “a”) belongs to its 
> first place is arithmetically equivalent to saying that 2^“a” divides 
> 2^(“a”) * 3^(“b”) * 5^(“c”), and that 2^(“a”+1) does not divide it. 
> 
> Of course, that “arithmetise” only a few bit of the computation. Gödel missed 
> the Church-Turing thesis, and so was unaware of arithmetising “computer 
> science”, but he will got the point later. Meanwhile, he was the first one to 
> arithmetise the provability predicate, and later, he will understand that his 
> provability predicate is Turing equivalent (with respect to computability,; 
> not with respect to provability).
> 
> The best might be to download Gödel 1931, for example here:
> 
> https://www.jamesrmeyer.com/pdfs/godel-original-english.pdf 
> 
> 
> The complete arithmetization is done in 40 steps. See below(**). The step 44 
> gives Gödel famous beweisbar predicate Bw(x), which is the one I wrote []x, 
> and which is the subject of what I called “theology of machine”, mainly the 
> mathematical theory of provability , proved complete by Solovay in 1976 (at 
> the modal propositional level). The step 1 is the arithmetical definition of 
> “x divides y”, OK?
> 
> It has to be long and tedious, as defining provability (and thus 
> computability) in arithmetic is like programming a high level language in a 
> low level language. 
> 
> To really swallow this, it will also be necessary to understand well the 
> difference between proof and truth. The computations are implemented in the 
> truth of arithmetic (in the “model”, or in all models, in the logician sense 
> of model (whicht will be brought by Lowenheim and Tarski, Gödel did without 
> this, but refer to the intuition, which is simpler, but for our concern, we 
> have to take this into account at some point)).
> 
> Bruno
> 
> 
> 
> (*) B. Marchal. The Origin of Physical Laws and Sensations. In 4th 
> International System Administration and Network Engineering Conference, SANE 
> 2004, Amsterdam, 2004.
> http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract.html 
> 
> 
> (**) Here are the 46 steps, for future reference. Tell me if you can read and 
> understand line 1.
> 
> Yes. AG

Good. So, to be clear, an arithmetical formula is a formula built using 
only the symbols “+”, “*”, “s” (intended for the successor operation), together 
with the logical symbols.
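
A minimal sketch of that grammar as a recursive datatype (an illustration 
added here, not from the original post; Gödel's bounded quantifiers and the 
variable-binding details are simplified away):

from dataclasses import dataclass
from typing import Union

@dataclass
class Zero:            # the constant 0
    pass

@dataclass
class Var:             # a variable
    name: str

@dataclass
class S:               # successor: s(t)
    arg: "Term"

@dataclass
class Add:             # t1 + t2
    left: "Term"
    right: "Term"

@dataclass
class Mul:             # t1 * t2
    left: "Term"
    right: "Term"

Term = Union[Zero, Var, S, Add, Mul]

@dataclass
class Eq:              # atomic formula: t1 = t2
    left: Term
    right: Term

@dataclass
class Not:             # ~A
    body: "Formula"

@dataclass
class Exists:          # (∃z)[A]
    var: str
    body: "Formula"

Formula = Union[Eq, Not, Exists]

# Definition 1 below, "x is divisible by y", minus the bound on z:
divides = Exists("z", Eq(Var("x"), Mul(Var("y"), Var("z"))))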


>  
> You might print it, and we can do them step by step too. It *looks* 
> difficult, but it is not that difficult, if you develop a bit of 
> familiarity with the notation (which is not quite standard):
> 
> 1. x/y ≡ (∃z)[z ≤ x & x = y·z]  x is divisible by y.

Note that (∃z)[x = y·z] is an admissible definition of “x divides y”, i.e. x/y, 
but Gödel wants to bound the “∃” quantifier, to ensure that such relations 
(like divisibility) are computable, and provable when true. Strictly speaking, 
this is not even necessary for our purpose, but it is nevertheless the way Gödel 
proceeds.


Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-03 Thread Bruno Marchal

> On 2 Mar 2020, at 14:19, Alan Grayson  wrote:
> 
> 
> Here is where I think you've tried to answer my gravity problem posed on 
> another thread. You say there is an infinity of calculations, but what is 
> doing the calculations? And why among those multitudes is one set chosen, 
> namely our "illusion"? AG


OK. I have begun the explanation. But here you are again already at step 7 
(if you take a look at my paper, the sane04 general-public presentation(*)).

What is doing the computations? When you implement a computation in 
physical reality, the computation is done by the relevant digital information 
encoded in physical relations between physical objects. 
What does the computing, when there is no physical universe, is any 
relation in a model of a Turing-complete theory. Elementary arithmetic is 
Turing complete, so if you are OK with 2+2=4, or with a statement like “there is 
no biggest prime number”, the computations are implemented naturally by the 
representation of those computations in terms of true natural-number relations. 
It is a bit like a block-universe. Time will be accounted for by the notion of 
“number of steps of a computation”.

Example. The reality of, say, the computer-science statement that “the register 
(a, b, c) contains a” is realised by the physical fact that something encoding 
“a” is put “physically” into a series of physical memories (flip-flops, 
magnetic disks, etc.).

That same reality is implemented, all by itself, in the arithmetical statement 
that the number 

2^(“a”) * 3^(“b”) * 5^(“c”) 

admits only one decomposition into a product of powers of primes (by the 
so-called fundamental theorem of arithmetic, due to Euclid). Saying that “a” 
(the representation of ‘a’ by some number, which I write “a”) belongs to the 
first place is arithmetically equivalent to saying that 2^(“a”) divides 
2^(“a”) * 3^(“b”) * 5^(“c”), and that 2^(“a”+1) does not divide it.
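
A minimal sketch of that encoding (an illustration added here, not from the 
original post): the register (a, b, c) becomes the single number 
2^a * 3^b * 5^c, and reading a slot back out is exactly the pair of 
divisibility tests just described.

def encode(a, b, c):
    # One number for the whole register; the decomposition is unique
    # by the fundamental theorem of arithmetic.
    return 2**a * 3**b * 5**c

def slot(n, p):
    # The exponent of the prime p in n, i.e. the content of that slot.
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

n = encode(4, 7, 2)
assert n % 2**4 == 0 and n % 2**(4 + 1) != 0   # "the first slot contains 4"
assert (slot(n, 2), slot(n, 3), slot(n, 5)) == (4, 7, 2)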

Of course, that “arithmetises” only a few bits of the computation. Gödel missed 
the Church-Turing thesis, and so was unaware of arithmetising “computer 
science”, but he would get the point later. Meanwhile, he was the first to 
arithmetise the provability predicate, and later he would understand that his 
provability predicate is Turing-equivalent (with respect to computability, not 
with respect to provability).

The best might be to download Gödel 1931, for example here:

https://www.jamesrmeyer.com/pdfs/godel-original-english.pdf 


The complete arithmetization is done in 46 steps. See below(**). Step 46 gives 
Gödel's famous beweisbar predicate Bw(x), which is the one I wrote []x, and 
which is the subject of what I called “theology of machine”, mainly the 
mathematical theory of provability, proved complete by Solovay in 1976 (at the 
modal propositional level). Step 1 is the arithmetical definition of “x is 
divisible by y”, OK?

It has to be long and tedious, as defining provability (and thus computability) 
in arithmetic is like programming a high-level language in a low-level 
language. 

To really swallow this, it will also be necessary to understand well the 
difference between proof and truth. The computations are implemented in the 
truth of arithmetic (in the “model”, or in all models, in the logician's sense 
of model, which would be brought in later by Löwenheim and Tarski; Gödel did 
without this and referred to the intuition, which is simpler, but for our 
concern we have to take this into account at some point).

Bruno



(*) B. Marchal. The Origin of Physical Laws and Sensations. In 4th 
International System Administration and Network Engineering Conference, SANE 
2004, Amsterdam, 2004.
http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract.html 


(**) Here are the 46 steps, for future reference. Tell me if you can read and 
understand line 1.
You might print it, and we can do them step by step too. It *looks* difficult, 
but it is not that difficult, if you develop a bit of familiarity with 
the notation (which is not quite standard):

1. x/y ≡ (∃z)[z ≤ x & x = y·z]
   x is divisible by y.
2. Prim(x) ≡ ~(∃z)[z ≤ x & z ≠ 1 & z ≠ x & x/z] & x > 1
   x is a prime number.
3. 0 Pr x ≡ 0
   (n+1) Pr x ≡ εy[y ≤ x & Prim(y) & x/y & y > n Pr x]
   n Pr x is the n-th (in order of magnitude) prime number contained in x.
4. 0! ≡ 1
   (n+1)! ≡ (n+1)·n!
5. Pr(0) ≡ 0
   Pr(n+1) ≡ εy[y ≤ {Pr(n)}! + 1 & Prim(y) & y > Pr(n)]
   Pr(n) is the n-th prime number (in order of magnitude).
6. n Gl x ≡ εy[y ≤ x & x/(n Pr x)^y & ~(x/(n Pr x)^(y+1))]
   n Gl x is the n-th term of the series of numbers assigned to the number x 
   (for n > 0 and n not greater than the length of this series).
7. l(x) ≡ εy[y ≤ x & y Pr x > 0 & (y+1) Pr x = 0]
   l(x) is the length of the series of numbers assigned to x.
8. x * y ≡ εz[z ≤ [Pr{l(x)+l(y)}]^(x+y) & (n)[n ≤ l(x) ⇒ n Gl z = n Gl x] & 
   (n)[0 < n ≤ l(y) ⇒ {n+l(x)} Gl z = n Gl y]]
   x * y corresponds to the operation of “joining together” two finite series 
   of numbers.
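
For readers who parse code more easily than the ε-notation, definitions 2, 3, 
6 and 7 transcribe directly into bounded searches. A deliberately naive sketch 
(an illustration added here, not from the original post):

def is_prime(x):
    # Definition 2, Prim(x), as a brute-force test.
    return x > 1 and all(x % z != 0 for z in range(2, x))

def n_pr(n, x):
    # Definition 3, n Pr x: the n-th (by magnitude) prime contained in x;
    # 0 Pr x = 0, and the search is bounded by x itself.
    count = 0
    for y in range(2, x + 1):
        if is_prime(y) and x % y == 0:
            count += 1
            if count == n:
                return y
    return 0

def gl(n, x):
    # Definition 6, n Gl x: the exponent of n Pr x in x, i.e. the n-th
    # term of the series of numbers encoded by x.
    p = n_pr(n, x)
    if p == 0:
        return 0
    y = 0
    while x % p**(y + 1) == 0:
        y += 1
    return y

def l(x):
    # Definition 7, l(x): the length of the encoded series.
    n = 0
    while n_pr(n + 1, x) != 0:
        n += 1
    return n

x = 2**3 * 3**1 * 5**4          # encodes the series (3, 1, 4)
assert [gl(n, x) for n in range(1, l(x) + 1)] == [3, 1, 4]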

Re: Postulate: Everything that CAN happen, MUST happen.

2020-03-02 Thread Alan Grayson


On Sunday, March 1, 2020 at 8:22:37 AM UTC-7, Bruno Marchal wrote:
>
>
> On 1 Mar 2020, at 09:39, Alan Grayson > 
> wrote:
>
>
>
> On Saturday, February 29, 2020 at 3:56:11 AM UTC-7, Bruno Marchal wrote:
>
>
> On 29 Feb 2020, at 06:35, Alan Grayson  wrote:
>
>
>
> On Thursday, February 27, 2020 at 5:41:57 AM UTC-7, Bruno Marchal wrote:
>
>
> On 26 Feb 2020, at 18:06, Alan Grayson  wrote:
>
>
>
> On Wednesday, February 26, 2020 at 4:35:54 AM UTC-7, Bruno Marchal wrote:
>
>
> On 25 Feb 2020, at 12:43, Bruce Kellett  wrote:
>
> On Tue, Feb 25, 2020 at 10:26 PM Bruno Marchal  wrote:
>
> On 24 Feb 2020, at 23:22, Bruce Kellett  wrote:
>
> On Tue, Feb 25, 2020 at 12:10 AM Bruno Marchal  wrote:
>
> On 23 Feb 2020, at 23:49, Bruce Kellett  wrote:
>
> On Mon, Feb 24, 2020 at 12:21 AM Bruno Marchal  wrote:
>
> On 23 Feb 2020, at 04:11, Bruce Kellett  wrote:
>
>
> I don't really understand your comment. I was thinking of Bruno's 
> WM-duplication. You could impose the idea that each duplication at each 
> branch point on every branch is an independent Bernoulli trial with p = 0.5 
> (success being defined arbitrarily as W or M). Then, if these 
> probabilities carry over from trial to trial, you end up with every binary 
> sequence, each with weight 1/2^N. Summing sequences with the same number of 
> 0s and 1s, you get the Pascal Triangle distribution that Bruno wants.
>
> The trouble is that such a procedure is entirely arbitrary. The only 
> probability that one could objectively assign to, say, W, on each Bernoulli 
> trial is one,
>
>
> That is certainly wrong. If you are correct, then P(W) = 1 is written in 
> the personal diary,
>
>
> I did say "objectively assign". In other words, this was a 3p comment. You 
> confuse 1p with 3p yet again.
>
>
> Well, if you “objectively” assign P(W) = 1, the guy in M will subjectively 
> refute that prediction, and as the question was about the subjectively 
> accessible experience, he objectively, and predictably, refutes your 
> statement. 
>
>
>
> And if you objectively assign p(W) = p(M) = 0.5, then the W-guy and 
> the M-guy will both say that your theory is refuted, since they both see 
> only one city: the W-guy W with p = 1.0, and the M-guy M with p = 1.0.
>
>
> That is *very* weird. That works for the coin-tossing experience too, even 
> for the lottery: I predicted that I had a 1/10^6 chance to win the lottery, 
> but I was wrong; after the game was played I won, so the probability was one!
>
> In Helsinki, the guy writes P(W) = P(M) = 1/2. That means he does not yet 
> know which outcome he will feel to live. Once the experience is done, one 
> copy will see W, and that is coherent with his prediction; same for the 
> others. Had he written P(W) = 1, that would have been felt as 
> refuted by the M guy, and vice-versa.
>
>
> But if he wrote p(W) = 0.9 and p(M) = 0.1 he would get exactly the same 
> result. The proposed probabilities have no effect here.
>
>
> If I toss a perfect coin too.
>
> Of course, that would lead directly to some problem with the iterated case 
> scenario.
>
>
>
>
>
> If not, tell me again what your prediction in Helsinki is, keeping in 
> mind that it concerns your future subjective experience only. 
>
>
>
> In Helsinki I can offer no value for the probability since, given the 
> protocol, I know that all probabilities will be realized on repetitions of 
> the duplication.
>
>
> In the 3p picture, indeed, that is, by definition, the protocol. But the 
> question is not about where you will live after the experience (we know 
> that it will be in both cities), but about what you expect to live from the 
> first-person perspective, and here P(W & M) is null, as nobody will ever 
> *feel to live* in both cities at once with this protocol.
>
>
> And, as I have repeatedly shown, the first-person perspective does not give 
> you any expectations at all.
>
>
> If I am duplicated as in the 2^(16180 * 1) * (60 * 90) * 24 “movie” 
> scenario, I do expect to see white noise, and I certainly don’t expect to 
> see “2001: A Space Odyssey” with Tibetan subtitles.
>
> I am not sure what you mean by “the first person perspective does not give 
> any expectations”.
>
> Do you agree that if you are promised, in Helsinki, that a cup of coffee 
> will be offered to you both in M and in W, you can expect, with probability 
> one, to get a cup of coffee after pushing the button in Helsinki? (Assuming 
> Mechanism, of course.)
> 
> I would expect, in Helsinki, to drink a cup of coffee with probability 
> one (using this protocol and all default hypotheses, like no asteroid hitting 
> the planet in the meantime, etc.).
>
> And I would consider myself maximally ignorant as to whether that coffee will 
> be Russian or American coffee.
>
>
>
>
>
> The experience is totally symmetrical in the 3p picture, but that symmetry 
> is broken from the 1p perspective of each copy. One will say “I feel to be 
> in W, and not in M” and the other will say “I feel to be in M, and not in W”.
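
The Pascal-triangle point quoted above is easy to check numerically. A minimal 
sketch (an illustration added here, not part of the original exchange): group 
the 2^N sequences by their W-count via binomial coefficients, each sequence 
carrying weight 1/2^N; the fraction with exactly equal counts decays like 
1/sqrt(N), while the fraction within 1% of 50/50 climbs toward one.

from math import comb, pi, sqrt

for N in (100, 1000, 10000):
    exact = comb(N, N // 2) / 2**N        # exactly equal numbers of 0s and 1s
    band = round(0.01 * N)                # within 1% of 50/50
    near = sum(comb(N, k) for k in range(N // 2 - band, N // 2 + band + 1)) / 2**N
    # Columns: N, exact-tie fraction, its Stirling estimate, 1%-band fraction.
    print(N, round(exact, 4), round(sqrt(2 / (pi * N)), 4), round(near, 4))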

Re: Postulate: Everything that CAN happen, MUST happen.

2020-02-27 Thread Bruno Marchal

> On 27 Feb 2020, at 12:45, Bruce Kellett  wrote:
> 
> On Thu, Feb 27, 2020 at 10:14 PM Bruno Marchal  > wrote:
> On 26 Feb 2020, at 23:58, Bruce Kellett  > wrote:
> 
>> From the first person perspective, there is indeterminacy,
> 
> That is the whole point. That is the 1p-indeterminacy I am talking about (and 
> that Clark, and only Clark, has a problem with).
> 
>> but no sensible assignment of probabilities is possible.
> 
> And you are right on this, in any “real case scenario”, but that is for the 
> next steps.
> 
> 
> And in the theoretical analysis. I am glad that you acknowledge that there is 
> no useful concept of probability in this WM-duplication scenario.

Yes, the key point is that in Helsinki, the “W v M” is a certainty (modulo the 
protocol knowledge), while the “W” and the “M” predictions are each refuted. 
That is the first-person indeterminacy, and probability is used mainly as a 
pedagogical tool to understand it.



> 
> 
> A probability is never observed, but evaluated, using some theory. In the 
> finite case, the numerical identity suggests the usual binomial, and this is 
> easy to verify for simple scenarios.
> 
> Yes, it is binomial because there are only two possible outcomes. But 
> binomial without any specification of a probability for 'success’.

We can decide that W is success, and M is failure. (Hoping this is not seen as 
politically biased!) Then the self-duplication is a sequence of repeated 
Bernoulli experiments.



> 
> 
All that is used is the fact that you are maximally ignorant about the brand of 
coffee, and thus about the city you will see. Maximal ignorance is traditionally 
modelled by P = 1/2, but that is not important, as the math will show that we 
have not probabilities but a quantum credibility measure.
> 
> 
> That is probably what all this argument is actually about -- the maths show 
> that there are no probabilities.

I am not sure the math shows that. We will lose the “easy” probability 
distribution in the real case (in “front” of a universal dovetailing), but 
P = 1/2 remains consistent in the (unrealistic) simple duplication of the 
thought experiment.



> Because there are no unique probabilities in the classical duplication case, 
> the concept of probability has been shown to be inadmissible in the 
> deterministic (Everettian) quantum case.

Not from the first-person point of view, where Gleason's theorem assures the 
existence of a unique probability measure, and indeed it is the one given by 
the square law.
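
Concretely, the square law says: probabilities are the squared moduli of the 
normalized amplitudes. A minimal sketch (an illustration added here, with 
made-up amplitudes; Gleason's theorem itself, which needs dimension >= 3, is 
not reproduced):

import numpy as np

amps = np.array([1 + 1j, 2 + 0j, 0.5j])   # made-up, unnormalized amplitudes
state = amps / np.linalg.norm(amps)       # normalize the state vector
probs = np.abs(state) ** 2                # the square (Born) law
assert np.isclose(probs.sum(), 1.0)       # a genuine probability distribution
print(probs)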



> The appeal by people like Deutsch and Wallace to betting quotients, or 
> quantum credibility measures, are just ways of forcing a probabilistic 
> interpretation on to quantum mechanics by hand -- they are not derivations of 
> probability from within the deterministic theory.

Maybe, but the derivation has to be done in arithmetic, and there it works. I 
am not following Deutsch and Wallace: I got the probability and the 
many-histories interpretation of arithmetic well before discovering that 
physicists were already there.




> There are no probabilities in the deterministic theory, even from the 1p 
> perspective, because the data are consistent with any prior assignment of a 
> probability measure.

I don’t see this, or rather I see this for any choice of probability in any 
statistical analysis. It is irrational to expect “Space Odyssey with Tibetan 
subtitles” from the big "2^(16180 * 1) * (60 * 90) * 24” multiplication 
experience. 

Bruno



> 
> Bruce
> 



Re: Postulate: Everything that CAN happen, MUST happen.

2020-02-27 Thread Bruce Kellett
On Fri, Feb 28, 2020 at 4:21 AM 'Brent Meeker' via Everything List <
everything-list@googlegroups.com> wrote:

> On 2/27/2020 3:45 AM, Bruce Kellett wrote:
>
>
> That is probably what all this argument is actually about -- the maths
> show that there are no probabilities. Because there are no unique
> probabilities in the classical duplication case, the concept of probability
> has been shown to be inadmissible in the deterministic (Everettian) quantum
> case. The appeal by people like Deutsch and Wallace to betting quotients,
> or quantum credibility measures, are just ways of forcing a probabilistic
> interpretation on to quantum mechanics by hand -- they are not derivations
> of probability from within the deterministic theory. There are no
> probabilities in the deterministic theory, even from the 1p perspective,
> because the data are consistent with any prior assignment of a probability
> measure.
>
>
> The probability enters from the self-location uncertainty; which in other
> terms is saying: Assume each branch has the same probability (or some
> weighting) for you being in that branch.  Then that is the probability that
> you have observed the sequence of events that define that branch.
>

I think that is Sean Carroll's approach. I am uncertain as to whether this
really works or not. The concept of a 'weight' or 'thickness' for each
branch is difficult to reconcile with the first-person experience of
probability: which is obtained within the branch, so is independent of any
overall 'weight'. But that aside, self-locating uncertainty is just another
idea imposed on quantum mechanics and, like decision-theoretic ideas, it is
without theoretical foundation -- it is just imposed by fiat on a
deterministic theory. It makes probability a subjective notion imposed on
a theory that is supposedly objective: there is an objective probability
that a radioactive nucleus will decay in a certain time period --
independent of our subjective impressions, or self-location. (I can develop
this thought further, if required, but I think it shows that Sean's
approach fails.)

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-02-27 Thread 'Brent Meeker' via Everything List



On 2/27/2020 3:45 AM, Bruce Kellett wrote:
On Thu, Feb 27, 2020 at 10:14 PM Bruno Marchal wrote:


On 26 Feb 2020, at 23:58, Bruce Kellett <bhkellet...@gmail.com> wrote:


From the first person perspective, there is indeterminacy,


That is the whole point. That is the 1p-indeterminacy I am talking
about (and that Clark, and only Clark, has a problem with).


but no sensible assignment of probabilities is possible.


And you are right on this, in any “real case scenario”, but that
is for the next steps.



And in the theoretical analysis. I am glad that you acknowledge that 
there is no useful concept of probability in this WM-duplication scenario.



A probability is never observed, but evaluated, using some theory.
In the finite case, the numerical identity suggests the usual
binomial, and this is easy to verify for a simple scenario.


Yes, it is binomial because there are only two possible outcomes. But 
binomial without any specification of a probability for 'success'.
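
To spell the point out (my gloss, in standard notation): the binomial model 
leaves the success probability p as a free parameter,

    P(k \text{ W-outcomes in } N \text{ trials}) \;=\; \binom{N}{k}\, p^{k} (1-p)^{N-k},

and branch counting by itself fixes only the multiplicities \binom{N}{k}. The 
uniform weight \binom{N}{k}/2^{N} over the 2^{N} histories is the special case 
p = 1/2; nothing in the counting forces that choice.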



All that is used is the fact that you are maximally ignorant of
the brand of coffee, and thus of the city you will see. Maximal
ignorance is just modelled by P = 1/2 traditionally, but that is
not important, as the math will show that we have no
probabilities, but a quantum credibility measure.



That is probably what all this argument is actually about -- the maths 
show that there are no probabilities. Because there are no unique 
probabilities in the classical duplication case, the concept of 
probability has been shown to be inadmissible in the deterministic 
(Everettian) quantum case. The appeal by people like Deutsch and 
Wallace to betting quotients, or quantum credibility measures, are 
just ways of forcing a probabilistic interpretation on to quantum 
mechanics by hand -- they are not derivations of probability from 
within the deterministic theory. There are no probabilities in the 
deterministic theory, even from the 1p perspective, because the data 
are consistent with any prior assignment of a probability measure.


The probability enters from the self-location uncertainty; which in 
other terms is saying: Assume each branch has the same probability (or 
some weighting) for you being in that branch.  Then that is the 
probability that you have observed the sequence of events that define 
that branch.


Brent



Re: Postulate: Everything that CAN happen, MUST happen.

2020-02-27 Thread Bruno Marchal

> On 26 Feb 2020, at 18:06, Alan Grayson  wrote:
> 
> 
> 
> On Wednesday, February 26, 2020 at 4:35:54 AM UTC-7, Bruno Marchal wrote:
> 
>> On 25 Feb 2020, at 12:43, Bruce Kellett wrote:
>> 
>> On Tue, Feb 25, 2020 at 10:26 PM Bruno Marchal wrote:
>> On 24 Feb 2020, at 23:22, Bruce Kellett wrote:
>>> On Tue, Feb 25, 2020 at 12:10 AM Bruno Marchal wrote:
>>> On 23 Feb 2020, at 23:49, Bruce Kellett wrote:
 On Mon, Feb 24, 2020 at 12:21 AM Bruno Marchal wrote:
 On 23 Feb 2020, at 04:11, Bruce Kellett wrote:
> 
> I don't really understand your comment. I was thinking of Bruno's 
> WM-duplication. You could impose the idea that each duplication at each 
> branch point on every branch is an independent Bernoulli trial with p = 
> 0.5 on this (success being defined arbitrarily as W or M). Then, if these 
> probabilities carry over from trial to trial, you end up with every 
> binary sequence, each with weight 1/2^N. Summing sequences with the same 
> number of 0s and 1s, you get the Pascal Triangle distribution that Bruno 
> wants.
> 
> The trouble is that such a procedure is entirely arbitrary. The only 
> probability that one could objectively assign to say, W, on each 
> Bernoulli trial is one,
 
 That is certainly wrong. If you are correct, then P(W) = 1 is written in 
 the personal diary,
 
 I did say "objectively assign". In other words, this was a 3p comment. You 
 confuse 1p with 3p yet again.
>>> 
>>> Well, if you “objectively” assign P(W) = 1, the guy in M will subjectively 
>>> refute that prediction, and as the question was about the subjectively 
>>> accessible experience, he objectively, and predictably, refutes your 
>>> statement. 
>>> 
>>> 
>>> And if you objectively assign p(W) = p(M) = 0.5, then the W-guy and 
>>> the M-guy will both say that your theory is refuted, since they both see 
>>> only one city: the W-guy, W with p = 1.0, and the M-guy, M with p = 1.0.
>> 
>> That is *very* weird. That works for the coin tossing experiment too, even 
>> for the lottery. I predicted that I had a 1/10^6 chance to win the lottery, but I 
>> was wrong; after the game was played I won, so the probability was one!
>> 
>> In Helsinki, the guy writes P(W) = P(M) = 1/2. That means he does not yet 
>> know what outcome he will feel to live. Once the experiment is done, one 
>> copy will see W, and that is coherent with his prediction, same for the 
>> others. Had he written P(W) = 1, that would have been felt as refuted 
>> by the M-guy, and vice-versa.
>> 
>> But if he wrote p(W) = 0.9 and p(M) = 0.1 he would get exactly the same 
>> result. The proposed probabilities are here without effect.
> 
> If I toss a perfect coin too.
> 
> Of course, that would lead directly to some problem with the iterated case 
> scenario.
> 
> 
> 
> 
> 
>>> If not, tell me what is your prediction in Helsinki again, by keeping in 
>>> mind that it concerns your future subjective experience only. 
>>> 
>>> 
>>> In Helsinki I can offer no value for the probability since, given the 
>>> protocol, I know that all probabilities will be realized on repetitions of 
>>> the duplication.
>> 
>> In the 3p picture. Indeed, that is, by definition, the protocol. But the 
>> question is not about where you will live after the experience (we know that 
>> it will be in both cities), but what do you expect to live from the first 
>> person perspective, and here P(W & M) is null, as nobody will ever *feel to 
>> live* in both cities at once with this protocol.
>> 
>> And, as I have repeatedly shown, the first person perspective does not give 
>> you any expectations at all.
> 
> If I am duplicated like in the 2^(16180 * 1) * (60 * 90) * 24 “movie” 
> scenario, I do expect to see white noise, and I certainly don’t expect to see 
> “2001, Space Odyssey” with Tibetan subtitles.
> 
> I am not sure what you mean by “the first person perspective does not give 
> any expectations”.
> 
> Do you agree that if you are promised, in Helsinki, that a cup of coffee will 
> be offered to you, both in M and W, you can expect, with probability one, to 
> get a cup of coffee after pushing the button in Helsinki? (Assuming 
> Mechanism, of course).
> 
> I would expect, in Helsinki, to drink a cup of coffee with probability one 
> (using this protocol and all default hypotheses, like no asteroid hitting the 
> planet in the meantime, etc.).
> 
> And I would consider myself maximally ignorant as to whether that coffee will 
> be Russian or American coffee.
> 
> 
> 
> 
>> 
>> The experience is totally symmetrical in the 3p picture, but that symmetry 
>> is broken from the 1p perspective of each copy. One will say “I feel to be 
>> in W, and not in M” and the other will say “I feel to be in M and not in W”.
>> 
>> Regardless of any prior probability assignment.
> 
> Exactly. 
> 
> 
> 
>> 
>> 
>>> I cannot infer a probability from just one trial, but the probability I 
>>> infer from N repetitions can be any value in [0,1].

Re: Postulate: Everything that CAN happen, MUST happen.

2020-02-27 Thread Bruno Marchal


> On 26 Feb 2020, at 21:43, 'Brent Meeker' via Everything List 
>  wrote:
> 
> 
> 
> On 2/26/2020 3:12 AM, Bruno Marchal wrote:
>> To take the observation of some reality as a proof that such reality exists 
>> ontologically, is equivalent to Aristotle's Materialism.
> 
> But to take the non-observation of many worlds as evidence they exist is 
> equivalent to Kellyanne Conway's alternative facts.


Absolutely. That is why, without QM, I would probably consider that mechanism is 
plausibly false, and without evidence, but as I said, the evidence is in 
favour of the mechanist theory, from Darwin to QM (without collapse). Of 
course, with mechanism, there is no world at all, just 0, 1, 2, … (and plus and 
times).

Bruno

PS As you mention Kellyanne Conway, I will just say the Senate trial was an 
obvious joke, and it has made Trump into a dictator, and is the darkest event 
in the history of democracies in a long time. It is hard to believe that Trump will 
not “win” the election, now that he has the right to continue the cheating. 
(Now, this is beyond this list, I discuss this more on FaceBook).



> 
> Brent
> 
>> The point of Plato was precisely that what we observe might be only one 
>> aspect of a deeper and simpler reality.
> 



Re: Postulate: Everything that CAN happen, MUST happen.

2020-02-27 Thread Bruce Kellett
On Thu, Feb 27, 2020 at 10:14 PM Bruno Marchal  wrote:

> On 26 Feb 2020, at 23:58, Bruce Kellett  wrote:
>
> From the first person perspective, there is indeterminacy,
>
>
> That is the whole point. That is the 1p-indeterminacy I am talking about
> (and that Clark, and only Clark, has a problem with).
>
> but no sensible assignment of probabilities is possible.
>
>
> And you are right on this, in any “real case scenario”, but that is for
> the next steps.
>


And in the theoretical analysis. I am glad that you acknowledge that there
is no useful concept of probability in this WM-duplication scenario.


A probability is never observed, but evaluated, using some theory. In the
> finite case, the numerical identity suggests the usual binomial, and this is
> easy to verify for a simple scenario.
>

Yes, it is binomial because there are only two possible outcomes. But
binomial without any specification of a probability for 'success'.


All that is used is the fact that you are maximally ignorant of the brand
> of coffee, and thus of the city you will see. Maximal ignorance is just
> modelled by P = 1/2 traditionally, but that is not important, as the math
> will show that we have no probabilities, but a quantum credibility measure.
>


That is probably what all this argument is actually about -- the maths show
that there are no probabilities. Because there are no unique probabilities
in the classical duplication case, the concept of probability has been
shown to be inadmissible in the deterministic (Everettian) quantum case.
The appeal by people like Deutsch and Wallace to betting quotients, or
quantum credibility measures, are just ways of forcing a probabilistic
interpretation on to quantum mechanics by hand -- they are not derivations
of probability from within the deterministic theory. There are no
probabilities in the deterministic theory, even from the 1p perspective,
because the data are consistent with any prior assignment of a probability
measure.

Bruce



Re: Postulate: Everything that CAN happen, MUST happen.

2020-02-27 Thread Bruno Marchal

> On 26 Feb 2020, at 23:58, Bruce Kellett  wrote:
> 
> On Wed, Feb 26, 2020 at 10:35 PM Bruno Marchal wrote:
> On 25 Feb 2020, at 12:43, Bruce Kellett wrote:
>> On Tue, Feb 25, 2020 at 10:26 PM Bruno Marchal wrote:
>> 
>> In Helsinki, the guy writes P(W) = P(M) = 1/2. That means he does not yet 
>> know what outcome he will feel to live. Once the experiment is done, one 
>> copy will see W, and that is coherent with his prediction, same for the 
>> others. Had he written P(W) = 1, that would have been felt as refuted 
>> by the M-guy, and vice-versa.
>> 
>> But if he wrote p(W) = 0.9 and p(M) = 0.1 he would get exactly the same 
>> result. The proposed probabilities are here without effect.
> 
> If I toss a perfect coin too.
> 
> Huh? If you toss a coin, perfect or not, you will get either heads or 
> tails -- you do not get both results in different branches of the wave 
> function. That is the difference here: in WM-duplication, or Everett, every 
> result is obtained every time, even if on different branches. That is why the 
> probabilities that you hypothesize at the start are irrelevant: you get the 
> same outcomes whatever probabilities you assign.


Here, you clearly confuse the first person experience and its third person 
description. If I toss a coin, I will get either heads or tails, not both, OK. But 
if a guy is duplicated from Helsinki into Moscow and Washington, that guy will 
live only Washington or Moscow from his first person perspective, and, I 
recall, the question asked in Helsinki is about that first person experience.

You can’t have both the experience of being in W and of being in M at once. "To open the door 
of the reconstitution box and see W" is simply logically incompatible (in our 
hypothetical frame) with "To open the door of the reconstitution box and see 
M”.
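
In symbols (my gloss of the constraint the protocol itself imposes): writing 
P for the 1p expectation in Helsinki,

    P(W \wedge M) \;=\; 0, \qquad P(W \vee M) \;=\; 1,

so any candidate measure must satisfy P(W) + P(M) = 1. The protocol fixes the 
event algebra but not the value of P(W); the symmetry (numerical identity of 
the two copies) is the further ingredient invoked for P(W) = 1/2.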




> 
>  
> Of course, that would lead directly to some problem with the iterated case 
> scenario.
> 
> I don't understand this comment. What difficulties? All, I am saying is that 
> no concept of probability applies in the WM-duplication case.

Because you apparently forget that the question is about the first person 
experience.

Do you agree that in Helsinki (and assuming mechanism, of course) you know with 
certainty that you will have a cup of coffee (knowing that it is offered in 
both W and M after the reconstitutions)?





> 
>  
>>> If not, tell me what is your prediction in Helsinki again, by keeping in 
>>> mind that it concerns your future subjective experience only. 
>>> 
>>> 
>>> In Helsinki I can offer no value for the probability since, given the 
>>> protocol, I know that all probabilities will be realized on repetitions of 
>>> the duplication.
>> 
>> In the 3p picture. Indeed, that is, by definition, the protocol. But the 
>> question is not about where you will live after the experience (we know that 
>> it will be in both cities), but what do you expect to live from the first 
>> person perspective, and here P(W & M) is null, as nobody will ever *feel to 
>> live* in both cities at once with this protocol.
>> 
>> And, as I have repeatedly shown, the first person perspective does not give 
>> you any expectations at all.
> 
> If I am duplicated like in the 2^(16180 * 1) * (60 * 90) * 24 “movie” 
> scenario, I do expect to see white noise, and I certainly don’t expect to see 
> “2001, Space Odyssey” with Tibetan subtitles.
> 
> You make the same mistake as above with the coin tosses -- you are trying to 
> compare duplication scenarios to single-outcome scenarios. That is wrong -- 
> no matter how many pixels on your screen, they are all either black or white, 
> not both. And there are no other worlds in which all possibilities occur.


The duplication/multiplication here are classical and real. All outcomes exist 
in the 3p picture, but each copy will write that he saw a particular thing. 
Some rare individuals will see “2001, Space Odyssey” with Tibetan subtitles, but 
most will see something looking like snow (white noise).



> 
>  
> I am not sure what you mean by “the first person perspective does not give 
> any expectations”.
> 
> Just that you cannot assign any 1p probabilities to particular outcomes in 
> duplication scenarios.


The probabilities are 3p. You just forget that the probabilities are *about* 
the 1p experience, relative to my decision made in Helsinki. You can justify 
those probabilities by using the notion of a bet, if you duplicate populations 
of machines, but without this you get them by simple counting, or by the 
frequency analysis made by most people. You can sample the copies, if they are 
too many, and justify the symmetries by the numerical identity of the copies, 
thanks to the digital mechanist hypothesis. 
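
A sketch of the sampling idea (illustrative only; it assumes one copy per 
history and that a copy is drawn uniformly, which is exactly the 
branch-counting measure):

    # Sketch: frequency analysis over sampled copies. Drawing a history
    # uniformly from the 2^N branch histories is equivalent to drawing
    # N independent fair W/M outcomes.
    import random

    N, samples = 1_000, 10_000
    freqs = [sum(random.choice('WM') == 'W' for _ in range(N)) / N
             for _ in range(samples)]

    print(sum(freqs) / samples)    # close to 0.5
    print(min(freqs), max(freqs))  # spread shrinks like 1/sqrt(N)

Whether uniform sampling over copies is the right measure is, of course, the 
very point under dispute; the sketch only shows what the counting measure 
predicts.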




> 
> 
> Do you agree that if you are promised, in Helsinki, that a cup of coffee will 
> be offered to you, both in M and W, you can expect, with probability one, to 
> get a cup of coffee after pushing the button in Helsinki? (Assuming 
> Mechanism, of course).

Re: Postulate: Everything that CAN happen, MUST happen.

2020-02-26 Thread Bruce Kellett
On Wed, Feb 26, 2020 at 10:35 PM Bruno Marchal  wrote:

> On 25 Feb 2020, at 12:43, Bruce Kellett  wrote:
>
> On Tue, Feb 25, 2020 at 10:26 PM Bruno Marchal  wrote:
>
>>
>> In Helsinki, the guy writes P(W) = P(M) = 1/2. That means he does not yet
>> know what outcome he will feel to live. Once the experiment is done, one
>> copy will see W, and that is coherent with his prediction, same for the
>> others. Had he written P(W) = 1, that would have been felt as
>> refuted by the M-guy, and vice-versa.
>>
>
> But if he wrote p(W) = 0.9 and p(M) = 0.1 he would get exactly the same
> result. The proposed probabilities are here without effect.
>
>
> If I toss a perfect coin too.
>

Huh? If you toss a coin, perfect or not, you will get either heads or
tails -- you do not get both results in different branches of the wave
function. That is the difference here: in WM-duplication, or Everett, every
result is obtained every time, even if on different branches. That is why
the probabilities that you hypothesize at the start are irrelevant: you get
the same outcomes whatever probabilities you assign.



> Of course, that would lead directly to some problem with the iterated case
> scenario.
>

I don't understand this comment. What difficulties? All, I am saying is
that no concept of probability applies in the WM-duplication case.



>>> If not, tell me what is your prediction in Helsinki again, by keeping in
>>> mind that it concerns your future subjective experience only.
>>>
>>
>>
>> In Helsinki I can offer no value for the probability since, given the
>> protocol, I know that all probabilities will be realized on repetitions of
>> the duplication.
>>
>>
>> In the 3p picture. Indeed, that is, by definition, the protocol. But the
>> question is not about where you will live after the experience (we know
>> that it will be in both cities), but what do you expect to live from the
>> first person perspective, and here P(W & M) is null, as nobody will ever
>> *feel to live* in both cities at once with this protocol.
>>
>
> And, as I have repeatedly shown, the first person perspective does not give
> you any expectations at all.
>
>
> If I am duplicated like in the 2^(16180 * 1) * (60 * 90) * 24 “movie”
> scenario, I do expect to see white noise, and I certainly don’t expect to
> see “2001, Space Odyssey” with Tibetan subtitles.
>

You make the same mistake as above with the coin tosses -- you are trying
to compare duplication scenarios to single-outcome scenarios. That is wrong
-- no matter how many pixels on your screen, they are all either black or
white, not both. And there are no other worlds in which all possibilities
occur.



> I am not sure what you mean by “the first person perspective does not give
> any expectations”.
>

Just that you cannot assign any 1p probabilities to particular outcomes in
duplication scenarios.


Do you agree that if you are promised, in Helsinki, that a cup of coffee
> will be offered to you, both in M and W, you can expect, with probability
> one, to get a cup of coffee after pushing the button in Helsinki? (Assuming
> Mechanism, of course).
>
> I would expect, in Helsinki, to drink a cup of coffee with probability
> one (using this protocol and all default hypotheses, like no asteroid
> hitting the planet in the meantime, etc.).
>


Since both copies are given coffee, that is a certain outcome for both.

And I would consider myself maximally ignorant as to whether that coffee will be
> Russian or American coffee.
>

Exactly, that is the point: you can't predict which brand of coffee you
will receive, even if you do know that you will be given coffee.

But bringing coffee in here adds nothing. It is yet another meaningless
distraction.

The experience is totally symmetrical in the 3p picture, but that symmetry
>> is broken from the 1p perspective of each copy. One will say “I feel to be
>> in W, and not in M” and the other will say “I feel to be in M and not in W”.
>>
>
> Regardless of any prior probability assignment.
>
>
> Exactly.
>

What? Do you actually agree that there are no meaningful probability
assignments in this case?

I cannot infer a probability from just one trial, but the probability I
>> infer from N repetitions can be any value in [0,1].
>>
>>
>> But we try to find the probability from the theory.
>>
>
> And we use the experimental data to test the theory. If you predict p(W) =
> p(M) =0.5, after a large number of duplications that prediction will be
> refuted by the majority of the copies. In fact, in the limit, only a set of
> measure zero will obtain p = 0.5 from their data.
>
>
> Then that is true for the iterated coin tossing too, and there are no
> probabilities at all.
>


There you go again. Confusing single outcome scenarios with the duplication
scenarios in which all outcomes occur. This is not 'honest dealing' in
argument, Bruno.

>> As I illustrated with the WMS triplication, unknown to the candidate, we
>> see that we cannot infer any probabilities, from experience.
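
A numerical gloss on the "measure zero" claim quoted above (my sketch; the 
closed form C(N, N/2)/2^N ≈ sqrt(2/(πN)) is the Stirling estimate):

    # Sketch: the fraction of branches whose observed W-frequency is
    # exactly 1/2 vanishes as N grows, even though the fraction within
    # any fixed interval around 1/2 tends to 1.
    import math

    def frac_exact(N):
        # log of C(N, N/2) / 2^N via lgamma, avoiding huge integers
        logc = math.lgamma(N + 1) - 2 * math.lgamma(N // 2 + 1)
        return math.exp(logc - N * math.log(2))

    for N in (100, 10_000, 1_000_000):
        print(N, frac_exact(N), math.sqrt(2 / (math.pi * N)))

So "only a set of measure zero obtains p = 0.5 exactly" is compatible with 
"almost all copies obtain a frequency arbitrarily close to 0.5", which is the 
distinction at issue in this exchange.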

Re: Postulate: Everything that CAN happen, MUST happen.

2020-02-26 Thread 'Brent Meeker' via Everything List




On 2/26/2020 3:12 AM, Bruno Marchal wrote:

To take the observation of some reality as a proof that such reality exists 
ontologically, is equivalent to Aristotle's Materialism.


But to take the non-observation of many worlds as evidence they exist is 
equivalent to Kellyanne Conway's alternative facts.


Brent


The point of Plato was precisely that what we observe might be only one aspect 
of a deeper and simpler reality.




Re: Postulate: Everything that CAN happen, MUST happen.

2020-02-26 Thread Alan Grayson


On Wednesday, February 26, 2020 at 4:35:54 AM UTC-7, Bruno Marchal wrote:
>
>
> On 25 Feb 2020, at 12:43, Bruce Kellett wrote:
>
> On Tue, Feb 25, 2020 at 10:26 PM Bruno Marchal wrote:
>
>> On 24 Feb 2020, at 23:22, Bruce Kellett wrote:
>>
>> On Tue, Feb 25, 2020 at 12:10 AM Bruno Marchal wrote:
>>
>>> On 23 Feb 2020, at 23:49, Bruce Kellett wrote:
>>>
>>> On Mon, Feb 24, 2020 at 12:21 AM Bruno Marchal wrote:
>>>
 On 23 Feb 2020, at 04:11, Bruce Kellett wrote:


 I don't really understand your comment. I was thinking of Bruno's 
 WM-duplication. You could impose the idea that each duplication at each 
 branch point on every branch is an independent Bernoulli trial with p = 
 0.5 
 on this (success being defined arbitrarily as W or M). Then, if these 
 probabilities carry over from trial to trial, you end up with every binary 
 sequence, each with weight 1/2^N. Summing sequences with the same number 
 of 
 0s and 1s, you get the Pascal Triangle distribution that Bruno wants.

 The trouble is that such a procedure is entirely arbitrary. The only 
 probability that one could objectively assign to say, W, on each Bernoulli 
 trial is one, 


 That is certainly wrong. If you are correct, then P(W) = 1 is written 
 in the personal diary,

>>>
>>> I did say "objectively assign". In other words, this was a 3p comment. 
>>> You confuse 1p with 3p yet again.
>>>
>>>
>>> Well, if you “objectively” assign P(W) = 1, the guy in M will 
>>> subjectively refute that prediction, and as the question was about the 
>>> subjectively accessible experience, he objectively, and predictably, refutes 
>>> your statement. 
>>>
>>
>>
>> And if you objectively assign p(W) = p(M) = 0.5, then the W-guy and 
>> the M-guy will both say that your theory is refuted, since they both see 
>> only one city: the W-guy, W with p = 1.0, and the M-guy, M with p = 1.0.
>>
>>
>> That is *very* weird. That works for the coin tossing experiment too, 
>> even for the lottery. I predicted that I had a 1/10^6 chance to win the lottery, 
>> but I was wrong; after the game was played I won, so the probability was 
>> one!
>>
>> In Helsinki, the guy writes P(W) = P(M) = 1/2. That means he does not yet 
>> know what outcome he will feel to live. Once the experiment is done, one 
>> copy will see W, and that is coherent with his prediction, same for the 
>> others. Had he written P(W) = 1, that would have been felt as 
>> refuted by the M-guy, and vice-versa.
>>
>
> But if he wrote p(W) = 0.9 and p(M) = 0.1 he would get exactly the same 
> result. The proposed probabilities are here without effect.
>
>
> If I toss a perfect coin too.
>
> Of course, that would lead directly to some problem with the iterated case 
> scenario.
>
>
>
>
>
>>> If not, tell me what is your prediction in Helsinki again, by keeping in 
>>> mind that it concerns your future subjective experience only. 
>>>
>>
>>
>> In Helsinki I can offer no value for the probability since, given the 
>> protocol, I know that all probabilities will be realized on repetitions of 
>> the duplication.
>>
>>
>> In the 3p picture. Indeed, that is, by definition, the protocol. But the 
>> question is not about where you will live after the experience (we know 
>> that it will be in both cities), but what do you expect to live from the 
>> first person perspective, and here P(W & M) is null, as nobody will ever 
>> *feel to live* in both cities at once with this protocol.
>>
>
> And, as I have repeatedly shown, the first person perspective does not give 
> you any expectations at all.
>
>
> If I am duplicated like in the 2^(16180 * 1) * (60 * 90) * 24 “movie” 
> scenario, I do expect to see white noise, and I certainly don’t expect to 
> see “2001, Space Odyssey” with Tibetan subtitles.
>
> I am not sure what you mean by “the first person perspective does not give 
> any expectations”.
>
> Do you agree that if you are promised, in Helsinki, that a cup of coffee 
> will be offered to you, both in M and W, you can expect, with probability 
> one, to get a cup of coffee after pushing the button in Helsinki? (Assuming 
> Mechanism, of course).
>
> I would expect, in Helsinki, to drink a cup of coffee with probability 
> one (using this protocol and all default hypotheses, like no asteroid 
> hitting the planet in the meantime, etc.).
>
> And I would consider myself maximally ignorant as to whether that coffee will be 
> Russian or American coffee.
>
>
>
>
>
> The experience is totally symmetrical in the 3p picture, but that symmetry 
>> is broken from the 1p perspective of each copy. One will say “I feel to be 
>> in W, and not in M” and the other will say “I feel to be in M and not in W”.
>>
>
> Regardless of any prior probability assignment.
>
>
> Exactly. 
>
>
>
>
>
>> I cannot infer a probability from just one trial, but the probability I 
>> infer from N repetitions can be any value in [0,1].
>>
>>
>> But we try to find the probability from the theory.
