On 5/8/2022 5:39 PM, Bruce Kellett wrote:
On Mon, May 9, 2022 at 10:32 AM Brent Meeker <meekerbr...@gmail.com>
wrote:
On 5/8/2022 5:25 PM, Bruce Kellett wrote:
On Mon, May 9, 2022 at 10:17 AM Brent Meeker
<meekerbr...@gmail.com> wrote:
On 5/8/2022 3:42 PM, Bruce Kellett wrote:
On Mon, May 9, 2022 at 6:37 AM smitra <smi...@zonnet.nl> wrote:
On 08-05-2022 05:58, Bruce Kellett wrote:
> It is when you take the SE to imply that all possible outcomes exist
> on each trial. That gives all outcomes equal status.

All outcomes can exist without these being equally likely. One can make
models based on more branches for certain outcomes, but these are just
models that may not be correct.
Such models are certainly inconsistent with the SE. So if
your concern is that the SE does not contain provision for a
collapse, then you should doubt other theories that violate
the SE. You can't have it both ways: you can't reject
collapse models because they violate the SE and then embrace
other models that also violate the SE. Either the SE is
universally correct, or it is not.
What matters is that such models can be formulated in a mathematically
consistent way, which demonstrates that there is no contradiction. The
physical plausibility of such models is another issue.
This has been discussed. To allow for real-number probabilities, the
number of branches on each split must be infinite.
I don't think that's a problem. The number of information bits within a
Hubble sphere is something like the area in Planck units, which already
implies that the continuum is just a convenient approximation. If the
area is N, then something of order 1/N would be the smallest non-zero
probability. There would also be a cutoff for the off-diagonal terms of
the density matrix. Once all the off-diagonal terms are zero, it's like
a mixed-state density matrix, and one could say that one of the
diagonal terms has "happened".
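To make the off-diagonal cutoff concrete, here is a minimal numpy
sketch (the two-outcome state, the 0.9/0.1 weights, and the cutoff
value are arbitrary choices for illustration, not numbers taken from
the physics above):

    import numpy as np

    # A pure superposition a|0> + b|1> with unequal weights.
    a, b = np.sqrt(0.9), np.sqrt(0.1)
    psi = np.array([a, b])
    rho = np.outer(psi, psi.conj())   # pure-state density matrix

    # Model decoherence as damping of the off-diagonal (interference)
    # terms, with a cutoff below which they are set to exactly zero.
    cutoff = 1e-6

    def decohere(rho, damping):
        off = rho[0, 1] * damping
        off = 0.0 if abs(off) < cutoff else off
        out = rho.astype(complex)
        out[0, 1] = off
        out[1, 0] = np.conj(off)
        return out

    print(decohere(rho, damping=1e-2))  # coherences reduced, diagonal unchanged
    print(decohere(rho, damping=1e-7))  # below the cutoff: an effectively mixed matrix
    print(np.diag(rho))                 # the diagonal weights 0.9 and 0.1 survive

Once the off-diagonal terms are exactly zero, the remaining diagonal
entries can be read as ordinary probabilities for the two outcomes.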
As I have pointed out before, a finite number of branches does not
work, because after a certain finite number of splits one would run out
of branches to partition in any way that reflects the required
probabilities. One cannot go adding more branches at that stage without
rendering the whole concept meaningless. Keeping things finite has its
attractions, but it does not work in this case.
I think it depends on how you count splits. If the number of degrees of
freedom within a Hubble volume is finite, then the number of splits
doesn't keep growing exponentially: branches get cut off when their
probability becomes too small.
You are back to your notion of a smallest possible probability. That
also runs into problems if you run a long sequence of events in which
one outcome has a very small probability on each trial. Try tossing a
coin N times. The probability of a sequence of N heads is 1/2^N. What
happens when this gets smaller than the smallest allowed probability?
Is the next toss somehow forbidden to give heads again? You are making
the whole notion of probability problematic.
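The arithmetic is simple (a back-of-the-envelope sketch; the floor
values plugged in below are hypothetical, not figures anyone in this
thread has committed to): with a fair coin, a run of n heads has
probability 2^-n, so a probability floor p_min is first crossed after
about log2(1/p_min) tosses.

    import math

    # Number of fair-coin tosses after which a run of all heads first
    # has probability below a hypothetical floor p_min.
    def tosses_to_hit_floor(p_min):
        return math.ceil(math.log2(1.0 / p_min))

    for p_min in (1e-96, 1e-122):   # illustrative floor values only
        print(p_min, tosses_to_hit_floor(p_min))
    # 1e-96  -> 319 tosses
    # 1e-122 -> 406 tosses

Whether that crossing point is a few hundred tosses or an
astronomically large number depends entirely on what the smallest
allowed probability is taken to be.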
Yes, I can see a concern. But my back-of-the-envelope estimate is that
the Hubble volume has an information content of ~10^96 bits, so it
would be very hard experimentally to flip enough coins to test that
limit. However, it would imply that you couldn't create a pseudo-random
number generator that produces random numbers with that many bits. That
raises the question of how you would tell: the output of a good
pseudo-random number generator looks random until you test high-order
correlations.
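As an illustration of that last point, consider the old RANDU
congruential generator (my choice of example, not something discussed
above): each output looks roughly uniform on its own, yet every three
consecutive outputs satisfy an exact linear relation -- a high-order
correlation that tests on individual values never see.

    # RANDU: x_{k+1} = 65539 * x_k mod 2^31.  Because 65539 = 2^16 + 3,
    # consecutive triples obey x_{k+2} = 6*x_{k+1} - 9*x_k (mod 2^31),
    # so plotted in 3D they fall on just a handful of planes.
    def randu(seed, n):
        x, out = seed, []
        for _ in range(n):
            x = (65539 * x) % 2**31
            out.append(x)
        return out

    xs = randu(1, 10000)
    print(all((xs[k+2] - 6*xs[k+1] + 9*xs[k]) % 2**31 == 0
              for k in range(len(xs) - 2)))   # True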
Brent