--- "Eric B. Ramsay" <[EMAIL PROTECTED]> wrote:

> Matt: I understand your point #2 but it is a grand sweep without any detail.
> To give you an example of what I have in mind, let's consider the photon
> double slit experiment again. You have a photon emitter operating at very
> low intensity such that photons come out singly. There is an average rate
> for the photons emitted but the point in time for their emission is random -
> this then introduces the non-deterministic feature of nature. At this point,
> why doesn't the emitted photon just go through one or the other slit?
> Instead, what we find is that the photon goes through a specific slit if
> someone is watching but if no one is watching it somehow goes through both
> slits and performs a self interference leading to the interference pattern
> observed. Now my question: can it be demonstrated that this scenario of two
> alternate behaviour strategies minimizes computation resources (or whatever
> Occam's razor requires) and so is a necessary feature of a simulation? We
> already have a probability event at the very start when the photon was
> emitted, how does
> the other behaviour fit with the simulation scheme? Wouldn't it be
> computationally simpler to just follow the photon like a billiard ball
> instead of two variations in behaviour with observers thrown in?

It is the non-determinism of nature that is evidence that the universe is
simulated by a finite state machine.  There is no requirement of low
computational cost, because we don't know the computational limits of the
simulating machine.  However, there is a high probability that the simulating
program is algorithmically simple, as AIXI/Occam's Razor would suggest.
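
To put a rough number on "algorithmic simplicity": in the Solomonoff prior
that AIXI is built on, a hypothesis that can be coded as a program of L bits
on some universal Turing machine gets prior weight 2^-L, so shorter programs
dominate overwhelmingly.  A toy sketch in Python (the two "programs" and
their bit lengths are invented purely for illustration, not real encodings
of physics):

    # Occam/Solomonoff weighting: prior weight of a hypothesis falls off as
    # 2^-(program length in bits).  The names and lengths below are made up.
    hypotheses = {
        "simple-laws":   100,    # hypothetical 100-bit program
        "complex-laws": 1000,    # hypothetical 1000-bit program
    }

    weights = {name: 2.0 ** -bits for name, bits in hypotheses.items()}
    total = sum(weights.values())
    for name, w in weights.items():
        # the 100-bit hypothesis ends up with essentially all the prior mass
        print(name, w / total)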

If classical (Newtonian) mechanics were exactly correct, it would refute the
simulation theory, because its continuous, real-valued states would require
infinite precision, which is not computable on a Turing machine.
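
To illustrate the infinite-precision point (my own toy example, not a proof):
a chaotic classical system amplifies rounding error exponentially, so any
finite-precision representation soon drifts arbitrarily far from the exact
real-valued trajectory.  Compare single and double precision on the logistic
map:

    import numpy as np

    # Chaotic logistic map x -> r*x*(1-x) at r = 4.  Finite precision
    # (float32 vs float64) gives trajectories that soon disagree completely,
    # so no finite amount of state tracks the exact real-valued orbit.
    r = 4.0
    x32 = np.float32(0.1)
    x64 = 0.1
    for _ in range(60):
        x32 = np.float32(r) * x32 * (np.float32(1.0) - x32)
        x64 = r * x64 * (1.0 - x64)

    print("float32 after 60 steps:", float(x32))
    print("float64 after 60 steps:", x64)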

Quantum mechanics is deterministic.  It is our interpretation that is
probabilistic.  The wave equation for the universe has an exact solution, but
it is far beyond our ability to calculate.  The two-slit experiment and other
paradoxes such as Schrodinger's cat and EPR (
http://en.wikipedia.org/wiki/Einstein-Podolsky-Rosen_paradox ) arise from
using a simplified model that does not include the observer in the equations.
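
Here is a minimal numerical sketch of that point (idealized point slits,
far-field screen, all parameters invented): the amplitude reaching each
screen position is a deterministic sum of two complex terms, and the fringes
are already there in |psi|^2.  Randomness only enters when individual
detector clicks are sampled from that distribution.

    import numpy as np

    # Idealized two-slit pattern: deterministic superposition of the waves
    # from each slit.  Wavelength, slit separation and distances are made up.
    wavelength = 500e-9          # m
    slit_sep   = 20e-6           # m
    screen_d   = 1.0             # m
    x = np.linspace(-0.1, 0.1, 2001)       # screen positions, m

    r1 = np.sqrt(screen_d**2 + (x - slit_sep / 2) ** 2)   # path from slit 1
    r2 = np.sqrt(screen_d**2 + (x + slit_sep / 2) ** 2)   # path from slit 2
    k = 2 * np.pi / wavelength
    psi = np.exp(1j * k * r1) + np.exp(1j * k * r2)       # deterministic sum
    intensity = np.abs(psi) ** 2                          # interference fringes

    # The only probabilistic step: sampling where individual photons land.
    p = intensity / intensity.sum()
    print(np.random.choice(x, size=10, p=p))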

Your argument that computational costs might restrict the possible laws of
physics is also made in Whitworth's paper (
http://arxiv.org/ftp/arxiv/papers/0801/0801.0337.pdf ), but I think he is
stretching.  For example, he argues (table on p. 15) that the speed of light
limit is evidence that the universe is simulated because it reduces the cost
of computation.  Yes, the limit matters, but for a different reason: it helps
make the information content finite.  The universe has a finite age T.  The
speed of light c limits its size, G limits its mass, and Planck's constant h
limits its resolution.  If any of these physical constants did not exist,
then the universe would have infinite information content and would not be
computable.  From T, c, G, and h you can derive its entropy (about 10^122
bits), and thus the size of a bit, which happens to be about the size of the
smallest stable particle.
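
For the curious, here is one standard back-of-the-envelope way to get that
number (a rough Bekenstein/holographic-style bound; the constants are round
values and the prefactors are not meant to be exact):

    import math

    # Horizon area over 4 Planck areas, converted to bits: roughly the
    # information content of the observable universe, on the order of 10^122.
    c    = 2.998e8       # m/s
    G    = 6.674e-11     # m^3 kg^-1 s^-2
    hbar = 1.055e-34     # J s
    T    = 4.35e17       # s, about 13.8 billion years

    R = c * T                          # crude horizon radius
    area = 4 * math.pi * R**2
    planck_area = hbar * G / c**3
    bits = area / (4 * planck_area) / math.log(2)
    print(f"~{bits:.1e} bits")         # about 3e122, i.e. ~10^122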

We cannot use the cost of computation as an argument because we know nothing
about the physics of the simulating universe.  For example, the best known
algorithms for computing the quantum wave equation on a conventional computer
take time exponential in the number of degrees of freedom, e.g. on the order
of 2^(10^122) operations for the whole universe.  However, you could imagine
a "quantum Turing machine" that operates on a superposition of tapes and
states (possibly restricted to time-reversible operations).  For such a
machine the same computation could be trivial, depending on your choice of
mathematical model.
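
To see where the exponential comes from: a brute-force classical simulation
has to store one complex amplitude per basis state, and the number of basis
states doubles with every two-state degree of freedom.  A trivial sketch:

    # Memory for a full state vector of n two-state systems on a classical
    # machine: 2^n complex amplitudes at 16 bytes each (double precision).
    for n in (10, 50, 100, 300):
        amplitudes = 2.0 ** n
        print(f"{n:4d} qubits -> {amplitudes:.3e} amplitudes, "
              f"~{16 * amplitudes:.3e} bytes")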



-- Matt Mahoney, [EMAIL PROTECTED]
