Georges Quenot writes:
I do not believe in either case that a simulation with this level
of detail can be conducted on any computer that can be built in
our universe (I mean a computer able to simulate a universe
containing a smaller computer doing the calculation you considered
with a level of accuracy sufficient to ensure that the simulation
of the behavior of the smaller computer would be meaningful).
This is only a theoretical speculation.
    
Hal Finney responded:
What about the idea of simulating a universe with simpler laws using such
a technique?  For example, consider a 2-D or 1-D cellular automaton (CA)
system like Conway's "Life" or the various systems considered by Wolfram.

One of the issues is the computational complexity of "running all the possible (i.e. definable) programs" to
create an informational multiverse out of which consistent, metric, regular, observable info-universes
emerge. If computation takes energy (as it undeniably does WITHIN our universe), then an unfathomably,
impossibly large amount of "extra-universal" energy would be required to compute all info-universes.
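
As a very rough illustration of that energy point (my own back-of-the-envelope sketch, not from the original
posts, assuming the Landauer bound of kT*ln 2 joules per irreversible bit operation at about room temperature):

    import math

    # Rough Landauer-limit energy cost per irreversible bit operation at ~room temperature.
    k_B = 1.380649e-23                      # Boltzmann constant, joules per kelvin
    T = 300.0                               # assumed temperature in kelvin
    joules_per_bit_op = k_B * T * math.log(2)
    print(joules_per_bit_op)                # ~2.9e-21 J per bit operation

    # Even a "mere" 10^30 bit operations would need ~3e9 J at this limit, and the step
    # counts discussed below (2^(10^15), 2^(10^148), ...) dwarf 10^30 beyond comparison.
    print(joules_per_bit_op * 1e30)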

(def'n qubitstring = a bitstring capable of simultaneously holding all of its possible values i.e. all possible
combinations of 1s and 0s in a bitstring of that length)

For example, say that we have a relatively tiny info-multiverse consisting of a qubitstring
that is only 1 billion (10 ^ 9) bits long. That is, it simultaneously exhibits (if queried appropriately) any of
(2 raised to the power 1 billion) different information-states.
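
To get a feel for how large that state space is, here's a small sketch (my own illustration) that counts the
decimal digits of 2^(10^9) via logarithms, since the number itself is far too big to write out:

    import math

    N = 10**9                               # qubitstring length in bits
    # The qubitstring spans 2**N classical information-states; count the decimal digits
    # of that count instead of trying to materialize it.
    digits = math.floor(N * math.log10(2)) + 1
    print(digits)                           # ~301,029,996 digits in the state count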

Now let's imagine computation upon this qubitstring multiverse, in which a god-computer "tours" some
set of info-states of the qubitstring in some "time-order", in order to simulate the operation of some universe
within the qubitstring multiverse.

Let's further imagine that the god-computer doesn't like to discriminate amongst its potential universes within
the multiverse qubitstring, so it wants to try (just the next computation step, for now,  in) all possible computations
upon the multiverse qubitstring.

How many ways of choosing the next computational step are there? Again, there are 2 to the power 1 billion ways
at each step.
So suppose the god-computer wants to simulate only 1 million discrete computing steps
(defined as different-info-state-selecting steps) of each universe simulation, but to do this for all possible
"potential universes", i.e. state-change traces, in the billion-qubit multiverse. Then the number of ways of
doing this (and we're saying all of these ways are going to get done, because the god-comp is non-favoring
of one potential universe over another), i.e. the number of ways of simulating a million comp-steps in each
universe, is (2 ^ (10 ^ 9)) ^ (10 ^ 6) = 2 to the power (1 million billion)   (express as 2 ^ 1,000,000,000,000,000 = 2 ^ (10 ^ 15)).

This "number of possible computing-step-sequences to compute all of these million-step-old universes
is the same as the number of computing-steps NECESSARY to compute all these universes.
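
The exponent arithmetic behind that figure, as a quick sketch (illustrative only): 2^(10^9) choices at each of
10^6 steps compounds to (2^(10^9))^(10^6) = 2^(10^9 * 10^6):

    # Toy check of the exponent arithmetic for the billion-qubit example.
    bits_per_state = 10**9                  # so there are 2**bits_per_state choices per step
    steps = 10**6                           # discrete computing steps per simulated universe

    # (2**b)**s == 2**(b*s), so the exponent for the total number of step-sequences is:
    print(bits_per_state * steps)           # 1,000,000,000,000,000 = 10^15, i.e. 2^(10^15) sequences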
---------
Now let's make the numbers more realistic. Our current universe has the following statistics (roughly!):
# of protons = 10 ^ 78
# of material particles and photons = 10 ^ 88 (give or take)

Entropy H = 10 ^ 88 bits    = "the log of the number of possible velocities and positions of the material particles and photons"

Bitstring length needed to represent a single "instantaneous state" of our universe
IS THE SAME AS THE ENTROPY, so the bitstring needed to represent a "shortest distinguishable moment" of
our universe is 10 ^ 88 bits long. So the qubitstring needed to "simultaneously" represent all possible moment-states
of our universe is also 10 ^ 88 bits long.

So, how many "shortest-distinguishable-moments" have there been in our universe since the big bang anyway?
Well, the shortest distinguishable moment of time is the Planck time unit = 10 ^ -43 seconds (roughly).

And there have been roughly 3 x 10 ^ 60 Planck time units in our own universe's lifetime so far.
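
A back-of-envelope check on that count (my own sketch, using the rounded Planck time of 10^-43 s above and an
age of about 13.8 billion years; it lands within the stated fudge factors of the 3 x 10^60 figure):

    # Rough count of Planck-time "moments" since the big bang.
    seconds_per_year = 3.15e7
    age_seconds = 13.8e9 * seconds_per_year     # ~4.3e17 seconds
    planck_time = 1e-43                         # rounded; more precisely ~5.4e-44 s

    print(age_seconds / planck_time)            # ~4e60 moments, same order as 3 x 10^60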

So, putting it all back together in the qubitstring-computational framework,

The number of possible ways of choosing computing steps, to compute all possible info-universes up to
evolution stages of the same age and particle+photon population and entropy as ours, is (+- fudge factors):
(Drum-roll please.....)  42 (just kidding, it is.....)

2 to the power (Entropy  *  the number of distinguishable time moments) =

2 to the power (bitstring-length  *  the number of distinguishable info states in each universe's sim. computation i.e. history) =

2 to the power ( (10 ^ 88) * (3 * 10 ^ 60) )  =  2 to the power (3 * 10 ^ 148)  =

======================================================================
2 to the power (10 ^ 148)          (approximately, dropping the factor of 3 in the exponent)

Which is the number of computing steps that must be done to compute the simulations of
the histories of all possible universes of comparable age and particle population and entropy to our own.
======================================================================
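
The same multiplication in sketch form, just to make the bookkeeping explicit (my own check of the arithmetic above):

    # Exponent for the "compute all comparable histories" count:
    # entropy (bits per instantaneous state) times distinguishable moments per history.
    entropy_bits = 10**88
    planck_moments = 3 * 10**60

    exponent = entropy_bits * planck_moments
    print(exponent)                             # 3 * 10^148, so the count is 2^(3 * 10^148),
                                                # i.e. roughly 2^(10^148)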
 
-------------------------

So probably, the "extra-universal" notion of "computing all the universe simulations" is not traditional computation
at all. I prefer to think of the state of affairs as being that the multiverse substrate is just a kind of
very large, passive qubitstring memory, capable of holding the qubits, that is, of exhibiting, if appropriately
queried, the different bit values in the information states (all 2 ^ (10 ^ 148) information-states across the histories of
universes as big and old as ours, that is.)

So then the question becomes, ok, how do we get to ACTUAL computation happening in all this mess,
and dynamics happening, and time etc "happening". If it's all just a really, really big static RAM, then nothing's
HAPPENING!! And that doesn't seem right.

However..... (raw speculation here)....

SPECULATION A: The god-computer is lazy. It actually just constructs (or initially constructs) selected universes in
the qubitstring multiverse, using the principle of there being just 1 bit initially (entropy of 0) and then
expanding to 2 bits, but only creating (or first creating) those info-states which are minimally different from
each other (i.e. are 1 bit in 1 position different from each other). Then creating an info-state with 3 bits, but again,
only evolving its state 1 local bit-change at a time, and so on and so on. So I guess this means something at least
akin to "do the shortest (or least entropy-changing) bit-flipping programs first, before any other info-states are realized."
That could perhaps define some kind of thermodynamic time arrow, I suppose. And it would define a limited set of
(perhaps constrained to be sensible-form) universes initially emerging from the computation, as well.

In other words, maybe entropy is increasing (due to longer and longer bitstrings being created as "god-computation/mem generation"
proceeds), but it is increasing AS SLOWLY AS POSSIBLE, because only the least-entropy-changing next states of the bitstring
are selected by the lazy god-comp at each step of "computed existence".
Maybe that's the constraint that allows order to temporarily beat back entropy in local areas of our universe?
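
Speculation A is admittedly vague, but here is one toy way to picture the "laziest next step" rule (entirely my own
illustrative sketch; the "entropy" used here is just the Shannon entropy of the 0/1 frequencies in the string,
a stand-in for whatever the real measure would be):

    import math

    def bit_entropy(bits):
        # Shannon entropy (in bits) of the 0/1 frequencies in the bitstring.
        p = sum(bits) / len(bits)
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    def laziest_next_state(bits):
        # Among all single-bit flips, return the successor whose entropy change is smallest.
        best = None
        for i in range(len(bits)):
            nxt = bits[:]
            nxt[i] ^= 1
            delta = abs(bit_entropy(nxt) - bit_entropy(bits))
            if best is None or delta < best[0]:
                best = (delta, nxt)
        return best[1]

    # Start from a 1-bit "universe" of zero entropy, then alternately "expand" it by one
    # bit of description and take the least-entropy-changing single bit-flip.
    state = [0]
    for step in range(5):
        state = state + [0]
        state = laziest_next_state(state)
        print(step, state, round(bit_entropy(state), 3))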

AND/OR

SPECULATION B:

What if.... Observers are things that are kind of like "lazy computers" with access (from within it??) to
 the multiverse qubitstring RAM.    So observers all observe the same universe, and the same time arrow, because
they are constrained to do the "least-entropy-changing" i.e. laziest-computational tour through whatever locale
of info-state-space in the multiverse they inhabit. In this model, there is only one realizable universe, and some
kind of entropy-change-minimization computing direction constraint is what determines it.

I'm basically hallucinating this at this stage, so I'll stop.

Eric




