Welcome!
I haven't been active on this list lately. Your article looks very
interesting; I'll read it in detail.
Saibal
Quoting Travis Garrett <travis.garr...@gmail.com>:
Hi everybody,
My name is Travis - I'm currently working as a postdoc at the
Perimeter Institute. I got an email from Richard Gordon and Evgenii
Rudnyi pointing out that my recent paper: http://arxiv.org/abs/1101.2198
is being discussed here, so yeah, I'm happy to join the conversation.
I'll respond to some specific points in the discussion thread, but
what the heck, I'll give an overview of my idea here...
The idea flows from the assumption that one can do an arbitrarily
good simulation of arbitrarily large regions of the universe inside a
sufficiently powerful computer -- more formally I assume the physical
version of the Church-Turing Thesis. Everything that exists can then
be viewed as different types of information. The Observer Class
Hypothesis then proposes that observers collectively form by far the
largest set of information, due to the combinatorics that arise from
absorbing information from many different sources (the observers
thereby roughly resemble the power set of the set of all
information). One thus exists as an observer because that is by far the
most probable form of existence.
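To give a feeling for the combinatorics being appealed to here, a quick toy sketch in Python (purely illustrative, not anything from the paper): if there are N independent pieces of information around, the number of distinct subsets an observer could have absorbed grows like the power set, 2^N, which rapidly dwarfs the N pieces themselves.

# Toy illustration: the count of possible "absorption histories" over N
# independent pieces of information grows as the power set, 2^N.
for n in (10, 20, 40, 80):
    print(f"{n} pieces of information -> {2**n:.3e} possible subsets an observer could hold")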
A couple of caveats are of crucial importance: when I say information,
I mean non-trivial, gauge-invariant, "real" information, i.e.
information that has a large amount of effective complexity (Gell-Mann
and Lloyd) or logical depth (Bennett). I focus on "gauge-invariant"
because I can then borrow the Faddeev-Popov procedure from quantum
field theory: in essence, one does not count over redundant
descriptions. I also borrow the idea of regularization from quantum
field theory: when considering systems where infinities occur, it can
be useful to introduce a finite cutoff, and then study the limiting
behavior as the cutoff goes to infinity. For instance, regulating the
integers shows that the density of primes goes like 1/log(N) - without
the cutoff one can only say that there are a countable number of
primes and composites. These ideas are well known in theoretical
physics, but perhaps not outside, and I am also using them in a new
setting...
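To make the prime-density example concrete, here is a quick Python sketch (just an illustration of the regularization idea, not a calculation from the paper): count the primes up to a finite cutoff N with a sieve and compare the density pi(N)/N to 1/log(N) as the cutoff grows.

import math

def prime_density(N):
    # Naive sieve of Eratosthenes: mark the composites up to the cutoff N.
    sieve = [True] * (N + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(N**0.5) + 1):
        if sieve[i]:
            for j in range(i * i, N + 1, i):
                sieve[j] = False
    return sum(sieve) / N

for N in (10**3, 10**5, 10**7):
    print(f"N={N:>8}  pi(N)/N={prime_density(N):.5f}  1/log(N)={1/math.log(N):.5f}")

The agreement improves slowly, but the finite cutoff at least turns "countably many primes" into a definite limiting density.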
Let me give a simple example of the use of gauge invariance from the
paper - consider the mathematical factoid: {3 is a prime number}.
This can be re-expressed in an infinite number of different ways: {2+1
is a prime number}, {27^(1/3) is not composite}, etc, etc... Thus, at
first it seems that just this simple factoid will be counted an
infinite number of times! But no: following Faddeev and Popov, we pick
one particular representation (it's fine to use, say, {27^(1/3) is not
composite}, but later we will want to use the most compact
representations when we regularize) and count this small piece of
information just once, which removes all of the redundant descriptions.
To reiterate, we only count over the gauge-invariant information.
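As a toy picture of the gauge fixing (again, an illustrative Python sketch with a made-up encoding, not the construction in the paper): treat the different surface representations as members of one equivalence class, reduce each to a canonical form, and count the canonical form exactly once.

# Toy gauge fixing: several surface representations of the same factoid
# reduce to a single canonical form, which is counted exactly once.
def canonical(expr, predicate):
    # Evaluate the arithmetic part and round away floating-point fuzz,
    # then pair it with the (hand-normalized) predicate.
    return (round(eval(expr), 9), predicate)

representations = [
    ("3",         "is prime"),
    ("2 + 1",     "is prime"),
    ("27**(1/3)", "is prime"),   # "is not composite" normalized to "is prime" by hand
]

gauge_fixed = {canonical(expr, pred) for expr, pred in representations}
print(len(gauge_fixed), "gauge-invariant piece(s) of information:", gauge_fixed)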
Consider a more complex example, say the Einstein equations: G_ab =
T_ab. Like "3 is a prime number", they can be expressed in an
infinite number of different ways, but let's pick the most compact
binary representation x_EE (an undecidable problem in general, but say
we get lucky). Say the most compact encoding takes one million bits.
Basic Kolmogorov complexity would then say that x_EE contains the
same amount of information as a random sequence r_i one million bits
long - neither is compressible. But x_EE contains a large amount of
nontrivial, gauge invariant information that would have to be
preserved in alternative representations, while the random sequence
has no internal patterns that must be preserved in different
representations: x_EE has a large amount of effective complexity, and
r_i has none. Focusing on the gauge-invariant structures thus not
only removes the redundant descriptions, but also removes all of the
random noise, leaving only the "real" information behind. For
instance, I posit that the uncomputable reals are nothing more than
infinitely long random sequences, which also get removed (along with
the finite random sequences) by the selection of a gauge.
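One can get a rough feel for this using off-the-shelf compression as a crude stand-in for Kolmogorov complexity (a Python sketch, obviously not the real thing): a highly structured string compresses well, but its compact encoding and a random string of the same length are both essentially incompressible. Incompressibility alone therefore cannot tell the x_EE-like case from the r_i-like one; the difference is that the former still decodes into something with internal structure.

import os, zlib

structured = b"G_ab = T_ab  " * 1000           # text with obvious internal pattern
x_like     = zlib.compress(structured, 9)      # its compact encoding, a stand-in for x_EE
random_seq = os.urandom(len(x_like))           # random noise of the same length, a stand-in for r_i

for name, data in [("structured text", structured),
                   ("compact encoding (x_EE-like)", x_like),
                   ("random sequence (r_i-like)", random_seq)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name:30s} length={len(data):6d}  further compression ratio={ratio:.2f}")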
In some computational representation, the real information structures
will thus form a sparse subset among all binary strings. In the paper
I consider 3 cases - 1) there are a finite number of finitely complex
real information structures (which could be viewed as the null
assumption), 2) there are an infinite number of finitely complex
structures, and 3) there are irreducibly infinitely complex
information structures. I focus on 1) and 2), with the assumption
that 3) isn't meaningful (i.e. that hypercomputers do not exist).
Even case 2) is extremely large, and it leads to the prediction of
universal observers: observers that continuously evolve in time, so
that they can eventually process arbitrarily complex forms of
information. The striking fact that a technological singularity may
only be a few decades away lends support to this extravagant idea...
Well anyways, that's probably enough for now. I am interested in
seeing what people think of the idea :-), and I am going through
previous threads to see what other sorts of things are being
discussed.
Sincerely,
Travis
--
You received this message because you are subscribed to the Google
Groups "Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at
http://groups.google.com/group/everything-list?hl=en.