On 10/21/2011 8:14 AM, Bruno Marchal wrote:

On 19 Oct 2011, at 05:30, Russell Standish wrote:

On Mon, Oct 17, 2011 at 07:03:38PM +0200, Bruno Marchal wrote:
This, ISTM, is a completely different, and more wonderful, beast than
the UD described in your Brussels thesis, or Schmidhuber's '97
paper. This latter beast must truly give rise to a continuum of
histories, due to the random oracles you were talking about.


All UDs do that. It is always the same beast.


On reflection, yes, you're correct. The new algorithm you proposed is
more efficient than the previous one described in your thesis, as
machines are only executed once for each prefix, rather than over and over
again for each input having the same prefix. But in an environment of
unbounded resources, such as we're considering here, that has no import.

Note that my programs are not prefixed. They are all generated and executed. Prefixing them is useful when they are generated by a random coin, which I do not need to do.
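
Just to check that I follow the "generate and execute everything" picture: below is a minimal dovetailer sketch of my own (a toy illustration, not Bruno's actual UD nor Russell's variant). Every "program" is generated in turn, and on each pass every program generated so far is advanced by one more step, so no prefix coding is involved and non-halting programs cause no trouble:

    # A toy dovetailer sketch (my own, hypothetical illustration; not the UD
    # itself).  "Programs" here are just Python generators indexed by n; the
    # point is only the control structure: at stage n, generate program n,
    # then advance programs 0..n by one step each, so every program receives
    # unboundedly many steps without any prefix coding.

    def program(n):
        """Stand-in for the n-th program: counts upward from n forever."""
        k = n
        while True:
            yield (n, k)
            k += 1

    def dovetail(stages):
        """Run `stages` rounds of the dovetailing schedule."""
        running = []                    # programs generated so far
        trace = []
        for n in range(stages):
            running.append(program(n))  # generate the n-th program
            for proc in running:        # advance every program one step
                trace.append(next(proc))
        return trace

    if __name__ == '__main__':
        for step in dovetail(4):
            print(step)

With real programs, of course, the interesting objects are the computational states visited along the way rather than any final result, which is exactly why running them sequentially would not work and dovetailing is needed.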



So the histories, we're agreed, are uncountable in number, but OMs
(bundles of histories compatible with the "here and now") are surely
still countable.

This is not obvious to me. For any two computational states which follow one another when emulated by some universal machine, there are infinitely many UMs, including ones dovetailing on the reals, leading to intermediate states. So I think that the "computational neighborhoods" are a priori uncountable. That fits with the topological semantics of the first person logics (S4Grz, S4Grz1, X, X*, X1, X1*). But many math problems are unsolved there.


Hi Bruno and Russell,

I would like to better understand what "topological semantics" means. Are you considering relations defined only in a set-theoretical sense, à la the closed, open, or clopen nature of the sets relative to each other? What about the form of the axiom of choice for the set theory? How do you induce compactness? How is a "space" defined in strictly arithmetic terms?
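
For context, the only topological semantics I know for S4-type logics is the classical McKinsey-Tarski one, which I sketch here purely as my own reading; whether it is the intended semantics for S4Grz1, X1*, and the rest is exactly what I am asking:

    % Standard topological semantics for S4 (my gloss; its applicability to
    % S4Grz1 etc. is an assumption on my part).  A model is a topological
    % space (X, \tau) together with a valuation V : Prop -> P(X).
    \[
    \llbracket p \rrbracket = V(p), \qquad
    \llbracket A \wedge B \rrbracket = \llbracket A \rrbracket \cap \llbracket B \rrbracket, \qquad
    \llbracket \neg A \rrbracket = X \setminus \llbracket A \rrbracket,
    \]
    \[
    \llbracket \Box A \rrbracket = \mathrm{Int}\,\llbracket A \rrbracket
    \quad (\text{interior}), \qquad
    \llbracket \Diamond A \rrbracket = \mathrm{Cl}\,\llbracket A \rrbracket
    \quad (\text{closure}).
    \]

On that reading, my questions above amount to asking which space X, and which arithmetically definable topology on it, is supposed to carry the first person logics.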



If we take the no information ensemble,

You might recall what you mean by this exactly.



and transform it by applying a
universal Turing machine and collect just the countable output strings
where the machine halts, then apply another observer function that
also happens to be a UTM, the final result will still be a
Solomonoff-Levin distribution over the OMs.

This is a bit unclear to me. Solomonoff-Levin distributions are very nice; they are machine/theory independent, and that is quite in the spirit of comp, but they seem to be usable only in ASSA-type approaches. I do not exclude that this can help in providing a role to little programs, but I don't see at all how it could help with the computation of the first person indeterminacy, aka the derivation of physics from computer science needed when we assume comp in cognitive science. In the work using Solomonoff-Levin, the mind-body problem is still swept under the rug. They don't seem aware of the first/third person distinction.

    S-L seems to assume 1p = 3p or no 1p at all!
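
To make sure we are pointing at the same object: as I understand it, the Solomonoff-Levin measure of an output x is m(x) = sum of 2^(-|p|) over the halting programs p of a universal prefix machine U with U(p) = x. Below is a small sketch using a toy (decidedly non-universal) machine of my own invention, only to fix the bookkeeping; note that the weight of an output comes entirely from the lengths of the 3p program descriptions, which is why I suspect the 1p/3p distinction is simply absent from it:

    # Sketch of a Solomonoff-Levin style measure for a TOY machine (my own,
    # hypothetical, and not universal): m(x) = sum over halting programs p
    # with toy_machine(p) = x of 2^(-|p|).  With a genuine universal prefix
    # machine this would be the universal prior; here it only shows the
    # bookkeeping.
    from itertools import product

    def toy_machine(program):
        """Toy prefix machine: '0' appends 'a', '10' appends 'b', '11' halts.
        The output counts only if the machine halts exactly at the last bit
        (so the accepted programs form a prefix-free set); otherwise None."""
        out, i = [], 0
        while i < len(program):
            if program[i] == '0':
                out.append('a'); i += 1
            elif program[i:i+2] == '10':
                out.append('b'); i += 2
            elif program[i:i+2] == '11':
                return ''.join(out) if i + 2 == len(program) else None
            else:
                return None          # dangling '1' at the end
        return None                  # ran out of bits without halting

    def sl_measure(max_len=14):
        """Approximate m(x) by enumerating all programs up to max_len bits."""
        m = {}
        for n in range(1, max_len + 1):
            for bits in product('01', repeat=n):
                x = toy_machine(''.join(bits))
                if x is not None:
                    m[x] = m.get(x, 0.0) + 2.0 ** (-n)
        return m

    if __name__ == '__main__':
        for x, w in sorted(sl_measure().items(), key=lambda kv: -kv[1])[:5]:
            print(repr(x), round(w, 5))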



This result follows from
the compiler theorem - composition of a UTM with another one is still
a UTM.

So even if there is a rich structure to the OMs caused by them being
generated in a UD, that structure will be lost in the process of
observation. The net effect is that UD* is just as much a "veil" on
the ultimate ontology as is the no information ensemble.

UD*, or sigma_1 arithmetic, can be seen as an effective (mechanically defined) definition of zero information. It is the everything for the computational approach, but it is tiny compared to the first person view of it by internal observers, accounted for in the limit by the UD.

    How do we define this notion of size? Tiny as opposed to ???




Unless I'm missing something here.




Let's leave the discussion of the universal prior to another post. In a
nutshell, though, no matter what prior distribution you put on the "no
information" ensemble, an observer of that ensemble will always see
the Solomonoff-Levin distribution, or universal prior.
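
If I have the statement right, what Russell is leaning on here is the usual invariance/dominance property of the universal prior, which I restate from memory (so take it as my gloss, not his claim):

    % Invariance of the universal prior (standard result, my paraphrase).
    % For any two universal prefix machines U and V there is a constant
    % c_{UV} > 0, independent of x, such that
    \[
    c_{UV}^{-1}\, m_V(x) \;\le\; m_U(x) \;\le\; c_{UV}\, m_V(x),
    \qquad\text{where}\qquad
    m_U(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}.
    \]
    % Moreover m_U multiplicatively dominates every lower semicomputable
    % semimeasure.

Whether that "up to a multiplicative constant" sense of machine independence is strong enough to ground the claim that an observer of any prior always sees the S-L distribution, and whether it touches the 1p measure problem at all, is what I would like to see spelled out.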

I don't think it makes sense to use a universal prior. That would
make sense if we suppose there are computable universes, and if we
try to measure the probability that we are in such a structure. This is
typical of Schmidhuber's approach, which is still quite similar to
physicalism, where we conceive observers as belonging to computable
universes. Put another way, this is typical of using some sort of
identity thesis between a mind and a program.

I understand your point, but the concept of a universal prior is of far
more general applicability than Schmidhuber's model. There need not be
any identity thesis invoked, as for example in applications such as
observers of Rorschach diagrams.

And as for the identity thesis, you do have a type of identity thesis in
the statement that "brains make interaction with other observers
relatively more likely" (or something like that).


Yes, by the duplication (multiplication) of populations of observers, like in comp, but also like in Everett.

But we also need something that acts to encode the "no preferred reference frame" of GR. Everett does not solve this problem; it only compounds it. :-(





There has to be some form of identity thesis between brain and mind
that prevents the Occam catastrophe, and also prevents the full retreat
into solipsism. I think it is very much an open problem what that is.

This will depend on the degree of similarity between quantum mechanics and the comp physics, which is given entirely by the (quantified) material hypostases (mainly the Z1* and X1* logics). An open, but mathematically well-circumscribed, problem.



From what I have studied so far, the (static) relationship between QM and COMP physics is the relationship between Boolean algebras and orthocomplete lattices. I think that a partial solution might require a weakening of the definition of a countable model of arithmetic and an application of Tennenbaum's theorem (that would allow for a variational principle, something like q-deformed theories where q is a measure of the relative recursiveness of the model of the theory). The idea is that every 1p would observe itself, in the Löb sense, to be recursive. The proof would require showing that a Löbian machine on a non-standard model of arithmetic would *not* be able to "see" its non-standardness, and thus would bet that only it is recursive; thus its Bp & p would be 1p and not 3p truth.
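
For reference, since I am leaning on it above, the result I mean is Tennenbaum's theorem, stated here in my own words:

    % Tennenbaum's theorem (standard statement, from memory): no countable
    % non-standard model of first-order Peano Arithmetic is recursive; i.e.,
    % if M is a countable model of PA in which either + or x is a computable
    % operation (under some coding of M's domain by the naturals), then M is
    % isomorphic to the standard model.
    \[
    M \models \mathrm{PA}, \;\; |M| = \aleph_0, \;\;
    +^{M} \text{ or } \times^{M} \text{ computable}
    \;\Longrightarrow\; M \cong \mathbb{N}.
    \]

The step from this to a Löbian machine being unable to "see" its own non-standardness is, I stress, my speculation and not something I have proved.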

Onward!

Stephen
