On 10 Jan 2012, at 12:58, acw wrote:

On 1/10/2012 12:03, Bruno Marchal wrote:

On 09 Jan 2012, at 19:36, acw wrote:



To put it more simply: if the Church-Turing Thesis (CTT) is correct,
mathematics is the same for any system or being you can imagine.

I am not sure why. "Sigma_1 arithmetic" would be the same; but higher
mathematics (set theory, analysis) might still be different.




If it's wrong, maybe stuff like concrete infinities, hypercomputation
and infinite minds could exist, and that would falsify COMP; however,
there is zero evidence that any of that is possible.

Not sure. If CT is wrong, there would be finite machines, working in
finite time with well-defined instructions, which would NOT be Turing emulable. Hypercomputation and infinite (human) minds would contradict comp, not CT. On the contrary, they need CT to claim that they compute more than any programmable machine. CT is part of comp, but comp is not
part of CT.
Beyond this, I agree with your reply to Craig.


In that response I was using CT in the more unrestricted form: all effectively computable functions are Turing-computable.

I understand, but that is confusing. David Deutsch and many physicists are a bit responsible for that confusion, by attempting to ground a notion of "effectivity" in physics. The original statement of Church, Turing, Markov, Post, ... concerns only the intuitively human-computable functions, or the functions computable by finitary means. It asserts that the class of such intuitively computable functions is the same as the class of functions computable by some Turing machine (or by the unique universal Turing machine). Such a notion is a priori completely independent of the notion of computability by physical means.






It might be a bit stronger than the usual equivalence proofs between a very wide range of models of computation (Turing machines, Abacus/PA machines, (primitive) recursive functions (+ minimization), all kinds of more "current" models of computation, languages, and so on).

Yes. I even suspect that CT makes the class of functions computable by physics greater than Church's class.
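
To make that equivalence concrete, here is a minimal Python sketch (a toy illustration only; the combinator names prim_rec and mu are invented): primitive recursion and unbounded minimization, two of those models, embed directly into an ordinary programming language, and it is the unbounded mu that introduces partiality.

    def prim_rec(base, step):
        # Primitive recursion combinator: f(0, x) = base(x),
        # f(n+1, x) = step(n, f(n, x), x). Always total.
        def f(n, x):
            acc = base(x)
            for i in range(n):
                acc = step(i, acc, x)
            return acc
        return f

    add = prim_rec(lambda x: x, lambda i, acc, x: acc + 1)
    print(add(3, 4))  # 7

    def mu(p):
        # Unbounded minimization: least n with p(n) true; may loop
        # forever, the partiality that full computability requires.
        n = 0
        while not p(n):
            n += 1
        return n

    print(mu(lambda n: n * n >= 10))  # 4

Each such model can simulate the others, which is the content of the usual equivalence proofs.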



If hypercomputation were actually possible, that strong variant of CT would be false, because there would be something effectively computable that wasn't computable by a Turing machine.

OK.



In a way, that strong form of CT might already be false with comp, but only in the 1p sense, as you get a free random oracle as well as always staying consistent (and 'alive'); it's not false in the 3p view...

Yes. Comp makes physics a first-person-plural reality, and a priori we might be able to exploit the first-person-plural indeterminacy to compute more functions, given that we know already that we have more "processes", like that free random oracle. The empirical fact that quantum computers do not violate CT can make us doubt this.



Also, I do wonder if the same universality that is present in the current CT would be present in hypercomputation (if one were to assume it would be possible)

Yes, at least for many types of hypercomputation, notably those of the form "computability with some oracle".


- would it even retain CT's current "immunity" from diagonalization?

Yes. Actually the immunity of the class of computable functions entails the immunity of the class of functions computable with an oracle. So the consistency of CT entails the consistency of some super-CT for the larger class. But I doubt that there is a super-CT for the class of functions computable by physical means. I am a bit agnostic on that.
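
To see where that immunity comes from, a self-contained toy in Python (an illustration only; the finite list fs fakes an enumeration, and only the moral matters):

    # Pretend this list enumerates "all" total computable functions:
    fs = [lambda n: n, lambda n: n * n, lambda n: 2 ** n]

    def diag(n):
        # Differs from the n-th listed function at argument n.
        return fs[n](n) + 1

    print([diag(n) for n in range(3)])  # [1, 2, 5]: diag is none of fs[0..2]

For total functions the diagonal always escapes the list, so there is no computable enumeration of the total computable functions. For partial functions the trick fails: the diagonal program itself has an index d in the enumeration, and diag(d) = diag(d) + 1 merely forces diag(d) to be undefined. The class is closed under its own diagonalization, and since the argument relativizes to any fixed oracle, each oracle class inherits the same immunity, giving the consistency of a super-CT for the larger class.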




As for the mathematical truth part, I mostly meant it from the perspective of a computable machine talking about axiomatic systems: as it is computable, the same machine (theorem prover) would always yield the same results in all possible worlds (or shared dreams).

I see here why you have some problem with AUDA (and logic). CT = the notion of computability is absolute. But provability is not absolute at all. Even with CT, different machines talking about, or using, different axiomatic systems will obtain different theorems. In fact this is even an easy (one-diagonalization) consequence of CT, although Gödel's original proof does not use CT. Neither provability nor definability is immune to diagonalization. Different machines prove different theorems.
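
Spelled out (a standard reconstruction, in LaTeX notation, with Bew_T the provability predicate of T): for each consistent, recursively axiomatized T extending PA, the diagonal lemma yields a sentence

    G_T \;\leftrightarrow\; \neg\,\mathrm{Bew}_T(\ulcorner G_T \urcorner)

which is true but unprovable in T, yet provable in stronger theories such as T + con(T). The proof-checking relation is computable and the same everywhere; the set of theorems it certifies depends on the axioms fed to it.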



Although my understanding of AUDA is incomplete, and I may be wrong about this, it appeared to me that it might be possible for a machine to get more and more of the truth, given the consistency constraint.

That's right: both PA + con(PA) and PA + ~con(PA) prove more true arithmetical theorems than PA. And PA + con(PA + con(PA + con(PA))) will prove even more theorems. The same with the negations of those consistencies. Note that the theory PA* = PA + con(PA*), which can be defined finitarily by the use of the Kleene recursion fixed point, proves ALL the true propositions of arithmetic!!! Unfortunately it also proves all the false propositions of arithmetic. This follows easily from the second theorem of Gödel: such a theory proves its own consistency, given that con("itself") is an axiom, so by Gödel II it is inconsistent. But, yes, once you have a consistent machine, you can extend its provability ability along the whole constructive transfinite.
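
In LaTeX notation, the collapse in two lines (a reconstruction of the step just sketched):

    \mathrm{PA}^{*} = \mathrm{PA} + \mathrm{con}(\mathrm{PA}^{*})
        \quad\text{(fixed point via Kleene's recursion theorem)}
    \mathrm{PA}^{*} \vdash \mathrm{con}(\mathrm{PA}^{*})
        \quad\text{(it is an axiom)}

By Gödel II, a consistent, recursively axiomatized extension of PA never proves its own consistency; so PA* must be inconsistent, and an inconsistent theory proves every sentence, the true and the false alike.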



As for higher math, such as set theories: do they have a model, and are they consistent? (That's an open question.) If some forms close to Cantor's paradise are accepted in the ontology, wouldn't that risk falsifying COMP?

Trivially. You can refute comp in the theory ZF + ~comp (accepting some formalization of comp in set theory). Remember that, like consistency, ~provable(comp) is consistent with comp. If comp is true then, like consistency, it is not provable, and so you can add ~provable(comp), or con(~comp), to the comp theory of everything (arithmetic) without getting an inconsistency with comp. Of course you should not add ~comp to comp at the same (meta)level: you would get a contradiction. Likewise, as above, the theory PA + (PA is inconsistent) is consistent. You can prove, in it, that the false is provable, but you cannot prove, in it, the false. Bf -> f is not a theorem, and (Bf -> f) = ~Bf = consistency.
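
That last identity is plain propositional logic; in LaTeX notation ($\Box$ for the provability box B, $\bot$ for the false f):

    (\Box\bot \to \bot) \;\equiv\; (\neg\Box\bot \lor \bot)
        \;\equiv\; \neg\Box\bot \;=\; \mathrm{con}

since $\bot$ contributes nothing to the disjunction. So asserting Bf -> f is exactly asserting consistency, which by Gödel II the machine cannot prove about itself; that is why PA + Bf stays consistent while proving that the false is provable.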



I can see many reasons why a particular machine/system would want to talk about such higher math, but I'm not sure how it could end up with different discourses/truths if the machine('s body) is computable.

Here there is a difficulty, and many people get it wrong. There is a frequent error in logic which mirrors very well Searle's error in his Chinese Room argument. With comp I can certainly simulate Einstein's brain, but that fact does not transform me into Einstein. If someone asks me a complex question about relativity, I might be able to answer by simulating what Einstein would respond, yet still not have a clue about the meaning of Einstein's answer. In fact I would just make it possible for Einstein to answer the question. Not me. Likewise, a quasi-debilitated arithmetical theorem system like Robinson Arithmetic (RA), which cannot prove x+y = y+x, for example, is still Turing universal, and as such can imitate PA perfectly through its provability abilities. That is, RA is able to prove that PA can prove x+y = y+x. But RA does not have the power to be convinced by that proof of PA's, just as I can simulate Einstein without having the gift of understanding a word he says.

RA can simulate PA and ZF, and even ZF+k (which can prove the consistency of ZF), but this does not give RA the *provability* power of PA, ZF or ZF+k.

PA can prove that ZF can prove the consistency of PA, but PA can still not prove the consistency of PA.
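
A toy Python sketch of that distinction (everything here is invented for illustration; "proves" is just set membership, standing in for the real, recursively enumerable provability relations):

    PA_THEOREMS = {"x+y=y+x"}  # stands in for PA's provability relation

    def pa_proves(s):
        # RA, being Turing universal, can compute (simulate) this predicate.
        return s in PA_THEOREMS

    # RA proves the META-statement about PA, not the statement itself:
    RA_THEOREMS = {'pa_proves("x+y=y+x")'}

    print(pa_proves("x+y=y+x"))        # True: the simulation runs fine
    print("x+y=y+x" in RA_THEOREMS)    # False: running PA's prover does not
                                       # make commutativity an RA theorem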



I can see it discovering the independence of certain axioms (for example the axiom of choice or the continuum hypothesis), but wouldn't all the math that it can /talk/ about be the same? The machine would have to assume some axioms and reason from there.

Yes. And with different axioms you get different provability aptitudes. Once a machine can prove all true arithmetical sigma_1 sentences (those with the shape ExP(x), with P decidable), she is universal with respect to *computability*. You can add as many axioms as you want: the machine will not *compute* more functions. But adding axioms will always lead the machine to prove more *theorems*.
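
A minimal sketch of that asymmetry in Python (a toy forward-chaining closure; the axioms and the single inference rule are invented): the engine computing the theorems is one fixed program, while its output grows with the axioms.

    def theorems(axioms, rules, steps=3):
        # Close a set of strings under unary rules; the SAME engine
        # (same computational power) serves any axiom set.
        thms = set(axioms)
        for _ in range(steps):
            thms |= {r(t) for r in rules for t in thms}
        return thms

    rules = [lambda t: "(" + t + ")->" + t]  # one toy inference rule
    print(len(theorems({"A"}, rules)))       # 4 theorems
    print(len(theorems({"A", "B"}, rules)))  # 8: more axioms, more theorems,
                                             # same machine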

In AUDA, "belief" is modeled by provability (not computability), and then knowledge is defined in the usual classical (Theaetetus) way. All the beliefs of the correct machines obey the same self-referential logic, but the belief-contents differ from one machine to another. PA and ZF have the same self-referential logic, but they have quite different beliefs, even when restricted to the numbers. For example, ZF proves more arithmetical truths than PA. ZFC and ZF+(~C) prove exactly the same theorems in arithmetic, despite proving quite different theorems about sets (so arithmetic is deeply independent of the axiom of choice). ZF+k (= ZF + "there exists an inaccessible cardinal") proves *many more* arithmetical theorems than ZF.

To sum up:
Computability is an absolute notion.
Provability is a relative notion.





BTW, acw, you might try to write a shorter and clearer version of your joining post argument. It is hard to follow. If not, I might need much more time to answer it.

Bruno



I think I talked about too many different things in that post, not all of them directly relevant to the argument (although they are relevant when trying to consider as many consequences of that experiment as possible). If some parts are unclear, feel free to ask in that thread. The general outline of the part you have yet to comment on: a generalized form of the experiment performed by the main character of the novel "Permutation City" is described in detail (assuming COMP); a possible explanation is offered for why it might not be useless to perform such an experiment, and why it might be a good practical test of the consequences described in the UDA; various variations, factors and practicalities of that experiment are discussed (with goals such as reducing white rabbits, among others), some not directly relevant to the argument; and at the end I tried to see if the notion of observer can be better defined, and to show that the notion of "generalized brain" might not always be an appropriate way to talk about an observer. The post was mostly meant to be exploratory, and I hoped the ensuing discussion would lead to two things: 1) assessing the viability of that experiment if COMP is assumed, and 2) reaching a better definition of the notion of observer.

OK. I think I understood this, but your style is not easy, and it might be useful, even for your own goal, to work on a clearer and shorter version, with shorter sentences, no digressions, and clear sections and subsections, so as to invite most people (including me) to grasp it, or to find a flaw, in reasonable time. In particular, I fail to see the point of discussing the use of different universal systems, as you did with the cellular automata (CA).

Bruno


http://iridia.ulb.ac.be/~marchal/


