On 26-Apr-08, at 06:55, nichomachus wrote:

>
>
>
> On Apr 25, 5:27 am, Bruno Marchal <[EMAIL PROTECTED]> wrote:
>> On 24-Apr-08, at 18:26, nichomachus wrote:
>>
>>
>>> On Apr 22, 11:28 pm, "Brian Tenneson" <[EMAIL PROTECTED]> wrote:
>>>> Perhaps Hilbert was right and Physics ought to have been axiomatized
>>>> when he suggested it.  ;)  Then again, there might not have been a
>>>> motivation to until recently, with Tegmark's MUH paper and related
>>>> material (like by David Wolpert of NASA).
>>
>>> The logical positivists were motivated to axiomatize in the predicate
>>> calculus the laws of scientific theories in the early 20th century,
>>> first because they believed that it would guarantee the cognitive
>>> significance of theoretical terms in the theory (such as the
>>> unphysical ether of Maxwell's electromagnetism), and then later
>>> because it had evolved into an attempt to specify the proper form of
>>> a scientific theory. In practice this had too many problems and was
>>> eventually abandoned. One of the consequences of this program was the
>>> realization that axiomatizing the laws of a theory in first-order
>>> predicate calculus with equality meant that such a formulation of the
>>> theory always admitted various unintended interpretations. The amount
>>> of effort needed to block these unintended interpretations was out of
>>> proportion with the benefit received by axiomatization.
>>
>> It is a bit weird, because it is just logically impossible to block
>> those unintended interpretations. And this should not be a problem.
>> The reason why physical theories are not axiomatized is more related
>> to the fact that axiomatization does not per se solve, or even
>> address, the kind of conceptual problems raised by physics.
>
> Also, to this point: it is impossible to identify a theory with any
> particular linguistic formulation of it. Theories are not linguistic
> entities.
>
> And since we’re on the subject: according to Max Tegmark, given the
> apparent direction of inter-theoretic reduction, one may assume that
> the foundational physics of our universe should be expressible in a
> completely “baggage-free” description, one without reference to any
> human-specific concepts.



This is vague. Do you think that natural numbers are human-specific
concepts?
You cannot axiomatize the natural numbers in a way that avoids other
objects obeying your axioms.
Even arithmetical truth (the set of first-order true arithmetical
propositions, seen as a theory) has non-standard models.
Computability theory / recursion theory is, imo, the best way to get a
human-independent, even a machine- or formalism-independent, mathematics
(despite non-standardness) ... doubly so with the explicit use of the
(classical) Church's thesis.
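
(For completeness, here is a minimal sketch of the standard compactness
argument behind that last remark; nothing in it is specific to comp.)

Let $\mathrm{Th}(\mathbb{N})$ be the set of first-order sentences true in
$\mathbb{N}$, add a fresh constant $c$, and consider
$$ T \;=\; \mathrm{Th}(\mathbb{N}) \,\cup\, \{\, c > \underline{n} : n \in \mathbb{N} \,\}. $$
Every finite subset of $T$ holds in $\mathbb{N}$ (interpret $c$ by a
large enough number), so by compactness $T$ has a model $M$. In $M$ the
interpretation of $c$ lies above every standard numeral, so
$M \models \mathrm{Th}(\mathbb{N})$ and yet $M \not\cong \mathbb{N}$.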





>  This presumed most basic
> law of the universe would be capable of being axiomatized without
> unintended implications since the mathematical structure expressing
> the most basic law would be isomorphic with the law itself to the
> degree that it may appropriately be identified with it.

If you say "yes" to the doctor, accepting a digital brain/body, you
identify yourself (your 3-self) locally with a finite linguistic (at
least finitely 3-person presentable) structure.



>  The
> mathematical laws which describe the phenomena of all of the emergent
> levels of organization diverge from this ideal more and more the
> further one proceeds from this unknown foundational theory.


This is hard to interpret because I don't know your theoretical 
background. I say a few more words below.


>
>>> Also, I personally remain unconvinced that there is anything
>>> problematic about the existence of the universe of universes, or the
>>> ensemble of all possible mathematical structures, though it may not
>>> be well defined at present. I don't believe that this is simply the
>>> union of all axiomatic systems. If trying to define the Everything as
>>> a set implies a contradiction, then fine -- it isn't a set, it's an
>>> ensemble, which doesn't carry any of the connotations that are
>>> implied by the use of "set" in the mathematical sense. Therefore each
>>> entity in the ensemble is a unique collection of n axioms that has no
>>> necessary relationship to any other axiom collection. What happens in
>>> an axiom system stays in that axiom system, and can't bleed over to
>>> the next one on the list. Some of these may be equivalent to each
>>> other.
>>
>>> A = The collection of all finite axiom systems
>>> B = The collection of all consistent finite axiom systems
>>
>> I guess you mean "recursively enumerable" instead of finite. You would
>> lose first-order Peano Arithmetic (my favorite lobian machine :).
>
> Really? It would seem that all recursively enumerable (RE) axiom 
> systems
> would exist in A.

"A" is ambiguous. Strictly speaking, Peano Arithmetic is an
axiomatization, in first-order predicate logic, of elementary number
theory. It contains 3 axioms for the notion of succession, 4 axioms for
addition and multiplication, and an infinite (but RE) set of induction
axioms. It is known that PA cannot be finitely axiomatized while staying
in first-order logic. If you put PA in "A", it means you allow at least
one second-order logical formula (which is infinity in disguise). That
is OK with me, because if PA wants "to say yes to the doctor", it has to
trust some finite but second-order description of itself. But logicians
do not call that a "finite axiom system". The same holds for set theory
(cf. Zermelo-Fraenkel versus von Neumann-Bernays-Gödel).
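
To see concretely why the induction axioms form an infinite but RE set,
here is a small illustrative sketch in Python. The helper names
(formulas, induction_instances) and the schematic "phi_n(x)" strings are
mine, purely for illustration; a real enumeration would generate all
first-order formulas in the language of PA and attach to each one its
induction instance.

# Illustrative sketch: PA's induction axioms are infinitely many, but
# they can be generated mechanically, one instance per formula phi(x),
# so the set of axioms is recursively enumerable.

from itertools import count

def formulas():
    """Enumerate placeholder formulas phi_0(x), phi_1(x), ...
    A real enumeration would produce every well-formed first-order
    formula of the language of PA with x free."""
    for n in count():
        yield f"phi_{n}(x)"

def induction_instances():
    """Attach to each formula phi(x) its induction axiom instance:
       (phi(0) & Ax(phi(x) -> phi(S(x)))) -> Ax phi(x)."""
    for phi in formulas():
        p0  = phi.replace("(x)", "(0)")
        psx = phi.replace("(x)", "(S(x))")
        yield f"({p0} & Ax({phi} -> {psx})) -> Ax {phi}"

if __name__ == "__main__":
    gen = induction_instances()
    for _ in range(3):
        print(next(gen))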



>
>> Note also that SAS occur very quickly. SAS occur in theories which are
>> much weaker than the SAS themselves (ex: SAS occur in Robinson
>> Arithmetic, i.e. when you can define successor, addition and
>> multiplication). SAS themselves need induction.
>
> I don’t understand. Are you saying that Self Aware Substructures exist
> in the Robinson Arithmetic?


Robinson arithmetic (RA) proves all true formulas having the shape
ExP(x), with P(x) a decidable predicate. This is enough to be as rich as
a universal dovetailer (a single program which generates and executes
all possible programs). RA does prove the existence of your actual
mental state (assuming comp).
Of course, for a logician RA is an example of a very weak theory: it
cannot prove the most obvious universal generalizations. It cannot
prove, for example, that for all n (0+n = n). But for each such n, RA
can prove (0+n = n). Only, RA cannot generalize and infer from those
proofs a proof of An(0+n = n). Yet RA has the full power of a universal
machine.
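
For readers who have not seen a dovetailer spelled out, here is a toy
sketch in Python. The function names (run, programs, dovetail) and the
toy "programs" are mine and purely illustrative: a real UD enumerates
and executes all the programs of a universal language, not the trivial
counters used below.

# Toy sketch of a universal dovetailer: generate the programs one by one
# and interleave their executions, running each program generated so far
# for one more step at every stage, so that no (possibly non-halting)
# program ever blocks the others.

from itertools import count, islice

def run(n):
    """Toy 'execution' of program n: it just emits n, 2n, 3n, ...
    forever. A real UD would run the n-th program of a universal
    language."""
    for k in count(1):
        yield n * k

def programs():
    """Placeholder enumeration of the programs 1, 2, 3, ..."""
    for n in count(1):
        yield n, run(n)

def dovetail():
    """At stage s, generate program s and then execute every program
    generated so far for exactly one more step."""
    running = []
    gen = programs()
    for stage in count():
        running.append(next(gen))
        for pid, execution in running:
            yield stage, pid, next(execution)

if __name__ == "__main__":
    # Print the first 15 dovetailing steps: (stage, program id, value).
    for step in islice(dovetail(), 15):
        print(step)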
In the approach I present, RA is sufficient for the ontology of the
everything. With comp it is just undecidable whether there is more than
that. RA generates the thoughts and the talk of much richer entities
like PA and ZF. RA can simulate PA and ZF, although RA has no cognitive
ability to give sense to what PA and ZF generalize about. (A bit like: I
can simulate Einstein's brain (assuming comp) without understanding him.
See "Searle's Error" in the archive, where I develop this point in more
detail, or ask a question, maybe.)
To sum up: ontologically, RA is enough. Epistemologically, we have to
interview the richer (lobian) machines generated by RA. I don't identify
them at all.
In math (and in Wonderland!) something very big (the first-person
plenitude) can be hidden in something "little" (RA's proofs). Be sure
you grasp either the UDA (see my Sane04 paper) or some theorem by
Löwenheim and Skolem to appreciate fully what I am saying here :)
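
(For the record, the Löwenheim-Skolem fact I have in mind is the
standard downward theorem: if a first-order theory $T$ in a countable
language has an infinite model, then $T$ has a countable model. In
particular ZF, if it has a model at all, has a countable one, even
though it proves the existence of uncountable sets. Something very big,
described inside something countable.)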

Best,

Bruno


http://iridia.ulb.ac.be/~marchal/

