A solution is mentioned in the comment for df-cleq; should it be
reconsidered?

    We could avoid this complication by introducing a new symbol,
    say =_2, in place of ` = ` .  This would also have the advantage
    of making elimination of the definition straightforward, so that
    we could eliminate Extensionality as a hypothesis.  We would then
    also have the advantage of being able to identify in various
    proofs exactly where Extensionality truly comes into play rather
    than just being an artifact of a definition.  One of our theorems
    would then be ` x =_2 y <-> x = y ` by invoking Extensionality.

    However, to conform to literature usage, we retain this
    overloaded definition.  This also makes some proofs shorter and
    probably easier to read, without the constant switching between
    two kinds of equality.

As far as I understand, Mario has chosen this approach in MM0.
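
A minimal sketch of what that could look like in .mm syntax, assuming
set.mm's surrounding declarations for A, B, x (the labels wceq2 and
df-cleq2 are made up here for illustration, not actual set.mm):

    $c =_2 $.
    $( Hypothetical second equality symbol, used only between classes. $)
    wceq2 $a wff A =_2 B $.
    $( Defined outright by extensionality of membership, with x required
       distinct from A and B; no ax-ext hypothesis, so the definition is
       trivially eliminable. $)
    df-cleq2 $a |- ( A =_2 B <-> A. x ( x e. A <-> x e. B ) ) $.
    $( Only the bridge theorem ` ( x =_2 y <-> x = y ) ` would then
       invoke ax-ext. $)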


On 28/10/2021 18:57, 'ookami' via Metamath wrote:
I am going to read carefully through all this during the weekend, or
whenever I have more time to focus on details, particularly on where
the limits of a proof scheme like Metamath lie.  For now I would like
to recall the starting point of this thread.
And this is a problem pertaining to the current version of Metamath.
The thread started from the observed fact that the axiom list is
somewhat flaky, "a good estimate of what should be in it, nothing
more".  This is something I am personally less content with.  One
reason is that I use this list on a regular basis to separate the
regions of concern of the individual axioms.  It is a source of
incentives for proof modification, and I hate the idea that it cannot
be used thoughtlessly.  Actually the list seems to be very close to
perfect, and I cannot see a good reason to leave the final gap open.

The analysis has led to df-clel and df-cleq as the culprits for the
observed shortcoming.  To my understanding they bear unnecessary
complexity, and despite their names they are not laid out according to
the rules of pure doctrine, which require definitions to be
conservative (one downside of never having read a textbook on logic is
that I developed a language of my own, and now have to adapt to common
usage).  What I sense in this thread is some reluctance to address
this defect, and I still haven't grasped the reason behind it.  Is it
habit?  Is it some superficial imitation of concepts from other
sources?  I don't know.  I would like to urge getting back on the
track of rigor that convinced me some ten years ago.

I initially asked permission to add ax-9 to the list of axioms covered
by df-cleq.  This is likely not the best solution, because it does not
address the observed lack of conservativity.  What about Benoit's
solution, then?  Or developing an intermediate stage between set
theory and class theory?  Or something else experimental?

Is it possible to make some progress in this direction?  Or to come up
with a (hopefully) convincing reason to stop where we are?

Wolf
[email protected] wrote on Thursday, October 28, 2021 at 03:55:56
UTC+2:

    On Wed, Oct 27, 2021 at 7:22 PM Norman Megill <[email protected]>
    wrote:

            In fact, ZFC + classes is a conservative extension in
            exactly the same way that NBG is,


        There is a fundamental difference.

        NBG has global choice, which has no ZFC equivalent.  From the
        Wikipedia page on NBG, "the axiom of global choice ... is
        stronger than ZFC's axiom of choice".

        On the other hand, any ZFC+classes expression containing class
        variables can be converted to an equivalent expression with
        wff variables, and vice-versa.  They are just two different
        notations for the same thing.  See Takeuti/Zaring Ch. 4 or
        Levy Appendix X or Quine pp. 15-21.  Class variables can be
        considered wff (meta)variables in disguise. See also the
        comments in ~abeq2.
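
        For instance, abeq2 (quoting set.mm):

            abeq2 $p |- ( A = { x | ph } <-> A. x ( x e. A <-> ph ) ) $.

        Reading " x e. A " as an indivisible unit, a wff metavariable
        parameterized by x, is the sense in which a class variable is
        a wff variable in disguise.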


    An example of a ZFC+classes expression containing class variables
    that has no equivalent without class variables is "x e. A". Any
    expression equivalent to this must contain the variable "A". (More
    on this below.)
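
    To see the asymmetry concretely: for a class abstraction, df-clab
    (quoting set.mm) does give a class-free unfolding,

        df-clab $a |- ( x e. { y | ph } <-> [ x / y ] ph ) $.

    so " x e. { y | ph } " reduces to the class-free wff
    " [ x / y ] ph " , while " x e. A " admits no such reduction step.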

    A more substantive observation about ZFC+classes is that it
    provides a proof speedup, just like ACA_0 over PA (ACA_0 is a
    conservative extension of PA that adds second order variables). Or
    at least it would, in a traditional axiomatization; in metamath
    the advantage is largely nullified because you can prove schemes
    over wff variables, which normal ZFC can't do. I suspect there is
    still some speedup in the metamath setting because being able to
    hide the bound variables involved in a class abstraction means you
    don't have to substitute and alpha-rename them in deeply nested
    situations.

            especially if you pay attention to metalogical completeness.


        What does this have to do with our metalogical (scheme)
        completeness - or do you mean something else?


    I am talking about scheme completeness. This is very important to
    understand because it is the crux of the argument that the
    textbook proofs fall short of a true proof of conservativity in
    the metamath setting.

            As far as I can tell, the standard argument that classes
            are a definitional extension only applies if we think that
            the only inhabitants of "class" are class abstractions,
            but in metamath there are always variables living in every
            sort, and you can't prove properties about generic
            variables even if the property holds for all class
            abstractions.


        Sorry, I don't understand what this sentence means. :)  What
        is an inhabitant, "living in every sort", generic variable,
        property?  There are no "generic" variables (as I understand
        that word) in set.mm; there are only wff,
        setvar, and class variables.


    An inhabitant just means an object of that type. Here I am
    referring to the terms that can be considered as valid things of
    type "class": in the traditional setting, every class is either a
    class abstraction, i.e. literally {x | ph} for some ph, or is
    defined to be something that ultimately boils down to a class
    abstraction (and when traditional logics talk about definition
    they usually mean an abbreviation at the meta-level, so it is
    still syntactically a class abstraction). In metamath there are
    other things of type "class", that I am calling "generic
    variables", by which I mean "A", "B" and so on. These are not
    class abstractions, they are variables; if you ask what syntax
    makes them up "cab" is nowhere to be found.
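
    The distinction is visible directly in set.mm's grammar, where the
    two kinds of class terms sit side by side:

        cA $f class A $.            $( a generic class variable $)
        cab $a class { x | ph } $.  $( the class-abstraction constructor $)

    Every class the textbooks consider is built from cab; the cA-style
    variables are what metamath adds on top.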

            A simple example of this is axiom ax-9c, which is true for
            class abstractions because of df-clab (i.e. x = y ->
            (x e. { z | ph } -> y e. { z | ph }) is true by properties
            of substitution) but cannot be proven for arbitrary
            classes.


        This is an instance of eleq1+biimpd, which holds for arbitrary
        classes.


    eleq1 is a consequence of df-cleq, though, which as we have already
    established makes ax-9c derivable. If you drop df-cleq and df-clel,
    keeping only df-clab, the mentioned theorem is true for class
    abstractions but not for arbitrary classes (even though x = y ->
    (x e. A -> y e. A) involves neither wceq nor wcel, at least if we
    ignore the notation overloading and read it as "weq x y ->
    (wvel x A -> wvel y A)").
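
    For the abstraction case the unfolding is short: by df-clab,
    applied twice,

        |- ( x = y -> ( x e. { z | ph } -> y e. { z | ph } ) )

    unfolds to

        |- ( x = y -> ( [ x / z ] ph -> [ y / z ] ph ) )

    which is a pure predicate-calculus fact about substitution (compare
    set.mm's sbequ). Nothing analogous is available once A replaces
    { z | ph }.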

            Traditional textbooks don't cover this, because of course
            metalogical completeness isn't a thing there. But I think
            it is impossible for metamath to introduce a new sort
            without letting these extra variable terms sneak in, so
            metalogical completeness is always relevant to us. I
            believe that conservativity in the traditional sense still
            applies to the set.mm formulation of "class" over the
            set-only part of the language, but nothing stronger than
            that.


        I don't understand what you mean.  What is an example of extra
        variable terms that sneak in?


    Perhaps this is not the most intuitive picture for you, but the way
    I internalize metalogical completeness as a concept is that in the
    model of metamath every sort contains additional things beyond the
    regular ones. The space "wff" contains more than just object-level
    formulas; it also contains "variables", which are uninterpreted /
    generic predicates. A consequence is that a fact which holds (say
    by induction on formulas) for every object-level formula may still
    fail at these "generic points" ("points" because we are thinking of
    "wff" as a space of things, whose points are the formulas of the
    object language together with these uninterpreted variable things).
    This is another way of saying that a logically complete schema may
    not be metalogically complete.

    Applied to the sort "class", we get that the regular points of the
    space are the class abstractions (those are the things we put
    there deliberately), and the generic points are the variables "A",
    "B" (those come along for the ride because of how metamath works).
    And now to the point: the theorem in the textbooks that reduces
    every theorem in the object language using classes to an
    equivalent theorem without classes is performed by induction on
    formulas and terms, and so it does not apply to the generic
    points. For any metamath theorem about classes, you can always
    make a special case of the theorem by substituting class
    abstractions for everything and then run the elimination proof,
    but a plain variable "A" is not eliminable: you can replace it
    with { x | x e. A } but you are no closer to eliminating A by
    doing so.
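
    In symbols, the replacement is justified by abid2 (quoting set.mm,
    with x not occurring in A):

        abid2 $p |- { x | x e. A } = A $.

    But iterating it only re-nests the same variable,
    A ~> { x | x e. A } ~> { x | x e. { y | y e. A } } ~> ... ;
    the generic point A never goes away.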

                I don't see the fact that our definitional check
                algorithm can't check them as sufficient justification
                in itself to call them axioms.  If we had adopted the
                usual textbook convention of using multipart recursive
                definitions for df-oadd etc. instead of our direct
                definitions with `rec`, would you want to call those
                axioms?
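
                For reference, the direct form (as in set.mm):

                    df-oadd $a |- +o = ( x e. On , y e. On |->
                      ( rec ( ( z e. _V |-> suc z ) , x ) ` y ) ) $.

                versus the textbook clauses ( A +o (/) ) = A and
                ( A +o suc B ) = suc ( A +o B ) plus a limit clause,
                which set.mm recovers as theorems oa0, oasuc, oalim.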


            What it amounts to is that all our tools are saying it's
            not a definition, and the only reason it is considered a
            definition is because we are explicitly overriding the
            tool and declaring them definitions by fiat. Furthermore,
            because of the notational overloading situation, these
            definitions are


        There is no new mathematics that results from the class theory
        definitions, and as a practical matter I have chosen to trust
        the multiple proofs of conservativity and eliminability in the
        literature. In principle they could be proved formally.  Class
        theory is simply a different notation for the same thing that
        is often more practical to work with. Since no new mathematics
        results from the class theory definitions, it's hard for me to
        justify calling them axioms.

        I understand that these literature proofs may not qualify as a
        "trusted base" from your point of view since they haven't been
        verified formally. In which case it would make sense for you
        to rename/treat them as ax-* in your copy of set.mm.


    The new mathematics is kind of fringe stuff, so I can understand
    why you might think this, but I think this conversation started
    exactly because Wolf has been exploring the mathematics of
    subsystems of set.mm's axiom system where you
    can't take equivalence to the textbooks for granted, and it is in
    this light that the conservativity proof starts to break down.

    Regarding my position on axioms, the issue is not that they
    haven't been verified formally. I think I am probably more willing
    to accept new axioms than you because I see theorems vs axioms as
    the difference between "correct by internal reasoning" and
    "correct by external reasoning". In MM0 of course I am planning to
    verify the metalogic in addition to proving theorems in the logic,
    so making a theorem into an axiom is not cheating, it is merely
    moving the work from one part to another. Some things are best
    verified internally, these are the theorems; and some things are
    easier to verify by a metatheoretic proof, and these are the
    axioms. From such a description it should be clear why I consider
    df-cleq et al. axioms, and it has nothing to do with believing the
    conservativity result to be false or unverified.

             For a careful reader, it is not clear to me what the
            relative probabilities of the "type I and type II errors"
            here are. As you say, if we mark them as


        What are type I and type II errors?


    Not a perfect match for the situation, but see
    https://en.wikipedia.org/wiki/Type_I_and_type_II_errors . I mean
    that we have roughly two choices, and both choices have some
    probability of leading readers astray, in different directions.

    Mario
