I'm currently working on a "textbook-like" introduction to the system used 
in set.mm, and I'm thinking about how I should provide a proof justifying 
the definition of '↔'.

I write the abbreviation eq(𝜑, 𝜓)  ≔  ¬ ((𝜑 → 𝜓) → ¬ (𝜓 → 𝜑)).  Then 
the definition is just the expansion of eq((𝜑 ↔ 𝜓), eq(𝜑, 𝜓)).  The 
elimination rule then lets you replace (𝜑 ↔ 𝜓) with eq(𝜑, 𝜓).  Applying 
the elimination rule to the definition, you get eq(eq(𝜑, 𝜓), eq(𝜑, 𝜓)), 
which can be proved as a theorem without any reference to the symbol '↔'.  
Any theorem containing '↔' must have a proof tree that can be traced 
backwards to a statement of the definition itself.  Once you replace 
(𝜑 ↔ 𝜓) with eq(𝜑, 𝜓) in the definition, you have proved a theorem.  You 
then replace it everywhere else you see it, tracing the same steps forward 
as you traced backwards.  Every line along the way should still be 
justified, since eq(𝜑, 𝜓) is a self-contained WFF.
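
To convince myself that this replacement really is purely mechanical, I 
sketched the elimination step in Python.  This is only an illustration for 
my own benefit (the tuples-as-WFFs encoding and the names eq() and 
eliminate_bi() are mine, not anything from an actual verifier):

    # WFFs as nested tuples: a propositional variable is a string, and a
    # connective node is ('-.', a), ('->', a, b) or ('<->', a, b).

    def eq(a, b):
        """The abbreviation eq(a, b) := -. ((a -> b) -> -. (b -> a))."""
        return ('-.', ('->', ('->', a, b), ('-.', ('->', b, a))))

    def eliminate_bi(wff):
        """Replace every (a <-> b) subformula with eq(a, b), bottom-up."""
        if isinstance(wff, str):                # propositional variable
            return wff
        op, *args = wff
        args = [eliminate_bi(a) for a in args]  # rewrite the arguments first
        if op == '<->':
            return eq(args[0], args[1])
        return (op, *args)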

Anyway, I'm still working on the exact wording to make this 
"meta-argument" absolutely clear to myself and anyone else.  Writing out a 
formal induction meta-proof in the way the standard deduction theorem is 
typically proved feels a bit cumbersome to me, but I don't want to just 
handwave anything away either.  I feel like I should provide an example 
showing how the elimination rule would work on a WFF containing nested 
'↔', along the lines of the snippet below; the elimination rule has to be 
applied recursively until the symbol is gone.
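
For instance, with the toy encoding above (not the notation I would use in 
the actual text), a nested case works out like this:

    # ((ph <-> ps) <-> ch): a single call to eliminate_bi removes both
    # occurrences of '<->', because the arguments are rewritten before the
    # outer eq() is built.
    nested = ('<->', ('<->', 'ph', 'ps'), 'ch')
    result = eliminate_bi(nested)        # result is eq(eq(ph, ps), ch)

    def contains_bi(wff):
        """Check that no '<->' survives the elimination."""
        if isinstance(wff, str):
            return False
        op, *args = wff
        return op == '<->' or any(contains_bi(a) for a in args)

    assert not contains_bi(result)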

paper <https://1drv.ms/b/s!AuvNPSOQfN3xjK1-DmOLCnCMd9hDrQ?e=5expKV>
On Wednesday, December 21, 2022 at 7:57:14 PM UTC-5 [email protected] wrote:

> On Wed, Dec 21, 2022 at 7:12 PM Samuel Goto <[email protected]> wrote:
>
>> Hey all,
>>
>>    As I'm looking into metamath there is something lingering that is 
>> making me uncomfortable: definitions are axioms.
>>
>>    This is obviously intended and by design by the community (the book 
>> has a whole chapter about it), but I'm still uncomfortable with the 
>> justifications and consequences.
>>
>
> You are right to be concerned. I was also concerned several years ago, and 
> the solution I came up with was to observe that definitions in set.mm 
> follow a very regular structure, such that all definitions with that 
> structure can be given a metatheoretic argument for conservative extension. 
> So I added a "definition checker" module to mmj2, which checks the rules 
> described in the "Additional rules for definitions" section of 
> https://us.metamath.org/mpeuni/conventions.html . This program is run as 
> part of the set.mm CI, so we can be sure that (with a small list of 
> exceptions) every df-* axiom is actually conservative over the ZFC axioms.
>  
>
>>     A few irrational feelings I think I'm having:
>>
>>     - I can't parse or understand df-bi trivially. What does it mean?
>>
>
> df-bi is one of the exceptions, because the first rule is that a 
> definition has to have <-> or = as the root symbol and this doesn't work 
> when we are defining <-> itself. That doesn't mean that the definition is 
> not conservative, only that it can't be automatically checked. The way to 
> read df-bi is that it is defining (𝜑 ↔ 𝜓) to be ¬ ((𝜑 → 𝜓) → ¬ (𝜓 → 
> 𝜑)) (that is dfbi1), except that the root ↔ in the definition is itself 
> expanded according to this expression. The justification that this is a 
> conservative extension is that if you replace all instances of (𝜑 ↔ 𝜓) 
> with ¬ ((𝜑 → 𝜓) → ¬ (𝜓 → 𝜑)) in df-bi, you get bijust, which is 
> provable without df-bi. So you can perform that replacement in any 
> derivation to eliminate the defined symbol.
>  
>
>>     - How do I know that the axioms of the definitions aren't introducing 
>> contradictions?
>>
>
> As I mentioned, we can reduce the problem to a small list of exceptions 
> like df-bi, plus the definition schema given by the definition checker's 
> rules. The schema can be proved by a direct substitution argument like the 
> one I just gave for df-bi, except that to handle bound variables we also 
> need the ability to perform alpha-renaming. For example, the definition 
> _V = { x | x = x } can be used to prove the theorem 
> { x | x = x } = { y | y = y }, so we need to know that this theorem was 
> already provable before the definition was introduced (this is vjust). The 
> ability to perform alpha renaming in any formula is a metatheorem of 
> set.mm's axiom system.
>  
>
>>     - It seems that definitions aren't supposed to "extend the language", 
>> but I don't get that sentiment as I read them
>>
>
> I'm not sure I would say that. Definitions definitely are intended to 
> extend the language, but a conservative extension means that no statements 
> expressible in the original language become provable as a result of the 
> introduction of the definition. (In particular, if |- F. was not provable 
> before, it is still not provable, so the extension is sound.)
>  
>
>>     - We say "all you need is propositional and first order logic and set 
>> theory axioms" but then proceed to introduce a bunch by definitions
>>
>
> Yes, I tend to agree with this sentiment. It's not *really* true that you 
> don't need definitions. You don't need them *in principle*, but if you 
> don't like exponentially long proofs (which can make a qualitative 
> difference if you have ultrafinitist tendencies and don't trust proofs that 
> cannot be checked on any computer present or future) then you are more or 
> less forced to accept them.
>  
>
>>     I'm sure this can't be a new sentiment, since an entire chapter in 
>> the book was dedicated to it, but I was wondering if:
>>
>>     (a) Does anyone have some explanation posted online that I could read 
>> to inform myself and perhaps settle my anxiety?
>>     (b) Is there a version of metamath and/or set.mm that don't rely on 
>> definitions as axioms?
>>
>
> For the reasons I stated above, a set.mm without definitions would not be 
> remotely usable. And metamath doesn't really support definitions in any 
> other way.
>
> Even with the definition checker, the situation was still somewhat 
> problematic to me, since the definition checker is now essentially a part 
> of the "trusted code base" of metamath (unless you want to personally 
> review all of those thousand definitions-as-axioms in set.mm) and yet it 
> is not a required part of metamath verifier operation, and it has only one 
> implementation, which moreover is tailored to set.mm notions. So I tried 
> to make a version of metamath that includes definitions as part of the 
> kernel, and the result is Metamath Zero (https://github.com/digama0/mm0). 
> It unfortunately complicates several things to take definitions as 
> primitive because you need to have a notion of bound variable to handle 
> examples like df-v, as well as a conversion judgment to handle definition 
> unfolding without having to also bake in an equality operator (since not 
> all metamath databases have one, or only have one for some sorts and not 
> others). You can read about MM0 in https://arxiv.org/abs/1910.10703 .
>  
> Another Metamath variant which adds definitions as a primitive part of the 
> language is GHilbert by Raph Levien, although this diverges a bit more from 
> metamath and is closer to lambda calculus / LF in terms of its primitive 
> operators.
>
