[ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
Here is a proof that values don't step: https://pastebin.com/raw/V9Kg0cY4

The relevant parts are:

  Lemma ctx1_not_val e k : ctx1 k -> value (k e) -> False.
  Proof. intros [] Hv; inversion Hv. Qed.

  Lemma ctx_val e k : ctx k -> value (k e) -> k = id.
  Proof.
    intros Hk Hv. induction Hk.
    - by apply functional_extensionality.
    - exfalso. by eapply ctx1_not_val.
  Qed.

  Lemma val_no_step e e' : value e → ¬ step e e'.
  Proof.
    intros Hv Hs. destruct Hs.
    assert (k = id) as -> by eauto using ctx_val.
    destruct H0; inversion Hv.
  Qed.

The proof script is of constant size (not proportional to the number of
language constructs). So maybe the not-so-magic bullet is tactic scripts
that can uniformly solve a bunch of cases. However, in this case perhaps it
can be done in Agda too, because we have only invoked a tactic on multiple
subgoals in the pattern "destruct A; inversion B". This seems to correspond
to dependent pattern matching, so maybe in Agda there would also be only
O(1) cases?

@Derek: with the stack machine there are run-time constructs that enter
into the state of the operational semantics that are not already covered by
the expression typing judgement. The fact that this doesn't happen in the
other case seems to be the reason why an extensional approach works. So
maybe the intensional approach is unavoidable in that case...?

Jules

On Sun, Oct 10, 2021 at 3:47 PM Derek Dreyer <dre...@mpi-sws.org> wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
>
> > I managed to carry out a proof along the lines described by Derek and
> > Jules, but it used more ink (and more think!) than my original proof
> > using a congruence rule for each constructor. In this, as in much else,
> > I find myself agreeing with Bob.
>
> I don't think there's likely to be a big difference one way or the
> other when proving syntactic properties. I personally like to use the
> evaluation context semantics because the idea of an evaluation context
> comes in extremely handy later on when you get to deeper semantic or
> Hoare-style reasoning (of the sort we do when formulating logical
> relations arguments in Iris). At that point, the "inductive" cases of
> the relation can be handled uniformly using a "Bind" lemma. In the
> case of unary logical relations, this lemma looks something like the
> following:
>
>   e \in E[A]
>   forall v:Val. (v \in V[A]) => (K[v] \in E[B])
>   ----------------------------------------------
>   K[e] \in E[B]
>
> where E[B] is the logical relation on terms of type B, and V[A] is the
> logical relation on values of type A.
>
> Using the Bind Lemma, for example, you can take a proof goal like the
> following (this is the case of proving "compatibility" of the logical
> relation for function applications):
>
>   e1 \in E[A -> B]
>   e2 \in E[A]
>   --------------------
>   e1 e2 \in E[B]
>
> and through two applications of Bind, you can reduce it to:
>
>   v1 \in V[A -> B]
>   v2 \in V[A]
>   --------------------
>   v1 v2 \in E[B]
>
> This approach scales also to languages with a wide range of features
> (recursive types, state, concurrency), at least if you work in a
> logical framework like Iris.
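To make the shape of those two Bind applications concrete in Coq, here is a
minimal self-contained sketch; E, V, app, arrow, and the two specialized
Bind instances below are stand-in assumptions for illustration, not
definitions taken from Iris or from Derek's notes:

  Section bind_shape.
    Variables expr ty : Type.
    Variable app : expr -> expr -> expr.        (* e1 e2 *)
    Variable arrow : ty -> ty -> ty.            (* A -> B *)
    Variables E V : ty -> expr -> Prop.         (* E[A], V[A] *)

    (* Bind, specialized to the two application contexts "[] e2" and
       "v1 []"; both instances are assumed here rather than derived. *)
    Hypothesis bind_app_l : forall A B e1 e2,
        E A e1 ->
        (forall v1, V A v1 -> E B (app v1 e2)) ->
        E B (app e1 e2).
    Hypothesis bind_app_r : forall A B v1 e2,
        E A e2 ->
        (forall v2, V A v2 -> E B (app v1 v2)) ->
        E B (app v1 e2).

    (* Compatibility for application, reduced by two uses of Bind to the
       case where both subterms are values. *)
    Lemma app_compat_shape A B e1 e2 :
      E (arrow A B) e1 ->
      E A e2 ->
      (forall v1 v2, V (arrow A B) v1 -> V A v2 -> E B (app v1 v2)) ->
      E B (app e1 e2).
    Proof.
      intros He1 He2 Hvv.
      eapply bind_app_l; [exact He1|]. intros v1 Hv1.
      eapply bind_app_r; [exact He2|]. intros v2 Hv2.
      apply Hvv; assumption.
    Qed.
  End bind_shape.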
> In fact, I used this very example to motivate the use of Iris in my
> POPL'18 keynote (see around 28:30; the Bind rule is discussed a few
> minutes later):
> https://www.youtube.com/watch?v=8Xyk_dGcAwk&ab_channel=POPL2018
>
> Without evaluation contexts, I don't know how to express such a lemma
> nicely.
>
> Cheers,
> Derek
>
> > Have I got that right? Or is there a magic bullet that I'm missing?
> > Go well, -- P
> >
> > .   \ Philip Wadler, Professor of Theoretical Computer Science,
> > .   /\ School of Informatics, University of Edinburgh
> > .  /  \ and Senior Research Fellow, IOHK
> > .   http://homepages.inf.ed.ac.uk/wadler/
> >
> > On Sat, 9 Oct 2021 at 22:12, Derek Dreyer <dre...@mpi-sws.org> wrote:
> >>
> >> Hi, Phil.
> >>
> >> Yes, there is a way to make the single congruence rule work. See for
> >> example the way I set things up in my Semantics course notes (see
> >> Section 1.2):
> >> https://courses.ps.uni-saarland.de/sem_ws1920/dl/21/lecture_notes_2nd_half.pdf
> >>
> >> (Note: the general approach described below is not original, but my
> >> version may be easier to mechanize than others. Earlier work, such as
> >> Wright-Felleisen 94 and Harper-Stone 97, presents variations on this
> >> -- they use the term Replacement Lemma -- that I think are a bit
> >> clunkier and/or more annoying to mechanize. Wright-Felleisen cites
> >> Hindley-Seldin for this Replacement Lemma. In my version, the
> >> Replacement Lemma is broken into two lemmas -- Decomposition and
> >> Composition -- by defining a typing judgment for evaluation contexts.)
> >>
> >> Under this approach, you:
> >>
> >> 1. Divide the definition of reduction into two relations: let's call
> >> them "base reduction" and "full reduction". The base one has all the
> >> interesting basic reduction rules that actually do something (e.g.
> >> beta). The full one has just one rule, which handles all the "search"
> >> cases via eval ctxts: it says that K[e] reduces to K[e'] iff e
> >> base-reduces to e'. I believe it isn't strictly necessary to separate
> >> into two relations, but I've tried it without separating, and it makes
> >> the proof significantly cleaner to separate.
> >>
> >> 2. Define a notion of evaluation context typing K : A => B (signifying
> >> that K takes a hole of type A and returns a term of type B). This is
> >> the key part that many other accounts skip, but it makes things
> >> cleaner.
> >>
> >> With eval ctxt typing in hand, we can now prove the following two very
> >> easy lemmas (each requires only 1 or 2 lines of Coq):
> >>
> >> 3. Decomposition Lemma: If K[e] : B, then there exists A such that
> >> K : A => B and e : A.
> >>
> >> 4. Composition Lemma: If K : A => B and e : A, then K[e] : B.
> >>
> >> (Without eval ctxt typing, you have to state and prove these lemmas as
> >> one joint Replacement lemma.)
> >>
> >> Then, to prove preservation, you first prove preservation for base
> >> reduction in the usual way.
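Concretely, the statements in steps 1-4, together with the preservation
argument described next, might look as follows in Coq. The language, typing
judgment, and context typing are abstract parameters here, and
Decomposition, Composition, and base preservation are taken as hypotheses,
so this is only a sketch of the shapes involved, not the development from
the course notes:

  Section ectx_typing_shape.
    Variables expr ectx ty : Type.
    Variable fill : ectx -> expr -> expr.            (* K[e] *)
    Variable base_step : expr -> expr -> Prop.       (* "base reduction" *)
    Variable typed : expr -> ty -> Prop.             (* e : A *)
    Variable ectx_typed : ectx -> ty -> ty -> Prop.  (* K : A => B *)

    (* "Full reduction": the single search rule, via evaluation contexts. *)
    Inductive full_step : expr -> expr -> Prop :=
    | Full K e e' : base_step e e' -> full_step (fill K e) (fill K e').

    Hypothesis decomposition : forall K e B,
      typed (fill K e) B -> exists A, ectx_typed K A B /\ typed e A.
    Hypothesis composition : forall K e A B,
      ectx_typed K A B -> typed e A -> typed (fill K e) B.
    Hypothesis base_preservation : forall e e' A,
      typed e A -> base_step e e' -> typed e' A.

    (* Preservation for full reduction is then just a wrapper. *)
    Lemma full_preservation e e' B :
      full_step e e' -> typed e B -> typed e' B.
    Proof.
      intros Hstep. destruct Hstep as [K e0 e0' Hbase]. intros Hty.
      destruct (decomposition K e0 B Hty) as (A & HK & He0).
      apply (composition K e0' A B HK).
      exact (base_preservation e0 e0' A He0 Hbase).
    Qed.
  End ectx_typing_shape.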
> >> Then, the proof of preservation for full reduction follows immediately
> >> by wrapping the base-reduction preservation lemma with calls to
> >> Decomposition and Composition (again, just a few lines of Coq).
> >>
> >> My Semantics course notes just show this on pen and paper, but my
> >> students have also mechanized it in Coq, and we will be using that in
> >> the newest version of my course this fall. It is quite
> >> straightforward. The Coq source for the course is still in
> >> development at the moment, but I can share it with you if you're
> >> interested. I would be interested to know if for some reason this
> >> proof structure is harder to mechanize in Agda.
> >>
> >> Best wishes,
> >> Derek
> >>
> >> On Sat, Oct 9, 2021 at 8:55 PM Philip Wadler <wad...@inf.ed.ac.uk> wrote:
> >> >
> >> > [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
> >> >
> >> > Most mechanised formulations of reduction systems, such as those
> >> > found in Software Foundations or in Programming Language Foundations
> >> > in Agda, use one congruence rule for each evaluation context:
> >> >
> >> >   ξ-·₁ : ∀ {L L′ M}
> >> >     → L —→ L′
> >> >       -----------------
> >> >     → L · M —→ L′ · M
> >> >
> >> >   ξ-·₂ : ∀ {V M M′}
> >> >     → Value V
> >> >     → M —→ M′
> >> >       -----------------
> >> >     → V · M —→ V · M′
> >> >
> >> > One might instead define frames that specify evaluation contexts and
> >> > have a single congruence rule.
> >> >
> >> >   data Frame : Set where
> >> >     □· : Term → Frame
> >> >     ·□ : (V : Term) → Value V → Frame
> >> >
> >> >   _[_] : Frame → Term → Term
> >> >   (□· M) [ L ]   =  L · M
> >> >   (·□ V _) [ M ] =  V · M
> >> >
> >> >   ξ : ∀ F {M M′}
> >> >     → M —→ M′
> >> >       -------------------
> >> >     → F [ M ] —→ F [ M′ ]
> >> >
> >> > However, one rapidly gets into problems. For instance, consider the
> >> > proof that types are preserved by reduction.
> >> >
> >> >   preserve : ∀ {M N A}
> >> >     → ∅ ⊢ M ⦂ A
> >> >     → M —→ N
> >> >       ----------
> >> >     → ∅ ⊢ N ⦂ A
> >> >   ...
> >> >   preserve (⊢L · ⊢M) (ξ (□· _) L—→L′)   =  (preserve ⊢L L—→L′) · ⊢M
> >> >   preserve (⊢L · ⊢M) (ξ (·□ _ _) M—→M′) =  ⊢L · (preserve ⊢M M—→M′)
> >> >   ...
> >> >
> >> > The first of these two lines gives an error message:
> >> >
> >> >   I'm not sure if there should be a case for the constructor ξ,
> >> >   because I get stuck when trying to solve the following unification
> >> >   problems (inferred index ≟ expected index):
> >> >     F [ M ] ≟ L · M₁
> >> >     F [ M′ ] ≟ N
> >> >   when checking that the pattern ξ (□· _) L—→L′ has type L · M —→ N
> >> >
> >> > And the second provokes a similar error.
> >> >
> >> > This explains why so many formulations use one congruence rule for
> >> > each evaluation context. But is there a way to make the approach
> >> > with a single congruence rule work? Any citations to such approaches
> >> > in the literature?
> >> >
> >> > Thank you for your help. Go well, -- P
> >> >
> >> > .   \ Philip Wadler, Professor of Theoretical Computer Science,
> >> > .   /\ School of Informatics, University of Edinburgh
> >> > .  /  \ and Senior Research Fellow, IOHK
> >> > .   http://homepages.inf.ed.ac.uk/wadler/
> >> >
> >> > The University of Edinburgh is a charitable body, registered in
> >> > Scotland, with registration number SC005336.
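For comparison with the frame-based Agda definitions above, here is the same
kind of single-congruence-rule setup in Coq, on a toy term language invented
purely for illustration (a body-less lambda and a dummy beta rule). It
proves the values-don't-step lemma from the pastebin link, with frames in
place of full evaluation contexts; the single ξ case is dispatched by one
uniform case analysis on the frame:

  (* A toy term language: a body-less "lam" standing in for a lambda value,
     plus application.  Everything here is made up for illustration. *)
  Inductive term : Type :=
  | lam : term
  | app : term -> term -> term.

  Inductive value : term -> Prop :=
  | v_lam : value lam.

  (* Frames, as in the Agda Frame type above: "[] · M" and "V · []". *)
  Inductive frame : Type :=
  | app_l : term -> frame
  | app_r : forall V, value V -> frame.

  Definition fill (F : frame) (e : term) : term :=
    match F with
    | app_l M => app e M
    | app_r V _ => app V e
    end.

  (* One dummy base rule plus the single congruence rule ξ. *)
  Inductive step : term -> term -> Prop :=
  | step_beta e : step (app lam e) e
  | step_xi F e e' : step e e' -> step (fill F e) (fill F e').

  (* No frame filling produces a value: one case analysis on the frame. *)
  Lemma fill_not_value F e : ~ value (fill F e).
  Proof. destruct F; simpl; intros Hv; inversion Hv. Qed.

  (* Values don't step: the ξ case is handled uniformly by the lemma above. *)
  Lemma value_no_step e e' : step e e' -> ~ value e.
  Proof.
    intros Hstep. destruct Hstep.
    - intros Hv. inversion Hv.     (* beta case: app lam e is not a value *)
    - apply fill_not_value.        (* the single ξ case, for all frames *)
  Qed.

This only covers the easy values-don't-step fact; for type preservation
itself, the Decomposition/Composition route sketched earlier is one way to
handle the single ξ case uniformly.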