Hi Adam

Sorry to take so long. I've been on the move; I'm on the move again tomorrow. Keeping up with email is difficult. I hope you don't mind that I'm copying this reply to the Epigram mailing list: I expect some other people may be interested in the issues involved.

Adam Chlipala wrote:

I'm doing some work now on dependently-typed programming languages that should be about as easy to use as ML. The nastiest parts have to do with reasoning about type equality in type inference. I've had some success doing this without requiring explicit proof objects in programs, but I believe that you're dealing with the same issues in Epigram, so I thought I'd check on what your state-of-the-art is.

I don't remember coming across any discussion of this yet in the documentation that you have on www.e-pig.org. Do any of your papers talk about pragmatic type equality inference? Ideally I'd like something that avoids a "tutorial" approach and cuts to the point in a way that is understandable by someone very comfortable with Coq programming.


We don't get to write very many papers with other than a "tutorial" approach: it's very hard to write technical papers about the next step when the typical reader is not already familiar with the starting point. Equality (and let's not talk about "type equality" as if it's a separate issue) is a notoriously slippery tar pit, and your query comes at a time of some change in our approach. Let me at least try to outline the situation and point you towards useful sources. Sadly, there is not yet a canonical technical paper about how this works in Epigram.

Firstly, you'll be aware from working with Coq that we need to tease out the distinctions between

(1) the definitional equality of the type theory, as used to determine whether two completed terms are equal, and hence to give operational force to the conversion rule (suitably shifted to where it's needed for a syntax-directed system);

(2) the mechanism, typically some sort of unification, used to solve definitional equations over unknowns, typically implicit arguments;

(3) the types which stand for equational propositions, and how we generate and exploit their inhabitants.
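To make the boundary between (1) and (3) concrete, here is the classic example, rendered in Lean rather than Epigram (an illustration of the general phenomenon, not of Epigram's own equality):

```lean
-- Definitional: n + 0 reduces to n because addition recurses on its
-- second argument, so the bare reflexivity proof rfl suffices.
example (n : Nat) : n + 0 = n := rfl

-- Propositional only: 0 + n = n does not hold by computation alone;
-- it needs an induction, packaged here as the library lemma Nat.zero_add.
example (n : Nat) : 0 + n = n := Nat.zero_add n
```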

I imagine you're primarily interested in (2), although (3) is also important. To make sense of (2), we need first to clarify (1).

The definitional equality of Epigram is pretty much as described in 'Epigram Reloaded' (Chapman, Altenkirch, McBride). It's a type-directed equality on beta-delta-iota-normal forms which performs eta-expansion on the fly for elements of Pi-, Sigma-, unit and empty types. This is a bit less bureaucratic than Coq's alpha-equivalence of beta-delta-iota-normal forms, but it's still an intensional computational equality and nobody will faint with amazement. It's nice having proof irrelevance for the empty type partly because, in combination with the eta-rules for ->, you get proof irrelevance for classical logic under the double-negation translation.
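For a flavour of what such eta-rules buy, here is the idea in Lean, which also checks eta for functions and has definitional proof irrelevance (Lean's irrelevance covers all of Prop, not just the empty type, so take this as a sketch of the idea rather than Epigram's exact rules):

```lean
-- Eta for functions holds definitionally: a lambda that merely
-- applies f is the same term as f.
example (f : Nat → Nat) : (fun x => f x) = f := rfl

-- Definitional proof irrelevance: any two proofs of a Prop are equal.
example (p : Prop) (h1 h2 : p) : h1 = h2 := rfl
```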

Of course, in practice, the problems which show up contain unknowns. Epigram tries to solve these with Miller-style pattern unification. See Dale Miller's 'Unification Under a Mixed Prefix'. If you're willing to deal with heterogeneous equations (and I am), this lifts straightforwardly to the dependently typed case. The eta-laws you adopt place an effective limit on the amount of higher-order unification you can do and still have unique most general unifiers. I mean, if somebody tells you that ?f True = True and ?f False = False, you do not know, intensionally, what ?f is, unless you have the commuting conversion for Bool. In effect, pattern unification is first-order unification plus eta-laws.

Caveat! Given a system of unification constraints to solve, there is no fixed order in which they should be tackled. This gives (or certainly at one point gave) rise to a weakness (bug?) in Coq, one which we certainly had in the Lego days, where (?x, ?x + ?y) = (0, 3) is solved but (?x + ?y, ?x) = (3, 0) is not. Scanning left to right and giving up at the first sign of trouble does not work; unification is inherently transactional and should proceed in any order it can. This is what Epigram's 'yellow' status is used for: hard problems are not irredeemably unsolvable; they are suspended pending new information which may simplify them.
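The pair example above (after decomposing the pairs into component equations) can be sketched as a toy solver with a 'yellow' suspension queue. This is my own illustrative Python, nothing to do with Epigram's actual implementation; the term representation and function names are made up, and the occurs check is omitted for brevity:

```python
# Toy transactional unifier. Terms: a string (metavariable name), an
# int literal, or ('add', s, t). Equations that make no progress are
# suspended ("yellow") and retried once other equations supply bindings.

def subst(env, t):
    """Replace solved metavariables by their values, recursively."""
    if isinstance(t, str):
        return subst(env, env[t]) if t in env else t
    if isinstance(t, tuple):
        return ('add', subst(env, t[1]), subst(env, t[2]))
    return t

def whnf(env, t):
    """Substitute, then compute: 0 + b reduces to b, literals add up."""
    t = subst(env, t)
    if isinstance(t, tuple):
        a, b = whnf(env, t[1]), whnf(env, t[2])
        if a == 0:
            return b
        if isinstance(a, int) and isinstance(b, int):
            return a + b
        return ('add', a, b)
    return t

def solve(equations):
    """Return (bindings, unsolved). The order of equations must not
    affect the outcome: stuck equations are parked, not rejected."""
    env, queue, progress = {}, list(equations), True
    while queue and progress:
        progress, suspended = False, []
        for l, r in queue:
            l, r = whnf(env, l), whnf(env, r)
            if isinstance(l, int) and isinstance(r, int):
                assert l == r, "rigid-rigid clash"
                progress = True
            elif isinstance(l, str):                 # metavar = term
                env[l], progress = r, True
            elif isinstance(r, str):                 # term = metavar
                env[r], progress = l, True
            else:
                suspended.append((l, r))             # yellow: wait
        queue = suspended
    return env, queue
```

In either order, ?x + ?y = 3 together with ?x = 0 yields x = 0, y = 3: the stuck addition is suspended, ?x = 0 is solved, and the revisited equation then computes down to ?y = 3. Scanning once, left to right, and failing on the stuck addition would wrongly reject one of the two orderings.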

The definitional equality is pretty weak, really, and sooner or later you need equations that just don't follow by partial evaluation alone. You can run but you can't hide, or do algebra. At this point, you either need to extend the definitional equality or wheel on the propositional equality. I'm not a big fan of ad hoc extensions of the definitional equality. The pragmatic case for bolting on things like Presburger arithmetic solvers is quite strong, but I'd rather see this happen through the reflection of propositional equations. If you know a class of decidable problems, reflect it as a syntax S, give the semantics Sem : S -> Prop (or whatever), then prove forall s : S, (Sem s) \/ (not (Sem s)) or some such. Actually, a better trick is to choose Sem so that forall s : S, Sem s holds and is interesting. See, for example, the work on arithmetic with bounded quantification by Martijn Oostdijk. Teach the machine how to quote (an operation which lives outside the kernel) rather than inventing a new type system which is S-savvy. Leastways, that's my plan. Hasn't happened yet.
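As a miniature of that recipe in Lean (my translation, not Oostdijk's development): the Decidable class packages the "Sem s or not Sem s" evidence, and the `decide` tactic plays the role of quotation plus evaluation. Bounded quantification over Nat is decidable in Lean's library, which is exactly what makes this class of goals reflectable:

```lean
-- Run a decision procedure inside the checker: the goal is quoted,
-- decided by computation, and the yes-answer reflected back as a proof.
example : ∀ n, n < 10 → n * n ≤ 100 := by decide
```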

However you prove equations, you can certainly coerce explicitly from one type to another which is provably equal.

let

(  Q : S = T ;  s : S  !
!----------------------!
!  coe Q s : T         )

coe Q s <= case Q {
 coe refl s => s }

Moreover, as it happens, this coercion is a no-op.

let

(  Q : S = T ;  s : S     !
!-------------------------!
!  coh Q s : s = coe Q s  )

coh Q s <= case Q {
 coh refl s => refl }

Such is the convenience of heterogeneous (formerly 'John Major') equality. Thanks to work by Edwin Brady (e.g., see 'Inductive Families Need Not Store Their Indices') none of this equality reasoning need survive to run time.
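For readers more at home in Lean than in Epigram's deduction-rule notation, rough analogues of coe and coh can be written with Lean's cast and its heterogeneous equality HEq; the correspondence is mine, not an official one:

```lean
-- coe: transport a value along a proof that its type equals another.
example (S T : Type) (Q : S = T) (s : S) : T := cast Q s

-- coh: the transported value is heterogeneously equal to the original
-- (cast_heq gives the equation the other way round, so flip it).
example (S T : Type) (Q : S = T) (s : S) : HEq s (cast Q s) :=
  (cast_heq Q s).symm
```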

The only place where reasoning with the propositional equality happens automatically behind the scenes is to solve index constraints when invoking an eliminator. See almost anything I've ever written, but especially 'The view from the left'. Also my Types 96 paper, my thesis and the more recent 'Eliminating dependent pattern matching'.

However, some change to this picture is afoot. See 'Towards Observational Type Theory' for a system whose definitional equality remains intensional, but whose propositional equality is defined type by type to reflect the notions of observation available. For functions, this is extensional equality. For equality proofs, this is proof irrelevant. And so on. This will enable us to use the propositional equality to represent unification problems in typechecking and hence bring user-supplied reasoning methods to bear on them. We're aiming for a system which has a very compact kernel, but whose effective functionality is highly extensible by programming.
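The observational treatment of functions, in slogan form, is that propositional equality of functions is pointwise equality. Lean reaches the same propositional principle by other means (funext, derived from quotients), so take this as an analogy, not OTT itself:

```lean
-- Propositional equality of functions from pointwise equality.
example (f g : Nat → Nat) (h : ∀ x, f x = g x) : f = g := funext h
```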

So I'm sorry there isn't a more coherent story about all of this. Either it doesn't stay still long enough for me to write it down, or I don't stay still long enough to write anything down. I hope this selection of pointers is in any case useful.

Mind how you go

Conor
