Hi all,

Full covering is not a problem any more, because partially defined functions are harmless. Why do we insist on solving this problem? Isn't it a good thing to avoid an undecidable problem?
You may ask: what about the inhabitation problem? It would take too long to go into the details, but let me make two points.

1. Pattern matching is additional power for defining functions. The
   type theory is, roughly, UTT (or Martin-Löf's TT) + pattern
   matching. If an algorithm can decide inhabitation, then you are
   welcome to use pattern matching without worrying about the covering
   problem; otherwise you can only use other means, such as the
   eliminators.

2. What we have to decide is the inhabitation of a function space, not
   of an arbitrary type, and this sometimes makes the problem easier.
   For example, an algorithm may say that ... -> K -> ... -> K is
   inhabited without checking whether K itself is;
   n:Nat -> Vec(A,n) -> Vec(A,n) -> Vec(A,n) is in this category.

There are other tricks for checking the inhabitation of a function space. Although it is an undecidable problem, it is a different problem from full covering, and sometimes it is much easier to check inhabitation than to check full covering. The majority function (and the like) is an example, because ... -> Bool and ... -> Nat are obviously inhabited, as the sketch below illustrates.
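To make point 2 concrete, here is a rough sketch in Haskell (the GADT encoding and the names witness and majorityWitness are mine, purely for illustration): an inhabitation checker can succeed on such function spaces by returning an argument or a constant, without inspecting A at all.

    {-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

    data Nat = Zero | Suc Nat

    data Vec (n :: Nat) a where
      Nil  :: Vec 'Zero a
      Cons :: a -> Vec n a -> Vec ('Suc n) a

    -- n:Nat -> Vec(A,n) -> Vec(A,n) -> Vec(A,n) is inhabited for any
    -- A and n: just return one of the arguments.  (The length
    -- argument n is implicit in the type here.)
    witness :: Vec n a -> Vec n a -> Vec n a
    witness xs _ = xs

    -- Any function space ending in ... -> Bool is obviously
    -- inhabited by a constant function, e.g. at the majority
    -- function's type:
    majorityWitness :: Bool -> Bool -> Bool -> Bool
    majorityWitness _ _ _ = True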
Overall, I do think the idea of partiality with pattern matching gives us more flexibility in defining functions.

Yong

----- Original Message -----
From: "Conor McBride" <[EMAIL PROTECTED]>
To: <epigram@durham.ac.uk>
Sent: Thursday, June 08, 2006 8:02 PM
Subject: [Epigram] On Pattern Matching Coverage

> Hi folks
>
> My reply-to-Yong shed has been empty for a while now, partly because
> there's so much to say. I guess I'd better say it in fits.
>
> Fit 1: On Pattern Matching Coverage
>
> I'm bored of hearing that dependent pattern matching coverage is
> undecidable. Yes it is, if you start from an inadequate notion of
> what a dependently typed program is. That so many people (myself
> included) have done so is, I suspect, because of inertia and lazy
> thinking. Why should things be much different from simply typed
> programming? As a practical question, there's quite a long answer to
> that. As a moral question, it's just that simply typed programming
> leaves so much to be desired.
>
> Much of what follows is extracted from 'Eliminating dependent pattern
> matching' by me, Healf and James.
>
> Systems of 'pattern equations', whatever they are, are not generally
> sufficient to express total programs in a type theory with inductive
> families. Why not? Because you can't check whether or not their
> left-hand sides are exhaustive. There's no simple roll-call of
> constructors, because not all constructors target every type in the
> family. The problem boils down to type inhabitation, which is
> undecidable, and that's a good thing, right? Types are propositions,
> and if propositions were all decidable, the world would be a much
> more boring place.
>
> I conclude that systems of pattern equations are inadequate to
> represent total functional programs with dependent types. This leaves
> me with three choices:
>
> [give up dependent types] ain't gonna; total programming with simple
> types is too weak;
>
> [give up total programming] ain't gonna; or if I do, I'll be careful
> enough to express the possibility of finite failure (or even looping)
> in types, and then it'll be total programming again;
>
> [give up the idea that programs consist of systems of pattern
> equations] that sounds more like it.
>
> The task is to find a form of program which includes evidence for the
> exhaustiveness of its patterns. Epigram delivers one such candidate,
> giving programs explicitly as splitting trees generated by arbitrary
> (standard and user-defined) eliminators, with first-order unification
> for the constraints. Splitting trees are readily checkable, and they
> also give us a handy interactive programming process.
>
> But they are a bit noisy. What can be done to reduce the noise? Allow
> the flattening of nonempty case nodes. What do I mean? Well, if
> you're writing
>
>   xs : Vec (suc n) X
>   ----------------------
>   tail xs : Vec n X
>
>   tail xs <= case xs {
>   tail (cons x xs) => xs }
>
> you could just give the type and write
>
>   tail (cons x xs) => xs
>
> instead. But to write
>
>   x : Fin zero
>   ----------------
>   kill x : X
>
> you would still need
>
>   kill x <= case x
>
> which we might now write
>
>   kill x \|/ x
>
> pronounced 'kill x refutes x'. The symbol is the top half of an ASCII
> stick-person, er, expressing some belligerence...
>
> It's not hard to show that a suitable splitting tree can readily be
> recovered from such a flattened program (or just from a flattened
> layer of a program). It may not be the same splitting tree. For
> example, given an inductively defined unit type with constructor
> void, and the program
>
>   dull void void => void
>
> you cannot tell which argument to split first, but who cares? Any
> splitting tree which delivers the expressed analysis will do.
>
> The idea goes like this: start with a *problem* pattern p and a
> nonempty set of *program* patterns {sigma_i p} which refine it, for
> substitutions sigma_i. (Initially, p might just be the function
> applied to distinct pattern variables, with the program patterns
> being the left-hand sides.) If the program pattern set is just
> {rho p} for some bijective renaming rho, we're done. Otherwise,
> nondeterministically split on any pattern variable x in p such that
> sigma_i x is constructor-headed for each i and the program patterns
> partition into nonempty sets refining the subproblem patterns;
> repeat. This process terminates because the program patterns remain
> fixed and bound the number of constructor symbols which may appear in
> a problem pattern which they refine: problem patterns acquire
> strictly more constructor symbols at each split, and each must be
> refined by at least one program pattern.
>
> The nondeterminism is finite, so backtracking is feasible. But not
> necessary, I now suspect. Which is not the same as saying that
> programs with the same equations always have the same refutations.
> Here's an example: projecting an element of a vector indexed by a
> finite set of the right size:
>
>   proj i xs <= rec i {
>   proj fz (cons x xs') => x
>   proj (fs i') (cons x xs') => proj i' xs' }
>
> necessarily arises by splitting i first. If you split xs or the
> length first, you get an unrefined problem pattern for nil.
> Meanwhile,
>
>   proj i xs <= rec i {
>   proj i nil \|/ i
>   proj fz (cons x xs') => x
>   proj (fs i') (cons x xs') => proj i' xs' }
>
> must split xs or the length first, because i is not everywhere in
> constructor form. It doesn't matter whether you split the length or
> the vector: splitting the vector tells you the length anyway, and
> splitting the length doesn't hurt.
>
> What we have done is to recover a splitting tree from its entire
> fringe: not just from its leaves, which give rise to equations, but
> also from its nodes-with-no-children, which become refutations. This
> is reasonable, because it can sometimes take a lot of explanation to
> kill a type, and there's no reason to expect this to happen by magic.
> Consider
>
>   data Boo : * where moo, noo : Fin zero -> Boo
>
> We should now write
>
>   killBoo (moo x) \|/ x
>   killBoo (noo x) \|/ x
>
> to explain the more complex reason why Boo is empty.
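> For comparison only, here is a rough sketch of the same phenomena in
> a different setting (an assumption of this sketch, not Epigram): a
> recent GHC with GADTs and EmptyCase accepts both the proj clauses
> and these refutations as exhaustive.
>
>   {-# LANGUAGE GADTs, DataKinds, KindSignatures, EmptyCase #-}
>
>   data Nat = Zero | Suc Nat
>
>   data Fin (n :: Nat) where
>     FZ :: Fin ('Suc n)
>     FS :: Fin n -> Fin ('Suc n)
>
>   data Vec (n :: Nat) a where
>     Nil  :: Vec 'Zero a
>     Cons :: a -> Vec n a -> Vec ('Suc n) a
>
>   data Boo where
>     Moo :: Fin 'Zero -> Boo
>     Noo :: Fin 'Zero -> Boo
>
>   -- Splitting on the Fin first: FZ and FS force the length to be a
>   -- Suc, so the Nil case is impossible and two clauses suffice.
>   proj :: Fin n -> Vec n a -> a
>   proj FZ     (Cons x _)   = x
>   proj (FS i) (Cons _ xs') = proj i xs'
>
>   -- 'kill x refutes x': Fin 'Zero has no constructors, so the
>   -- empty case is exhaustive.
>   kill :: Fin 'Zero -> a
>   kill x = case x of {}
>
>   -- Boo needs one layer of splitting before each branch can be
>   -- refuted.
>   killBoo :: Boo -> a
>   killBoo (Moo x) = case x of {}
>   killBoo (Noo x) = case x of {}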
> My point here is not to say that programs which arise from splitting
> trees are the only tolerable ones. All I'm saying in this message is
> that a sensible notion of splitting-tree-based dependent pattern
> matching exists and has decidable coverage. Deciding coverage is thus
> not a problem inherent in total dependently typed programming.
>
> So, Yong, if you want to justify your 'partial function' proposal,
> you cannot do so on the basis that 'coverage is undecidable'.
> Moreover, if you want to retain consistency by ensuring that all
> partial functions could be 'filled up' with a heuristically chosen
> dummy behaviour, then it is you who has the type inhabitation
> problem.
>
> Type theory is a language of evidence. To provide evidence of
> functionality, you must either inhabit the range or dismiss the
> domain. Programs in type theory should therefore be capable of
> expressing both sorts of evidence, and once you actually allow them
> to do so, the 'coverage problem' vanishes like the illusion it always
> was.
>
> Let us never speak of it again.
>
> All the best
>
> Conor