Hello again

Yong Luo wrote:

Hi all,

Let me restate the reasons for introducing the concept of "partially defined
functions".

1. One would like to introduce functions by pattern matching.

That's not in doubt. But when you have the Epigram set-up, with elaboration from a high-level source language to a low-level core, you need to consider where pattern matching belongs. Our position has been that adding pattern matching to the core theory is an unnecessary complication of it. Eliminators suffice. Moreover, by basing the high-level pattern matching on eliminators, we provide a language of explanation for coverings which is not restricted to constructor patterns, and we also deliver our interactive mode of programming without having to try too hard.

2. Full covering is an undecidable problem.


You insist on repeating this. What you mean by 'Full covering' may be undecidable, but it is not the problem which we face. Checking Epigram programs for covering is perfectly decidable, and as I've pointed out on numerous occasions, programs which only involve /constructor/ case analysis can be flattened to a sequence of equations and refutations.

However, with the concept of
partiality, it is not a problem any more because partially defined functions
are harmless.

What do you mean by 'harmless' here? Leaving aside (for a moment) your requirement that a machine decide type inhabitation, let's see what you're proposing. In effect, you want to equip a language of total functions with a 'junk' operator

 pred (suc n) = n
 pred zero = junk

only your syntax for the junk operator is to omit the line entirely. Your condition for deploying the junk operator is that there must be a term (in the appropriate context) of the return type. In a total language, programmers are free to do this, provided they make up their own junk, but you want to make it up for them. Do you maintain that junk is harmless as long as it's well-typed junk? The 'garbage in, garbage out' policy used to be respectable, but if you want to prove anything about your programs, you spend half your life proving that what you got out doesn't matter because what you put in was garbage. That's why we try to give programs more precise types in the first place. Can I write a function
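To pin down what the junk completion amounts to, here is a rough Haskell sketch (the names `predJunk` and `predMaybe` are mine, not from this thread), contrasting silently invented junk with a type that records the missing case honestly:

```haskell
-- Hypothetical illustration, not code from the thread.
data Nat = Zero | Suc Nat deriving (Eq, Show)

-- The junk completion: 'pred Zero' returns an arbitrary well-typed value,
-- and callers cannot distinguish it from a meaningful result.
predJunk :: Nat -> Nat
predJunk (Suc n) = n
predJunk Zero    = Zero          -- invented junk

-- The honest alternative: the type records that Zero has no predecessor.
predMaybe :: Nat -> Maybe Nat
predMaybe (Suc n) = Just n
predMaybe Zero    = Nothing
```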

 tail : (X : *, n : Nat) Vec X n -> Vec X (pred n)

?
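For what it's worth, the analogous definition can be sketched in GHC Haskell with GADTs (my sketch, not Epigram), assuming a type-level `Pred` with `Pred Z = Z`, which is itself exactly the junk equation at issue. The empty-vector case then forces us to conjure a value out of thin air:

```haskell
{-# LANGUAGE GADTs, DataKinds, TypeFamilies #-}
-- Hypothetical sketch: 'Pred Z = Z' is the junk equation in question.
data Nat = Z | S Nat

type family Pred (n :: Nat) :: Nat where
  Pred 'Z     = 'Z
  Pred ('S n) = n

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

toList :: Vec n a -> [a]
toList VNil         = []
toList (VCons x xs) = x : toList xs

-- With junk-pred, 'vtail VNil' must produce a Vec 'Z a from nothing.
vtail :: Vec n a -> Vec (Pred n) a
vtail VNil         = VNil   -- the junk case, silently accepted
vtail (VCons _ xs) = xs
```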

Returning to the question of type inhabitation, and whether or not it's equivalent to the covering problem, let me just ask you how you tell if the empty program is a valid (partial) definition of type A -> Empty?
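Compare how GHC Haskell handles the empty program: it must be written as an explicit empty case, and the checker accepts it only when the scrutinee's type is manifestly uninhabited. A sketch (mine, not from the thread) of the easy case:

```haskell
{-# LANGUAGE EmptyCase #-}
-- Hypothetical sketch: the 'empty program' as an explicitly empty case,
-- not an omitted one.
data Empty   -- no constructors

-- Accepted because 'Empty' has no constructors to cover. For a general
-- type A, no such definition goes through: the checker would have to
-- decide whether A is inhabited, which is the undecidable question.
noClauses :: Empty -> Empty
noClauses x = case x of {}
```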

Two more examples follow. The inhabitation of ....-> Nat is
obvious, and it is unnecessary to check pattern coverage. See more details
at:

http://www.cs.kent.ac.uk/people/staff/yl41/TPF.ps

I see that your patterns are constructor forms. Can you implement the eliminators for inductive families in this language?

f :: Nat -> Nat -> Nat -> Nat
f 0 0 0 = 1
f 0 (succ y) z = g1 y z
f (succ x) y 0 = g2 x y
f x 0 (succ z) = g3 x z
f (succ x) (succ y) (succ z) = f (succ x) (succ y) z +
                               f (succ x) y (succ z) +
                               f x (succ y) (succ z)
where g1, g2 and g3 were defined beforehand.

That's recognisably a total function. Why do we need to consider partial functions to allow this definition? Your example does raise the question of whether making the above equations hold definitionally is sufficiently desirable to motivate adding pattern matching to the core theory. One can have pattern matching without partial functions. Why do you persist in arguing for both as if they are inseparable?
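To make the totality plain, here is a transcription over Integer, assuming non-negative inputs; `g1`, `g2` and `g3` are hypothetical stand-ins, since the thread only says they "were defined beforehand":

```haskell
-- g1, g2, g3 are invented placeholders; any total functions would do.
g1, g2, g3 :: Integer -> Integer -> Integer
g1 y z = y + z
g2 x y = x * y + 1
g3 x z = x + 2 * z

f :: Integer -> Integer -> Integer -> Integer
f 0 0 0 = 1
f 0 y z | y > 0 = g1 (y - 1) z     -- f 0 (succ y) z
f x y 0 | x > 0 = g2 (x - 1) y     -- f (succ x) y 0
f x 0 z | z > 0 = g3 x (z - 1)     -- f x 0 (succ z)
f x y z = f x y (z - 1)            -- all three positive here, and
        + f x (y - 1) z            -- x + y + z strictly decreases in
        + f (x - 1) y z            -- every recursive call
```

The sum of the three arguments decreases at each recursive call, which is why the function is recognisably total.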

Nested patterns:

np :: List (List Nat) -> Nat -> Nat
np (nil (List Nat)) n = n
np (cons (List Nat) (nil Nat) xss) n = np xss n
np (cons (List Nat) (cons Nat x xs) xss) n = np (cons (List Nat) xs xss) (succ n)

In Haskell, one can write np as follows.

np :: [[Int]] -> Int -> Int
np [] n = n
np ([]:xss) n = np xss n
np ((x:xs):xss) n = np (xs:xss) (n+1)

What was supposed to be the problem here? The patterns are straightforwardly a covering. The recursion is not quite structural (in the usual sense) because (cons xs xss) is not guarded by constructors in (cons (cons x xs) xss), but it's still explicable in terms of rec, thus:

 np xss n <= rec xss {
   np nil n => n
   np (cons xs xss') n <= rec xs {
     np (cons nil xss') n => np xss' n
      np (cons (cons x xs') xss') n => np (cons xs' xss') (suc n) }}
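The same structure can be mirrored in Haskell by making the inner recursion a local function, so that both loops are plainly structural (my sketch, not from the thread):

```haskell
-- np with the nested recursion made explicit: the outer loop descends on
-- xss, the inner loop on the head list, mirroring the rec/rec elaboration.
np :: [[Int]] -> Int -> Int
np []          n = n
np (xs0 : xss) n = inner xs0 n
  where
    inner []       m = np xss m        -- outer call, on the smaller xss
    inner (_ : xs) m = inner xs (m + 1)
```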

Just as you're advocating a fixed external implementation of inhabitation to justify function admissibility, you're also advocating a fixed external criterion for termination checking. I would much prefer to support a checkable language of evidence for both.

All the best

Conor
