Lennart mentioned this recently; the situation is as follows:

1. semi-unification (the general one) is undecidable
2. it is semi-decidable
3. the semi-decision procedure for semi-unification can be augmented
   with heuristics that find almost all cases in which it fails
   (a kind of modified occur-check). It is a "well-behaved"
   undecidability.

You can modify an ordinary (unification-based) type-inference
algorithm as follows to get the more general types:
when type-checking a recursive definition, say

        letrec  f = ... f ..... f ... 

use a fresh type variable for every occurrence of f and proceed as
usual.  If type-checking the remaining parts (the ...) succeeds,
the algorithm inspects what happened to all those type variables
for f, i.e. which types have been substituted for them.  In a
unification-based setting you would have to unify them; here you
have to semi-unify them.  Suppose the types of the three occurrences
are as follows

        letrec  f = ... f ..... f ... 
                t1      t2      t3

then we have to solve the semi-unification problem (note the
difference between the defining occurrence and the using occurrences)
        t1 > t2
        t1 > t3
i.e. we have to find substitutions sigma, tau1, tau2, such that
        tau1(sigma(t1)) = sigma(t2)
        tau2(sigma(t1)) = sigma(t3)
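For a fixed sigma, finding each tau_i on its own is ordinary
one-sided unification (matching): is sigma(t2) an instance of
sigma(t1)?  A minimal sketch of such a matcher over first-order type
terms (the representation Ty/TVar/TCon/TFun is my own, just for
illustration; the hard part of semi-unification, finding the global
sigma, is not shown here):

```haskell
-- First-order type terms (hypothetical representation).
data Ty = TVar String | TCon String | TFun Ty Ty
  deriving (Eq, Show)

-- match t1 t2 tries to find a substitution tau with tau(t1) = t2,
-- i.e. it checks that t2 is an instance of t1.  It threads an
-- accumulated substitution and rejects inconsistent bindings.
match :: Ty -> Ty -> Maybe [(String, Ty)]
match = go []
  where
    go s (TVar v) t = case lookup v s of
      Nothing           -> Just ((v, t) : s)
      Just t' | t' == t -> Just s
      _                 -> Nothing
    go s (TCon c) (TCon c') | c == c' = Just s
    go s (TFun a b) (TFun a' b') = do s' <- go s a a'
                                      go s' b b'
    go _ _ _ = Nothing
```

For example, a -> a matches Int -> Int (with tau = [a := Int]),
but does not match Int -> Bool.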

The type sigma(t1) is the type for our f, and we have to apply sigma
throughout the range of our type-inference.
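This is exactly the situation of polymorphic recursion.  A standard
illustration (my example, not from the discussion above; GHC accepts
it only because of the explicit type signature, since its inference
is unification-based) where the type of a using occurrence is a
proper instance of the type of the defining occurrence:

```haskell
-- A nested datatype: at each ConsN level the element type gets
-- wrapped in another list.
data Nested a = NilN | ConsN a (Nested [a])

-- The defining occurrence of size has type  Nested a  -> Int  (t1);
-- the recursive call is at type             Nested [a] -> Int  (t2),
-- an instance of t1 via a := [a].  Unifying t1 with t2 fails
-- (occur-check), but the semi-unification constraint t1 > t2 is
-- solvable, so the definition is well-typed.
size :: Nested a -> Int
size NilN         = 0
size (ConsN _ xs) = 1 + size xs
```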

One can enhance the performance of the algorithm by the usual
Haskell/Miranda dependency analysis for letrecs, which basically
reduces bigger semi-unification problems to smaller ones (to be
honest, I'm not sure that it is really an improvement -- the
semi-unification algorithm does very similar things anyway).
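That dependency analysis is just a strongly-connected-components
pass over the call graph of the letrec bindings.  A sketch using
Data.Graph from the containers library (the Binding representation
is made up for the example):

```haskell
import Data.Graph (flattenSCC, stronglyConnComp)

-- A binding: a name together with the names its body refers to
-- (hypothetical representation, just for this sketch).
type Binding = (String, [String])

-- Split one big letrec into minimal mutually recursive groups,
-- dependencies first, so each group poses a smaller
-- semi-unification problem than the whole letrec would.
bindingGroups :: [Binding] -> [[String]]
bindingGroups bs =
  map flattenSCC (stronglyConnComp [ (n, n, uses) | (n, uses) <- bs ])
```

For bindings where f and g call each other and h calls f, this
yields the group of f and g first, then the singleton group of h.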

Stefan Kahrs
