If anybody has strong objections, we need to hear them!

BitC currently provides quantification over types only:

  (define (pair-cons x y)
    (pair x y))
  : (forall ('a 'b) (fn ('a 'b) (pair 'a 'b)))

  (define (is-null x)
    (case id x
      (null #t)
      (default #f)))
  : (forall (nullable 'a) (fn 'a bool))

But there are limitations in the current scheme. For example we cannot
define a generalized type class of "vector-like" behavior to include
arrays, because we have no way to abstract over the array size.

It is not possible to quantify over *values* without getting into a
fundamentally different type system, but it is possible to quantify over
literals without fundamentally extending the current type system. In
particular, we can extend the FORALL syntax to allow:

  (forall ( ... id:T ... ) ...definition using id...)

with the intended meaning

  "for all compile-time constants /id/ having type T"

Within the scope of the quantification, /id/ becomes a type name, and
any identifier bound to that type becomes the associated (constant)
literal value. A formal parameter of literal type is never actually
passed or received.

If we admit this, then array-length becomes a procedure rather than a
built-in:

  (forall (sz:word 'a)
    (define (array-length a:(array 'a sz)) sz)) :

  (forall (sz:word 'a) (fn (array 'a sz) sz))

Note that a literal of type T is one-way assignment compatible with a
variable of type T, but only in the variable <- literal direction.


In and of itself, this kind of literal quantification isn't very
exciting. It regularizes array types, which is nice, but that isn't a
truly compelling justification for adding this, and I have held off on
this because I wasn't really convinced about its value in the grand
scheme of things.

Over the last few days, I've been wrestling with separate compilation
issues and typeclass instances, and I noticed today that literal
quantification solves that problem very nicely.

More precisely, *constant* quantification solves the problem.

Observe that literal quantification is a special case of general
compile-time constant quantification. There is no requirement that the
type T used in the literal quantification be a primitive type (which
is what I had originally assumed). Any compile-time constant is okay
here provided that constant is shallow-immutable (one can get into
trouble with deeply mutable constants of this sort, but such constants
do not present any hazard to the quantification mechanism). If the
constant is deeply mutable, the effect is strangely similar to a
compile-time evaluated closure construction.

That is: compile-time constant quantification is term substitution
(modulo deep mutability).

And no, off-hand I don't have any particular use for compile-time
constant quantification in general, but there does turn out to be one
very important case: procedure objects for statically resolvable
procedures.

In particular, we can take the view that in:

  (deftypeclass (myTC 'a) 
    proc1: (fn 'a bool)
    proc2: ...)

  (define (f x)
    (proc1 x)) :
  (forall (myTC 'a) (fn 'a bool))

the type of procedure /f/ should be read to mean:

  (forall ((myTC 'a) proc1:(myTC 'a).proc1) (fn 'a bool))

which is to say:

  "we require a type 'a that satisfies (myTC 'a), and we further
   require a compile-time resolvable binding of proc1 satisfying the
   type 'methods of (myTC 'a) named proc1'"

If we adopt this view, then what happens is that the method binding
occurs at the outermost point where (myTC 'a) is instantiated by the
polyinstantiator, and that resolution percolates inwards across any
calls into libraries and so forth. The *binding* rule can now be lexical
and unambiguous without sacrificing any ability to build generalized
libraries.

Quantified literals are "resolved" by unification with constructors or
other quantified literals.


shap

_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
