On Thu, Jul 10, 2014 at 2:23 PM, Matt Oliveri <[email protected]> wrote:

> I'm debating with Shap whether or not it's OK for type inference to
> sometimes fail for unintelligible reasons. I say yes, Shap says no. My
> motivation is that getting well-behaved type inference places
> uncomfortable restrictions on the language design, at least given the
> current state of the art.
>
> At least that's what I think we're debating.


So first, yes, I think that's a good summary of the direct debate. But I
think the revealing question to ask is: why are we coming to different
outcomes?

The answer, I think, is that we have different perceptions about where the
usability constraints are in the language, and which constraints will
inhibit uptake by mainstream programmers.

To the mainstream systems programmer, the HM type system is already well
past the line of Clarke's third law:

Any sufficiently advanced technology is indistinguishable from magic.


Unless we can make type-level computation feel reasonably natural to people
who don't know much about modern type theory, it will hinder adoption more
than it helps us. This puts us up against Voltaire:

Don't let the perfect be the enemy of the good.


When I look at the problems in the field today, it seems a lot more
important to get the large majority of programs moved to safe languages in
a stronger system framework than it is to prove properties about the three
real-world programs we can actually specify. If mainstream adoption is the
goal, proof support may be an impediment.

When I look at something like Habit, I see uses of type-level computation
and kinding that are new, but they are explicable to a systems programmer,
and their value can be readily seen. So far so good.

We know, empirically, that writing higher-order types by hand is
exceptionally hard even for experts to do. You can find an annotation that
works, but it will rarely turn out to be a complete type.
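
To make that concrete with a toy example (in Haskell rather than BitC, and
purely my own sketch of the general problem, not anything from the BitC
design): HM inference stops at rank-1 polymorphism, so the moment an
argument itself has to be polymorphic, the programmer has to place the
quantifiers by hand, and it is easy to land on an annotation that
type-checks but is less general than what was meant.

    {-# LANGUAGE RankNTypes #-}

    -- HM cannot infer this rank-2 type. Without the explicit forall the
    -- definition is rejected, and the plausible-looking rank-1 annotation
    -- (a -> a) -> (Int, Bool) doesn't type-check for this body at all,
    -- because 'a' would have to be both Int and Bool.
    applyToBoth :: (forall a. a -> a) -> (Int, Bool)
    applyToBoth f = (f 3, f True)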

In BitC, we already have a bunch of things running around in the type
system:

   1. Parametric types in the style of HM
   2. Kinding in the style of Habit [almost certainly]
   3. General mutability
   4. Regions
   5. Effects
   6. Concurrency labels [if I can figure out how to make them work]

Of these, only the first one exists in HM so far as any programmer need
notice. Similarly in Coq. So even before we *get* to the question of
dependent types, we're already looking at something enormously complicated
that has to be rendered in a directly comprehensible way. Personally, I
think it likely that we already have more than we can manage, and we are
going to need to drop things in order to make the language practically
approachable. The only tool we have to mitigate that complexity is
inference.
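
For contrast, here is the trivial sort of thing inference already buys us
(again plain Haskell, not BitC, and just an illustration): the programmer
writes no type at all, and HM recovers the most general parametric type on
its own.

    -- No annotation written; HM infers the most general type,
    -- swapPairs :: [(a, b)] -> [(b, a)].
    swapPairs ps = map (\(x, y) -> (y, x)) ps

    -- e.g. swapPairs [(1, 'a'), (2, 'b')] == [('a', 1), ('b', 2)]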

Frankly, I'm scared to put dependent typing in BitC. I think it's too
much. For one thing, I can't point to a single dependently typed language
that *I* can really wrap my head around. Try imagining your average Linux
programmer working in Idris. For another, I think we already have enough
new stuff in other dimensions that is demonstrably important and useful to
systems programs.

It may be that the right answer is for dependent types to wait another
generation of PLs. It could be that we can capture a lot with
pre/post-conditions and assertions.
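
Roughly the trade I have in mind, again sketched in Haskell with a made-up
safeIndex helper rather than anything from BitC: where a dependently typed
language could state the bounds obligation in the type and discharge it
statically, an assertion states the same obligation and checks it at run
time.

    import Control.Exception (assert)

    -- A dependent type could carry the obligation 0 <= i < length xs in
    -- the type of safeIndex; here it is a checked run-time precondition.
    safeIndex :: [a] -> Int -> a
    safeIndex xs i = assert (0 <= i && i < length xs) (xs !! i)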

There are a lot of paths here. Success requires that we thread the needle
between expressiveness and usability. I'm concerned that things are already
too complicated for usability, and we're going to need to pick our battles.


shap
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
