On Tue, Jul 15, 2014 at 12:41 PM, Matt Oliveri <[email protected]> wrote:

> On Tue, Jul 15, 2014 at 12:14 AM, Jonathan S. Shapiro <[email protected]>
> wrote:
> > I'm not sure what you mean by "at a certain type".
>
> That technicality is necessary to rule out assigning weaker types to
> "the same" programs.


But that still leaves me at a bit of an impasse, because "weaker" is a
value judgment and you haven't stated your criteria. By "weaker", do you
mean "less general"? What's the scale of reference here?

I'm not trying to be difficult with this question. I think it may go to the
heart of why we are having trouble converging in this discussion. In the
same vein, I'm having difficulty with what you mean when you talk about
"right" types (right by what criteria?). And then I am confused when you
talk about whether the BitC type system is complete enough, because
"complete", in my lexicon, is a technical term meaning a type that is
maximal with respect to inclusion. You seem to be using the term to mean
something else.


> It's hard to state this question in general for
> all systems. For example, what on earth is the "corresponding untyped
> program" in general? If BitC has unique typing, then technically
> different types contain disjoint sets of programs. But unique typing
> is kind of unintuitive to me for a strong type system.
>

BitC certainly doesn't have unique typings, because languages don't have
typings. Programs do. BitC *programs* don't necessarily have unique typings
because there are cases where overlapping integral types may lead to
constraints that are not uniquely resolved by the time we reach main. Also
because it is possible for type variables associated with unused fields not
to get resolved. For each of these cases there are resolving rules.
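
As a loose analogy (this is Rust, not BitC, and the details surely differ):
Rust's integer literals admit several overlapping integral typings, and a
defaulting rule resolves the ambiguity when nothing else constrains it.

```rust
fn main() {
    // The literal 255 types at any of several integral types; with no
    // other constraint, Rust's resolving rule defaults it to i32.
    let a = 255;        // defaulted to i32
    let b: u8 = 255;    // the annotation selects a narrower typing
    println!("{} {}", a, b); // prints "255 255"
}
```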

I'm again unsure what you mean here by a "strong" type system. Given
complete typings, unique typings are almost a necessary consequence.


> > It is a hard requirement
> > that certain programs incur zero dynamic exceptions, therefore zero
> > dynamic checks. It is not realistic to expect that the necessary degree
> > of static analysis will be feasible for all programs.
>
> That's actually a great point for my side of the argument. How are you
> going to get the right types while avoiding unacceptable dynamic
> failures, for all programs, when (automated?) static analysis is
> generally infeasible?
>

Good question. This actually *does* become a matter of annotation. The
underlying nature of the problem here is that assigning/deriving a most
general type leads to types that, for this purpose, are less constrained
than we would like them to be. We are pushed toward a resolution in which
we don't *want* a maximally inclusive typing. We intentionally want a
narrower and more restrictive typing.

What this really means is that the "right" type is circumstantial. When you
allow inference to derive the type, you want the maximal type, because that
maximizes reusability/genericity. For specific use-cases, that may not be
what you want.

And note that by the time you are generating code you are dealing with
*concrete* types, not *complete* types. Deriving a complete type initially
doesn't penalize efficiency.
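
A rough analogy in Rust (where signatures are explicit rather than
inferred, so take it with a grain of salt): the maximal type is the generic
one, a narrower typing is a deliberate restriction, and by code generation
only concrete instantiations remain, so the generic typing costs nothing.

```rust
// The maximal (most general) signature: reusable at any element type.
fn first<T>(xs: &[T]) -> Option<&T> {
    xs.first()
}

// An intentionally narrower typing for a specific use case.
fn first_byte(xs: &[u8]) -> Option<&u8> {
    xs.first()
}

fn main() {
    // By the time code is generated, the generic `first` has been
    // monomorphized to concrete instances.
    let xs = [1, 2, 3];
    println!("{:?}", first(&xs[..]));      // prints "Some(1)"
    println!("{:?}", first_byte(b"abc"));  // prints "Some(97)"
}
```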


>
> > One of the things about dependent types that concerns me is that you get
> > progressively deeper into black art prover work when you need to achieve
> > this "zero dynamic check" requirement.
>
> Well the proof obligations can get arbitrarily bad, but that doesn't
> technically mean you need to resort to black art features.


I mean that they are "black art" in the eyes of a systems programmer who
probably can't *name* a prover, never mind drive one.


> Maybe you're saying it's hard to predict when this (innocent-looking)
> code leads indirectly to that (impossible-looking) proof obligation.
> That's just a fact of sound type systems though.


The second you have a dependency on a run-time value, the nature of the
discharge problem becomes qualitatively more complex. Value analysis is a
*global* program analysis problem, and the discharge is accordingly
complicated. The same is *not* true of discharging type obligations or kind
obligations, which are local properties.
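
A small Rust sketch of the contrast (my illustration, not BitC semantics):

```rust
// A type obligation is local: checking this body needs only n's
// declared type, nothing about the rest of the program.
fn double(n: u32) -> u32 {
    n.wrapping_mul(2)
}

// A value obligation is not: statically proving `i < xs.len()` would
// require knowing every caller, which is a global value analysis.
// Lacking that, a dynamic check (and possible panic) stays in the
// accessor.
fn nth(xs: &[u32], i: usize) -> u32 {
    xs[i]
}

fn main() {
    println!("{}", double(21));          // prints 42
    println!("{}", nth(&[7, 8, 9], 1));  // prints 8
}
```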

> If you want to claim
> strong things, someone has to prove them.


You obviously don't work in marketing. :-)


> > There is a lot we can usefully say with kinding. For now that may need
> > to be enough.
>
> Can you get decision procedures for all the things expressible with
> your kinding system? If not, how are proof obligations handled?
>

I think we probably can do well enough for now. It's definitely a challenge
and a potential hazard.



> Curry-style typing has no effect on the actual program that runs.
> Whether a value has a type is just a matter of fact; nothing needs to
> be done to make it so. So asking which parts of the BitC type system
> would ideally be Curry-style is kind of like asking what types would
> ideally be completely erased.


I'm pretty firmly on the side of type erasure. So far as I can determine,
*all* parts of the BitC type system should be erasable without altering the
meaning of the program. I'm not sure that quite works for existentials, and
the '.' operator clearly needs some dynamic left context to decide what to
do.

I don't agree with you about punting to dynamic checks, since (e.g.) the
range check on array bounds is defined to be part of the accessor
operation. The check can be optimized away if the type information tells us
enough, but optimization does not change the program semantics.
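
Rust's slice indexing takes roughly the same position, if I read it right:
the range check is part of the accessor's meaning, and eliding it is purely
an optimization.

```rust
fn main() {
    let xs = [10, 20, 30];

    // The range check is part of indexing's semantics: this access is
    // checked unless the compiler can prove the index is in bounds.
    let i = 1;
    println!("{}", xs[i]); // prints 20

    // Iteration makes every access in-bounds by construction, so the
    // check can be optimized away without changing program meaning.
    let mut sum = 0;
    for &x in &xs {
        sum += x;
    }
    println!("{}", sum); // prints 60
}
```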


> Given an informal sketch of such a semantics for BitC, we could
> start talking productively about how close to complete BitC's typing
> rules are.
>

Or we could simply refer to the proof of sound and complete typing for the
language core that has already been done...

shap
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
