Wow, we're thinking really differently about this. This is going to be hard.

On Tue, Jul 15, 2014 at 10:30 PM, Jonathan S. Shapiro <[email protected]> wrote:
> On Tue, Jul 15, 2014 at 12:41 PM, Matt Oliveri <[email protected]> wrote:
>>
>> On Tue, Jul 15, 2014 at 12:14 AM, Jonathan S. Shapiro <[email protected]>
>> wrote:
>> > I'm not sure what you mean by "at a certain type".
>>
>> That technicality is necessary to rule out assigning weaker types to
>> "the same" programs.
>
> But that still leaves me at a bit of an impasse, because "weaker" is a value
> judgment and you haven't stated your criteria. By "weaker", do you mean
> "less general"? What's the scale of reference here?

Yes, that was my informal explanation to motivate that technical
qualification. I'm sorry if it didn't help.

> I'm not trying to be difficult with this question. I think it may go to the
> heart of why we are having trouble converging on this discussion. In the
> same vein, I'm having difficulty with what you mean when you talk about "right"
> types (right by what criteria)? And then I am confused when you talk about
> whether the BitC type system is complete enough. Because "complete", in my
> lexicon, is a technical term meaning a type that is maximal with respect to
> inclusion. You seem to be using the term to mean something else.

There certainly is some amount of subjective design sense involved
here, but by looking at a specific semantics, we can try to separate
your intended meaning/use of the language features from the way you
codified them as a type system.

I will not be able to argue if it just turns out we disagree about the
goals of BitC. What I'm worried about is that BitC will fall short of
your goals because of limitations of type checking and inference.

(Though, cards on the table, I'm not _that_ worried. I could just
leave you to your fate.)

For concreteness, let's try to get Church vs Curry sorted out before I
try to explain completeness. But as a possible short cut, if it turns
out you don't want to think of BitC as involving any Curry-style
typing, then the type system is vacuously complete.

>> It's hard to state this question in general for
>> all systems. For example, what on earth is the "corresponding untyped
>> program" in general? If BitC has unique typing, then technically
>> different types contain disjoint sets of programs. But unique typing
>> is kind of unintuitive to me for a strong type system.
>
> BitC certainly doesn't have unique typings, because languages don't have
> typings. Programs do.

Did you really not understand what I meant, or are you being pedantic?
Programs have types. A language's typing rules say which programs have
which types. A language with unique typing assigns every program at
most one type.
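
For instance, here's a minimal sketch in Haskell (not BitC, just an
analogy): read Curry-style, the one underlying term \x -> x is a member
of many arrow types, so the typing relation is not unique; a
unique-typing system would have to pick exactly one for it.

    -- A minimal Haskell sketch (not BitC): the same underlying term
    -- \x -> x is given two different types below, so typing is not
    -- unique; its most general type would be a -> a.
    idInt :: Int -> Int
    idInt = \x -> x

    idBool :: Bool -> Bool
    idBool = \x -> x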

> BitC programs don't necessarily have unique typings
> because there are cases where overlapping integral types may lead to
> constraints that are not uniquely resolved by the time we reach main. Also
> because it is possible for type variables associated with unused fields not
> to get resolved. For each of these cases there are resolving rules.

OK.

> I'm again unsure what you mean here by a "strong" type system.

This time I was just chit chatting. Since we're looking at this
differently, I should probably stop that because it'll just confuse
things.

> Given
> complete typings, unique typings are almost a necessary consequence.

That sounds interesting, let me think about it...
Wait, I thought your "complete" means "maximal with respect to
inclusion". Being maximal would be vacuous if there's only one type to
pick from. In other words, given unique typing, _all_ types are
complete.

>> How are you
>> going to get the right types while avoiding unacceptable dynamic
>> failures, for all programs, when (automated?) static analysis is
>> generally infeasible?
>
> Good question. This actually does become a matter of annotation. The
> underlying nature of the problem here is that assigning/deriving a most
> general type leads to types that, for this purpose, are less constrained
> than we would like them to be. We are pushed toward a resolution in which we
> don't want a maximally inclusive typing. We intentionally want a narrower
> and more restrictive typing.
>
> What this really means is that the "right" type is circumstantial. When you
> allow inference to derive the type, you want the maximal type, because that
> maximizes reusability/genericity. For specific use-cases, that may not be
> what you want.

This sounds similar to the point I made earlier about choosing a type
being like choosing a specification, and that it generally needs to be
a human act of design. Or am I misunderstanding?
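
To make that concrete, a hedged Haskell sketch (not BitC; the function
is just an illustration): left to inference, swap would get the most
general type (a, b) -> (b, a), which maximizes genericity, while the
annotation deliberately narrows it for a specific use-case.

    -- A hedged Haskell sketch (not BitC). Inference would assign swap
    -- the most general type (a, b) -> (b, a); the annotation below
    -- intentionally picks a narrower, more restrictive type.
    swap :: (Int, Bool) -> (Bool, Int)
    swap (x, y) = (y, x)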

> And note that by the time you are generating code you are dealing with
> concrete types, not complete types.

By concrete types, you mean no type variables? Why would this be? And
how are concrete types opposed to complete types? They never overlap?

> Deriving a complete type initially
> doesn't penalize efficiency.

Umm, right... Did I suggest it would?

>> > One of the things about dependent types that concerns me is that you get
>> > progressively deeper into black art prover work when you need to achieve
>> > this "zero dynamic check" requirement.
>>
>> Well the proof obligations can get arbitrarily bad, but that doesn't
>> technically mean you need to resort to black art features.
>
> I mean that they are "black art" in the eyes of a systems programmer who
> probably can't name a prover, never mind drive one.

Ah. I wouldn't have used that term. Even many proof assistant drivers
consider some of Coq's features to be black magic, and that's the kind
of thing I thought you meant. And just generally, it shouldn't be the
case that everything unfamiliar might as well be black art.
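
To give the flavor of those obligations, here's a hedged sketch in
Haskell (not BitC, and certainly not black art): the standard
length-indexed trick pushes the bound into the index's type, so the
dynamic range check disappears, but every call site now owes a static
argument that its index fits.

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
    -- A hedged Haskell sketch (not BitC): the index type Fin n carries
    -- the evidence that the index is in range, so indexing is total
    -- and needs no run-time check; the obligation moves to whoever
    -- constructs the Fin n value.
    data Nat = Z | S Nat

    data Vec (n :: Nat) a where
      VNil  :: Vec 'Z a
      VCons :: a -> Vec n a -> Vec ('S n) a

    data Fin (n :: Nat) where
      FZ :: Fin ('S n)
      FS :: Fin n -> Fin ('S n)

    index :: Vec n a -> Fin n -> a
    index (VCons x _)  FZ     = x
    index (VCons _ xs) (FS i) = index xs i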

>> Maybe you're saying it's hard to predict when this (innocent-looking)
>> code leads indirectly to that (impossible-looking) proof obligation.
>> That's just a fact of sound type systems though.
>
> The second you have a dependency on a run-time value, the nature of the
> discharge problem becomes qualitatively more complex. Value analysis is a
> global program analysis problem, and the discharge is accordingly
> complicated. The same is not true of discharging type obligations or kind
> obligations, which are local properties.

Oh you're talking about automatic static analysis. Then yeah, I
believe it. To reach an understanding faster, I think it'll help if
you stop thinking about algorithms for type checking and inference,
for the scope of this thread. The point is to separate the intended
semantics of the type system from the practical issue that it's too
hard to check against those semantics completely.

>> If you want to claim
>> strong things, someone has to prove them.
>
> You obviously don't work in marketing. :-)

You're right. I actually find most forms of marketing to be slightly
immoral. Deliberately and unapologetically filling people's heads with
biased and misleading messages... It'd be more than "slightly", except
we're so used to it.

>> Curry-style typing has no effect on the actual program that runs.
>> Whether a value has a type is just a matter of fact; nothing needs to
>> be done to make it so. So asking which parts of the BitC type system
>> would ideally be Curry-style is kind of like asking what types would
>> ideally be completely erased.
>
> I'm pretty firmly on the side of type erasure. So far as I can determine,
> all parts of the BitC type system should be erasable without altering the
> meaning of the program.

To my ear, that can't be right. For example, integers and interface
instances would be implemented differently. (I hope.) But that
difference was directed by them having different types.
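
Here's what I mean, as a hedged Haskell analogy (not BitC): the purely
parametric function's type erases completely and compiles to one code
path, but the class-constrained one is compiled with a dictionary (or
gets specialized), so that type information has operational
consequences.

    -- A hedged Haskell analogy (not BitC): length' is parametric and
    -- its type erases to a single code path, while showAll's Show
    -- constraint becomes a dictionary argument (or drives
    -- specialization), so it is not merely erased.
    length' :: [a] -> Int
    length' = foldr (\_ n -> n + 1) 0

    showAll :: Show a => [a] -> [String]
    showAll = map show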

> I don't agree with you about punting to dynamic checks, since (e.g.) the
> range check on array bounds is defined to be part of the accessor operation.
> The check can be optimized away if the type information tells us enough, but
> optimization does not change the program semantics.

That's clever, but it doesn't help if you can't recover from a failing
range check. I guess in BitC it's still up to the programmer to catch
their mistakes.
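
For reference, here's roughly how I read your accessor story, as a
hedged Haskell sketch (not BitC; indexChecked is just a name I made
up): the check is part of the operation's defined behavior, and a
compiler is free to drop the comparison when it can prove the index is
in range, without changing what the program means.

    import qualified Data.Vector as V

    -- A hedged sketch (not BitC): the bounds check is part of the
    -- accessor's semantics. A compiler may elide the comparison when
    -- static information proves i is in range; that is an
    -- optimization, not a change of semantics. Note the failure is
    -- still not recoverable by the caller.
    indexChecked :: V.Vector a -> Int -> a
    indexChecked v i
      | i >= 0 && i < V.length v = V.unsafeIndex v i
      | otherwise                = error "index out of range"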

>> Given an informal sketch of such a semantics for BitC, we could
>> start talking productively about how close to complete BitC's typing
>> rules are.
>
> Or we could simply refer to the proof of sound and complete typing for the
> language core that has already been done...

By "complete" here, you just mean it'll find a type (maybe even a
complete type) for every program that has one, right? That's not what
I mean. Again, I'm not talking about algorithms here.