On Fri, Jul 4, 2014 at 12:30 PM, Matt Oliveri <[email protected]> wrote:
> > And the truth is that "constructions over literals" pushes you
> > pretty far in the direction of unifying the expression and type
> > grammars, because literals can appear any place that a type name can
> > appear. And then also expressions over those. So you could easily
> > see something like
> >
> >   List (1+2)
> >
> > as a **type name**, meaning the (contrived) "List of elements of
> > type 3 : Nat".
>
> Since when is 3 a type? That seems unambiguously ill-formed to me.
> Unless List is also a value constructor, in which case it's
> unambiguously a list of int. Is it that you're asking how to get the
> parser to know that?

The literal 3 is a value that inhabits the type 3 (among others), which
in turn inhabits Nat. The ambiguity is intentional; it's the key to
literal types.
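For concreteness, GHC Haskell already has this (not BitC, obviously,
but it's the nearest thing to literal types shipping today). A minimal
sketch, assuming GHC with DataKinds; the names three/threeVal are just
for illustration:

    {-# LANGUAGE DataKinds, TypeOperators #-}
    import GHC.TypeLits (natVal, type (+))
    import Data.Proxy (Proxy (..))

    -- Here 3 is a *type* (of kind Nat), and the type-level
    -- expression 1 + 2 reduces to it during type checking:
    three :: Proxy (1 + 2)
    three = Proxy :: Proxy 3

    -- ...and the value 3 can be recovered from the type 3:
    threeVal :: Integer
    threeVal = natVal three    -- evaluates to 3

The same lexical literal does double duty as a term and as a type
depending on where it appears, and 1 + 2 there is reduced by the type
checker, not the evaluator.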
> > But in either case, at some point you need to be able to take a
> > variable /ndx/ and index into an array /ar/:
> >
> >   ar[ndx]
> >
> > and note that because /ndx/ is a variable, the types don't agree.
> > There are a couple of ways to resolve this. One of them is to
> > introduce a new set of overloads for the indexing operator that
> > invoke the range check explicitly, passing the literal type as a
> > type parameter to the range check procedure. This is implemented in
> > the runtime, and can therefore do dispatches on type that ordinary
> > code may or may not be able to do.
>
> I don't understand why that check should happen at runtime. I mean, I
> see that it could, for more flexibility, but isn't that undoing the
> point of using types?

Hopefully it doesn't happen at runtime, but you can't always discharge
the range check statically.

> Oh, maybe that's what you're saying. That the way languages expose
> that is to provide a range check primitive that will provide
> type-level evidence for being in range. I wouldn't call that a
> dispatch on type. It's just a branch on a value with a type strong
> enough to maintain the connection to the strong types.

More or less. The tricky bit is that this is likely to cause value
dependencies to start showing up in type qualifications, and then we
need to think about how far we want to let those propagate. If we let
them propagate too far, they start showing up in top-level signatures,
and at that point I think we go over the line into dependent types.

> > The point is that even within this (seemingly) limited "computation
> > by kind classes" sort of approach, there remain places at the bottom
> > where we need to cross the line between types and values. Eventually
> > these are going to "leak out" into the rest of the language in the
> > form of constraints or some such thing.
>
> If singleton types count as crossing the line, then yes. But that's
> not the way dependent type systems cross the line.
>
> Wait, are you saying you thought you were going to get away without
> constraints on type-level Nats and stuff?

I don't think so. I'm saying that sooner or later we're going to get a
procedure that takes an array 'a['i] and an int /ndx/ that wants to
index into it, and we're either going to have to have a way to express
the range constraint in the type (which is a form of dependence) or
we're going to have to introduce a dynamic check so that the range
constraint doesn't escape into the external procedure signature.
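To make the "range check primitive that provides type-level evidence"
shape concrete, here's a sketch in GHC Haskell (again, not BitC; Vec,
Fin, and checkBound are made-up names, and I'm using a unary Nat for
simplicity):

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    data Nat = Z | S Nat

    -- An array whose length is part of its type.
    data Vec (n :: Nat) a where
      Nil  :: Vec 'Z a
      Cons :: a -> Vec n a -> Vec ('S n) a

    -- Fin n: an index carrying type-level evidence that it is < n.
    data Fin (n :: Nat) where
      FZ :: Fin ('S n)
      FS :: Fin n -> Fin ('S n)

    -- No range check here at all: out-of-range is unrepresentable.
    index :: Vec n a -> Fin n -> a
    index (Cons x _)  FZ     = x
    index (Cons _ xs) (FS i) = index xs i

    -- The range check primitive: a runtime branch on a plain Int
    -- that, on the in-range side, hands back the evidence.
    checkBound :: Vec n a -> Int -> Maybe (Fin n)
    checkBound Nil        _ = Nothing
    checkBound (Cons _ _) 0 = Just FZ
    checkBound (Cons _ xs) i
      | i < 0     = Nothing
      | otherwise = FS <$> checkBound xs (i - 1)

A procedure can either call checkBound internally and keep a plain Int
in its signature, or take Fin n and push the obligation onto its
caller. The second choice is exactly the qualification propagating into
the external signature.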
> > I guess what I'm saying is: it doesn't feel to me like we are going
> > to get away with "giving up" on this in the grammar, even if we
> > don't do full dependent types. Since we can't give up, it would be
> > nice not to shoot ourselves in the grammar in some way that might
> > preclude going to dependent types later.
>
> Oh, now it looks like you're saying you can't get away without
> constraints, and that they will show up in the surface syntax. That
> seems OK.

Yes. And I'm further saying that some of those constraints involve
values, so there may end up being places where terms appear as
arguments to types.

> > > And I haven't seen anyone who has tried to combine dependent
> > > typing with unboxed types yet, other than to go all the way to
> > > formal verification (using propositions as types).
> >
> > I actually don't see that as much of an issue. Propositions as types
> > doesn't do it either, since as long as the data structure is pure
> > you can't tell the difference between value copy semantics and
> > reference copy semantics.
>
> I'm talking about impure data structures here. Supporting unboxed
> types only when they're pure seems like an empty victory to me.
> Propositions as types does everything, but then the problem is
> keeping the proof burden tractable.

OK. But unboxed != mutable. It'll probably help us to keep the lexicon
straight in this discussion.

> > if the type-level computation language and the term-level
> > computation language are related closely enough, it seems likely
> > that you end up with similar grammar issues, and maybe even a lot
> > of the same feel.
>
> That sounds right. So we're basically talking about how to add syntax
> for Habit features to BitC?

Well, that's the path I plan to start with, anyway.

> Wait, from my understanding of Geoffrey's idea, it would subsume
> pretty much any sensible form of type-level computation short of true
> dependent typing. But now you're saying you've been working out how to
> support it all along. So then what's the remaining problem?

Mainly that I hadn't been thinking about type-level computation. And
also that there's a termination problem.

> > I like effect types a lot, but they have a way of introducing one
> > hell of a lot of textual clutter.
>
> Have you thought about heuristic algorithms for inferring effect
> polymorphism?

A bit. Not a lot. I'm not clear why they need to be heuristic.


shap
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
