Here's perhaps a simpler explanation, and a slight retraction. If a type
class has an associated type (which we want because we would like, for
example, an iterator concept to have an associated value type), and we
allow multiple instances per type, then instances can be passed around as
values. If each of the instances encoded as values has a different
associated type, then when we pass them to a function with a type like this:

deref :: Iterator i => i -> Value_Type i

we cannot check this function statically at compile time, because its
return type depends on which value is passed as the Iterator.
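
To make this concrete, here is a rough Haskell sketch of the concept as a
class with an associated type (the class and method names simply mirror the
signature above; it is only an illustration, not a proposal):

{-# LANGUAGE TypeFamilies #-}

-- The iterator concept, with the value type associated to the class.
class Iterator i where
  type Value_Type i
  deref :: i -> Value_Type i

-- Haskell accepts this precisely because it enforces one instance per
-- type, so Value_Type i is a function of the type i alone. If two
-- instances at the same type could be passed by value, one declaring
-- Value_Type i = Int and the other Value_Type i = Char, the return type
-- of deref would depend on which value arrived at run time.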

This problem can be avoided by using type-families instead of associated
types, separating the type functions from the type-classes. So perhaps it
is okay to allow multiple instances of a type-class, provided you use
stand-alone type-families rather than associated types.
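
As a sketch of that separation in Haskell syntax, using an open, stand-alone
type family (again just an illustration):

{-# LANGUAGE TypeFamilies #-}

-- The type function lives outside the class...
type family Value_Type i

-- ...so the class carries only the operations. Value_Type i is determined
-- by the type i alone, independently of which instance value of Iterator i
-- happens to be passed around at run time.
class Iterator i where
  deref :: i -> Value_Type i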


Keean.




On 31 December 2014 at 18:24, Keean Schupke <[email protected]> wrote:

> To give some support to my position that only one instance per type-class
> makes sense:
>
> The thing that brought me to that view is the mess you get into with
> applicative modules/functors in ML.  It effectively breaks the type system,
> and in the F-ing modules paper, Andreas Rossberg presents phantom types
> (effectively making each module instance have a unique type) as the
> solution.
>
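> A rough Haskell rendering of that phantom-type trick (OrdDict and
> withOrdDict are names invented for this sketch): tagging each instance
> value with a fresh type parameter keeps two instances from ever sharing a
> type.
>
> {-# LANGUAGE RankNTypes #-}
>
> -- A dictionary passed by value, tagged with a phantom parameter 's'.
> data OrdDict s a = OrdDict { cmp :: a -> a -> Ordering }
>
> -- Each use introduces a fresh 's' (in the style of runST), so the type
> -- checker can never confuse two dictionaries created this way.
> withOrdDict :: (a -> a -> Ordering) -> (forall s. OrdDict s a -> r) -> r
> withOrdDict f k = k (OrdDict f)
>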
> If you have type-classes and want associated types (which are equivalent
> to type families), the type system will be unsound if you can pass two
> instances by value that have the same type, because type classes /
> type families are always applicative.
>
> With this view, passing a type-class as a value just doesn't make sense;
> it is always a type parameter.
>
>
> Keean.
>
>
>
> On 31 December 2014 at 17:57, Jonathan S. Shapiro <[email protected]>
> wrote:
>
>> One of the remarkable things about BitC is the number of times I come
>> back to an issue later and feel like I've missed something obvious. It is
>> flattering and humbling that so many of you have invested so much time,
>> effort, humor, and conversation to create these opportunities for me to
>> slap my forehead and think "Gosh, I'm an idiot!". :-) Thank you!
>>
>> I've been thinking about instance coherence again, and re-reading Bruno
>> Oliveira's paper, and I think I've missed something very basic in the
>> instance coherence discussion: coherence only matters *some* of the
>> time. The catch is to know when you are and are not assuming it.
>>
>> Keean Schupke sent me a note earlier today taking a very strong position.
>> In his view, multiple instances of a type class at a single type is wrong,
>> because (in his view) a type captures a set of operations. I think this is
>> consistent with the traditional Haskell view, but I think it has two
>> problems:
>>
>> 1. A type class isn't a type. It's a *constraint* on a type. It doesn't
>> claim that the methods are part of the type[s]. It merely says that
>> implementations of the methods exist that operate over instances of type[s].
>> 2. From a purely practical perspective, it's a view that makes
>> retroactive extension difficult, because it drives us firmly into the
>> instance coherence checking problem.
>>
>> Coherence is good. Coherence lets us justify various forms of context
>> reduction. From a compilation perspective, it lets us do a whole bunch of
>> aggressive inlining easily, which is especially important at ground types
>> and unboxed types. Coherence is *really* good.
>>
>> Coherence is bad. There are useful cases of type classes that say
>> "certain laws must hold" without requiring that they must hold in a
>> particular way. E.g. (Ord 'a) captures the [unstated] intention that
>> instances of orderings be commutative.
>>
>> It may be that what *I* think I want shouldn't be called type classes.
>> This may all come down to lexicon. If I'm not contemplating type classes
>> anymore, what should these be called (description follows)?
>>
>>
>> So as I say, I've been reading Bruno Oliveira's paper, and I'm coming to
>> think that we've all had the foot in the wrong shoe for quite some time. I
>> still have misgivings about implicits as a mechanism for *general*
>> type-driven value selection, but I'm still chewing on that. The thing that
>> jumped out at me on this reading - and so far as I can find Bruno never
>> actually says this in the paper - is that coherence can be viewed as a
>> property of the *use* of a type class, not a property of a type class, and
>> that both views can co-exist.
>>
>> What I'm suggesting here is that there are two kinds of type constraint
>> use-occurrences that may appear on a definition:
>>
>> 1. Coherent constraint uses. These can only be satisfied by coherent
>> instances. Coherent instances have a single instantiation at a given type.
>> They observe the usual context reduction rules. They are known at static
>> compile time. They can only be resolved [I propose] from lexical contours
>> chosen to ensure that a failure of uniqueness is detected at static compile
>> time. Coherent instances *may* be named, but (by virtue of uniqueness)
>> naming is not a requirement for a successful language design. Coherent
>> instances are immodular, because they must be checked against one another
>> for collisions.
>>
>> 2. Non-coherent constraint uses. These can be satisfied either by
>> coherent or non-coherent instances. A non-coherent instance may overlap or
>> match a coherent instance. Both non-coherent constraints and non-coherent
>> instances *must* be named. A non-coherent constraint is a named,
>> optional parameter. Non-coherent constraints are strictly modular. For
>> human consumption, we might refer to these as "named constraints".
>>
>> The proposal is that use of non-coherent constraints in a function is
>> signaled by a declaration that the constraint is named, which introduces a
>> formal parameter name. The formal parameter typically is not referenced
>> within the function. The effect of passing a non-coherent instance is that
>> its methods become bound in the lexical scope of the procedure.
>>
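>> As a very rough Haskell approximation of the two kinds of use (OrdBy and
>> its field are invented for this sketch; an actual surface syntax for this
>> proposal would of course differ):
>>
>> {-# LANGUAGE RecordWildCards #-}
>> import qualified Data.List as List
>>
>> -- Coherent constraint use: an ordinary class constraint, resolved
>> -- against the unique instance at the call site's type.
>> sortCoherent :: Ord a => [a] -> [a]
>> sortCoherent = List.sort
>>
>> -- A "named constraint" approximated by an explicit, named dictionary;
>> -- several such values may exist at the same element type.
>> data OrdBy a = OrdBy { cmpBy :: a -> a -> Ordering }
>>
>> -- The record pattern brings the instance's method into lexical scope;
>> -- the caller chooses which instance to pass, by name.
>> sortNamed :: OrdBy a -> [a] -> [a]
>> sortNamed OrdBy{..} = List.sortBy cmpBy
>>
>> ascending, descending :: Ord a => OrdBy a
>> ascending  = OrdBy compare
>> descending = OrdBy (flip compare)
>>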
>> The place where this differs from Bruno's type-directed implicit values
>> is that I'm intentionally restricting these to instance resolution.
>> Ultimately, I think I could live with type-directed *static term* resolution
>> (i.e. the term is statically defined) as opposed to *value* resolution.
>> I want desperately to preserve term substitution here, because both
>> coherent and non-coherent instances are terms, and can therefore be
>> polyinstantiated in implementations where that is appropriate. But for the
>> moment I'm only considering this for named constraints.
>>
>> I'm not sure that what results here is still type classes, because in
>> this view (Ord int) can be viewed as a *type* having multiple possible
>> instantiations.
>>
>> Also, this isn't quite the same as Bruno's implicits, because Bruno's
>> requirement is only that a unique implicit at the desired type must exist
>> in the implicit environment. So far as I can tell, that is not enough to
>> guarantee coherence.
>>
>> Yes, for those watching the bouncing ball carefully, this is another
>> variation on the "literal kind" trick. The trick is readily extended to
>> [deep] constant terms generally, provided they are statically defined.
>>
>>
>> shap
>>
>
_______________________________________________
bitc-dev mailing list
[email protected]
http://www.coyotos.org/mailman/listinfo/bitc-dev
