Gregg Reynolds wrote:
Hi,

The concept of "type" seems to be a little like porno:  I know it when
I see it, but I can't define it (apologies to Justice Stewart).  I've
picked through lots of documents that discuss types in various ways,
but I have yet to find one that actually says what a type "really" is.
 For example, definitions of the typed lambda calculus usually define
some type symbols and rules like "a : T means a is of type T", and
then reasoning ensues without discussion of what "type" means.

The non-explication of types is intentional: by not saying anything about what a type is (other than that values/expressions can have one), the theory is free to operate generally, regardless of what types actually are in any specific instantiation. When trying to say as little as possible, people frequently refer to them as "sorts". By calling them "types" we usually mean we know a little more about them than we do about sorts, though the terminology is fairly interchangeable.
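In Haskell terms, this abstention is what makes parametric polymorphism work: a type variable commits to nothing about what the type is, only that the expression has one. A minimal illustration (my example, not from the theory literature):

```haskell
-- 'identity' says nothing about what type 'a' is -- only that
-- its argument and result share it. The reasoning goes through
-- the same way no matter what 'a' turns out to be.
identity :: a -> a
identity x = x

main :: IO ()
main = do
  print (identity (42 :: Int))
  print (identity "foo")
```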

Not that this helps you any.


The only discussion I've found that addresses this is at the Stanford
Encyclopedia of Philosophy, article "Types and Tokens" [1].  It's all
very philosophical, but there is one point there that I think has
relevance to Haskell.  It's in section 4.1, discussing the distinction
between types and sets:

The type--token distinction uses a different notion of type. In this context "type" means a sort of platonic notion of some thing (a value idealized), whereas "tokens" are specific instantiations/references to that thing. Consider, for example, the English word "Foo". When discussing Foo generally as a word in the English language, we're discussing the type Foo. But if I'm doing natural language parsing on this email, each time that three-letter sequence occurs is a different token (or instance, or use-in-context) of the type. In NLP/MT the distinction is helpful because, due to context, different tokens of the same type may need different treatment even though they're "the same word". For example, if "the" occurs fifteen times in a sentence, we probably shouldn't translate all of them into a single occurrence of "der" in the German translation.
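The standard corpus-linguistics version of this is the token count versus the type count of a text. A quick sketch (my own helper names, using only `containers` from the standard GHC distribution):

```haskell
import qualified Data.Map.Strict as Map

-- Count how often each word *type* occurs, given a list of
-- word *tokens* in the order they appear.
typeFrequencies :: [String] -> Map.Map String Int
typeFrequencies tokens = Map.fromListWith (+) [(t, 1) | t <- tokens]

main :: IO ()
main = do
  let tokens = words "the cat saw the dog and the bird"
  -- 8 tokens, but only 6 types: "the" occurs three times.
  putStrLn ("tokens: " ++ show (length tokens))
  putStrLn ("types:  " ++ show (Map.size (typeFrequencies tokens)))
```

Each element of the list is a token; each key of the map is a type.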

Mathematically this is the same distinction between the idea of 1 (choose any particular idea), vs the usage of 1. Due to referential transparency we don't tend to worry about this in Haskell since all tokens are interchangeable. Consider instead an object-oriented language. We can have an object over here that is "1", and it's separate from an object over there that's also "1". We can have mutation which makes one of the tokens different, say "2", without affecting the other token (though now they are no longer tokens of the same type). This is different from making the *type* different, which would affect all its tokens.
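Even in Haskell we can mimic the object-oriented story by reaching for mutable references, which is roughly what "tokens" become there. A sketch with `Data.IORef`:

```haskell
import Data.IORef

main :: IO ()
main = do
  -- Two distinct "tokens": separate mutable cells, both holding 1.
  a <- newIORef (1 :: Int)
  b <- newIORef (1 :: Int)
  -- Mutating one token leaves the other untouched.
  writeIORef a 2
  x <- readIORef a
  y <- readIORef b
  print (x, y)  -- (2,1)
```

The two cells were observationally identical until the write; afterwards they differ, exactly the situation referential transparency rules out for plain Haskell values.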

(And philosophically it matters because of its bearing on whether the presence or absence of a token affects the definitions of other things, e.g. sets or species as in your citation.)

Obviously this has some relation to the programmatic theory of types, but it's indirect. In the NLP/MT context the mapping from tokens to types is fairly straightforward, but in general there's the question of how to do the mapping, which depends entirely on what our types look like. For example, is "Fooing" or "Fooer" a token of Foo, or do there exist separate types Fooing and Fooer instead? Usually in programming, singleton types aren't too helpful, so we use types that include all the "inflected forms" as well. But there are times when we do want to break these types apart into smaller types (via OOP inheritance, via case matching, via distinguishing instances of the same value, ...).
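One way to picture the "break it apart via case matching" option in Haskell: a single inclusive type whose constructors name the inflected forms, which we can then treat separately. The names here are hypothetical, chosen only to echo the Foo/Fooing/Fooer example:

```haskell
{-# LANGUAGE LambdaCase #-}

-- One inclusive type covering all the "inflected forms" of Foo.
data Foo = Base | Ing | Er
  deriving (Show, Eq)

-- Case matching lets us break the type back apart and treat
-- each form as if it were its own smaller type.
describe :: Foo -> String
describe = \case
  Base -> "the bare form"
  Ing  -> "the gerund"
  Er   -> "the agent noun"

main :: IO ()
main = mapM_ (putStrLn . describe) [Base, Ing, Er]
```

Going the other way, each constructor could instead be promoted to its own type (e.g. via separate `newtype`s) when the distinction matters statically.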

--
Live well,
~wren
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe