Chris Uppal wrote:
> > I'd be more careful in the analogy, there.  Since type theory (as
> > applied to programming languages, anyway) is really just a set of
> > methods of proving arbitrary statements about program behaviors,
>
> Not "arbitrary" -- there is only a specific class of statements that
> any given type system can address.  And, even if a given statement can
> be proved, then it might not be something that most people would want
> to call a statement of type (e.g. x > y).
[My understanding is that we're discussing static types and type theory here. If you mean dynamic types, then what I'm about to say won't apply.]

I acknowledge that there are things that type systems are not going to be capable of proving. These are not excluded a priori from the set of problems that would count as type errors if the language's type system caught them. They are merely excluded for pragmatic reasons from the set of errors that someone would be likely to write a type system to check. That is the case simply because there is a decided disadvantage in having a compiler that isn't guaranteed to ever finish compiling your program, or that doesn't finish in polynomial time on some representative quantity in nearly all cases, etc.

On the other hand, statements about program behavior that can be proved in reasonable time frames are universally valid subjects for type systems. If they don't look like type errors now, they nevertheless will when the type system is completed to demonstrate them. If someone were to write a language in which it could be statically proved that a pair of expressions has the property that when each is evaluated to its respective value x or y, (x > y) will be true, then proving this would be a valid subject of type theory. We would then define that property as a type, whose values are all pairs that have the property (or, I suppose, a nominal type whose values are constrained to be pairs that have this property, though that would get painful very quickly). Checking that property when the type is required -- such as when I am passing (x - y) as a parameter to a function that requires a positive integer -- would then be called type checking.

In fact, we've come full circle. The initial point on which I disagreed with Torben, while it was couched in a terminological dispute, was whether there exists a (strict) subset of programming errors that are defined to be type errors.
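The positive-integer example can at least be sketched in Python, though there the property check happens at run time rather than as a static proof. This is only an illustrative sketch; `PositiveInt`, `make_positive`, and `count_down` are hypothetical names, not from any real library:

```python
from typing import NewType

# Encode the property "this int is positive" as a distinct type.
PositiveInt = NewType("PositiveInt", int)

def make_positive(n: int) -> PositiveInt:
    """Smart constructor: the only sanctioned way to obtain a PositiveInt."""
    if n <= 0:
        raise ValueError(f"{n} is not positive")
    return PositiveInt(n)

def count_down(n: PositiveInt) -> list[int]:
    """A function whose contract requires a positive integer."""
    return list(range(n, 0, -1))

x, y = 10, 3
# Passing (x - y) means discharging the obligation x > y first:
diff = make_positive(x - y)
print(count_down(diff))  # [7, 6, 5, 4, 3, 2, 1]
```

In a language with a richer static type system (refinement or dependent types), the compiler would discharge the `x > y` obligation at compile time; here Python can only enforce it when `make_positive` runs.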
Torben defined the relationship between static and dynamic types in terms of when they solve "type errors", implying that this set is the same between the two. A lot of the questions I'm asking elsewhere in this thread are chosen to help me understand whether such a set truly exists for the dynamic type world; but I believe I know enough to deny absolutely its existence within the realm of type theory. I believe that even Pierce's phrase "for proving the absence of certain program behaviors" in his definition of a type system only excludes possible proofs that are not interesting for determining correctness, if in fact it limits the possible scope of type systems at all.

However, your definition of type errors may be relevant for understanding concepts of dynamic types. I had understood you to say earlier that you thought something could validly qualify as a dynamic type system without regard for the type of problem that it verifies (that was denoted by "something" in a previous conversation). I suppose I was misinterpreting you. So your definition was:

> It seems to me that most (all ? by definition ??) kinds of reasoning
> where we want to invoke the word "type" tend to have a form where we
> reduce values (and other things we want to reason about) to
> equivalence classes[*] w.r.t the judgements we wish to make, and
> (usually) enrich that structure with more-or-less stereotypical rules
> about which classes are substitutable for which other ones.  So that
> once we know what "type" something is, we can short-circuit a load of
> reasoning based on the language semantics, by translating into
> type-land where we already have a much simpler calculus to guide us to
> the same answer.
>
> (Or representative symbols, or...  The point is just that we throw
> away the details which don't affect the judgements.)

I don't even know where I'd start in considering the forms of reasoning that are used in proving things. Hmm. I'll have to ponder this for a while.
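That "translating into type-land" idea can be made concrete with a toy sketch in Python (all names here are hypothetical, invented for illustration): expressions are reduced to type tags, and judgements are made in a much simpler calculus of tags, without ever evaluating the underlying values.

```python
def type_of(expr):
    """Infer the type tag of a tiny expression language without evaluating it.

    Literals are ints or bools; compound expressions are (op, lhs, rhs) tuples.
    """
    if isinstance(expr, bool):   # check bool before int: bool is a subtype of int
        return "bool"
    if isinstance(expr, int):
        return "int"
    op, lhs, rhs = expr
    ltype, rtype = type_of(lhs), type_of(rhs)
    # The "much simpler calculus": rules over equivalence classes, not values.
    rules = {
        ("+", "int", "int"): "int",
        (">", "int", "int"): "bool",
        ("and", "bool", "bool"): "bool",
    }
    try:
        return rules[(op, ltype, rtype)]
    except KeyError:
        raise TypeError(f"no rule for {op!r} on {ltype}, {rtype}")

# The concrete values 1 and 2 are thrown away; only their classes matter:
print(type_of(("and", (">", 1, 2), True)))  # bool
```

Note that the judgement is reached without computing `1 > 2` at all: the details that don't affect the judgement have been discarded, exactly as the quoted passage describes.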
> I think that, just as for static theorem proving, there is informal
> reasoning that fits the "type" label, and informal reasoning that
> doesn't.

So while I disagree in the static case, it seems likely that this is true of what is meant by dynamic types, at least by most people in this discussion. I'd previously classified you as not agreeing.

-- 
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation

-- 
http://mail.python.org/mailman/listinfo/python-list