On 09/28/2010 03:03 AM, Patrick Browne wrote:
> I had a look at the types of a and a'.
> *Main> :t a
> a :: forall a obj. (Uneditable obj) => a -> obj
> *Main> :t a'
> a' :: forall witness a obj. (Refactored obj witness) => Maybe (a -> obj)
>
> Could you explain the example a little more.
This is going to be a long email. Logic is a bit of a rambling subject. And then I went out and bought some beer. ;0)

First thing to note: the importance of the types here is that the stuff contained inside the Maybe HAS to be the same type as the corresponding "Uneditable" function. On a conceptual level, we're trying to quantify over the axioms embodied by "Uneditable". It isn't important to my point what the axioms' types are. What matters is that the new axiom scheme has the same "inner" type, wrapped in a Maybe. If you're looking to do object orientation or similar things, you would probably prefer something like:

class Uneditable obj where
  a :: obj -> a
  b :: obj -> a

and

class Refactored obj witness where
  a' :: Maybe (obj -> a)
  b' :: Maybe (obj -> a)

You can do slick things with existential quantification too. (To a mathematician, "slick" means "so elegant it looks trivial at first glance, but deep and unifying upon further inspection". Compare to the mathematician joke about obviousness.[1] Category theory is the slickest of mathematics, under an informal, subjective ordering of mine. Mathematical logic is pretty damn slick too, but it has more primitives and so is less elegant. In different words, it is more computational than category theory, which is compositional by design.)
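To illustrate the existential flavor, here is a minimal sketch; the class and all the names are hypothetical stand-ins of mine, not anything from the thread:

{-# LANGUAGE ExistentialQuantification #-}

-- Hypothetical class, standing in for "Uneditable" above.
class Describable obj where
  describe :: obj -> String

-- The existential wrapper hides the concrete type; all that survives is
-- the fact that it satisfies the Describable axioms.
data SomeDescribable = forall obj. Describable obj => SomeDescribable obj

-- We can still apply the axioms to the hidden object.
describeAny :: SomeDescribable -> String
describeAny (SomeDescribable x) = describe x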

I am not sure what you don't understand, so I will start at the beginning, at least in broad strokes.

So, going back to the "basic" theory of logic, you have axiomatizations, theories, and models. An axiomatization is a "chosen" finite set of sentences (it's up to you to pick axioms that encode truths about what you want to study/program). A theory is a set of sentences which is closed under logical implication, and a model is a set of objects[2] which "satisfies" an axiomatization (equivalently, the theory generated by an axiomatization, because of the compactness theorem).

In order to make an axiomatization "more specific", you add axioms. In order to make it "more general", you drop axioms. So, every group is a monoid, but not vice versa: the integers under multiplication form a monoid but not a group. That monoid does not satisfy the axiom of inverses (2 has no integer reciprocal), and so must be excluded from the class of models for the group axioms. Again, theories are "the same way", but you have to deal with logical closure. I mean, if I were to remove a sentence from a theory, it *might* still generate the same theory, because the sentence I removed was a logical consequence of the others. For example, we can have a theory of arithmetic where an axiom states that 1 + 1 = 2, in addition to the Peano axioms. If we drop the 1 + 1 = 2 axiom and generate the closure of the new theory, we will see that 1 + 1 = 2 anyway.
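In Haskell terms, the same point looks roughly like this (a sketch with made-up class names, to keep the axioms visible; base's own Semigroup/Monoid would do just as well):

-- "More axioms = more specific": groups extend monoids with inverses.
class MyMonoid m where
  unit :: m            -- axiom: an identity element exists
  op   :: m -> m -> m  -- axiom: an associative operation (a law we promise, not one GHC checks)

class MyMonoid m => MyGroup m where
  inv :: m -> m        -- the extra axiom: every element has an inverse

-- Integers under multiplication model the monoid axioms...
instance MyMonoid Integer where
  unit = 1
  op   = (*)

-- ...but no lawful MyGroup instance exists: 2 has no integer inverse.
-- Dropping the inverse axiom (group -> monoid) admits this model.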

The notion of "satisfaction" is important. As I said, a model for an axiomatization is (conceptually) a set of objects which satisfies an axiomatization or theory. In short, to show that a model satisfies an axiomatization, you need to provide an interpretation function for the model. (This is what type class instances are for, under the Howard-Curry correspondence.) An interpretation is a function which takes a sentence from the axiomatization and an object from the model and maps them to a truth value, under certain consistency constraints. For example, I(a 'and' b) = I(a) "and" I(b), where 'and' is a symbolic notion of conjunction and "and" is a semantic notion of conjunction. (In particular, an interpretation I is a "proof-to-truth homomorphism", and can potentially be reified as a functor, among other things.)
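That homomorphism condition is easy to render in miniature (a sketch; the Sentence type and interp are my own toy constructions):

-- A toy propositional language and an interpretation into Bool.
data Sentence
  = Atom String
  | And Sentence Sentence
  | Or  Sentence Sentence

-- Given truth values for the atoms, interpret whole sentences.
-- The And clause is exactly the constraint I(a 'and' b) = I(a) "and" I(b).
interp :: (String -> Bool) -> Sentence -> Bool
interp v (Atom s)  = v s
interp v (And p q) = interp v p && interp v q
interp v (Or  p q) = interp v p || interp v q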

Now it gets kind of tricky. There are a few correspondences and facts to put together. First is the Howard-Curry Correspondence theorem, which tells us that in a typed programming language, functions can be re-interpreted as proofs of type-level theorems in a corresponding typed logic. That is, typed programming languages are typed constructive logics. So all the "basic" mathematical logic stuff I have talked about applies. But, I never said where axiomatizations and their models "live" on the programming side. And in fact, models are particularly tricky, because they "live" in "shifting" grounds.
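The correspondence in miniature, for the record (these are standard textbook examples, nothing specific to the thread):

-- Proofs as programs: each type below is a propositional tautology,
-- and the function inhabiting it is a proof of that tautology.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

-- Conjunction corresponds to pairing; "A and B implies A" is projection.
andElim :: (a, b) -> a
andElim (x, _) = x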

For example, any logically consistent theory is its own model! The objects of the model (which the interpretation function interprets) are sentences from the theory the model models. The axioms for the theory are to be declared "true" (with respect to the model), and it is immediately obvious that the "identity functor" offers an interpretation, since I(A and B) exactly equals I(A) and I(B). This is called the "free model" for a theory/axiomatization. It is the "most general" model of an axiomatization, in a specific sense.[3]
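A Haskell rendering of the free model idea, as a sketch (the Term type is mine, chosen to echo the 1 + 1 = 2 example):

-- A tiny language of arithmetic terms. The terms themselves form the
-- free model: the identity interpretation trivially satisfies
-- I(Plus a b) = Plus (I a) (I b).
data Term = One | Two | Plus Term Term
  deriving (Show, Eq)

interpId :: Term -> Term
interpId = id

-- Less free models factor through it, e.g. the integers, where the
-- extra equation Plus One One = Two actually holds.
interpInt :: Term -> Integer
interpInt One        = 1
interpInt Two        = 2
interpInt (Plus a b) = interpInt a + interpInt b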

Another correspondence to consider: every function is a relation, and every relation can be encoded as a predicate by enumerating its tuples. So a function is pretty trivially equivalent to some predicate, since a predicate's extension is the set of things it is "true" of. This is sort of the root of the Howard-Curry correspondence.
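Concretely, over a finite domain (a sketch; all the names are mine):

-- A function, its graph (a relation), and the predicate that relation
-- encodes: three presentations of the same information.
double :: Int -> Int
double x = 2 * x

-- The relation: the function's graph, enumerated over a small domain.
graph :: [(Int, Int)]
graph = [ (x, double x) | x <- [0 .. 10] ]

-- The predicate: true exactly of the pairs in the relation's extension.
holds :: (Int, Int) -> Bool
holds p = p `elem` graph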

In short, models are strange abstract AND concrete things. Any single model is concrete, insofar as it is a collection of objects of some kind living "somewhere" (be it in RAM, in text on screen, in your mind, the Platonic realms, etc.), but the only thing that makes models model-like is that they satisfy the model theory definitions/axioms (there's some more meta-modelling for you...).

If we pull in the Howard-Curry correspondence, we see that type class definitions are meant to be Haskell's preferred method of constructing an axiomatization as a nameable thing of some kind. Then again, we can use modules, or even plain old data types and values (for example, type MethodName = String; data Uneditable a obj = Uneditable [(MethodName, a -> obj)]). "Every" collection of function definitions corresponds to an axiomatization of "something", under the correspondence. What matters is (1) that there is a finite collection of definitions, and (2) that they are logically consistent (which in this case means they can be unified and don't touch an undefined Haskell expression).
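The "class as plain data" move is just dictionary-passing, roughly what GHC does under the hood (a sketch; Pretty and the dictionary names are mine):

-- A type class and a hand-rolled "dictionary" encoding, side by side.
class Pretty a where
  pretty :: a -> String

data PrettyDict a = PrettyDict { prettyD :: a -> String }

-- Any instance yields a dictionary value...
dictFromInstance :: Pretty a => PrettyDict a
dictFromInstance = PrettyDict pretty

-- ...and a dictionary lets us use the "axiom" without any instance.
prettyWith :: PrettyDict a -> a -> String
prettyWith = prettyD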

So, back to the code. The "Uneditable" type class is an axiomatization of "something". Conceptually, to make a more general axiomatization, you would have to drop an axiom. Haskell's type classes don't let us do that directly. We have to force it. Forcing takes advantage of that "shiftiness" of models I discussed earlier. Since a type class is an axiomatization (of something...), it is its own model. How do we create less specific models from this model? Well, we have to change the interpretation function a little, so that the axiom we want to drop maps to... "nothing specific", except as a consequence of the interpretations for the other axioms. Consider again the "1 + 1 = 2" example. "1 + 1 = 2" would map to true, but only because it is a consequence of the regular axioms of arithmetic.

Indeed, this "dropping" or "keeping" is what the "Maybe" does in "Refactored". If we want to keep the axiom as an axiom, we stick it inside a Just. If we don't, we use "Nothing". The new free model is (potentially) "more" free, because it is constrained by fewer axioms.

So if we think of type class definitions as "axiomatizations", we should think of type class instances as equivalent to interpretations for a model. That is, typically, models in Haskell are data types that satisfy the axioms embodied in a type class definition. So what have we done? Well, we have constructed a larger free model, by virtue of constructing a more general axiomatization. It is still up to you to make specific data types instances of the axiomatization, in order to show that they are models of the axioms.

[1] "A mathematician is a person who spends 15 minutes to decide that something is obvious." Of course, the reason it might take 15 minutes is that you have to do a depth or breadth first search on arguments, and there are lots of "simple" arguments, each of which is "obvious". My email is an example of why this can be a lengthy search. In it, I have used equivalence results from possibly dozens of fields. The argument is only obvious in retrospect.

[2] I am not assuming OO semantics. Haskell values are a fair approximation of what an "object" is in this case. It seems that the CafeOBJ language's goal is to force these notions together. Fair enough, that's how I thought about OO objects when I did OO programming, to much annoyance from my imperative-thinking colleagues...

[3] If our theory were "Everybody on Haskell Cafe wears hats", our theory would still be true if these hats were all blue. Or a mix of colors. Our language doesn't even go into those possibilities, so they can't be proven or disproven given our axioms. Free objects are interesting in their own right. For example, the free group generated by a set of symbols is the set of possible concatenations of those symbols and their formal inverses (with adjacent inverse pairs cancelled). Concatenation is an associative operation. The first "big" result of elementary group theory, Cayley's theorem, is basically a proof that any group can be represented in terms of a set of permutations. In particular, this means that every free group can be represented in terms of a set of permutations; and in the other direction, every group arises as a quotient of the free group generated by its elements (though not in a particularly obvious way). This is a common construction where "free objects" occur. Free objects are useful, because they can be thought of as generic.
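Haskell's everyday free object is the list: lists are the free monoid on a set of generators. A sketch of the defining property (the names FreeMonoid and extend are mine):

-- Lists are the free monoid: bare concatenations of generators, with
-- no extra equations imposed. (++) is associative and [] is the unit.
type FreeMonoid a = [a]

-- Freeness, operationally: any map from generators into a monoid
-- extends uniquely to a monoid homomorphism from the free monoid.
-- (This is Data.Foldable's foldMap, specialized to lists.)
extend :: Monoid m => (a -> m) -> FreeMonoid a -> m
extend f = mconcat . map f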