On 12/21/12 3:27 AM, Oleksandr Manzyuk wrote:
> On Fri, Dec 21, 2012 at 4:40 AM, Alexander Solla <alex.so...@gmail.com> wrote:
>> I don't see how associativity fails, if we mod out alpha-equivalence. Can
>> you give an example? (If it involves the value "undefined", I'll have
>> something concrete to add vis-à-vis "moral" equivalence)

> If you compute (f . g) . h, you'll get \x -> (f . g) (h x) =
> \x -> (\x -> f (g x)) (h x), whereas f . (g . h) produces
> \x -> f ((g . h) x) = \x -> f ((\x -> g (h x)) x). As raw lambda-terms,
> these are distinct. They are equal if you allow beta-reduction in bodies
> of abstractions. That's what I meant when I said that we probably wanted
> to consider equivalence classes modulo some equivalence relation. That
> hypothetical relation should obviously preserve beta-reduction in the
> sense (\x -> e) e' = [e'/x]e.

Surely, if we have any interest in the semantics of Haskell, we should be considering things modulo the usual relations. Of course, this takes us directly to the question of which semantics we're actually trying to capture.

Considering terms modulo alpha is obvious. N.B., alpha just refers to locally bound variables, not to definitions. We can define multiple copies of "the same" type with different names for the data and type constructors, and Haskell will hold these things distinct. If they truly are "the same" then they'll be isomorphic in the category, which is something we deal with all the time.
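A tiny illustration of that last point, with two made-up copies of the pair type (the names Pair and Pair' are just for this sketch):

```haskell
-- Two structurally identical types; Haskell holds them distinct,
-- but they are isomorphic in the category.
data Pair  a b = Pair  a b deriving (Eq, Show)
data Pair' a b = Pair' a b deriving (Eq, Show)

to :: Pair a b -> Pair' a b
to (Pair x y) = Pair' x y

from :: Pair' a b -> Pair a b
from (Pair' x y) = Pair x y

main :: IO ()
main = print (from (to (Pair 1 'a')) == Pair 1 'a')  -- the round trip is id
```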

Considering things modulo delta is also pretty obvious. That is, inlining (or outlining) of definitions shouldn't have any semantic effect. If it does, then referential transparency fails, and that's one of the things we've sworn to uphold.
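For instance (the definition is made up, but the point is general): replacing a call to a named function by its body at the call site must leave the result unchanged.

```haskell
double :: Int -> Int
double x = x + x

-- inlining 'double' by hand at a call site:
viaName, viaInline :: Int
viaName   = double 21
viaInline = (\x -> x + x) 21

main :: IO ()
main = print (viaName == viaInline)  -- True
```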

When we turn to beta (and iota, sigma,...), things start getting complicated. On the one hand, we'd like to include beta since it's standard in PL semantics. Moreover, since GHC does a certain amount of beta at compile time, we sorta can't get away from including it, if we want to believe that (modulo bugs) GHC is semantics-preserving. However, on the other hand, that means our semantics can't say too much about the operational differences between Haskell terms. Again, this is standard in denotational semantics; but there are reasons to be interested in operational semantics as well...
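To pin down the beta rule (\x -> e) e' = [e'/x]e that's in play, here's a minimal sketch of a one-step reducer over an untyped term language. The substitution is deliberately naive (it assumes no variable capture), so this is an illustration, not an implementation:

```haskell
data Term = Var String
          | Lam String Term
          | App Term Term
          deriving (Eq, Show)

-- [s/x]t, assuming no capture (sketch only)
subst :: String -> Term -> Term -> Term
subst x s (Var y)   | x == y    = s
                    | otherwise = Var y
subst x s (Lam y b) | x == y    = Lam y b          -- x is shadowed
                    | otherwise = Lam y (subst x s b)
subst x s (App t u) = App (subst x s t) (subst x s u)

-- one beta step at the root: (\x -> b) a  ~>  [a/x]b
beta :: Term -> Term
beta (App (Lam x b) a) = subst x a b
beta t                 = t

main :: IO ()
main = print (beta (App (Lam "x" (Var "x")) (Var "y")))  -- Var "y"
```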

Turning thence to eta, I'm not sure how things stand. It's known that eta breaks certain nice properties about type inference/checking, especially once you're in the land of full dependent types, but I don't recall whether there are any *semantic* issues to be worried about. Haskell98 isn't subject to the issues dependent types have, but modern GHC is perilously close. E.g., older implementations of GADTs did occasionally require outlining in order to provide the necessary type signatures, but that's been fixed IIRC. In any case, we do certainly need to be careful about the phrasing of eta, since seq can distinguish f from (\x -> f x) if f happens to be bottom. So we don't get full eta, but how much of it can we salvage? Does it matter for the categorical treatment?
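The seq point can be observed directly: if f is bottom, forcing f diverges, but its eta-expansion is a lambda and hence already in weak head normal form. A small demonstration, using Control.Exception to witness the bottom:

```haskell
import Control.Exception (SomeException, evaluate, try)

f :: Int -> Int
f = undefined  -- f is bottom

main :: IO ()
main = do
  -- forcing f itself diverges ...
  r1 <- try (evaluate (f `seq` ())) :: IO (Either SomeException ())
  -- ... but the eta-expansion is a lambda, already in WHNF
  r2 <- try (evaluate ((\x -> f x) `seq` ())) :: IO (Either SomeException ())
  putStrLn (either (const "bottom") (const "()") r1)  -- bottom
  putStrLn (either (const "bottom") (const "()") r2)  -- ()
```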

--
Live well,
~wren

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
