It seems to me that this sort of thing is why Haskell is difficult to
compile to efficient code. I have the impression that relaxed
semantics wouldn't hurt 99% of programs while making the compiler
writer's job easier. The only disadvantage is that tricks like the one
above wouldn't work any more.
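
(The trick referred to above isn't quoted in this message, but a typical
example of code that relies on precise non-strict semantics is a circular
definition like the one below; this is my own illustration, not code from
the thread.)

    -- A circular ("knot-tying") definition: it works only because each
    -- element is evaluated no earlier than it is demanded.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    main :: IO ()
    main = print (take 10 fibs)
    -- Fine under lazy evaluation; a compiler free to evaluate 'fibs'
    -- eagerly to completion would diverge here.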

Another point is that, IMHO, most Haskell programmers don't grasp the
exact laziness semantics; they think of laziness as "evaluation order is
up to the compiler", not as the precise graph-reduction (call-by-need)
semantics.
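
(The two readings differ observably, for instance on sharing; here is a
small sketch of my own, not code from the thread.)

    import Debug.Trace (trace)

    -- Under call-by-need (graph reduction) the shared thunk 'x' is
    -- evaluated at most once, so "eval" is traced a single time. The
    -- looser "evaluation order is up to the compiler" reading says
    -- nothing about sharing: it would also allow evaluating 'x' twice
    -- (call-by-name) or before it is demanded (call-by-value).
    shared :: Int
    shared = let x = trace "eval" (2 + 2) in x + x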

Hence, Haskell would perhaps be better off with "fuzzier" semantics,
since that would match people's intuition and allow more optimised
code. If I'm correct, those ideas have been explored before in the
"optimistic evaluation" project and in Eager Haskell. Still, I'm
wondering what the current opinion of the Haskell "gurus" is on the
subject.

I'm not sure that `lots of people think the semantics are fuzzy' is a good reason to actually make them fuzzy. I would assert that the more rigid and tidy you make the semantics, the easier it becomes to prove nice properties and so on; and from what I've seen, Haskell's semantics are already just a tiny bit too `fuzzy' to do this kind of thing.
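
(As a small illustration of the kind of property I mean; this is my own
example, assuming only the standard Prelude definition of fst, not
something taken from the thread.)

    -- Under Haskell's precise non-strict semantics this equation holds
    -- unconditionally, even when the second component is undefined:
    --
    --   fst (x, y) = x
    --
    -- which is exactly what makes equational reasoning easy. A "fuzzy"
    -- semantics that merely leaves evaluation order open would permit a
    -- compiler to force y, and the equation would no longer hold for
    -- all arguments.
    example :: Int
    example = fst (42, undefined)   -- evaluates to 42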


[I am conscious that we are using bandwidth on the main Haskell mailing list for this little discussion -- perhaps we are about done, but if not,
we should probably mail each other directly.]



The subject is very interesting to me, and, I suspect, to many others, so feel free to keep it here.

Agreed - I was very much enjoying watching this thread.

Bobb
