Luke Palmer wrote:
It seems that there is a culture developing where people intentionally
ignore the existence of seq when reasoning about Haskell.  Indeed I've
heard many people argue that it shouldn't be in the language as it is
now, that instead it should be a typeclass.

I wonder if it's possible for the compiler to do more aggressive
optimizations if it, too, ignored the existence of seq.  Would it make
it easier to do various sorts of lambda lifting, and would it make
strictness analysis easier?

The introduction of  seq  has several implications.

The first problem is that parametricity would dictate that the only functions of type

   forall a,b. a -> b -> b

are

   const id
   const _|_
   _|_

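For example, seq can be observably distinguished from all three of them (a small sketch, with undefined playing the role of _|_):

  seq ()        5   --  5  , so seq is neither _|_ nor const _|_
  seq undefined 5   -- _|_ , so seq is not const id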
Since seq is different from these, giving it this polymorphic type weakens parametricity. This does have implications for optimizations, in particular for fusion; see also

  http://www.haskell.org/haskellwiki/Correctness_of_short_cut_fusion
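
To make that concrete, the foldr/build rule

  foldr k z (build g)  =  g k z

can turn a defined result into _|_ once seq is around. A sketch (the names g, lhs and rhs are mine):

  {-# LANGUAGE RankNTypes #-}

  build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
  build g = g (:) []

  g :: (Int -> b -> b) -> b -> b
  g c n = c `seq` n                  -- forces the "cons" argument

  lhs, rhs :: Int
  lhs = foldr undefined 0 (build g)  -- build g = (:) `seq` [] = [], so lhs = 0
  rhs = g undefined 0                -- undefined `seq` 0 = _|_

  -- rewriting lhs to rhs thus replaces 0 by _|_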

Parametricity can be restored with a class constraint

  seq :: Eval a => a -> b -> b

In fact, that's how Haskell 1.3 and 1.4 did it.
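
Roughly, that looked like this (a sketch from memory, not the exact 1.3/1.4 Prelude; the compiler supplied Eval instances for all data types automatically):

  import qualified Prelude
  import Prelude hiding (seq)

  class Eval a where
    seq :: a -> b -> b

  -- example instance, simply delegating to the built-in primitive
  instance Eval Int where
    seq = Prelude.seq

  -- strict application now carries the constraint in its type
  strictApply :: Eval a => (a -> b) -> a -> b
  strictApply f x = x `seq` f x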


The second problem concerns function types. With seq on functions, eta-expansion is broken, i.e. we no longer have

  f = \x.f x

because  seq  can be used to distinguish

  _|_  and  \x. _|_
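
In Haskell terms (a small sketch, using undefined for _|_):

  f1, f2 :: Int -> Int
  f1 = undefined           -- _|_ at a function type
  f2 = \x -> undefined     -- \x. _|_ , a lambda and hence in WHNF

  d1 = f1 `seq` ()         -- _|_
  d2 = f2 `seq` ()         -- ()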

One of the many consequences of this is that simple laws like

  f = f . id

no longer hold, which is a pity.
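
Concretely (again a sketch):

  f :: Int -> Int
  f = undefined

  a = f        `seq` True   -- _|_ ,  seq forces f itself
  b = (f . id) `seq` True   -- True,  f . id = \x -> f (id x) is already in WHNF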


But then again, _|_ complicates reasoning anyway, and we most often pretend that we are only dealing with total functions. Unsurprisingly, this usually works. This can even be justified formally to some extent; see also

  N. A. Danielsson, J. Hughes, P. Jansson, J. Gibbons.
  Fast and Loose Reasoning is Morally Correct.
  http://www.comlab.ox.ac.uk/people/jeremy.gibbons/publications/fast+loose.pdf


Regards,
apfelmus
