| Clearly, in this instance GHC is not transforming either of these
| definitions into the other. But are there other compile-time transformations
| performed by GHC that might result in more (excessive) sharing, in
| particular the creation of new common sub-expressions such as pxs? If not,
| is that a deliberate design decision?
GHC tries not to create space leaks, but does not guarantee not to. In
particular, the full laziness transformation is so beneficial most of the
time that GHC still does it even though it can create a space leak (unless
you turn it off with a flag, -fno-full-laziness). GHC does only very limited
common sub-expression elimination which, again, is usually very beneficial
but can have bad behaviour. It's not general CSE, which is why it didn't
kick in on the program you gave.

| Since I guess the "excessive" bit of excessive sharing probably can't be
| determined until run-time, would it be practical for the run-time system to
| do something about it? For example, once the run-time system notices that
| the graph size has breached a certain threshold, it begins to decouple
| references to "reduced" but bulky sections of the graph, forcing them to be
| recomputed (from the original expression) at a later time.

I don't know of anyone who has tried to make this work. It sounds pretty
tricky.

Simon

_______________________________________________
Haskell mailing list
[email protected]
http://www.haskell.org/mailman/listinfo/haskell
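[Editorial note: a minimal sketch of the full laziness transformation Simon
mentions. The function name `overShared` and the numbers are invented for
illustration; this is not the original poster's program.]

```haskell
-- Full laziness: the subexpression `sum [1 .. n]` does not mention `x`,
-- so GHC (with -ffull-laziness, on by default when optimising) may float
-- it out of the \x-abstraction:
overShared :: Int -> Int -> Int
overShared n x = x + sum [1 .. n]

-- roughly becoming
--
--   overShared n = let s = sum [1 .. n]   -- computed once, then shared
--                  in \x -> x + s
--
-- Sharing a single Int is harmless; but if the floated expression built a
-- large structure (say, the list [1 .. n] itself), that structure would be
-- retained across every use of the partial application `overShared n`:
-- a potential space leak.

main :: IO ()
main =
  let g = overShared 1000      -- partial application; `s` shared by both calls
  in print (g 1 + g 2)         -- prints 1001003
```

The sharing only pays off (or leaks) when the partial application `overShared n` is reused, as in `g` above; fully saturated calls recompute the floated expression's thunk each time.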
