My apologies if this has been beaten to death before; I'm still new to Haskell. But I was wondering whether lazy evaluation could lead to space savings, especially under heavily persistent usage patterns?

Here's the argument I'm making. Say we have a tree-based Set with 1024 values in it; for ease of math we'll assume it's perfectly balanced, so it's about 10 levels deep (log2 1024 = 10). Say each node in the tree takes 5 words of memory. In an eager language (for example, OCaml), adding a new element to this tree means copying the roughly 10 nodes on the path from the root down to the insertion point, so it requires the allocation of 10 new nodes, or 50 words of memory. In Haskell, what would happen (as I understand it) is that, until the result is demanded, just a lazy thunk would be allocated, say 10 words of memory. Conceptually, we could think of the returned value as the original tree plus a small delta. The lazy implementation is using 40 fewer words of memory than the eager implementation.
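To make that concrete, here's a minimal sketch of what I have in mind. It's an unbalanced BST rather than the real Data.Set (which uses size-balanced trees), and the names are just illustrative, but it shows the path copying and where the thunk sits:

    -- A toy persistent set, just to illustrate path copying vs. thunking.
    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- Insert shares every subtree that is not on the search path; only the
    -- O(log n) nodes from the root down to the insertion point are rebuilt.
    insert :: Ord a => a -> Tree a -> Tree a
    insert x Leaf = Node Leaf x Leaf
    insert x t@(Node l v r)
      | x < v     = Node (insert x l) v r   -- new node; right subtree shared
      | x > v     = Node l v (insert x r)   -- new node; left subtree shared
      | otherwise = t                        -- already present, nothing new

    -- Under lazy evaluation, `insert x t` starts out as a single thunk;
    -- the path copy above only happens once something forces the result,
    -- e.g. a later membership test.
    member :: Ord a => a -> Tree a -> Bool
    member _ Leaf = False
    member x (Node l v r)
      | x < v     = member x l
      | x > v     = member x r
      | otherwise = True

So the saving I'm describing is only for as long as the inserted-into tree is passed around without being forced; once `member` (or anything else) demands it, the same ~10 path nodes get allocated as in the eager case.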

Note that the benefit isn't *big*: we're talking about 40 words saved when the main data structure already takes up 5K-plus words (1024 nodes at 5 words each), so it's less than a 1% difference. But there is a (small) upside in memory usage, at least occasionally, right?

Brian
