Peter Verswyvelen wrote:
I thought the lambda function that memo1 returns would be called over and over 
again, and that instead of re-evaluating the stream from the beginning, it would 
just return the stream since it is in the cache. But actually it just gets called 
twice in recursive situations: the first time it evaluates y = f x, stores the 
thunk in the cache, and returns the thunk; the second time it finds the same 
thunk in the cache, and then the computation of the rest of the stream continues 
without consulting the cache any more, right?

Actually the function may be called more than twice -- but each time after the 
first, it uses the cached value instead of recomputing it. Even in a 
non-recursive situation, such as "x + x", this saves some computation. The 
recursive situation just makes the redundant computation worse.
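
To make that concrete, here is a rough sketch of the idea (not the actual SOE 
code -- the real memo1 keys on pointer identity via stable names, whereas this 
toy version just compares the argument with Eq):

    import Data.IORef (IORef, newIORef, readIORef, writeIORef)
    import System.IO.Unsafe (unsafePerformIO)

    -- Toy one-entry memo table: the first call builds the result thunk and
    -- stores it; any later call with an equal argument gets the cached thunk
    -- back instead of a freshly built one.
    {-# NOINLINE memo1 #-}   -- keep GHC from duplicating the cache
    memo1 :: Eq a => (a -> b) -> (a -> b)
    memo1 f = unsafePerformIO $ do
      cache <- newIORef Nothing
      return $ \x -> unsafePerformIO $ do
        entry <- readIORef cache
        case entry of
          Just (x', y) | x' == x -> return y   -- hit: hand back the cached thunk
          _ -> do
            let y = f x                        -- miss: build the thunk (lazily)
            writeIORef cache (Just (x, y))
            return y

Note that the thunk is written into the cache before anything forces it, which 
is what lets a recursive stream definition find its own (partially built) 
result in the cache instead of starting the computation over.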

From my clumsy explanation you can see that I'm still thinking too imperatively, 
too eagerly. Haskell is lazier than I am, which is an incredible achievement 
:-)

The confusing thing here is that it's a combination of functional and imperative 
behavior -- the functional evaluation happens lazily, but the unsafe stuff causes 
an imperative side effect, namely the updating of the cache.
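
One way to watch that interleaving (assuming the toy memo1 sketched above) is to 
put a trace on the memoized function; Debug.Trace prints at the moment the thunk 
is actually forced:

    import Debug.Trace (trace)

    expensive :: Int -> Int
    expensive n = trace "evaluating expensive" (n * n)

    main :: IO ()
    main = do
      let g = memo1 expensive   -- memo1 from the sketch above
      print (g 5 + g 5)         -- "evaluating expensive" is printed only once:
                                -- the second g 5 receives the cached thunk,
                                -- which has already been forced

Without the cache (and assuming GHC doesn't happen to common up the two calls), 
the trace would fire twice, once per call.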

It would really help if I could see the lazy computation; do you think this kind of memo code is traceable using HAT?

I don't know -- I've never used HAT!

I guess I'll have to check out arrows / Yampa again. A year ago I did not 
understand a single thing in those papers, but I should try again now that I've 
read the SOE book :-)

Ok, good luck.

   -Paul
