Hi Meikel,

Since Laurent dragged me into the discussion:
On Thu, Mar 11, 2010 at 8:42 AM, Meikel Brandmeyer <m...@kotka.de> wrote:
> Hello Laurent,
>
> On Mar 10, 11:45 am, Laurent PETIT <laurent.pe...@gmail.com> wrote:
>
> > * usage of refs: I had a bad feeling, and cgrand confirmed this to
> > me by pointing out an even more interesting counter-argument. Me:
> > using refs is not mandatory since you do not need to synchronize
> > this change with anything else.
>
> I don't think that this is entirely true! You have to synchronise the
> cache with the state in the strategy. This can be done with atoms only
> if the cache and the strategy state are contained in the same atom and
> all updates happen in a single call to swap! (or reset!). As soon as
> you have multiple calls, you need transactions again, because the atom
> might change between the two calls. And you can't swap! and return a
> result at the same time.
>
> Say you have the LRU strategy. You deref the cache, find the result
> cached. Now you swap! in the new access time into your state. But
> between the deref and the swap! another call might trigger the removal
> of the entry. So you always have to guard against the atom suddenly
> changing underneath - even if it's only one atom.

I agree.

> Something that annoys me more is that the computation may be fired off
> several times. Since f is supposedly expensive, I'd rather avoid that.

See my memoize5: the call isn't computed inside the swap!s.

> I gave it another run and came up with another solution. Even more
> hairy.
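Meikel's LRU race can be sidestepped by folding the lookup and the access-time bookkeeping into a single swap!, then reading the value out of the state that swap! returns. A minimal sketch — the state shape (:cache/:access/:tick) and helper names are my own, not from the thread:

```clojure
;; Sketch of the single-swap! workaround for the LRU race described
;; above: the lookup and the access-time update happen in one atomic
;; step, and the value is read from the state swap! returns, so no
;; other call can evict the entry in between.
(def cache-state (atom {:cache {} :access {} :tick 0}))

(defn touch
  "Pure helper: bump the logical clock and record an access for k."
  [state k]
  (let [t (inc (:tick state))]
    (-> state
        (assoc :tick t)
        (assoc-in [:access k] t))))

(defn lookup
  "Atomically touch k and return its cached value (nil on a miss)."
  [k]
  (get-in (swap! cache-state touch k) [:cache k]))
```

Note this still records an access for keys that miss; a real strategy would want to touch only on hits, which is exactly where it gets hairy.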
:P

> * a warm cache is fast
> * a cache miss is potentially slow, but the missing value is computed
>   only once; multiple requests for the same item wait for the
>   initially triggered computation
> * the cache can be updated (almost) in parallel for different items
>
> (defn memoize
>   ([f] (memoize f naive-strategy))
>   ([f [cache-lookup cache-update]]
>    (let [cache-state (atom {})]
>      (fn [& args]
>        (if-let [e (cache-lookup cache-state args)]
>          @(val e)
>          @(locking cache-state
>             (if-let [e (cache-lookup cache-state args)]
>               (val e)
>               (let [result (promise)]
>                 (.start (Thread. #(deliver result (apply f args))))
>                 (swap! cache-state assoc-in [:cache args] result)
>                 (cache-update cache-state args)
>                 result))))))))
>
> But with this you can't reliably implement strategies like LRU and
> the like.

Since you use a lock, I think some clever combination of memoized functions could create a deadlock.

> But this is really an interesting problem. I'll mull a little more
> about it. :)

I agree, it's interesting, and here is my entry: http://gist.github.com/330644

Christophe

--
You received this message because you are subscribed to the Google Groups "Clojure" group.
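For readers following along: one well-known lock-free pattern that meets Meikel's first two goals wraps the expensive call in a delay and installs it with a single swap!. This is a sketch of that general pattern, not the actual code from the linked gist, and `memoize*` is a name of my choosing:

```clojure
;; Lock-free memoize sketch: each caller builds a candidate delay, but
;; swap! installs at most one delay per argument list. Whichever delay
;; ends up in the map is the one everybody derefs, so f runs at most
;; once per distinct args even under contention.
(defn memoize* [f]
  (let [cache (atom {})]
    (fn [& args]
      (let [d         (delay (apply f args))
            installed (swap! cache (fn [c]
                                     (if (contains? c args)
                                       c
                                       (assoc c args d))))]
        ;; deref the winning delay; concurrent callers for the same
        ;; args block here until the first deref finishes computing
        @(get installed args)))))
```

Unlike Meikel's version above, the miss is computed on the caller's own thread while holding no lock, and calls for different arguments proceed in parallel. The difficulty Meikel raises still stands, though: layering eviction (LRU and friends) on top reopens the race between lookup and bookkeeping.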