On Oct 16, 11:11 pm, Kelvin Edmison <kel...@kindsight.net> wrote:

>   while trying to re-create this problem and point out the various errors in
> his code, I found that, in his test case, if I did not call Future.get() to
> verify the result of the set, the spyMemcached client leaked memory.  Given
> that the Spymemcached wiki says that fire-and-forget is a valid mode of
> usage, this appears to be a bug.

  I'm not entirely sure that's a memory leak.  I would expect an OOM
in the case where you're not calling f.get(), but not in the other
case, simply because calling get() keeps the queue as small as
possible at every step along the way.

  Fire and forget *is* valid for most of the kinds of things people do
in an application, but I wouldn't say that using the class as a
write-only bulk data loader while completely ignoring system
limitations is valid.  It has come up mostly with people doing tests
in a tight loop, or, in this case, a bulk data load.

  The thing that CacheLoader does is actually *very* close to what
you've got there, except that it backs off when the queue is full,
assuming it gets an exception.
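
  Roughly, that back-off idea looks like the sketch below.  It's only
an illustration of the pattern, not CacheLoader's actual source, and
it assumes the client surfaces a full input queue as an
IllegalStateException (which is how I'd expect the enqueue timeout to
show up):

    import net.spy.memcached.MemcachedClient;

    public class BackoffSet {
      // Sketch only: if the client refuses the write because its input
      // queue is full, back off briefly and retry instead of letting
      // the exception kill the load.
      static void setWithBackoff(MemcachedClient client, String key,
          Object value) throws InterruptedException {
        while (true) {
          try {
            client.set(key, 0, value);  // fire-and-forget; Future ignored
            return;
          } catch (IllegalStateException full) {
            // Assumed behavior: the enqueue timeout surfaces here.
            Thread.sleep(100);  // back off, then retry
          }
        }
      }
    }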

  I think the problem is that we have a change that slows down writes
to the input queue in order to keep it full, aimed at the case where
people aren't necessarily bulk loading but are still keeping things
pretty full in general.  I think even that would work if you had a
smaller operation queue, or if you just set the queue blocking timeout
to 0 and used the CacheLoader.  Perhaps the bug is that CacheLoader
has no way to override the queue offer timeout and just take the
exception.
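
  For reference, here's a rough sketch of what that tuning might look
like with ConnectionFactoryBuilder; the builder methods, the queue
size, and the address are my assumptions, so check them against the
version you're actually running:

    import net.spy.memcached.AddrUtil;
    import net.spy.memcached.ConnectionFactoryBuilder;
    import net.spy.memcached.MemcachedClient;
    import net.spy.memcached.ops.ArrayOperationQueueFactory;

    public class LoaderClient {
      public static void main(String[] args) throws Exception {
        MemcachedClient client = new MemcachedClient(
          new ConnectionFactoryBuilder()
            // don't block waiting for space on a full input queue
            .setOpQueueMaxBlockTime(0)
            // smaller, bounded operation queue (size is arbitrary here)
            .setOpQueueFactory(new ArrayOperationQueueFactory(1000))
            .build(),
          AddrUtil.getAddresses("localhost:11211"));  // placeholder

        // ... hand this client to CacheLoader, or use a back-off loop
        // like the sketch above ...

        client.shutdown();
      }
    }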

  In either case, if you just wait on the result of the sets every
once in a while (e.g. every 250,000 sets, do an f.get()), then the
load should get bursty, but it should just work.
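
  As a concrete (if made-up) example of that periodic sync, with the
host, key names, and counts all placeholders:

    import java.net.InetSocketAddress;
    import java.util.concurrent.Future;

    import net.spy.memcached.MemcachedClient;

    public class BulkLoad {
      public static void main(String[] args) throws Exception {
        MemcachedClient client = new MemcachedClient(
            new InetSocketAddress("localhost", 11211));  // placeholder

        Future<Boolean> last = null;
        for (int i = 0; i < 10000000; i++) {
          last = client.set("key" + i, 0, "value" + i);
          // Every 250,000 sets, block on the most recent future so the
          // client drains its queues before we pile on more writes.
          if (i % 250000 == 0) {
            last.get();
          }
        }
        if (last != null) {
          last.get();  // make sure the final write made it
        }
        client.shutdown();
      }
    }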
