On 8/17/07, Evan Miller <[EMAIL PROTECTED]> wrote:
>
> Wraparound might actually be a better solution for you, because it
> solves the concurrency problems. Currently, several clients that
> simultaneously attempt to increment past the size limit will all receive
> errors, and they will all attempt to run your reset logic. I don't know
> about your specific use-case, but for us this means that increments get
> lost in the meantime (multiple clients reset the counter to 1).
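The lost-increment race described above can be sketched with a toy in-memory counter standing in for Memcached (the `store`, `incr`, `reset` names and the `LIMIT` value are illustrative, not the real client API). Both clients hit the overflow error before either one resets, so both run the reset logic and one increment vanishes:

```python
LIMIT = 2**16  # illustrative size limit, not Memcached's actual one

class CounterError(Exception):
    """Raised when an increment would exceed the size limit."""

store = {"hits": LIMIT}  # counter is already at the limit

def incr(key):
    # Pre-wraparound behavior: incrementing past the limit is an error.
    if store[key] + 1 > LIMIT:
        raise CounterError("counter overflow")
    store[key] += 1
    return store[key]

def reset(key):
    store[key] = 1  # "count my own increment" reset logic

# Interleaving shown sequentially for clarity: both clients attempt the
# increment (and both receive the error) before either runs its reset.
errors = []
for client in ("A", "B"):
    try:
        incr("hits")
    except CounterError:
        errors.append(client)  # each client believes it alone must reset

for client in errors:
    reset("hits")  # B's reset clobbers A's

# Two increments were attempted, yet the counter reads 1: one was lost.
print(store["hits"])  # -> 1
```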
Hmm... I hadn't thought of it that way. I only have one piece of code
right now that could possibly hit the wrap-around effect, and I have it
locked so that only one client can perform a replenishment operation.
This is triggered whenever it receives an error from the client. Now
that I think about it more, it's not such a big deal to me either way,
as I can easily detect either condition. I'm currently used to dealing
with objects that fail rather than wrap around, so I couldn't grasp why
you would want it to, but what you say does make sense.

> With wraparound, only one client will receive a "0" return value, and so
> only one client will run the additional reset logic.
>
> It's true that clients which rely on the increment errors will need
> tweaking, but overall wraparound makes Memcached more useful and robust
> as a generic counter service.
>
> It would be even more useful if calling "incr" on a non-existent key set
> the value to "1" in order to avoid race conditions similar to what I
> describe above, but that's a different can of worms...

Yes, that would actually cause me some trouble... I currently rely on
"incr" failing if you try to incr a key that doesn't exist. I'm one of
those zealots that litters their code with asserts to ensure that any
loss of integrity in the data is readily identified. As a result, I'd
rather know that something failed than get some default value that
might let the code run, but not be correct.
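The wraparound case can be sketched the same way, under the same illustrative assumptions (a toy `store`/`incr` pair, and a small `LIMIT` in place of a real 64-bit counter). Because incr never errors and wraps to 0 exactly once, only the client that observes the "0" return runs the reset logic:

```python
LIMIT = 2**16  # illustrative maximum value; a real counter would wrap at 2**64 - 1

store = {"hits": LIMIT}  # one increment away from wrapping

def incr(key):
    # Wraparound behavior: increment modulo the counter's range, no errors.
    store[key] = (store[key] + 1) % (LIMIT + 1)
    return store[key]

# Two clients increment; only one of them can receive the 0 return value,
# so only one runs the additional reset logic.
resets_run = 0
for client in ("A", "B"):
    value = incr("hits")
    if value == 0:
        resets_run += 1

print(resets_run)  # -> 1
```

This is why wraparound sidesteps the duplicate-reset race: the "who resets?" decision is made atomically by the counter itself rather than by racing clients.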
