On Mar 14, 5:27 am, Yigit Boyar <ybo...@gmail.com> wrote:
> ok, let me clarify this topic further, since I got one more email
> about not using memcache for locking.

  I do wish you could post your topics and clarifications in a thread
that isn't a release announcement since you've removed that topic from
the list.  Too late for that, I suppose.

> For more information about my software, the locking mechanism is also
> backed up by an MVCC implementation in the db layer. In other words, I
> already have a backup mechanism so that data integrity isn't lost if
> memcached fails, but the cost of rolling back a bunch of transactions
> is not small, so I don't want to see MVCC exceptions hanging around.

  You're just thinking about your application incorrectly.  You have a
DB-based lock to which you'd like fast, best-effort access via
memcached.  Memcached should just be caching the lock information that
you're persisting in your database.

> In addition, I need to guarantee that at least one memcached server
> is alive so that my system doesn't freeze (e.g. if nobody can obtain
> a lock, nobody can take an action).

  Failure to access a cache is not failure to obtain a lock.  Were I
to implement this using my java client, for example, I might wait up
to a half second for a cached lock before obtaining the lock the more
expensive way, and would always asynchronously cache the lock after
synchronously storing it in the reliable layer.
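
  A rough sketch of that pattern, assuming a spymemcached-style
client (readLockOwnerFromDatabase is a hypothetical stand-in for the
reliable layer):

import java.util.concurrent.TimeUnit;

import net.spy.memcached.MemcachedClient;

// Sketch only: memcached caches the authoritative lock record that
// lives in the database; any cache failure is treated as a miss.
public class CachedLockReader {
    private final MemcachedClient cache;

    public CachedLockReader(MemcachedClient cache) {
        this.cache = cache;
    }

    public String getLockOwner(String lockKey) {
        String owner = null;
        try {
            // Wait up to half a second for the cached copy of the lock.
            owner = (String) cache.asyncGet(lockKey)
                    .get(500, TimeUnit.MILLISECONDS);
        } catch (Exception e) {
            // Timeout, server down, etc. -- just a cache miss.
        }
        if (owner == null) {
            // Obtain the lock information the more expensive, reliable way.
            owner = readLockOwnerFromDatabase(lockKey);
            if (owner != null) {
                // Asynchronously re-cache; never block on the cache write.
                cache.set(lockKey, 60, owner);
            }
        }
        return owner;
    }

    // Hypothetical stand-in for the synchronous, reliable lock store.
    private String readLockOwnerFromDatabase(String lockKey) {
        return "owner-from-db";
    }
}

The cache write is fire-and-forget; the caller never waits on it, and
a cache problem just turns into the more expensive database read.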

> Now comes the case of saving locks when a server fails. In that
> case, we may lose ~10K locks, which will cause ~7K operation
> rollbacks. What I think is, with a memcache master-slave
> implementation, this number can be decreased to a negligible one.

  No information is lost just because a memcached server fails (the
database still holds the locks), so I don't see why you'd feel that
you'd lose them.

> Of course there is another solution to this problem, basically
> falling back on the db layer's MVCC implementation if memcache fails,
> but that would be high coupling between the two, which isn't
> preferable by design.

  It's not a coupling issue, it's an optimization issue.  You design
the application first, then you add optimizations.  In this case, like
all others for which it was designed, memcached is just an
optimization tool.

  You *can* use memcached as a ``hopeful'' kind of lock -- that is,
where the lock itself is just an optimization.  I've used it quite
successfully for job deduplication where any memcached failure (server
down, restarted, LRU evicted, object expired, net split, etc...)
results in unnecessary but harmless work being performed.
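
  A sketch of that kind of check, again assuming a spymemcached-style
client; it leans on add() only storing when the key isn't already
present, and any cache failure just means the work happens again:

import net.spy.memcached.MemcachedClient;

// Best-effort job deduplication: if the cache already knows this job,
// skip it; if the cache has failed or forgotten, do harmless extra work.
public class JobDeduper {
    private final MemcachedClient cache;

    public JobDeduper(MemcachedClient cache) {
        this.cache = cache;
    }

    public void maybeRun(String jobId, Runnable job) {
        boolean firstSighting;
        try {
            // add() only stores the key if it isn't already present.
            firstSighting = cache.add("job:" + jobId, 3600, "seen").get();
        } catch (Exception e) {
            // Cache unavailable: fall through and do the work anyway.
            firstSighting = true;
        }
        if (firstSighting) {
            job.run();
        }
    }
}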

  I have a similar case of best-effort message deduplication, where
delivering a duplicate message would just be a minor annoyance and the
cost of being sure is too high for the app.

  Any place where I needed *real* mutual exclusion locks, I used a
proper lock server.  I wrote one that had the semantics I needed last
time I needed one, but several such things exist on their own in the
wild.

> Anyway, to clarify, any master-slave like replication of memcached may
> be helpful for my case.

  Replication would only make things worse.  It would make the fast,
volatile software *significantly* slower and more complicated.  I
assume you'd also want to keep hash-table usage patterns in sync so
the LRUs stay consistent, and to replicate evictions on insert, since
the same object wouldn't necessarily be evicted from two different
servers: the set of item locks held at transfer time is different on
each server as it tries to free memory for a new item store.


  To your first point: yes, it's been done before.  People will
probably do it again.  It may have even been successful almost all of
the time (I doubt anyone's publishing numbers on how many false
positives they get on locks).

  If that's good enough for you, then great.  Don't worry about
failures.

  If you want an optimization over a slower but safer underlying lock
mechanism, just write the code as if the cache could fail at any
moment, and you should be happy with the additional performance the
cache brings you.
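
  In code, that can be as simple as a fail-open wrapper along the
lines of the sketch below (again assuming a spymemcached-style
client), which generalizes the earlier lock example: every cache
problem just looks like a miss, and the reliable layer stays the
source of truth.

import java.util.concurrent.TimeUnit;

import net.spy.memcached.MemcachedClient;

// A fail-open cache wrapper: every operation is allowed to fail, and a
// failure simply looks like a miss to the caller.
public class OptimisticCache {
    private final MemcachedClient cache;

    public OptimisticCache(MemcachedClient cache) {
        this.cache = cache;
    }

    public Object get(String key) {
        try {
            return cache.asyncGet(key).get(500, TimeUnit.MILLISECONDS);
        } catch (Exception e) {
            return null;  // a broken cache behaves like an empty cache
        }
    }

    public void put(String key, int ttlSeconds, Object value) {
        try {
            cache.set(key, ttlSeconds, value);  // fire and forget
        } catch (Exception e) {
            // Ignore; the reliable layer remains the source of truth.
        }
    }
}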

  However, if you want a good core lock server, don't look to a cache
designed to be lossy and volatile.
