Hello All,

I've been hacking on the NH L2 cache for a few months, and I've come
to the conclusion that it isn't really designed with distributed
caches in mind.

Here are the issues I am concerned about:

1) Server Round Trip

With a distributed cache, much of the time is spent sending and
receiving data over the socket, so performance improves dramatically
when round trips are reduced. Many modern distributed caches, such as
Memcached and Redis, support client-side pipelining, where multiple
gets or puts are sent in a single socket call. The NH ICache interface
exposes only single-key gets and puts, so this feature cannot be used.
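As a hedged sketch of what a batch-capable contract could look like: the names IBatchCache, GetMany and PutMany below are hypothetical, not part of NHibernate, and the dictionary-backed class is only a stand-in showing the intended semantics.

```csharp
using System.Collections.Generic;

// Hypothetical extension of NH's ICache contract; IBatchCache, GetMany
// and PutMany are illustrative names, not an existing NH API.
public interface IBatchCache
{
    // Fetch several keys in one server round trip (a Memcached-style multi-get).
    IDictionary<object, object> GetMany(IEnumerable<object> keys);

    // Store several entries in one pipelined write.
    void PutMany(IDictionary<object, object> entries);
}

// Minimal in-memory stand-in demonstrating the batched semantics.
public class DictionaryBatchCache : IBatchCache
{
    private readonly Dictionary<object, object> _store =
        new Dictionary<object, object>();

    public IDictionary<object, object> GetMany(IEnumerable<object> keys)
    {
        var result = new Dictionary<object, object>();
        foreach (var key in keys)
        {
            object value;
            if (_store.TryGetValue(key, out value))
                result[key] = value;  // misses are simply absent from the result
        }
        return result;
    }

    public void PutMany(IDictionary<object, object> entries)
    {
        foreach (var entry in entries)
            _store[entry.Key] = entry.Value;
    }
}
```

With a contract like this, a real provider could translate GetMany into a single multi-get request instead of N separate round trips.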

2) Locking

a) If a distributed cache supports distributed locks, then an
in-process lock is unnecessary and only harms performance, since it
serializes all cache access within the process on top of the
server-side lock. And yet, here is a code snippet from the
ReadWriteCache concurrency strategy:

lock (_lockObject)
{
    if (log.IsDebugEnabled)
    {
        log.Debug("Caching: " + key);
    }
    try
    {
        cache.Lock(key);
...
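For contrast, a sketch of the same path with the in-process lock removed, delegating concurrency entirely to the cache provider. The ICache interface here is a simplified stand-in (only Lock/Unlock/Put mirror real members), and LockFreeReadWriteCache is an illustration, not a patch:

```csharp
using System.Collections.Generic;

// Simplified stand-in for the NH cache contract; only Lock/Unlock/Put
// mirror real ICache members, the rest is illustrative.
public interface ICache
{
    void Lock(object key);
    void Unlock(object key);
    void Put(object key, object value);
}

// Sketch: the provider's (possibly distributed) lock alone guards the
// put; the in-process lock(_lockObject) is gone.
public class LockFreeReadWriteCache
{
    private readonly ICache _cache;

    public LockFreeReadWriteCache(ICache cache) { _cache = cache; }

    public void Cache(object key, object value)
    {
        _cache.Lock(key);          // lock held by the cache provider, not the process
        try
        {
            _cache.Put(key, value);
        }
        finally
        {
            _cache.Unlock(key);    // always released, even if Put throws
        }
    }
}

// Test double that records the order of calls.
public class RecordingCache : ICache
{
    public readonly List<string> Calls = new List<string>();
    public void Lock(object key)              { Calls.Add("Lock"); }
    public void Unlock(object key)            { Calls.Add("Unlock"); }
    public void Put(object key, object value) { Calls.Add("Put"); }
}
```

The point is that two independent processes contending on the same key are already serialized by the provider, so the extra lock(_lockObject) buys nothing.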

b) Distributed locks can fail: acquisition can time out, or the server
holding the lock can crash after the lock has been acquired. There is
no logic here to handle either case.
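A sketch of a failure-aware pattern under stated assumptions: IDistributedLock, TryLock and the fake lock classes below are hypothetical names (not an NH or Enyim API), and the sketch assumes the server expires lost locks via a lease/TTL.

```csharp
using System;

// Hypothetical distributed-lock handle; TryLock with a timeout is an
// assumed capability, not an existing NH or Enyim API.
public interface IDistributedLock
{
    bool TryLock(object key, TimeSpan timeout);
    void Unlock(object key);
}

public static class CacheLocking
{
    // Returns false instead of blocking forever when acquisition times out.
    public static bool PutWithLock(IDistributedLock locker, object key, Action put)
    {
        if (!locker.TryLock(key, TimeSpan.FromSeconds(5)))
        {
            // Lock not acquired: skip the cache put rather than block;
            // the database remains the source of truth.
            return false;
        }
        try
        {
            put();
            return true;
        }
        finally
        {
            try { locker.Unlock(key); }
            catch (Exception)
            {
                // Unlock can fail if the lock server crashed; the lock's
                // server-side lease/TTL must eventually expire it.
            }
        }
    }
}

// Test doubles for the two acquisition outcomes.
public class AlwaysGrantedLock : IDistributedLock
{
    public bool TryLock(object key, TimeSpan timeout) { return true; }
    public void Unlock(object key) { }
}
public class NeverGrantedLock : IDistributedLock
{
    public bool TryLock(object key, TimeSpan timeout) { return false; }
    public void Unlock(object key) { }
}
```

Treating the cache as best-effort on lock failure is one reasonable policy; another is to fall through to the database read, which is what NH does on a cache miss anyway.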

I have a patch addressing these issues in my own NH fork. I've also
submitted a patch to JIRA adding distributed locks to the Enyim
memcached client; no response so far.

Is there any interest in improving support for distributed caches?

Cheers,
JL Borges
