Hi Jorge/Aaron/JLBorges (same e-mail, different names/signatures)

http://groups.google.com/group/nhcdevs/browse_thread/thread/ebb6a11601c8365b
http://groups.google.com/group/nhcdevs/browse_thread/thread/8d6b58b141e66395
http://groups.google.com/group/nhcdevs/browse_thread/thread/a3f9bf2ab44d1d0b
http://groups.google.com/group/nhibernate-development/browse_thread/thread/b62a8b7a0a1278d9

and counting...


On Wed, Feb 2, 2011 at 12:03 PM, JL Borges <[email protected]> wrote:

> Hello All,
>
> I've been hacking the NH L2 cache for a few months, and I have come
> to the conclusion that it is not really designed for distributed
> caches.
>
> Here are the issues I am concerned about:
>
> 1) Server Round Trip
>
> With a distributed cache, much of the elapsed time is spent sending and
> receiving data over the socket, so performance improves dramatically when
> these round trips are reduced. Many modern distributed caches, such as
> Memcached and Redis, support client-side pipelining, where multiple gets
> or puts are sent in a single socket call. The NH ICache interface does not
> support multi-get or multi-put, so this feature cannot be used.
>
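For reference, a batched contract could look something like the sketch below. This is hypothetical C#: `IBatchedCache`, `GetMany`, and `PutMany` are not part of NHibernate's actual `ICache` interface; they only illustrate what a pipelining-aware provider could expose so that N keys cost one round trip instead of N.

```csharp
using System.Collections.Generic;

// Hypothetical extension of the cache contract for pipelining-capable
// providers (Memcached, Redis). Not NHibernate's real ICache API.
public interface IBatchedCache
{
    // Fetch all keys in a single socket round trip;
    // keys that are not cached map to null in the result.
    IDictionary<object, object> GetMany(IEnumerable<object> keys);

    // Store all entries in one pipelined call.
    void PutMany(IDictionary<object, object> entries);
}
```

A provider backed by a pipelining client would implement these with one batched socket call, while a simple provider could fall back to looping over the single-key `Get`/`Put`.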
> 2) Locking
>
> a) If a distributed cache supports distributed locks, then an in-process
> lock is unnecessary and only harms performance. And yet, here is a code
> snippet from the ReadWriteCache concurrency strategy:
>
> lock (_lockObject)
> {
>     if (log.IsDebugEnabled)
>     {
>         log.Debug("Caching: " + key);
>     }
>     try
>     {
>         cache.Lock(key);
> .
> .
> .
>
> b) Distributed locks can fail: the acquire can time out, or the server
> holding the lock can crash after the lock is acquired. There is no logic
> here to handle either case.
>
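To make the failure modes concrete, a lock-aware wrapper might look like the sketch below. Everything here is hypothetical: `ILockingCache`, `TryLock`, and `SafeCacheLock` are illustrative names, not NHibernate or Enyim APIs; the point is the bounded wait and the reliance on server-side expiry.

```csharp
using System;

// Hypothetical locking contract for a distributed cache provider.
public interface ILockingCache
{
    // Returns false instead of blocking forever when the lock
    // cannot be acquired within the timeout.
    bool TryLock(object key, TimeSpan timeout);
    void Unlock(object key);
}

public static class SafeCacheLock
{
    public static void WithLock(ILockingCache cache, object key, Action action)
    {
        // Bounded wait: a crashed lock holder must not wedge every client.
        if (!cache.TryLock(key, TimeSpan.FromSeconds(5)))
            throw new TimeoutException("Could not acquire distributed lock for " + key);
        try
        {
            action();
        }
        finally
        {
            // Unlock may itself fail if the server died; in that case the
            // server-side lock expiry eventually releases the lock.
            cache.Unlock(key);
        }
    }
}
```

The in-process `lock (_lockObject)` in ReadWriteCache adds nothing once the provider's lock is authoritative, and this kind of timeout/expiry handling is exactly what the current strategy lacks.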
> I have a patch to address these issues, in my own NH fork. I've also
> submitted a patch to JIRA to add distributed locks to the Enyim
> memcached client. No response so far.
>
> Is there any interest in improving support for distributed caches?
>
> Cheers,
> JL Borges




-- 
Fabio Maulo
