On Feb 2, 2:32 pm, Fabio Maulo <[email protected]> wrote:
> I don't think it is viable for gets
>
>
Yeah, we would need an ICache.MultiGet method as well.
Here is my Prefetch implementation for the standard query cache:
public override void Prefetch(IList cacheable, IEnumerable<ICacheAssembler> returnTypes)
{
    if (cacheable == null || cacheable.Count == 0) return;

    var prefetchDict = new Dictionary<ICache, ISet<CacheKey>>();

    // note: the first index in cacheable is a timestamp
    for (var i = 1; i < cacheable.Count; i++)
    {
        var cacheOid = cacheable[i];
        if (cacheOid == LazyPropertyInitializer.UnfetchedProperty ||
            cacheOid == BackrefPropertyAccessor.Unknown)
            continue;

        foreach (var type in returnTypes)
        {
            var entity = type as EntityType;
            if (entity == null) continue;

            var entityId = entity.GetIdentifierType(this).Assemble(cacheOid, this, null);
            var entityName = entity.GetAssociatedEntityName(Factory);
            var persister = Factory.GetEntityPersister(entityName);
            var prefetchKey = new CacheKey(entityId, persister.IdentifierType,
                                           entityName, EntityMode, Factory);

            var cache = persister.Cache.Cache;
            ISet<CacheKey> cacheKeys;
            if (!prefetchDict.TryGetValue(cache, out cacheKeys))
            {
                cacheKeys = new HashedSet<CacheKey>();
                prefetchDict[cache] = cacheKeys;
            }
            cacheKeys.Add(prefetchKey);
        }
    }

    // one MultiGet per persister cache
    foreach (var cache in prefetchDict.Keys)
    {
        var keys = prefetchDict[cache];
        if (keys.Count > 1) // a single key gains nothing from a batched get
        {
            IDictionary results = cache.MultiGet(keys);

            // store the results in the local prefetch cache
            foreach (var key in results.Keys)
            {
                _prefetchCache.Put(new CachePutParameters(null, key, results[key]));
            }
        }
    }
}
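For reference, the MultiGet call above assumes a batched-read extension to ICache along these lines (a sketch only; MultiGet is not part of the current ICache, and the interface name and exact signature are my assumptions):

```csharp
// Hypothetical batched-read extension to ICache (not an existing
// NHibernate interface member).
public interface IBatchableCache : ICache
{
    // Fetches all keys in one round trip where the backing store
    // supports it; the result contains only the keys that were found.
    IDictionary MultiGet(IEnumerable keys);
}
```

Providers without a native multi-get could implement this as a loop over Get, so callers would not need to special-case them.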
>
> On Wed, Feb 2, 2011 at 4:25 PM, JL Borges <[email protected]> wrote:
> > Yes, I think the least invasive change would be to add the following
> > methods to ICacheProvider
>
> > void BeginPutCaching()
> > void BeginGetCaching()
> > void Flush()
>
> > Providers that don't have pipelining could ignore these.
> > Those that do support pipelining could store puts/gets in a local
> > hashtable, then send/request them all at once in the Flush method.
>
> > Calls to BeginPutCaching/Flush would happen in
> > Loader.InitializeEntitiesAndCollections
>
> > Calls to BeginGetCaching/Flush would happen in StandardQueryCache.Get,
> > to optimize
> > retrieval of query results.
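To illustrate the buffering idea quoted above (the BeginPutCaching/Flush names come from the proposal; the decorator itself and the inner-cache shape are just my sketch, not an existing API):

```csharp
// Sketch of a pipelining decorator around a cache client. Puts are
// buffered locally between BeginPutCaching() and Flush(), then sent
// as one batch. Providers without pipelining would no-op these calls.
public class PipeliningCache
{
    private readonly ICache _inner; // the real provider
    private readonly IDictionary<object, object> _pendingPuts =
        new Dictionary<object, object>();
    private bool _buffering;

    public PipeliningCache(ICache inner)
    {
        _inner = inner;
    }

    public void BeginPutCaching()
    {
        _buffering = true;
    }

    public void Put(object key, object value)
    {
        if (_buffering)
            _pendingPuts[key] = value; // defer until Flush
        else
            _inner.Put(key, value);    // normal pass-through
    }

    public void Flush()
    {
        // Ideally this is one multi-put socket call; here it falls
        // back to individual puts when the provider has no batch op.
        foreach (var pair in _pendingPuts)
            _inner.Put(pair.Key, pair.Value);
        _pendingPuts.Clear();
        _buffering = false;
    }
}
```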
>
> > For locking, it would be nice to have a locking strategy for each
> > provider. And we could provide a few ready-made strategies:
>
> > NoLockStrategy : does nothing
> > ReadWriteStrategy: in-process locking
>
> > Then we would simply call
>
> > cache.LockStrategy().Lock()
> > ...
> > cache.LockStrategy().Unlock()
>
> > More performant would be:
>
> > cache.LockStrategy().GetReadLock() /
> > cache.LockStrategy().ReleaseReadLock()
> > cache.LockStrategy().GetWriteLock() /
> > cache.LockStrategy().ReleaseWriteLock()
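The strategy interface implied by the calls above might look like this (a sketch; the interface and type names follow the proposal and are not an existing API):

```csharp
// Sketch of a per-provider lock strategy. A provider that already has
// distributed locks would plug in NoLockStrategy and skip all
// in-process locking.
public interface ICacheLockStrategy
{
    void GetReadLock(object key);
    void ReleaseReadLock(object key);
    void GetWriteLock(object key);
    void ReleaseWriteLock(object key);
}

// Does nothing: for caches that handle concurrency themselves.
public class NoLockStrategy : ICacheLockStrategy
{
    public void GetReadLock(object key) { }
    public void ReleaseReadLock(object key) { }
    public void GetWriteLock(object key) { }
    public void ReleaseWriteLock(object key) { }
}
```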
>
> > ////////////////////////////////////////////////////////////////////
>
> > On Feb 2, 12:27 pm, Fabio Maulo <[email protected]> wrote:
> > > both (1 and 2) need a modification of ICacheProvider, such as:
> > > bool SupportsMultipleGets
> > > bool SupportsMultiplePuts
> > > bool NeedsLocks
>
> > > For multiple get/put I'm not sure if it really needs a modification
> > > inside the core or if it can be achieved directly inside the provider
> > > implementation (caching n put calls before calling the underlying
> > > multiple-puts).
> > > If we have to change NH the modification may be in
> > > Loader.InitializeEntitiesAndCollections, calling the multiple-puts, per
> > > persister, for all loaded objects (perhaps the time to implement
> > > an InitializeEntityEventListener has come).
>
> > > On Wed, Feb 2, 2011 at 2:04 PM, JL Borges <[email protected]> wrote:
> > > > Let's deal with this issue on its own merits.
>
> > > > This is about cache concurrency strategy and ICache interface.
>
> > > > Cache concurrency strategy is currently fixed to three different
> > > > strategies.
> > > > ICache interface is fixed.
>
> > > > Proposal:
>
> > > > 1) refactor concurrency strategies to avoid unnecessary locking for
> > > > distributed caches
> > > > 2) refactor ICache to support multiple get/put
>
> > > > Of course, all cache providers will have to do some work to support
> > > > the refactor.
>
> > > > Kind Regards,
> > > > JL Borges
>
> > > > On Feb 2, 10:49 am, Fabio Maulo <[email protected]> wrote:
> > > > > I would only like to know whom I'm talking to.
>
> > > > > One of the previous questions (using the name Aaron) was about adding
> > > > > locks for non-thread-safe cache systems.
> > > > > https://groups.google.com/group/nhibernate-development/browse_thread/...
>
> > > > > Cache providers are injectable pieces and can be published wherever
> > > > > you want, and Jorge/Aaron/JLBorges can ask permission to blog on
> > > > > nhforge.org to promote his providers.
>
> > > > > In this case Jorge/Aaron/JLBorges is asking about an "improvement" of
> > > > > ICache: we can discuss it, no problem, but I would like to be sure
> > > > > that it is really needed.
>
> > > > > On the other hand we have to port some other cache features from
> > > > > Hibernate, such as IsMinimalPutsEnabledByDefault and some others
> > > > > where needed (read/update).
>
> > > > > On Wed, Feb 2, 2011 at 12:19 PM, Ramon <[email protected]> wrote:
>
> > > > > > Fabio,
>
> > > > > > What exactly is your problem? He is just asking some questions and
> > > > > > you respond with such weird stuff. I don't get it.
>
> > > > > > On Wed, Feb 2, 2011 at 4:13 PM, Fabio Maulo <[email protected]> wrote:
>
> > > > > >> Hi Jorge/Aaron/JLBorges (same e-mail different name/sign)
>
> > > > > >> http://groups.google.com/group/nhcdevs/browse_thread/thread/ebb6a1160...
>
> > > > > >> http://groups.google.com/group/nhcdevs/browse_thread/thread/8d6b58b14...
>
> > > > > >> http://groups.google.com/group/nhcdevs/browse_thread/thread/a3f9bf2ab...
>
> > > > > >> http://groups.google.com/group/nhibernate-development/browse_thread/t...
>
> > > > > >> and continuing counting...
>
> > > > > >> On Wed, Feb 2, 2011 at 12:03 PM, JL Borges <[email protected]> wrote:
>
> > > > > >>> Hello All,
>
> > > > > >>> I've been hacking the NH L2 cache for a few months, and I have come
> > > > > >>> to the conclusion that it is not really designed for distributed
> > > > > >>> caches.
>
> > > > > >>> Here are the issues I am concerned about:
>
> > > > > >>> 1) Server Round Trip
>
> > > > > >>> With a distributed cache, a lot of the time is spent sending/receiving
> > > > > >>> data over the socket, so performance increases dramatically when these
> > > > > >>> roundtrips are reduced. Many modern distributed caches, like Memcached
> > > > > >>> and Redis, support client-side pipelining, where multiple gets or puts
> > > > > >>> are sent in one socket call. The NH ICache interface does not support
> > > > > >>> multiple puts or gets, so this feature is not available.
>
> > > > > >>> 2) Locking
>
> > > > > >>> a) If a distributed cache supports distributed locks, then there is
> > > > > >>> no need for an in-process lock, and it just harms performance. And
> > > > > >>> yet, here is a code snippet from the ReadWriteCache concurrency
> > > > > >>> strategy:
>
> > > > > >>> lock (_lockObject)
> > > > > >>> {
> > > > > >>> if (log.IsDebugEnabled)
> > > > > >>> {
> > > > > >>> log.Debug("Caching: " + key);
> > > > > >>> }
> > > > > >>> try
> > > > > >>> {
> > > > > >>> cache.Lock(key);
> > > > > >>> .
> > > > > >>> .
> > > > > >>> .
>
> > > > > >>> b) distributed locks can fail due to a timeout or the server crashing
> > > > > >>> after acquiring the lock. There is no logic here to support this.
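To illustrate point (b): a distributed lock acquisition needs both an acquire timeout and a server-side lease, so that a crashed holder cannot block everyone forever. A minimal sketch, where addIfAbsent stands in for a hypothetical atomic add-with-expiry primitive (memcached's "add" with an expiration time behaves this way):

```csharp
using System;
using System.Threading;

public static class DistributedLock
{
    // Try to take lockKey within acquireTimeout. The addIfAbsent
    // delegate must atomically store ownerId only if lockKey is absent,
    // with a server-side expiry of `lease` (so a dead holder's lock
    // eventually disappears on its own).
    public static bool TryAcquire(
        Func<string, string, TimeSpan, bool> addIfAbsent,
        string lockKey, string ownerId,
        TimeSpan acquireTimeout, TimeSpan lease)
    {
        var deadline = DateTime.UtcNow + acquireTimeout;
        do
        {
            if (addIfAbsent(lockKey, ownerId, lease))
                return true;  // lock held until released or lease expires
            Thread.Sleep(50); // back off before retrying
        } while (DateTime.UtcNow < deadline);
        return false;         // caller must handle failure explicitly
    }
}
```

The concurrency strategy would then have to handle a false return (acquisition failure) and a lease expiring mid-operation, which is exactly the logic missing from the snippet above.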
>
> > > > > >>> I have a patch to address these issues, in my own NH fork. I've also
> > > > > >>> submitted a patch to JIRA to add distributed locks to the Enyim
> > > > > >>> memcached client. No response so far.
>
> > > > > >>> Is there any interest in improving support for distributed caches?
>
> > > > > >>> Cheers,
> > > > > >>> JL Borges
>
> > > > > >> --
> > > > > >> Fabio Maulo
>
> > > > > --
> > > > > Fabio Maulo
>
> > > --
> > > Fabio Maulo
>
> --
> Fabio Maulo