Hey Rickard and Marc,

> Yes, what I meant was that cache operations must be mutexed, and
> preferably by simply using the cache itself as a mutex. The mutex is
> only for retrieving/working with the cache, not during the actual
> invocation of the bean!!!

I'm not sure I clearly got your point, Rickard, so let me recap:
The InstanceCache implementation object (IC) does not have any
synchronized methods (leaving getLock and removeLock aside for now). It
privately owns the CachePolicy implementation object (CP), and the IC
synchronizes access to the CP using a separate lock object (a very
common pattern). This is done to minimize contention. The CP cannot be
accessed from outside (this also simplifies things for cache policy
implementors, who can write a new algorithm without worrying about
synchronization issues).
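
To be sure we mean the same thing, here is a rough sketch of that
arrangement; the class name and the CachePolicy method signatures are
just illustrative, not the exact interfaces:

public class SomeInstanceCache
{
        // The CP is private: it cannot be reached from outside the IC.
        private final CachePolicy m_cache;
        // Separate lock object guarding every CP operation, so we
        // never need to synchronize on the whole IC object.
        private final Object m_cacheLock = new Object();

        public SomeInstanceCache(CachePolicy policy)
        {
                m_cache = policy;
        }

        public Object get(Object id)
        {
                synchronized (m_cacheLock)
                {
                        return m_cache.get(id);
                }
        }

        public void insert(Object id, Object ctx)
        {
                synchronized (m_cacheLock)
                {
                        m_cache.insert(id, ctx);
                }
        }
}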
Now, with "cache operations" I suppose you mean operations on the CP
(we cannot synchronize operations on the IC, too much contention).
IMHO it is better not to use the CP itself as the mutex for the CP, for
at least two reasons: it keeps new implementations simpler, and clients
often need more sophisticated synchronization than what synchronizing
every method can offer (see NoPassivationCachePolicy, where a
SynchronizedMap would not be enough; a small illustration follows).
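
Just to illustrate that second reason: per-method synchronization (a la
Collections.synchronizedMap) does not make a check-then-act sequence
atomic; the client still has to hold its own lock across the whole
sequence. A minimal, generic example:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class CompositeOpExample
{
        // Every call on this map is individually synchronized...
        private final Map m_map = Collections.synchronizedMap(new HashMap());

        public Object getOrCreate(Object key)
        {
                // ...but the get-check-put sequence is atomic only if
                // the client itself locks across the whole sequence.
                synchronized (m_map)
                {
                        Object value = m_map.get(key);
                        if (value == null)
                        {
                                value = new Object();
                                m_map.put(key, value);
                        }
                        return value;
                }
        }
}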
Passivation is synchronized with the above mechanism, so it either
happens completely or it does not happen at all (anything else would be
a bug).
The problem is to guarantee exclusive access to the enterprise context
taken from the cache during the locking logic (i.e. the part of code in
the instance interceptor that, after going through various checks,
calls ctx.lock()).

So:

ctx = container.getInstanceCache().get(id);
synchronized(ctx) 
{
        ... logic ... 
        ctx.lock();
}

won't work: ctx can be passivated between the first two lines.
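
To spell out the race (A is the instance interceptor thread, B the
passivation thread):

1. A: ctx = container.getInstanceCache().get(id);
2. B: sees that ctx is not locked and has no tx, so it passivates it
   and removes it from the CP;
3. A: synchronized(ctx) { ... ctx.lock(); } -- too late, this ctx is no
   longer the instance held by the cache.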

One solution I see:

mutex = container.getInstanceCache().getLock(id);
synchronized(mutex) 
{
        ctx = container.getInstanceCache().get(id);
        ... logic ... 
        ctx.lock();
}

This other solution is faster (no extra hash lookup to get the mutex):

ic = container.getInstanceCache();
synchronized(ic)
{
        ctx = ic.get(id);
        ... logic ... 
        ctx.lock();
}

Of course, for both solutions, in the passivation code I should do,
respectively:

mutex = container.getInstanceCache().getLock(id);
synchronized(mutex) 
{
        ... passivation ...
}

or

ic = container.getInstanceCache();
synchronized(ic)
{
        ... passivation ...
}

In the ... passivation ... part I check whether the ctx can be
passivated (i.e. !isLocked() and no tx associated with it) and I also
manage the synchronization with the CP, roughly as sketched below.
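
For reference, the passivation side would look more or less like this
with the second solution; getTransaction() and passivate() are just
placeholders for the real checks/calls:

ic = container.getInstanceCache();
synchronized(ic)
{
        // Only passivate if nobody is inside the locking logic and the
        // context has no transaction associated with it.
        if (!ctx.isLocked() && ctx.getTransaction() == null)
        {
                passivate(ctx);  // write out the bean state
                ic.remove(id);   // keep the CP in sync
        }
}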

Given this, the only difference between the two solutions is the extra
hash table lookup, so if you agree I will implement the second one
(synchronizing on the IC, which is faster).

The only pieces that need to use this synchronization mechanism are the
instance interceptor and the passivation code.
Anyone else that wishes to use the cache *after* the instance
interceptor is safe (the ctx is already isLocked()); anyone that wants
to use it *before* the instance interceptor must synchronize on the
mutex (or on the ic) until it is done.
Of course, whoever uses the cache before the instance interceptor has a
"backdoor" open, but it cannot harm the cache: the worst that can
happen is that the context it is using gets passivated meanwhile. I
don't see a way to force a cache user to synchronize on the mutex (or
on the ic) without synchronizing the whole IC object, which would lead
to too much contention (the contract is sketched below).
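
In other words, the contract for cache users boils down to this (again
with the second solution; doWork() is just a placeholder):

// Before (or outside) the instance interceptor: hold the IC lock while
// taking the ctx from the cache and locking it.
synchronized(container.getInstanceCache())
{
        ctx = container.getInstanceCache().get(id);
        ctx.lock();
}

// After the instance interceptor: the ctx is already locked, so it can
// be used without any further synchronization on the IC.
doWork(ctx);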

Sorry for the long post, but I would like things to be *very* clear before
going on to code.

Any comments ?

Best Regards,

Simon
