Hey Marc,
> |> This lock is on the whole cache.
> |
> |What do you mean exactly? It seems to me that this lock is not on the
> |cache, and it is unlocked as soon as it is not needed.
>
> the lock is on the cache...
Not much help ;-)
As Rickard proposed, the new lock() method should be implemented in the
instance cache implementation. Inside this new lock() there is a wait() that
*is* on the cache object, but clients can still, for example, insert into the cache.
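To make that concrete, here is a minimal sketch of the idea (my own names, not
the actual InstanceCache code): because wait() releases the cache's monitor, a
thread parked in lock() does not keep other clients out of a synchronized
insert().

import java.util.HashMap;
import java.util.HashSet;

// Sketch only: lock() waits on the cache's monitor, but wait() releases
// that monitor, so a client calling insert() is not blocked meanwhile.
public class SketchInstanceCache
{
   private final HashSet lockedIds = new HashSet();
   private final HashMap cache = new HashMap();

   public synchronized void lock(Object id) throws InterruptedException
   {
      while (lockedIds.contains(id))
         wait();                 // gives the cache monitor back while waiting
      lockedIds.add(id);
   }

   public synchronized void unlock(Object id)
   {
      lockedIds.remove(id);
      notifyAll();               // wake up threads waiting in lock()
   }

   public synchronized void insert(Object id, Object ctx)
   {
      cache.put(id, ctx);        // possible even while another thread waits in lock()
   }

   public synchronized Object get(Object id)
   {
      return cache.get(id);
   }
}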
> |> This means that once it is locked,
> |> NOBODY can enter the cache, even with a different context.
> |
> |This is a statement or a wish ?
> |
>
> statement
This is not true: clients can still insert into the cache. Other clients that
call invoke() only have to wait a short time, since unlock is called just after
the context.lock logic and before getNext().invoke().
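Roughly what I mean, as a sketch (method names are approximate, not the exact
container API):

// Sketch only: the cache-level lock is released right after the context.lock
// logic and before getNext().invoke(), so other callers of invoke() block
// only for that short window.
public class SketchInstanceInterceptor
{
   private SketchInstanceCache cache;
   private SketchInstanceInterceptor next;

   public Object invoke(Object id, Object[] args) throws Exception
   {
      cache.lock(id);                    // short-lived lock on the cache
      Object ctx;
      try
      {
         ctx = cache.get(id);            // context.lock logic would go here
      }
      finally
      {
         cache.unlock(id);               // unlock BEFORE going down the chain
      }
      return getNext().invoke(id, args); // downstream work runs outside the lock
   }

   private SketchInstanceInterceptor getNext()
   {
      return next;
   }
}

Again, this is just to show where the unlock sits relative to getNext().invoke().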
Best Regards,
Simon
> |> For stateful
> |> beans, this solution won't work since activation reads from file,
> |
> |I do not follow... For stateful this solution degenerates into the
> |synchronization statement where the sync object is the instance cache; the
> |activation is synchronous with the call to the cache (i.e. if the object has
> |to be activated, the instance interceptor waits)? Can you expand?
>
> this is not correct and in fact for entity it is done the
> proper way, i.e.
> the state sync is in another interceptor.
>
> Acquiring a context and setting its state are not linked.
>
> |
> |> for
> |> entity there must be a lower contention solution.
> |
> |It seems to me that the contention is low, since this lock is
> |released ASAP.
>
> Still...
>
> OK, we just finished a solution based on sync(ctx) where we have only
> one sync and the cache.get is synced on the cache.
>
> I *strongly* recommend moving the activation out of that sync (out of the
> cache.get) and into an interceptor for stateful beans, but that is down the
> road; just keep it in mind for now.
>
> The new solution works, and works well (it works for 2 threads, 10 threads,
> 50 threads, and almost works for 100 threads, *all on the same object*).
> When I say *almost* I mean that we are now back to the "timeout" problems
> that send the server spinning loco due to a tx that is never cleanly removed
> (somewhere); we will track that one down.
>
> marc
>
> |Did you find any problem with it ? Am I missing something ?
> |
> |Best Regards,
> |
> |Simon