Hello all!
 
I have a standard 3-tier webapp backed by OJB in my business layer. We
are using the PB API. We have a host of domain objects that are passed
up to the web tier and used for form manipulation.
 
The standard pattern for us when editing an object is:
 
1) Retrieve business object from PersistenceService
2) Interrogate the object to set form elements
3) Place object into HttpSession for later
4) On the submit pass, take the object out of the HttpSession, then
populate data from the form back into the object
5) Save object through PB
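
To make the pattern concrete, here is a rough sketch of those five
steps with the PB API. "Product", "productId", and "setName" are
hypothetical stand-ins for one of our domain objects, and a Map stands
in for the HttpSession:

```java
import java.util.Map;
import org.apache.ojb.broker.PersistenceBroker;
import org.apache.ojb.broker.PersistenceBrokerFactory;
import org.apache.ojb.broker.query.Criteria;
import org.apache.ojb.broker.query.Query;
import org.apache.ojb.broker.query.QueryFactory;

public class EditPattern {

    // Display pass: steps 1-3
    public void show(Map session, Integer productId) {
        PersistenceBroker broker =
                PersistenceBrokerFactory.defaultPersistenceBroker();
        try {
            // 1) retrieve the business object
            Criteria crit = new Criteria();
            crit.addEqualTo("id", productId);
            Query q = QueryFactory.newQuery(Product.class, crit);
            Product p = (Product) broker.getObjectByQuery(q);
            // 2) the form layer reads p's getters to set form elements
            // 3) stash the object for the submit pass
            session.put("editProduct", p);
        } finally {
            broker.close();
        }
    }

    // Submit pass: steps 4-5
    public void submit(Map session, String newName) {
        // 4) pull the object back out and copy form data into it
        Product p = (Product) session.get("editProduct");
        p.setName(newName);
        // 5) save through the PB
        PersistenceBroker broker =
                PersistenceBrokerFactory.defaultPersistenceBroker();
        try {
            broker.beginTransaction();
            broker.store(p);
            broker.commitTransaction();
        } finally {
            broker.close();
        }
    }
}
```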
 
We are using the default caching strategy as it gives us the biggest
performance gain. A lot of our objects are static (we are 90% read, 10%
write), so we really want to keep that in place.
 
However, the problem is that the web app is mutating the same object
reference that is in the cache! So, in my pattern above, while we are
updating the object in the Session, we are also updating the object in
the cache. We have worked around it by cloning every object we return
from OJB. I really don't like that and want to get away from it.
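
For context, our clone-on-read workaround is essentially a deep copy
via serialization, assuming the domain objects implement Serializable
(class and method names here are mine, not OJB's):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Deep-copies an object graph by round-tripping it through Java
// serialization, so the web tier never touches the cached instance.
public class DeepCopy {
    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T copy(T source) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(source);   // serialize the whole graph
            oos.close();
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()));
            return (T) in.readObject(); // materialize a fresh copy
        } catch (IOException e) {
            throw new RuntimeException("deep copy failed", e);
        } catch (ClassNotFoundException e) {
            throw new RuntimeException("deep copy failed", e);
        }
    }
}
```

It works, but it costs a serialization round trip per read, which is
exactly the overhead we would like to drop.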
 
I know that one solution to this is ODMG with read/write locks. I have
been trying to stay away from that, only because I can't find a clean
pattern for establishing a write lock on the submit pass of a form when
the object is in the HttpSession.
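
As I understand the ODMG API, the submit pass would have to look
something like the sketch below: the object pulled out of the
HttpSession is re-associated with a fresh transaction via tx.lock()
before the form data is copied in ("product" and "setName" are
hypothetical):

```java
import org.apache.ojb.odmg.OJB;
import org.odmg.Database;
import org.odmg.Implementation;
import org.odmg.Transaction;

Implementation odmg = OJB.getInstance();
Database db = odmg.newDatabase();
db.open("repository.xml", Database.OPEN_READ_WRITE);

Transaction tx = odmg.newTransaction();
tx.begin();
tx.lock(product, Transaction.WRITE); // acquire the write lock
product.setName(newName);            // now copy in the form data
tx.commit();                         // changes are flushed at commit
db.close();
```

Wiring that lock acquisition cleanly into the form-submit flow is the
part I have not found a good pattern for.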
 
So, my question is: will the introduction of a two-level cache isolate
OJB clients from mutating the object that is in the real cache? Are the
objects in the local cache and the global cache different references,
or the same?
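
For reference, my reading of the docs is that switching to the
two-level cache would just be a change in OJB.properties:

```properties
# OJB.properties -- swap the default cache for the two-level implementation
ObjectCacheClass=org.apache.ojb.broker.cache.ObjectCacheTwoLevelImpl
```

What I can't tell from the docs is whether the session-level cache
hands out copies of what is in the shared application-level cache.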

Is my only real option to go with an ODMG/OTM locking strategy to
isolate my reads from writes?
