Clute, Andrew wrote:
Good news, I think.

Just so I can understand, I want to clarify: The global cache will be
the same as the cache today, and will contain full graphs.

The second-level cache only contains "flat objects" (copies without populated references), but you can declare any ObjectCache implementation as the second level.
When OJB looks up an object via the TwoLevelCache, the second level returns the "flat object" and OJB materializes the full object graph from it. This is the only performance drawback: to materialize the 1:n and m:n relations, OJB has to query for the referenced objects' ids.


Maybe you could mix the cache strategies: declare the TwoLevelCache in the jdbc-connection-descriptor, and for read-only (or rarely updated) objects declare the default cache in the class-descriptor.
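A declaration could look roughly like this (untested sketch -- the "applicationCache" attribute name is written from memory, so please check the cache documentation for the exact names; the LookupValue class is just a placeholder):

<!-- connection-wide default: two-level cache for all persistent classes
     (the usual connection attributes are omitted) -->
<jdbc-connection-descriptor jcd-alias="default" default-connection="true">
  <object-cache class="org.apache.ojb.broker.cache.ObjectCacheTwoLevelImpl">
    <!-- assumed attribute name for the "real" application cache used
         as the second level -->
    <attribute attribute-name="applicationCache"
               attribute-value="org.apache.ojb.broker.cache.ObjectCacheDefaultImpl"/>
  </object-cache>
</jdbc-connection-descriptor>

<!-- per-class override: a read-mostly class keeps the default cache -->
<class-descriptor class="org.myshop.LookupValue" table="LOOKUP_VALUE">
  <object-cache class="org.apache.ojb.broker.cache.ObjectCacheDefaultImpl"/>
  <field-descriptor name="id" column="ID" jdbc-type="INTEGER" primarykey="true"/>
</class-descriptor>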


When an
object is retrieved, a copy of the object is returned to the client, and
that copy is placed into the second-level global cache?

Right.


So, any object
that is returned from a retrieve is decoupled from the objects
that are in the cache, and whatever the client does to it does not
affect the cache?


That's the theory ;-)
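In PB-api terms the intended behaviour is something like this (untested
sketch -- Article is just a placeholder persistent class with a mapped
'name' field):

import org.apache.ojb.broker.PersistenceBroker;
import org.apache.ojb.broker.PersistenceBrokerFactory;
import org.apache.ojb.broker.query.Criteria;
import org.apache.ojb.broker.query.QueryByCriteria;

public class TwoLevelCacheDemo
{
    public static void main(String[] args)
    {
        // first lookup: the object is materialized, a flat copy goes into
        // the second level cache, and the caller gets its own copy
        Article sessionCopy = findArticle(new Integer(42));

        // the web tier may now change its copy (e.g. while it sits in
        // HttpSession) -- the cached flat object is not touched
        sessionCopy.setName("changed only in my copy");

        // a later lookup materializes a fresh copy from the cached flat
        // object, so the unsaved change above is not visible here
        Article freshCopy = findArticle(new Integer(42));
        System.out.println(sessionCopy == freshCopy); // false, different references
        System.out.println(freshCopy.getName());      // still the persistent value
    }

    // Article is a placeholder class (not shown); the id value is made up
    static Article findArticle(Integer id)
    {
        PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
        try
        {
            Criteria crit = new Criteria();
            crit.addEqualTo("id", id);
            return (Article) broker.getObjectByQuery(
                    new QueryByCriteria(Article.class, crit));
        }
        finally
        {
            broker.close();
        }
    }
}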


If so, that is very cool! I don't really want to worry about a locking
strategy, because it seems to be overhead that we don't need -- using
optimistic locking works well enough for us. This seems like it gives me
the best of both worlds -- I don't have to worry about read locks, but I
also don't have to worry about mutating the global cache until my TX
commits.

I would assume that the second-level cache doesn't commit to the global
cache until the Tx commits, right?

Right, except for newly materialized objects. These objects are immediately pushed to the second-level cache (as flat copies). I introduced this to populate the cache; otherwise only changed objects would ever end up in it.



I would also assume that JTA-based TX
won't make a difference?


Yep, it should work in both non-managed and managed environments.


All very cool stuff!


Wait and see ;-)

You mentioned that this will be included in the next release, but I
assume you mean 1.1, and not 1.0.2, right?

No, it will be part of the upcoming 1.0.2 release (scheduled for Sunday).

Armin

If it is meant for 1.1, is
there a release that is stable enough if all I want to do is add this
caching strategy on top of the 1.0.x feature set?

Thanks for all the help!

-Andrew





-----Original Message-----
From: Armin Waibel [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 09, 2005 9:40 AM
To: OJB Users List
Subject: Re: Will a two-level cache solve this problem?


Hi Andrew,

> So, my question is will the introduction of a two-level cache isolate
> clients of OJB from mutating the object that is in the real cache?


yep!

> Are the objects that are in the local cache versus the global cache
> different references, or are they the same?

They are different; the second-level cache only deals with flat copies
(no references populated) of the persistent class objects. The
CopyStrategy used is pluggable.

In the OJB_1_0_RELEASE branch the first version of the two-level cache
works this way (it will be included in the next release).


> Is my only true option to go with an ODMG/OTM locking strategy to
> isolate my reads from writes?

You could write a thin layer above the PB-api using the kernel locking
api in org.apache.ojb.broker.locking (OJB_1_0_RELEASE branch).
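A very rough sketch of such a layer (written from memory -- the
LockManagerFactory/LockManager method names and the isolation-level
constant below are assumptions, so please verify them against the
javadocs of org.apache.ojb.broker.locking):

import org.apache.ojb.broker.locking.LockManager;
import org.apache.ojb.broker.locking.LockManagerFactory;

/**
 * Thin write-lock layer on top of the PB-api. All lock-manager calls are
 * assumptions about the kernel locking api and need to be checked.
 */
public class SimpleLockLayer
{
    // assumed factory method
    private final LockManager lockManager = LockManagerFactory.getLockManager();

    /**
     * Acquire a write lock before the submit pass; 'owner' could be the
     * HttpSession id, 'persistentObject' the object held in the session.
     */
    public boolean lockForEdit(Object owner, Object persistentObject)
    {
        // assumed signature: writeLock(ownerKey, resource, isolationLevel);
        // the constant may live in the IsolationLevels interface instead
        return lockManager.writeLock(owner, persistentObject,
                LockManager.IL_READ_COMMITTED);
    }

    /** Release the lock after broker.store() / commit. */
    public boolean releaseAfterStore(Object owner, Object persistentObject)
    {
        // assumed signature: releaseLock(ownerKey, resource)
        return lockManager.releaseLock(owner, persistentObject);
    }
}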

regards,
Armin


Clute, Andrew wrote:

Hello all!

I have a standard 3-tier webapp backed by OJB in my business layer. We
are using the PB API. We have a host of domain objects that are passed
up to the web tier and used for form manipulation.

The standard pattern for us when editing an object is:

1) Retrieve business object from PersistenceService
2) Use object and integrate it to set form elements
3) Place object into HttpSession for later
4) On the submit pass, take the object out of HttpSession, and then
populate data from the form back into the object
5) Save object through PB
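In code the flow is roughly this (simplified sketch -- Customer and
CustomerForm are placeholders for our domain/form classes, error
handling left out):

import javax.servlet.http.HttpSession;
import org.apache.ojb.broker.PersistenceBroker;
import org.apache.ojb.broker.PersistenceBrokerFactory;
import org.apache.ojb.broker.query.Criteria;
import org.apache.ojb.broker.query.QueryByCriteria;

public class CustomerEditFlow
{
    // steps 1-3: load the object and park it in the session for the form
    public Customer startEdit(HttpSession session, Integer customerId)
    {
        PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
        try
        {
            Criteria crit = new Criteria();
            crit.addEqualTo("id", customerId);
            Customer customer = (Customer) broker.getObjectByQuery(
                    new QueryByCriteria(Customer.class, crit));
            session.setAttribute("editedCustomer", customer);
            return customer;
        }
        finally
        {
            broker.close();
        }
    }

    // steps 4-5: on the submit pass copy the form data back and store it
    public void finishEdit(HttpSession session, CustomerForm form)
    {
        Customer edited = (Customer) session.getAttribute("editedCustomer");
        edited.setName(form.getName()); // populate data from the form

        PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
        try
        {
            broker.beginTransaction();
            broker.store(edited);
            broker.commitTransaction();
        }
        finally
        {
            broker.close();
        }
    }
}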

We are using the default caching strategy as it provides us with the
biggest performance increase. A lot of our objects are static (we are
90% read, 10% write), so we really want to keep that in place.

However, the problem arises from the fact that the web app is mutating
the same object reference that is in the cache! So, in my pattern
above, while we are updating the object in the session, we are also
updating the object in the cache. We have gotten around it by cloning
every object we return from OJB. I really don't like that and want to
get away from it.

I know that one solution to this is ODMG with read/write locks. I have
been trying to stay away from that, only because it seems like I can't
find a clean pattern to establish a write lock on the submit pass of a
form when the object is in HttpSession.

So, my question is will the introduction of a two-level cache isolate
clients of OJB from mutating the object that is in the real cache? Are
the objects that are in the local cache versus the global cache
different references, or are they the same?

Is my only true option to go with an ODMG/OTM locking strategy to
isolate my reads from writes?













