(to Armin) OK. I really appreciate all your advice. Thanks again.

Here's my situation, resulting from a migration from 1.0.1 to 1.0.4 (within
ojb-blank):

- I tried desperately to use the TwoLevelCache and could not get the first
batch to run (note that this batch looks like one big transaction). With
1.0.1 it executes in a few seconds, creating 420 * 2 records (not that many).
From what I saw, it seems that the TwoLevelObjectCache reads all materialized
objects all the time, as if it wanted to check every record against every
other one (following relations)! In my opinion these reads are useless. It's
definitely not possible to keep that many instances in memory!!

Of course, I tried to break the loading chain. First I saw that setting
auto-retrieve to "false" can give good results, but because of the
incompatibility/disadvantage with ODMG transactions (following your advice) I
tried to keep auto-retrieve and set up the CGLIB proxies instead. The batch
never ended; it just froze.

The only solution that worked was to go back to my old cache settings with
ObjectCacheDefaultImpl. Then the first batch ran again (faster!!). Phew.
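
For reference, the cache switch is just one line in OJB.properties. This is
roughly what I went back to (class names as I have them in my setup, please
correct me if they changed in 1.0.4):

  # what works for me again:
  ObjectCacheClass=org.apache.ojb.broker.cache.ObjectCacheDefaultImpl
  # the two-level cache I could not get to work with the batch:
  #ObjectCacheClass=org.apache.ojb.broker.cache.ObjectCacheTwoLevelImpl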

But after that, my problem was with the second batch. There seemed to be
problems reading objects by identity. I removed the CGLIB proxy attributes
(proxy="true" on the reference and proxy="dynamic" on the class-descriptor)
and it worked.

So, I'm asking myself:

- Has anyone really used TwoLevelObjectCacheImpl successfully with a small
transaction such as the creation/update of 800 records?

- Is the TwoLevelObjectCache really needed (to avoid dirty reads)? We are
used to working with ObjectCacheDefaultImpl, (re)reading every object before
updating it, and the performance is not that bad.

- Did I miss something in setting up the CGLib proxies? I set proxy="true" on
the reference-descriptor and proxy="dynamic" on the referenced class (roughly
as sketched after this list).

- I didn't see the loading chain break significantly when using CGLib with
the TwoLevelObjectCache. Tell me I'm wrong and that this is only down to the
cache manager.
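
To be concrete, here is roughly what I had in the repository mapping before
removing it (simplified; the class and table names are just examples, not my
real model):

  <class-descriptor class="com.myapp.Order" table="T_ORDER" proxy="dynamic">
    <!-- field-descriptors ... -->
  </class-descriptor>

  <!-- in the class-descriptor of the class referencing Order -->
  <reference-descriptor name="order" class-ref="com.myapp.Order" proxy="true">
    <foreignkey field-ref="orderId"/>
  </reference-descriptor>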

Regards



On 2/6/06, Armin Waibel < [EMAIL PROTECTED]> wrote:
>
> Hi Bruno,
>
> Bruno CROS wrote:
> > About my previous batch troubles:
> > In fact, I saw my model being loaded from everywhere with all the current
> > auto-retrieve="true" settings, and that really means everywhere!! It also
> > means that the "back" relations are read too, circling far too much. This
> > was the cause of my OutOfMemoryError.
> >
> > My model is a big one with a lot of relations, as complex as you can
> > imagine.
> > So, I'm wondering whether to get rid of those auto-retrieve settings, to
> > make all object reads faster (and avoid clogging). But I read that for
> > ODMG development the recommended settings are auto-retrieve="true",
> > auto-update="none", auto-delete="none". Do I absolutely have to follow
> > this? If yes, why?
> >
>
> In general, auto-retrieve="true" is mandatory when using the odmg-api.
> When it is used, OJB takes a snapshot (copies all fields and references to
> other objects) of each object when it's locked. On commit OJB compares the
> snapshot with the current state of the object. This way OJB can detect
> changed fields and new or deleted objects in references (1:1, 1:n, m:n).
> If auto-retrieve is disabled and the object is locked, OJB assumes that no
> references exist, or that the existing ones were deleted, even though
> references do exist and should not be deleted. This can cause unexpected
> behavior, particularly with 1:1 references.
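>
> A rough sketch of what that means in code (Product, setName, the "id"
> field and the value 42 are placeholder names; exception handling omitted):
>
> // imports: java.util.List, org.odmg.*, org.apache.ojb.odmg.OJB
> Implementation odmg = OJB.getInstance();
> Database db = odmg.newDatabase();
> db.open("repository.xml", Database.OPEN_READ_WRITE);
>
> Transaction tx = odmg.newTransaction();
> tx.begin();
>
> OQLQuery query = odmg.newOQLQuery();
> query.create("select p from " + Product.class.getName() + " where id = $1");
> query.bind(new Integer(42));
> Product product = (Product) ((List) query.execute()).get(0);
>
> // on lock OJB takes the snapshot of 'product' and its references
> tx.lock(product, Transaction.WRITE);
> product.setName("changed");
>
> // on commit OJB compares the snapshot with the current state and writes
> // only the detected changes
> tx.commit();
> db.close();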
>
> The easiest way to solve your problem is to use proxy-references. For
> 1:n and m:n you can use a collection-proxy:
>
> http://db.apache.org/ojb/docu/guides/basic-technique.html#Using+a+Single+Proxy+for+a+Whole+Collection
>
> For 1:1 references you can use proxies too.
>
> http://db.apache.org/ojb/docu/guides/basic-technique.html#Using+a+Proxy+for+a+Reference
> Normally this requires the use of an interface as the persistent object
> reference field. But when using CGLib-based proxies this is not required
> and you can simply set proxy="true" without any changes to your source
> code.
> http://db.apache.org/ojb/docu/guides/basic-technique.html#Customizing+the+proxy+mechanism
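>
> For the CGLib-based proxies the switch is made in OJB.properties; it should
> look something like this (please double-check the class name against the
> OJB.properties template shipped with 1.0.4):
>
> ProxyFactoryClass=org.apache.ojb.broker.core.proxy.ProxyFactoryCGLIBImpl
> # the default is the JDK dynamic-proxy based factory:
> # ProxyFactoryClass=org.apache.ojb.broker.core.proxy.ProxyFactoryJDKImpl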
>
>
> If you can't use proxies (e.g. in a 3-tier application) you can disable
> auto-retrieve, if you take care to:
> - disable implicit locking in general
> - carefully lock all objects before changing them (new objects too)
> - before you lock an object (for update, delete, ...), retrieve the
> references of that object using
> PersistenceBroker.retrieveAllReferences(obj)/retrieveReference(...).
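>
> A rough sketch of that pattern ('order' stands for an object already
> materialized in this tx, and Order/setStatus are placeholder names;
> implicit locking can also be disabled globally via ImplicitLocking=false
> in OJB.properties, if I remember correctly):
>
> TransactionExt tx = (TransactionExt) odmg.newTransaction();
> tx.setImplicitLocking(false);   // disable implicit locking for this tx
> tx.begin();
>
> PersistenceBroker broker = tx.getBroker();
> // materialize the references before locking/changing the object
> broker.retrieveAllReferences(order);
>
> tx.lock(order, Transaction.WRITE);
> order.setStatus("shipped");
> tx.commit();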
>
>
> > At the start I saw that setting auto-retrieve to "true" everywhere was
> > not the solution, but all transaction and batch processes were working
> > fine (until 1.0.4), with auto-retrieve on all the n relations (yes!).
> > Pure luck!!
> > But having some doubts, I told the whole dev team to avoid, as much as
> > possible, reading by iterating over collections without a reason,
> > preferring ReportQuery (single shot) and direct object queries.
> > Is that the right way to get a fast and robust application?
>
> ...yep. The fastest way to look up a single object by PK is to use
> PB.getObjectByIdentity(...). If a cache is used, this method doesn't
> require a DB round trip in most cases.
> http://db.apache.org/ojb/docu/tutorials/pb-tutorial.html#Find+object+by+primary+key
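>
> For example (Product, its fields and the PK value are placeholder names;
> buildIdentity/getReportQueryIteratorByQuery are used as I remember them,
> please check against the 1.0.4 javadoc):
>
> PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
> try
> {
>     // lookup by PK - usually served from the cache without hitting the DB
>     Identity oid = broker.serviceIdentity().buildIdentity(Product.class, new Integer(42));
>     Product p = (Product) broker.getObjectByIdentity(oid);
>
>     // "single shot" report query instead of iterating a whole collection
>     Criteria crit = new Criteria();
>     crit.addEqualTo("groupId", new Integer(7));
>     ReportQueryByCriteria q = QueryFactory.newReportQuery(Product.class, crit);
>     q.setAttributes(new String[] { "id", "name" });
>     Iterator rows = broker.getReportQueryIteratorByQuery(q); // each row is an Object[]
> }
> finally
> {
>     broker.close();
> }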
>
>
>
> >
> > Another thing: does defaultPersistenceBroker always return the tx broker
> > once the tx has begun (in 1.0.4)? That's very important.
> > I hope I don't have to switch to the TransactionExt.getBroker method.
> > All our reads use defaultPersistenceBroker.
>
> If you call PBF.defaultPB(...), OJB always returns a separate PB instance
> (for the 'default' connection defined in the jdbc-connection-descriptor)
> from the PB-pool, using its own connection.
> TransactionExt.getBroker always returns the PB instance currently used by
> the tx. This behavior has never changed.
> If you only do read operations with the separate PB instance you will not
> run into problems, except if you call tx.flush(). In that case the objects
> written to the database will not be seen by the separate PB instance
> (until tx.commit - with the default DB isolation level).
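>
> In code the difference looks roughly like this (sketch):
>
> // separate broker from the pool, with its own connection:
> PersistenceBroker pb = PersistenceBrokerFactory.defaultPersistenceBroker();
>
> // the broker used internally by the running odmg transaction:
> TransactionExt tx = (TransactionExt) odmg.currentTransaction();
> PersistenceBroker txBroker = tx.getBroker();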
>
> regards,
> Armin
>
> >
> > Thank you all in advance.
> >
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
>
>
