Hi Danilo,

you are right, the cache was not synchronized.

But some users don't use the cache (they use the 'empty cache' implementation),
so maybe we need both possibilities:
* with cache synchronization - safe but less performant
* without cache synchronization - performant

public void deleteByQuery(Query query)
does cache synchronization by default

public void deleteByQuery(Query query, boolean synchronizeCache)
lets you choose the behaviour you want
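To show how the two variants could fit together, here is a toy, self-contained sketch. It is not OJB internals: plain HashMaps stand in for the database and the object cache, and a key prefix stands in for the Query; all names are assumptions.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Toy sketch of the proposed API (not real OJB code): a minimal broker with a
// per-key cache, showing how the synchronizeCache flag could work.
public class DeleteByQuerySketch {
    static class Broker {
        final Map<String, String> db = new HashMap<>();    // key -> value ("database")
        final Map<String, String> cache = new HashMap<>(); // key -> value (object cache)

        void store(String key, String value) {
            db.put(key, value);
            cache.put(key, value);
        }

        // Safe default: synchronize the cache.
        void deleteByQuery(String keyPrefix) {
            deleteByQuery(keyPrefix, true);
        }

        // Caller chooses: safe (true) or fast (false).
        void deleteByQuery(String keyPrefix, boolean synchronizeCache) {
            Iterator<String> it = db.keySet().iterator();
            while (it.hasNext()) {
                String key = it.next();
                if (key.startsWith(keyPrefix)) {
                    it.remove();               // DELETE FROM ... WHERE ...
                    if (synchronizeCache) {
                        cache.remove(key);     // keep the cache consistent
                    }
                }
            }
        }
    }

    public static void main(String[] args) {
        Broker b = new Broker();
        b.store("id1/attr1", "test1");
        b.deleteByQuery("id1", false);   // fast path: cache is now stale
        System.out.println(b.cache.containsKey("id1/attr1")); // true (stale entry)
        b.store("id1/attr1", "test1");
        b.deleteByQuery("id1");          // safe default: cache synchronized
        System.out.println(b.cache.containsKey("id1/attr1")); // false
    }
}
```

The fast path leaves the stale entry behind, which is exactly the inconsistency Danilo describes below; the safe default removes it together with the row.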

What do you think?

regards,
Armin

----- Original Message -----
From: "Danilo Tommasina" <[EMAIL PROTECTED]>
To: "OJB Users List" <[EMAIL PROTECTED]>
Sent: Wednesday, June 04, 2003 3:48 PM
Subject: Cache inconsistency using deleteByQuery with
PersistenceBrokerImpl


Hello,

I noticed an odd behaviour when using *broker.deleteByQuery*. This issue
seems to be known (see the developer mailing list, msg 652, "[VOTE]
deleteByQuery leaves Cache in an inconsistent state"), but there is still
no information about it in the javadoc, nor does a solution seem to be
available. Have a look at this code:

        broker = PersistenceBrokerFactory.defaultPersistenceBroker();
        // Insert entries
        try {
            broker.beginTransaction();
            UserAttrs ua;
            // Columns:     userid, attrName, attrValue
            // Primary Key:   x   ,    x
            ua = new UserAttrs( "id1", "attr1", "test1" );
            broker.store( ua );
            ua = new UserAttrs( "id1", "attr2", "test2" );
            broker.store( ua );
            broker.commitTransaction();
        } catch (Throwable t) {
            broker.abortTransaction();
            t.printStackTrace();
        }

        // Delete all entries with userID = "id1"
        try {
            UserAttrs ua= new UserAttrs();
            ua.setUserid( "id1" );
            Query q = new QueryByCriteria(ua);
            broker.beginTransaction();
            broker.deleteByQuery( q );
            broker.commitTransaction();
        } catch (Throwable t) {
            broker.abortTransaction();
            t.printStackTrace();
        }

        // Re-insert entries
        try {
            broker.beginTransaction();
            UserAttrs ua;
            // Columns:     userid, attrName, attrValue
            // Primary Key:   x   ,    x
            ua= new UserAttrs( "id1", "attr1", "test1" );
            broker.store( ua );
            ua= new UserAttrs( "id1", "attr2", "test2" );
            broker.store( ua );
            broker.commitTransaction();
        } catch (Throwable t) {
            broker.abortTransaction();
            t.printStackTrace();
        }

On first execution this causes the following SQL to be generated:

SELECT ATTR_NAME,USERID,ATTR_VALUE FROM USER_ATTRS WHERE USERID = 'id1'
AND ATTR_NAME = 'attr1'
INSERT INTO USER_ATTRS (USERID,ATTR_NAME,ATTR_VALUE) VALUES
('id1','attr1','test1')
SELECT ATTR_NAME,USERID,ATTR_VALUE FROM USER_ATTRS WHERE USERID = 'id1'
AND ATTR_NAME = 'attr2'
INSERT INTO USER_ATTRS (USERID,ATTR_NAME,ATTR_VALUE) VALUES
('id1','attr2','test2')
-> commit

SELECT A0.ATTR_NAME,A0.USERID,A0.ATTR_VALUE FROM USER_ATTRS A0 WHERE
A0.USERID =  'id1'
DELETE FROM USER_ATTRS WHERE USERID =  'id1'
-> commit

SELECT ATTR_NAME,USERID,ATTR_VALUE FROM USER_ATTRS WHERE USERID = 'id1'
AND ATTR_NAME = 'attr1'
UPDATE USER_ATTRS SET ATTR_VALUE='test1' WHERE USERID = 'id1'  AND
ATTR_NAME = 'attr1'
SELECT ATTR_NAME,USERID,ATTR_VALUE FROM USER_ATTRS WHERE USERID = 'id1'
AND ATTR_NAME = 'attr2'
UPDATE USER_ATTRS SET ATTR_VALUE='test2' WHERE USERID = 'id1'  AND
ATTR_NAME = 'attr2'
-> commit

The UPDATE statements in the third block have no effect on the
database; from my point of view this is a rarely triggered but
potentially dangerous BUG!!!
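To illustrate why the stale cache turns the re-insert into an UPDATE, here is a toy, self-contained model. It is not OJB code: the cache lookup stands in for the broker's check of whether an object already exists, and all names are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the cache bug (not real OJB internals): store() decides
// between INSERT and UPDATE based on whether the object is already known,
// here approximated by a cache lookup.
public class StaleCacheDemo {
    static final Map<String, String> cache = new HashMap<>();

    static String store(String key, String value) {
        // Cached entry => the broker believes the row exists => UPDATE.
        String sql = cache.containsKey(key) ? "UPDATE" : "INSERT";
        cache.put(key, value);
        return sql;
    }

    static void deleteByQuery(String key) {
        // The DELETE hits only the database; the cache is NOT synchronized.
    }

    public static void main(String[] args) {
        System.out.println(store("id1/attr1", "test1")); // INSERT (row created)
        deleteByQuery("id1/attr1");                      // row gone, cache stale
        System.out.println(store("id1/attr1", "test1")); // UPDATE, although the
                                                         // row no longer exists
    }
}
```

The second store() issues an UPDATE against a row that was deleted, so the statement affects nothing and the data is silently lost.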

There is a simple workaround until the code is fixed: simply call
broker.clearCache() after the deleteByQuery transaction has been
executed.
However, this is a performance killer if you call deleteByQuery
very often.
I adopted the following solution, but since I am an OJB newbie I'd like
to know if you see a better one that does not require re-implementing
the ObjectCacheImpl class.
I extended PersistenceBrokerImpl with a new class that overrides the
deleteByQuery method, then declared this new class in OJB.properties
in the PersistenceBrokerClass property.
Here is the code:

public class SafeDeleteByQueryPBImpl extends PersistenceBrokerImpl {
    protected SafeDeleteByQueryPBImpl() {
        super();
    }

    public SafeDeleteByQueryPBImpl(PBKey key, PersistenceBrokerFactoryIF pbf) {
        super( key, pbf );
    }

    /**
     * Bug workaround:
     * clears matching objects from the cache before executing
     * PersistenceBrokerImpl.deleteByQuery(query).
     * @see org.apache.ojb.broker.PersistenceBroker#deleteByQuery(Query)
     */
    public void deleteByQuery(Query query) throws PersistenceBrokerException {
        // List all objects affected by the query
        Iterator it = super.getIteratorByQuery( query );
        while ( it.hasNext() ) {
            // Remove matching objects from the cache
            super.objectCache.remove( new Identity( it.next(), this ) );
        }
        // Delegate deleteByQuery to the super class
        super.deleteByQuery( query );
    }
}

Calling the method causes an extra SELECT statement to be issued and all
matching objects to be loaded into memory; however, this should still be
faster than executing single deletes or clearing the whole cache each time.
Is there a better solution?
Thanks, and sorry for the long message
 Danilo Tommasina

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
