Gerhard,
Which object cache are you using? ObjectCacheDefaultImpl or ObjectCachePerBrokerImpl? As the doc states, ObjectCacheDefaultImpl does have some drawbacks pertaining to dirty reads. We use OJB in a servlet environment with an extremely query-intensive application, but at very low load.
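For reference, the cache implementation is selected in OJB.properties. A minimal sketch (the key and class names below are per my reading of the OJB 1.0 docs; verify against the properties file shipped with your version):

```
# OJB.properties -- use a per-broker cache to avoid dirty reads
# leaking between brokers (sketch, check against your release)
ObjectCacheClass=org.apache.ojb.broker.cache.ObjectCachePerBrokerImpl
```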
We recently upgraded from RC4 to 1.0 and so far haven't seen any new problems that were due to changes between the versions.
-- Jason
Gerhard Grosse wrote:
Hi,
We have developed a servlet application based on a slightly modified OJB 1.0RC5, using the ODMG API. The application has been tested with unit tests, comprehensive manual functional tests, and (mainly single-threaded) performance tests, and everything seemed to work well.
As a last step before going online we performed several load tests on the application and found ourselves faced with a pile of problems. These include unacceptable performance (simple transactions taking several minutes), database deadlocks, complete unresponsiveness of the application server, and all kinds of Java exceptions, most of them originating deep within the OJB code.
An obvious approach to try to resolve some of these problems would be to move to OJB version 1.0, but in an earlier attempt we found that some of our modifications were incompatible with 1.0. Given the time pressure we decided to stay with RC5 as it seemed to work for us.
Now the time pressure is quite a bit higher and we have to judge whether upgrading to OJB 1.0 would help enough to resolve the concurrency problems to justify the time necessary for the upgrade.
So my main questions are:
- Have substantial things changed between RC5 and 1.0 that might
improve OJB's behavior under high concurrent load?
- Does anyone have experience with OJB in a servlet environment under
high load? Any hints how to get it to work and perform?
One finding that discourages us from upgrading to 1.0 is a concurrency bug we discovered in RC5 which does not seem to have been fixed in 1.0:
In class org.apache.ojb.broker.core.QueryReferenceBroker:
private void performRetrievalTasks()
{
    while (m_retrievalTasks.size() > 0)
    {
        HashMap tmp = m_retrievalTasks;    // *
        m_retrievalTasks = new HashMap();  // *
        // during execution of these tasks new tasks may be added
        for (Iterator it = tmp.entrySet().iterator(); it.hasNext(); )
        ...
The two lines marked with // * must be placed in a block that synchronizes on, e.g., m_retrievalTasks. Otherwise a thread switch may occur between the two lines, leaving two threads with tmp variables that point to the same hash map. That seems unlikely, but under high load it occurs often enough; the consequence is a java.util.ConcurrentModificationException.
synchronized (m_retrievalTasks)
{
    tmp = m_retrievalTasks;
    m_retrievalTasks = new HashMap();
}
solves the problem.
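To illustrate the pattern outside of OJB, here is a self-contained sketch of the synchronized swap-and-drain idiom. The class and method names (RetrievalTaskQueue, drain, processed) are my own for illustration, not OJB's; the sketch synchronizes on `this` rather than on the reassigned field, which avoids locking on an object that is replaced inside the critical section:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Minimal sketch of the swap-and-drain pattern discussed above.
public class RetrievalTaskQueue {
    private HashMap tasks = new HashMap();
    public final List processed = new ArrayList();

    public synchronized void add(String key, Runnable work) {
        tasks.put(key, work);
    }

    public void drain() {
        while (tasks.size() > 0) {
            HashMap tmp;
            // Swap the map atomically so a concurrent add() cannot leave
            // tmp and tasks referring to the same HashMap instance.
            synchronized (this) {
                tmp = tasks;
                tasks = new HashMap();
            }
            // Iterate the private snapshot; tasks added while these run
            // go into the fresh map and are drained on the next pass.
            for (Iterator it = tmp.entrySet().iterator(); it.hasNext(); ) {
                Map.Entry e = (Map.Entry) it.next();
                processed.add(e.getKey());
                ((Runnable) e.getValue()).run();
            }
        }
    }
}
```

A task executed during the drain may enqueue further tasks; the outer while loop picks those up in a later pass, which mirrors the comment in the OJB source ("during execution of these tasks new tasks may be added").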
Any hints and help are greatly appreciated.
Gerhard Grosse
--------------------------------------------------------------------- To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]