Thanks Kurt,

I ran the same test on the Snapshot. Results are attached.

Memory usage seems much better (see MemUsage10users.png).

Unfortunately, it took a while before I could run a load test of decent length, because this version would lock up after a while, and it locked up faster under heavier loads. I started at 30 users and kept lowering the user count and lengthening the ramp-up time. Finally, 10 users with a 2-minute ramp-up ran for over an hour before jUDDI locked up (results attached).
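
For reference, the profile that finally ran stably looks roughly like this in XLT's test configuration (property names follow XLT's load-profile conventions; treat this as a sketch rather than the exact config used):

    com.xceptance.xlt.loadtests = TRegisterBusinessWithServices
    com.xceptance.xlt.loadtests.TRegisterBusinessWithServices.users = 10
    com.xceptance.xlt.loadtests.TRegisterBusinessWithServices.rampUpPeriod = 2m
    com.xceptance.xlt.loadtests.TRegisterBusinessWithServices.measurementPeriod = 1h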

I took stack trace dumps throughout the run (attached in the stackOutputs directory). Stack traces 12 and later are post-lockup. Before the system locks up, the http-8080-?? threads seem to cycle through the RUNNABLE, BLOCKED, TIMED_WAITING, and WAITING states (not necessarily in that order). Afterwards, all threads are "WAITING (on object monitor)".
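
In case anyone wants to reproduce the dumps, something like the following java.lang.management loop would do it (a minimal sketch; the file naming and five-minute interval are assumptions, and running jstack against the Tomcat PID works just as well):

    import java.io.FileWriter;
    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class StackDumper {
        public static void main(String[] args) throws Exception {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            for (int dump = 1; ; dump++) {
                FileWriter out = new FileWriter("stackOutputs/dump-" + dump + ".txt");
                // true/true: include locked monitors and synchronizers, so the
                // dump shows which monitor a WAITING thread is parked on.
                for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
                    out.write(info.toString());
                }
                out.close();
                Thread.sleep(5 * 60 * 1000L); // dump every five minutes
            }
        }
    }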

Next, I will run separate "register business" and "find business" tests to see which one is locking up.

Jeremi



Kurt T Stam wrote:
Hi Jeremi,

I have uploaded a new portal bundle snapshot based on Hibernate. From what Jeff could see, this does not seem to have the memory leak. We will investigate further why OpenJPA exhibits the leak, but this build should at least unblock you.

http://people.apache.org/repo/m2-snapshot-repository/org/apache/juddi/juddi-portal-bundle/3.0.0.SNAPSHOT/juddi-portal-bundle-3.0.0.20090723.201427-7.zip

Cheers,

--Kurt


Jeremi Thebeau wrote:
Hi all,

I ran a load test consisting of 30 virtual users continuously executing the following scenario against a jUDDI node (a rough sketch of one iteration follows the list):

- publish a business to the node with a unique name;
- publish a random number of services (>0 but <8) under that business;
- and search for the newly published business name.
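
A minimal sketch of one iteration, assuming the UDDI v3 port types generated from the spec WSDLs; obtaining the three ports is elided, and the credentials and naming scheme are placeholders:

    import java.util.Random;
    import java.util.UUID;
    import org.uddi.api_v3.*;
    import org.uddi.v3_service.*;

    public class RegisterBusinessScenario {

        private final Random random = new Random();

        // One iteration of the scenario, given the three UDDI v3 ports.
        void runOnce(UDDISecurityPortType security,
                     UDDIPublicationPortType publication,
                     UDDIInquiryPortType inquiry) throws Exception {

            // Authenticate (user ID and credential are placeholders).
            GetAuthToken tokenRequest = new GetAuthToken();
            tokenRequest.setUserID("uddiuser");
            tokenRequest.setCred("password");
            String authInfo = security.getAuthToken(tokenRequest).getAuthInfo();

            // Publish a business with a unique name.
            String uniqueName = "LoadTestBusiness-" + UUID.randomUUID();
            BusinessEntity business = new BusinessEntity();
            Name businessName = new Name();
            businessName.setValue(uniqueName);
            business.getName().add(businessName);

            // Attach between 1 and 7 services to the business.
            business.setBusinessServices(new BusinessServices());
            int serviceCount = 1 + random.nextInt(7);
            for (int i = 0; i < serviceCount; i++) {
                BusinessService service = new BusinessService();
                Name serviceName = new Name();
                serviceName.setValue(uniqueName + "-service-" + i);
                service.getName().add(serviceName);
                business.getBusinessServices().getBusinessService().add(service);
            }

            SaveBusiness saveBusiness = new SaveBusiness();
            saveBusiness.setAuthInfo(authInfo);
            saveBusiness.getBusinessEntity().add(business);
            publication.saveBusiness(saveBusiness);

            // Search for the business that was just published.
            FindBusiness findBusiness = new FindBusiness();
            findBusiness.setAuthInfo(authInfo);
            Name query = new Name();
            query.setValue(uniqueName);
            findBusiness.getName().add(query);
            inquiry.findBusiness(findBusiness);
        }
    }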

This was supposed to run for two hours, but the application crashed after 1h 46m. As can be seen in the attached jconsole screenshot, heap memory usage climbed almost linearly during the load test until it hit the maximum allotted memory of 1 GB (I inserted 'export JAVA_OPTS=-Xmx1024m' into startup.sh). The heap then hovered around one gigabyte while request response times grew longer and longer, as can be seen in the attached XLT report at about 11:25 (go to 'Requests' via the navigation drop-down in the top right corner; this is most easily seen in the 'Averages' graphs). Eventually, I started getting 'java.lang.Exception: GC overhead limit exceeded' and 'java.lang.Exception: Java heap space' exceptions (stack traces can be seen under 'Errors' in the XLT report) and the application crashed.

Also attached is the test case used (TRegisterBusinessWithServices.java) and the actions it calls.

Jeremi Thebeau
QA Engineer/Consultant
Xceptance GmbH
