Probably not a lot of point in speculating on that without more information about what's going on under the covers. Would be interesting to do some full-on stress testing against the wiki engine using JMeter and see what that turns up. I'd be quite happy to do this but I'm a bit stretched for time just now.
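
(For the record, once a test plan exists, a headless JMeter run is a one-liner; the plan and output file names below are invented:

    # -n = non-GUI mode, -t = test plan, -l = results log
    jmeter -n -t wiki-stress.jmx -l results.jtl

The .jtl results file can then be loaded back into the JMeter GUI for graphing.)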

I have done that, but I haven't yet been able to figure out what call patterns are causing this. Would be cool if you could load Apache's log files and see what happens.
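
In case it helps, even something as dumb as the sketch below would at least show which URIs are getting hammered. Plain Java, assumes common/combined log format; class name and all details are just illustration, not anything from the wiki codebase:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.*;
    import java.util.regex.*;

    public class LogTally {
        // Matches the request part of a log line: "GET /path HTTP/1.1"
        private static final Pattern REQ = Pattern.compile("\"([A-Z]+) (\\S+) HTTP");

        public static void main(String[] args) throws Exception {
            // args[0] = path to the Apache access log
            Map<String, Integer> counts = new HashMap<String, Integer>();
            BufferedReader in = new BufferedReader(new FileReader(args[0]));
            String line;
            while ((line = in.readLine()) != null) {
                Matcher m = REQ.matcher(line);
                if (m.find()) {
                    String uri = m.group(2);
                    Integer c = counts.get(uri);
                    counts.put(uri, c == null ? 1 : c + 1);
                }
            }
            in.close();
            // Sort by hit count, busiest URIs first
            List<Map.Entry<String, Integer>> entries =
                new ArrayList<Map.Entry<String, Integer>>(counts.entrySet());
            Collections.sort(entries, new Comparator<Map.Entry<String, Integer>>() {
                public int compare(Map.Entry<String, Integer> a,
                                   Map.Entry<String, Integer> b) {
                    return b.getValue() - a.getValue();
                }
            });
            for (Map.Entry<String, Integer> e : entries) {
                System.out.println(e.getValue() + "\t" + e.getKey());
            }
        }
    }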

Could also be mod_jk's fault for all I know.
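
If you want to rule mod_jk in or out, turning its logging up in httpd.conf is cheap (log path is just an example):

    JkLogFile  logs/mod_jk.log
    JkLogLevel debug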


I wonder if you're just overloading the hardware. Re the multiple Tomcats and JVMs... how many? Is there a reason for not using virtual hosts on a single Tomcat instance?
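
For what it's worth, virtual hosts on a single instance are just extra <Host> entries in server.xml; a rough sketch, with hostnames and directories invented:

    <Engine name="Catalina" defaultHost="wiki1.example.org">
      <Host name="wiki1.example.org" appBase="wiki1-apps"
            unpackWARs="true" autoDeploy="true"/>
      <Host name="wiki2.example.org" appBase="wiki2-apps"
            unpackWARs="true" autoDeploy="true"/>
    </Engine>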

Probably I am overloading it. I've got two JVMs running two Tomcats, each running several wikis. This is to make sure that if the most heavily loaded site goes down, the rest won't. There is one wiki which gets pretty regular traffic, but it can't afford to be down often, so it's got its own engine.
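
(For anyone wanting to replicate this setup: the usual trick is one CATALINA_HOME and a separate CATALINA_BASE per instance. Paths below are invented, and each base directory needs its own conf/server.xml with distinct ports:

    # busy wiki on its own instance
    CATALINA_BASE=/srv/tomcat-busy $CATALINA_HOME/bin/startup.sh
    # everything else on the second instance
    CATALINA_BASE=/srv/tomcat-rest $CATALINA_HOME/bin/startup.sh
)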

Also, these wikis were originally split across two computers. Unfortunately the other one died a few months back, and I had to put everything onto the same server (which is roughly when our problems started). If I ever get a new one, I'll split them again. It's all my own money and time I'm putting into these boxes... (well, the current box is sponsored by BaseN, but it's getting a bit old.)

Also, are you on Tomcat 5 or 6? And is this on a single-CPU box?

Tomcat 5, single CPU, yes.

In general, the server JVM or JRockit should be the JVM of choice. In the long run, once these JVMs have settled down, they should give much better performance than the Sun client JVM, and GC performance is much more even and 'distributed' in time. They may, however, use more heap overall than the client VM.
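
Concretely, in Tomcat terms that's just a CATALINA_OPTS setting; the heap sizes below are placeholders:

    # -server picks the server HotSpot VM; equal -Xms/-Xmx avoids
    # heap-resize pauses at the cost of grabbing the memory up front
    export CATALINA_OPTS="-server -Xms256m -Xmx256m"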

Yes, but the problem is that JSPWiki's startup takes a very long time already, and with the server VM it takes way, way longer (meaning that when things go down, they're down for a lot longer). Also, the memory consumption is a problem.

/Janne
