You should try 1.2.18 and, depending on your time frame, update to 1.2.19
once it is released this month.
We improved the load-balancing code, and with 1.2.19 also the
observability of what's happening.
Try the alternative method B (Busyness) for the load balancer in 1.2.18.
The default method
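As a sketch of how method B is selected, here is a minimal workers.properties fragment; the worker names (loadbalancer, worker1..worker4) and ports are assumptions for illustration, not taken from the original post:

```properties
# Hypothetical workers.properties sketch; names and ports are assumed.
worker.list=loadbalancer

worker.worker1.type=ajp13
worker.worker1.host=localhost
worker.worker1.port=8009
# worker2..worker4 would be defined the same way, each on its own port

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=worker1,worker2,worker3,worker4
# B = Busyness: route to the member with the fewest requests in flight
worker.loadbalancer.method=B
```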
Hello List,
scenario:
- 4-node Tomcat 5.0.28 vertical cluster ( :-| same server... still
  testing, but it could have been 8) listening on AJP
Connector address=x.x.x.x port=8009
maxProcessors=150 minProcessors=50
protocol=AJP/1.3
Since you are using prefork, you must set cachesize=1 in your
workers.properties file.
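The per-worker setting being described would look roughly like this; the worker name "worker1" is an assumption for illustration:

```properties
# cachesize caps the connections mod_jk keeps open per web-server child.
# Prefork children are single-threaded, so one cached connection suffices.
worker.worker1.cachesize=1
```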
However, you have MaxClients 4096; in order for Tomcat to serve that many
connections, your JK connector should have maxProcessors=4096.
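With MaxClients 4096 on the Apache side, the matching server.xml change would be along these lines (a sketch based on the connector quoted above; the address is left as the placeholder from the post):

```xml
<!-- server.xml: maxProcessors raised to match Apache's MaxClients 4096 -->
<Connector address="x.x.x.x" port="8009"
           minProcessors="50" maxProcessors="4096"
           protocol="AJP/1.3" />
```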
An alternative, safe solution, although with much less performance, is to
set
Using mpm_worker gave less impressive results; I'd say about half the
throughput, a much worse load average (way more than 5), and lots of
swapping. It seems prefork works better on Linux, which surprises me.
Anyway, assuming I got maxProcessors wrong, I should have seen queues
building up at 150*4 = 600
It is a mod_jk issue: it uses permanent connections; that is how it was
designed. Setting MaxRequestsPerChild to 1 will kill the child, and hence
kill the mod_jk connection. This way, you can have
maxProcessors < MaxClients; otherwise, they must match.
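A sketch of the workaround being described, assuming the prefork MPM and the MaxClients value from earlier in the thread:

```apacheconf
# httpd.conf sketch: each child exits after serving one request, which
# also closes its persistent mod_jk connection to Tomcat, so
# maxProcessors can stay below MaxClients.
<IfModule mpm_prefork_module>
    MaxClients           4096
    MaxRequestsPerChild  1
</IfModule>
```

The trade-off is heavy: forking a new child per request costs far more than reusing one, which is why the earlier reply calls this safe but much slower.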
Filip
Edoardo Causarano wrote:
Using