Julio Cesar Leiva wrote:

Hi all


When we introduce load balancing and a 2nd Tomcat worker, the time
to service client requests is not balanced. Some clients are serviced quickly,
but others may take 10, 20, 30 seconds or more. Eventually, clients time out
and sessions are lost.

We ran our test overnight with just 20 clients. All requests were serviced
evenly and we experienced no timeouts. However, the attached jkstatus
page shows some interesting results. The 'Busy' and 'Max' values for
the two workers are vastly different. For worker1, these values just
keep incrementing. Also, worker2 has serviced far more requests.
The 'Busy' and 'Max Busy' numbers in the other table keep growing
as well. Is this behaviour normal, or does it point to a problem
somewhere in the configuration?

As we add more clients, the time it takes to service requests gets more and
more imbalanced. Some requests get serviced in < 1 second, others can
take 20 or 30 seconds. The more clients we add, the longer it takes for
some requests to get serviced.

We are using Apache 2.2.6, mod_jk 1.2.26 and Tomcat 6.0.14, with JRE 1.5.0_06-b05.
Apache and worker1 are on the same box; worker2 is on a different box.



We are really new to this (the balancer). As you can see below, we were using Tomcat 5.5.20, Apache 2.2.0 and mod_jk 1.2.25, so we updated them all, but we still see the same problems.



Thanks a lot for your help



Rainer Jung wrote:

Julio Cesar Leiva wrote:

Hi all

We have  this setup

1 web server: Apache 2.2.0



I hope it's not 2.2.0 but something more recent (e.g. 2.2.4 or 2.2.6)

2 Tomcat servers: Tomcat 5.5.20
mod_jk 1.2.25

This is our workers.properties



Remove the next line; it's useless.

workers.java_home=/usr/lib64/jvm/java



worker.list=cbnbalancer,jkstatus



Maybe add connect_timeout and prepost_timeout to the next two workers, and, if it makes sense for the app, also reply_timeout. See

http://tomcat.apache.org/connectors-doc/generic_howto/timeouts.html

# Set properties for worker1 (ajp13)
worker.worker1.type=ajp13
worker.worker1.host=172.20.23.12
worker.worker1.port=8009
worker.worker1.lbfactor=1
#worker.worker1.connection_pool_timeout=600
#worker.worker1.socket_keepalive=1
#worker.worker1.socket_timeout=60
worker.worker1.socket_timeout=0
# Define preferred failover node for worker1
worker.worker1.redirect=worker2

# Set properties for worker2 (ajp13)
worker.worker2.type=ajp13
worker.worker2.host=172.20.21.211
worker.worker2.port=8009
worker.worker2.lbfactor=1
#worker.worker2.connection_pool_timeout=600
#worker.worker2.socket_keepalive=1
#worker.worker2.socket_timeout=60
worker.worker2.socket_timeout=0
# Define preferred failover node for worker2
worker.worker2.redirect=worker1
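
As a rough sketch, the timeouts Rainer mentions could be added to both workers like this; the millisecond values below are illustrative assumptions, not tested recommendations (see the timeouts how-to linked above):

# connect_timeout/prepost_timeout: ping the backend to detect dead connections
worker.worker1.connect_timeout=10000
worker.worker1.prepost_timeout=10000
worker.worker2.connect_timeout=10000
worker.worker2.prepost_timeout=10000
# reply_timeout only if the longest legitimate request time is known, e.g.:
# worker.worker1.reply_timeout=60000
# worker.worker2.reply_timeout=60000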



Method T (Traffic) only makes sense if you are bandwidth-limited in the network and thus want to balance with respect to bytes transferred.

# Set properties for the balancer which uses worker1 and worker2
worker.cbnbalancer.type=lb
worker.cbnbalancer.method=T
worker.cbnbalancer.balance_workers=worker2,worker1
# Enable sticky-sessions (aka session affinity)
worker.cbnbalancer.sticky_session=1
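
If bandwidth is not the bottleneck here, a minimal sketch of the alternative Rainer implies is to balance by request count (Request is also the default if the method line is removed entirely):

# balance by number of requests instead of transferred bytes
worker.cbnbalancer.method=R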

# Define a 'jkstatus' worker using status
worker.jkstatus.type=status
# Add the jkstatus mount point



Maybe a little simpler if you omit the trailing '*', thus mapping only the exact URL /jkmanager/.

JkMount /jkmanager/* jkstatus
# Enable the JK manager access from localhost only
<!-- Location /jkmanager/>



You don't need the next line, because you already defined this mount.

 JkMount jkstatus
 Order deny,allow
 Deny from all
#  Allow from 127.0.0.1
 Allow from all
</Location -->
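
For reference, a minimal sketch following Rainer's two remarks (exact-URL mount, no extra JkMount inside the Location) with access restricted to localhost might look like this; it assumes the same /jkmanager/ path used above:

JkMount /jkmanager/ jkstatus
<Location /jkmanager/>
  Order deny,allow
  Deny from all
  Allow from 127.0.0.1
</Location>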

This is part of one server.xml



connectionTimeout="600000" would be a good fit for the 600 seconds in your jk configuration.

2000 threads is a lot. Are you sure that your OS can create that many threads for one JVM (memory issues are possible)? If you only allow 700 parallel requests in Apache, you don't need more than 700 (+1?) threads in the AJP connector.

<Engine name="Catalina" defaultHost="localhost" jvmRoute="worker1" >

<Connector port="8009" minProcessors="5" maxThreads="2000" minSpareThreads="100" maxSpareThreads="150" maxProcessors="0"
protocol="AJP/1.3" connectionTimeout="0"/>

This is the second Tomcat
<Engine name="Catalina" defaultHost="localhost" jvmRoute="worker2" >

<Connector port="8009" minProcessors="5" maxThreads="2000" minSpareThreads="100" maxSpareThreads="150" maxProcessors="0"
protocol="AJP/1.3" connectionTimeout="0"/>

This is part of the Apache conf



Which MPM, prefork?

# number of server processes to start
# http://httpd.apache.org/docs/2.2/mod/mpm_common.html#startservers
StartServers         5
# minimum number of server processes which are kept spare
# http://httpd.apache.org/docs/2.2/mod/prefork.html#minspareservers
MinSpareServers      5
# maximum number of server processes which are kept spare
# http://httpd.apache.org/docs/2.2/mod/prefork.html#maxspareservers
MaxSpareServers     10
# highest possible MaxClients setting for the lifetime of the Apache process
# http://httpd.apache.org/docs/2.2/mod/mpm_common.html#serverlimit
ServerLimit        700
# maximum number of server processes allowed to start
# http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxclients
MaxClients         700
# maximum number of requests a server process serves
# http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxrequestsperchild
MaxRequestsPerChild  10000


We are trying to test this with 600 clients; when we reach 200, everything gets stuck.

any ideas?

Thanks in advance for your help

JulioC.



How do clients relate to parallel requests?
What's the throughput before it gets stuck?
What does a Java Thread Dump of Tomcat tell you?
What is the status in the jk status worker?
Which kind of errors in the jk log do you get?
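
For the thread dump question above: assuming Tomcat was started with catalina.sh and its stdout goes to logs/catalina.out, sending SIGQUIT to the Tomcat JVM writes a full thread dump there, for example:

kill -QUIT <pid-of-the-tomcat-java-process>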

Regards,

Rainer




------------------------------------------------------------------------

JK Status Manager for 172.20.23.12:80
Server Version: Apache/2.2.6 (Linux/SUSE) mod_ssl/2.2.6 OpenSSL/0.9.8a PHP/5.1.2 mod_jk/1.2.26
JK Version:     mod_jk/1.2.26


Type                    lb
Sticky Sessions         True
Force Sticky            False
Sessions Retries        2
LB Method               Sessions
Locking                 Optimistic
Recover                 60
Wait Time               0
Max Replay              0               
Timeouts                0

Good                    2
Degraded                0
Bad/Stopped             0
Busy                    1841
Max Busy                1854
Next Maintenance        22/84


Balance Members

Name            worker1
Type            ajp13
Host            172.20.21.211:8009
ADDR            172.20.21.211:8009
Act             ACT
State           OK
D               0
F               1
M               1
V               4
Acc             11833228
Err             0
CE              0
RE              0
Wr              6.0G
Rd              3.4G
Busy            32
Max             34
Route           worker2
RR              worker1
Cd Rs           0/0


Name            worker2
Type            ajp13
Host            172.20.23.12:8009
ADDR            172.20.23.12:8009
Act             ACT
State           OK
D               0
F               1
M               1
V               4
Acc             8911469
Err             0
CE              0
RE              0
Wr              4.5G
Rd              2.6G
Busy            1730
Max             1748
Route           worker1
RR              worker2
Cd Rs           0/0
------------------------------------------------------------------------




