André Warnier wrote:
Hi.

I know next to nothing about load balancing per se, so this is a very naive question about the data below: does it matter? I mean, I can see that the load appears to be uneven, but the grand total seems to be about 10% of the available CPU time.
I am only showing an example with a few users, but I can drive the load up to 100% CPU utilization. I expect the server to hold up well in any case. At a low user level I expect the load to be even, so that I get the best possible response times.
So do you really care whether one instance is using 4% and another 0.4%, when there is still 90% available in total?

I am also wondering whether what is shown below is not just this phenomenon: whenever the load balancer has to decide which back-end to pass a request to, I suppose it first checks which ones are already busy.
No, it should round-robin, no matter what the current load is on each of the app servers. That is the algorithm I am using.


Since in this case the total load is light and most back-ends are always free, it might just take the first one on the list, which then ends up more used than the others. No?
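As a side note on the method: in mod_proxy_balancer the selection is controlled by the lbmethod parameter, and the default, byrequests, does weighted request counting rather than checking which back-ends are currently busy. A minimal sketch of where that knob sits, reusing the balancer name from the configuration quoted below but otherwise purely illustrative:

ProxyPass / balancer://cluster/ lbmethod=byrequests stickysession=JSESSIONID nofailover=Off
# or, if this httpd build supports it, pick the member with the fewest active requests:
# ProxyPass / balancer://cluster/ lbmethod=bybusyness stickysession=JSESSIONID nofailover=Off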

André


fernando castano wrote:
Hi all,

I'm new to Apache. I am experiencing a problem with the Apache load balancer. I configured the load balancer across 10 app servers (GlassFish domains), but when I look at how the cookies (and load) are distributed, I see a very uneven distribution. Here is my proxy configuration:


[EMAIL PROTECTED] more proxy_cluster.conf
# configuration for clustering more than one glassfish
ProxyPass / balancer://cluster/  stickysession=JSESSIONID nofailover=Off
ProxyPassReverse / http://kenstgapp01:8080
ProxyPassReverse / http://kenstgapp01:8280
ProxyPassReverse / http://kenstgapp01:8380
ProxyPassReverse / http://kenstgapp01:8480
ProxyPassReverse / http://kenstgapp01:8580
ProxyPassReverse / http://kenstgapp01:8780
ProxyPassReverse / http://kenstgapp01:8880
ProxyPassReverse / http://kenstgapp01:8980
ProxyPassReverse / http://kenstgapp01:9080
ProxyPassReverse / http://kenstgapp01:9180
<Proxy balancer://cluster/>
BalancerMember http://kenstgapp01:8080 route=kenstgapp01_8080 loadfactor=1
BalancerMember http://kenstgapp01:8280 route=kenstgapp01_8280 loadfactor=1
BalancerMember http://kenstgapp01:8380 route=kenstgapp01_8380 loadfactor=1
BalancerMember http://kenstgapp01:8480 route=kenstgapp01_8480 loadfactor=1
BalancerMember http://kenstgapp01:8580 route=kenstgapp01_8580 loadfactor=1
BalancerMember http://kenstgapp01:8780 route=kenstgapp01_8780 loadfactor=1
BalancerMember http://kenstgapp01:8880 route=kenstgapp01_8880 loadfactor=1
BalancerMember http://kenstgapp01:8980 route=kenstgapp01_8980 loadfactor=1
BalancerMember http://kenstgapp01:9080 route=kenstgapp01_9080 loadfactor=1
BalancerMember http://kenstgapp01:9180 route=kenstgapp01_9180 loadfactor=1
</Proxy>
[EMAIL PROTECTED]
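One way to watch how requests are actually being distributed is the balancer-manager status page that ships with mod_proxy_balancer; it lists each BalancerMember with its election count. A sketch only, not part of the configuration above; the /balancer-manager path and the 127.0.0.1 restriction are placeholders to be adapted:

<Location /balancer-manager>
SetHandler balancer-manager
Order Deny,Allow
Deny from all
Allow from 127.0.0.1
</Location>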

And here is how the load gets distributed across 10 JMeter clients. As you can see, only 7 of the JVMs get any work, and among those the amount of work is very uneven (the second-to-last column in each row is the % of CPU used by the process). The domains are exactly the same. I've checked the cookie distribution and it reflects the load distribution (uneven). If I increase the number of clients I eventually get work on all JVMs (still uneven), which at least proves that every JVM can be reached through the Apache load balancer. I am generating the load with JMeter. Any hints on what I am doing wrong and how to fix it?


 PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
1388 root     3338M 3257M sleep    0    0   9:08:45 6.5% java/89
1414 root     3332M 3253M cpu28    0    0   7:32:01 4.2% java/92
1417 root     3333M 3253M cpu9     0    0   7:14:39 2.3% java/96
1424 root     3332M 3254M cpu12    0    0   7:03:12 2.2% java/89
1420 root     3332M 3254M cpu6     0    0   7:35:40 2.1% java/89
1411 root     3333M 3253M cpu29    0    0   7:31:31 1.9% java/87
3461 webservd   40M   32M sleep    0    0   0:00:03 0.3% httpd/1
3460 webservd   36M   26M sleep    0    0   0:00:03 0.3% httpd/1
3462 webservd   36M   26M sleep    0    0   0:00:03 0.3% httpd/1
3457 webservd   32M   27M cpu24    0    0   0:00:02 0.3% httpd/1
1423 root     3333M 3256M sleep    0    0   7:00:01 0.2% java/88
3348 webservd   40M   32M sleep    0    0   0:00:04 0.2% httpd/1
 995 root     3536K 3072K sleep  100    -   0:00:46 0.1% cpustat/33
1360 webservd   43M   35M sleep    0    0   0:00:14 0.1% httpd/1
1337 webservd   43M   35M sleep    0    0   0:00:13 0.1% httpd/1
3559 webservd   13M   11M cpu20    0    0   0:00:00 0.1% hgwebdir.cgi/1
 883 root     3848K 3832K cpu25    0    0   0:00:13 0.1% prstat/1
1011 webservd   43M   36M sleep    0    0   0:00:15 0.1% httpd/1
  77 webservd 9016K 7832K sleep    0    0   0:16:18 0.1% memcached/1
Total: 166 processes, 1525 lwps, load averages: 10.00, 10.20, 10.03
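A quick way to check whether stickiness is engaging at all is to look at the JSESSIONID cookie itself: as far as I understand mod_proxy_balancer, with stickysession=JSESSIONID a follow-up request is only pinned when the cookie value ends in .<route> matching one of the route= names above, which the back-end has to append. A sketch, with host and path as placeholders for wherever the proxy actually listens:

# first request, no cookie yet: the balancer is free to pick any member
curl -s -o /dev/null -c cookies.txt -D - http://localhost/ | grep -i Set-Cookie
# the JSESSIONID value should end in something like .kenstgapp01_8280
grep JSESSIONID cookies.txt
# replaying the cookie should keep hitting the same BalancerMember
curl -s -o /dev/null -b cookies.txt http://localhost/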

TIA,
fdo

---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: [EMAIL PROTECTED]
  "   from the digest: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


