Takayuki Kaneko wrote:
> Thank you for pointing that out!
> I'll try this setting next time. Actually, I patched mod_jk to output
> every worker's lb_value on every request. In my analysis, the
> situation was the following.
> (Of course the lb_value numbers aren't real.)
> 
> * before test
>  tomcat1 lb_value=0, tomcat2 lb_value=0
> * run test and concentrated login requests, but they were distributed
> evenly
>  tomcat1 lb_value=50, tomcat2 lb_value=50
> * made difference **1

OK, what do you mean by "made difference **1"? What did the test do
during that time? Is it clear why the difference happened, e.g. were
there huge differences in requests per session?

>  tomcat1 lb_value=300, tomcat2 lb_value=200
> * concentrated login requests and all clients were distributed to tomcat2
>  tomcat1 lb_value=300, tomcat2 lb_value=300
> * made bigger difference
>  tomcat1 lb_value=300, tomcat2 lb_value=800
> * repeat **1 with swapping tomcat1 and tomcat2

Depending on the answer to the question above: in most real-life
applications users don't send hundreds of requests per session per
minute, so even a couple of busy sessions might only send a few dozen
requests more in a minute. mod_jk divides each lb_value by 2 once a
minute, so differences that happened in the past become less and less
important.

If you do a synthetic test that hammers one session very fast, and soon
afterwards you make a lot of logins, then the session distribution will
in fact be uneven.
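To make the aging concrete, here is a minimal C sketch of the idea; it
is not the actual mod_jk code, and the struct, field names and interval
handling are just illustrative:

#include <stdio.h>

/* Sketch of the per-worker value aging described above: once per
 * maintenance interval every worker's accumulated lb_value is halved,
 * so load differences from the past lose weight over time.
 * (Illustrative only, not the mod_jk implementation.) */
struct worker {
    const char *name;
    unsigned long lb_value;   /* accumulated load, e.g. request count */
};

static void age_workers(struct worker *w, int n)
{
    for (int i = 0; i < n; i++)
        w[i].lb_value /= 2;   /* divide by 2 once per interval */
}

int main(void)
{
    struct worker w[] = { { "tomcat1", 300 }, { "tomcat2", 800 } };

    /* after a few maintenance runs the absolute difference shrinks:
     * 300/800 -> 150/400 -> 75/200 -> 37/100 ... */
    for (int run = 0; run < 3; run++) {
        age_workers(w, 2);
        printf("%s=%lu %s=%lu\n", w[0].name, w[0].lb_value,
               w[1].name, w[1].lb_value);
    }
    return 0;
}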

> ---
> I had another idea: the offset value should be shared among the Apache
> processes with the Busyness method. What do you think?

The offset is used to increase the chance of every worker getting some
request, so that you can detect failures even if the load is not very
high. The most commonly observed case is users doing simple tests by
sending a couple of requests and then trying to understand the load
balancer's decisions. But this case is somewhat artificial and not
really relevant for load balancing. Load balancing without load is not
supposed to work great.
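For what it's worth, the effect of the offset can be sketched roughly
like this in C; the names (pick_worker, usable) and the structure are
hypothetical and not the mod_jk internals:

/* Sketch of the tie-breaking offset described above: when several
 * workers have the same lowest lb_value, start the scan at a rotating
 * offset so each of them gets picked in turn even under very low load.
 * Illustrative only. */
struct worker {
    unsigned long lb_value;
    int usable;               /* worker is up and not in error state */
};

static unsigned int offset;   /* per Apache process, see below */

static int pick_worker(const struct worker *w, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        int idx = (int)((i + offset) % (unsigned int)n);
        if (!w[idx].usable)
            continue;
        if (best < 0 || w[idx].lb_value < w[best].lb_value)
            best = idx;       /* ties go to the first worker scanned */
    }
    offset++;                 /* rotate the starting point for next time */
    return best;              /* -1 if no worker is usable */
}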

With the busyness method, low load situations happen more frequently,
especially since people might use it for apps where it is not that
adequate. For typical apps with busyness, nearly all the time all
lb_values will be zero (with a few equal to 1 where a request is
currently running). In my opinion busyness is best if parallelism is
your limiting resource, e.g. long running downloads. But again: if the
parallelism is very low, i.e. low load, then you shouldn't really care
about evenly distributed requests. If you choose busyness, your metric
is parallelism and not accumulated requests; there is no mix.
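Roughly, with busyness the decision looks like the following sketch
(again just an illustration with made-up names, not the mod_jk source):
the value tracks in-flight requests, so it drops back to zero as soon
as a request finishes.

/* Sketch of the Busyness method described above: the per-worker value
 * is the number of requests currently running on that worker,
 * incremented when a request is forwarded and decremented when it
 * finishes, so the metric is concurrency, not accumulated requests. */
struct worker {
    unsigned long busy;       /* requests currently in flight */
};

static int pick_least_busy(const struct worker *w, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (w[i].busy < w[best].busy)
            best = i;
    return best;
}

/* around each forwarded request: */
static void forward(struct worker *w, int n)
{
    int i = pick_least_busy(w, n);
    w[i].busy++;              /* request starts on worker i */
    /* ... proxy the request to worker i ... */
    w[i].busy--;              /* request finished, value drops back */
}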

The algorithm would be closer to ideal if we shared the offset, but
there is a performance penalty for a shared offset. At the moment we
keep more data shared than necessary. Sharing really is necessary for
the lb_value, but a lot of the other data could be copied out of the
shared memory, because it only changes via reconfiguration (jkstatus
etc.). I think we will have more local copies in the future, and in my
opinion the benefit of a shared offset is not enough to justify the
likely performance penalty.
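To illustrate the trade-off, a rough C sketch of the difference between
a per-process offset and a truly shared one; this is not how mod_jk
organizes its shared memory, just the general shape of the cost:

#include <stdatomic.h>

/* cheap: each Apache child process keeps its own rotation offset */
static unsigned int local_offset;

static unsigned int next_local(void)
{
    return local_offset++;                /* no cross-process cost */
}

/* costly: imagine this counter living in the shared memory segment
 * that all child processes map; every update then needs an atomic
 * (or locked) read-modify-write and causes cross-process cache-line
 * contention on every balancing decision */
static _Atomic unsigned int shared_offset;

static unsigned int next_shared(void)
{
    return atomic_fetch_add(&shared_offset, 1u);
}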

> 
> Regards,
> 
> -Takayuki


Regards,

Rainer
