DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG-RELATED
COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
<http://issues.apache.org/bugzilla/show_bug.cgi?id=44147>.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND
INSERTED IN THE BUG DATABASE.

http://issues.apache.org/bugzilla/show_bug.cgi?id=44147


[EMAIL PROTECTED] changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |RESOLVED
         Resolution|                            |FIXED




------- Additional Comments From [EMAIL PROTECTED]  2007-12-31 11:38 -------
Thank you for reporting the problem. Your patch has been applied in a slightly
modified form and will be released as part of version 1.2.27:

http://svn.apache.org/viewvc?rev=607768&view=rev

Since we released 1.2.26 only a week ago, it might take 2-4 months until 1.2.27
is released.

Please note: the problem you describe is not a serious one. Although the code
was not strictly correct, the problem would be hard to observe in practice. The
algorithm always searches through all available members of a load balancer in
order to find the member with the least load. next_offset is *only* really
useful when one uses the busyness algorithm for balancing (which is not the
default).
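
As a rough illustration only (the identifiers below are hypothetical, not
mod_jk's actual ones), the selection amounts to a linear scan for the member
with the least load:

/* Simplified sketch of the selection loop, not the actual mod_jk code.
 * The balancer scans all of its members and keeps the one with the
 * lowest load value; for the busyness method that value is the number
 * of requests the member is currently processing. */
typedef struct member {
    int busy;       /* current load value, e.g. busyness */
    int available;  /* nonzero if the member may be used */
} member_t;

static member_t *find_least_loaded(member_t *members, int num_members)
{
    member_t *best = NULL;
    for (int i = 0; i < num_members; i++) {
        member_t *m = &members[i];
        if (!m->available)
            continue;
        if (best == NULL || m->busy < best->busy)
            best = m;   /* strictly smaller load wins; on a tie the
                           earlier member is kept */
    }
    return best;
}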

If one uses busyness and the load is low, it is likely that several workers
have the same load value (= busyness), most likely the value 0. Let's assume
for simplicity that all members always have busyness 0. Without next_offset, we
would then always choose the first member of the balancer. That would be
acceptable, because whenever we have to choose a member there would be no load
at all, but it would still be counterintuitive for a load balancer to always
pick the same member. So next_offset rotates the starting point of the search
loop in order to make the decision between members with the same load status a
little more balanced.
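
A minimal sketch of that rotation, reusing the hypothetical member_t from the
sketch above (the real mod_jk function is more involved):

/* Sketch of the next_offset rotation, not the actual mod_jk code.
 * The scan still visits every member, but starts at a rotated index,
 * so members sharing the same lowest load value take turns winning. */
static int next_offset = 0;   /* per-process in mod_jk 1.2.x */

static member_t *find_least_loaded_rotated(member_t *members, int num_members)
{
    member_t *best = NULL;
    if (num_members <= 0)
        return NULL;
    for (int i = 0; i < num_members; i++) {
        int idx = (i + next_offset) % num_members;  /* rotated start */
        member_t *m = &members[idx];
        if (!m->available)
            continue;
        if (best == NULL || m->busy < best->busy)
            best = m;   /* ties go to the first member in rotated
                           order, which changes from call to call */
    }
    next_offset = (next_offset + 1) % num_members;  /* rotate for next call */
    return best;
}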

It would be even more correct to put next_offset into the shared memory data of
the lb, making it shared between all processes. I decided *not* to do that,
because there is a tradeoff between the additional overhead of a shared memory
volatile on the one hand and slightly better balancing for
busyness+prefork+low load on the other.
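
In code terms the choice is between a per-process counter and a field in the
balancer's shared memory record. The sketch below uses hypothetical names and
is not mod_jk's actual shared memory layout:

/* Chosen design: one counter per process. Cheap, but each (prefork)
 * process rotates independently, so with many processes and low load
 * the rotation is less effective. */
static int next_offset = 0;

/* Alternative: keep the counter in the balancer's shared memory record
 * so that all processes rotate through a single sequence. This gives
 * slightly better balancing for busyness+prefork+low load, but every
 * selection then reads and updates a volatile shared memory word. */
typedef struct lb_shared {
    volatile int next_offset;  /* shared across all processes */
    /* ... other shared balancer state ... */
} lb_shared_t;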

-- 
Configure bugmail: http://issues.apache.org/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.
