Hi
On Sun, 18 Oct 2009 08:26:37 +0200, Willy Tarreau <w...@1wt.eu> wrote:
> Hello,
> 
> On Sat, Oct 17, 2009 at 11:18:24AM +0200, Angelo Höngens wrote:
> > Just read this thread, and I thought I would give my humble opinion
> > on this:
> > 
> > As a hosting provider we use both windows and unix backends, and we
> > use haproxy to balance requests across sites on a per-site backend
> > (with squid in front of haproxy). What I would love to see, is
> > dynamic balancing based on the round-trip time of the health check.
> > 
> > So when a backend is slower to respond, the weight should go down
> > (slowly), so the faster servers would get more requests. Now that's
> > a feature I'd love to see.. And then there would not be anything to
> > configure on the backend (we don't always have control over the
> > backend application)
> 
> Having already seen this on another equipment about 5-6 years ago, I
> can tell you this does not work at all. The reason is simple : the
> health checks should always be fast on a server, and their response
> time almost never tells anything about the server's remaining
> capacity. Some people even use static files as health checks.
> 
> What is needed though is to measure real traffic's response time. The
> difficulty comes from the fact that if you lower the weight too much,
> there is too little traffic to measure a reduced response time, and it
> is important to be able to bound the window in which the weight
> evolves.
For health-check-based load balancing, the health checks would have to be
something like "do some number crunching, then do some database reads",
and fairly long (shorter requests tend to give more random timings). So
you would either have a long interval between checks or, if the checks
run more often, a lot of extra load on the server just from the health
checks.

I think load balancing should be driven by both request time and server
load, but that would need some kind of long-term log analysis of one
frequently used part of the page, for example (a rough sketch in code
follows below):
1. If server load (the simplest measure being loadavg/cores) is below 80%,
increase the weight.
2. If the average request time of http://example.org/index.php is less
than 90% of target_request_time, increase the weight a bit.
3. If the average request time is more than target_request_time * 1.1,
decrease the weight a bit.
4. Every x minutes, if the weight is below 50 add 1, and if it is above 50
subtract 1 (so values won't be "drifting" towards the maximum or 0 over
time).

target_request_time would be either some predefined value or (better) the
average calculated across all nodes + 50%, so you won't end up with every
node's weight skyrocketing or collapsing because of a small load
fluctuation or a brief overload.
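
To make that concrete, here is a minimal sketch of those rules in Python.
It's only an illustration under my own assumptions: the node list, how
loadavg and request times are sampled, and how the resulting weight would
be pushed back to haproxy are all left out, and the names (Node,
adjust_weights, target_request_time, the 1..100 weight range and the step
sizes) are made up for the example.

  # Sketch of the four weight-adjustment rules described above.
  from dataclasses import dataclass

  MIN_WEIGHT, MAX_WEIGHT = 1, 100   # assumed weight range

  @dataclass
  class Node:
      name: str
      weight: int
      loadavg_per_core: float    # loadavg / number of cores (rule 1)
      avg_request_time: float    # measured on a hot URL, e.g. /index.php (rules 2-3)

  def target_request_time(nodes):
      # Target = average request time across all nodes + 50%.
      avg = sum(n.avg_request_time for n in nodes) / len(nodes)
      return avg * 1.5

  def adjust_weights(nodes, step=2):
      target = target_request_time(nodes)
      for n in nodes:
          # Rule 1: server load below 80% -> increase the weight.
          if n.loadavg_per_core < 0.8:
              n.weight += step
          # Rule 2: request time under 90% of the target -> increase a bit.
          if n.avg_request_time < 0.9 * target:
              n.weight += 1
          # Rule 3: request time over 110% of the target -> decrease a bit.
          elif n.avg_request_time > 1.1 * target:
              n.weight -= 1
          n.weight = max(MIN_WEIGHT, min(MAX_WEIGHT, n.weight))

  def drift_toward_middle(nodes):
      # Rule 4: every x minutes nudge each weight toward 50 so values
      # don't drift to the maximum or to 0 over time.
      for n in nodes:
          if n.weight < 50:
              n.weight += 1
          elif n.weight > 50:
              n.weight -= 1

  if __name__ == "__main__":
      nodes = [Node("web1", 50, 0.45, 0.12), Node("web2", 50, 0.95, 0.30)]
      adjust_weights(nodes)
      drift_toward_middle(nodes)
      print([(n.name, n.weight) for n in nodes])

adjust_weights() would run on every sampling interval, and
drift_toward_middle() every x minutes as in rule 4.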

Regards
Mariusz
-- 
Mariusz Gronczewski (XANi) <xani...@gmail.com>
GnuPG: 0xEA8ACE64
http://devrandom.pl
