On 05/30/2017 11:56 AM, Willy Tarreau wrote:
> On Tue, May 30, 2017 at 11:04:35AM +0200, Pavlos Parissis wrote:
>> On 05/29/2017 02:58 PM, John Dison wrote:
>>> Hello,
>>>
>>> in ROADMAP I see:
>>> - spare servers : servers which are used in LB only when a minimum farm
>>> weight threshold is not satisfied anymore. Useful for inter-site LB with
>>> local pref by default.
>>>
>>>
>>> Is it possible to raise this item's priority so it gets done for 1.8, please?
>>> It looks like it should not require major code refactoring, just another LB
>>> scheme.
>>>
>>> What I want to achieve is the ability to route requests to a "local" pool
>>> until it reaches some pre-defined maximum load, and route the extra
>>> requests to a "remote" pool of servers.
>>>
>>> Thanks in advance.
>>>
>>
>>
>> +1 as I also find it very useful. But I am afraid it is too late for 1.8.
> 
> I'd love to have it as well for the same reasons. I think by now it
> shouldn't be too complicated to implement anymore, but all the usual
> suspects are busy on more important devs. I'm willing to take a look
> at it before 1.8 is released if we're in time with everything planned,
> but not more. However if someone wants to give it a try and doesn't
> need too much code review (which is very time consuming), I think this
> could get merged if the impact on existing code remains low (otherwise
> postponed to 1.9-dev).
> 
> In the meantime it's quite possible to achieve something more or less
> similar using two backends, one with the local servers, one with all
> servers, and to only use the second backend when the first one is full.
> It's not exactly the same, but can sometimes provide comparable results.
> 
> Willy
> 

True. I use the following to achieve it; it also avoids flipping users between
data centers:

# Data center availability logic.
# Based on the destination IP we select the pool.
# NOTE: The destination IP is the public IP of a site, and each data center
# uses a different IP address. So, if we see the IP address of dc1
# arriving in dc2, we know that dc1 is broken.
http-request set-header X-Pool %[str(www.foo.bar)]%[dst,map_ip(/etc/haproxy/dst_ip_dc.map,env(DATACENTER))]
use_backend %[hdr(X-Pool)] if { hdr(X-Pool),nbsrv ge 1 }
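For illustration, the map file simply pairs each site's public IP with the
data-center suffix that gets appended to the backend name; the addresses
below are made up:

```
# /etc/haproxy/dst_ip_dc.map (illustrative addresses)
# <public destination IP>   <dc suffix>
192.0.2.10       dc1
198.51.100.10    dc2
203.0.113.10     dc3
```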

# Check for the availability of the app in a data center.
# NOTE: Two ACLs with the same name produce a logical OR.
acl www.foo.bardc1_down nbsrv(www.foo.bardc1) lt 1
acl www.foo.bardc1_down queue(www.foo.bardc1) ge 1
acl www.foo.bardc2_down nbsrv(www.foo.bardc2) lt 1
acl www.foo.bardc2_down queue(www.foo.bardc2) ge 1
acl www.foo.bardc3_down nbsrv(www.foo.bardc3) lt 1
acl www.foo.bardc3_down queue(www.foo.bardc3) ge 1

# We end up here if the selected pool of a data center is down.
# We don't want to use the all_dc pool, as it would flip users between data
# centers, so we balance traffic across the two remaining data centers
# using a hash of the client IP. Unfortunately, we will check again the
# availability of the data center which we already know is down. I should
# figure out a way to dynamically determine the two remaining data centers,
# so that if dc1 is down I only check dc2 and dc3.

http-request set-var(req.selected_dc_backup) src,djb2,mod(2)

# Balance if www.foo.bardc1 is down
use_backend www.foo.bardc2 if www.foo.bardc1_down !www.foo.bardc2_down { var(req.selected_dc_backup) eq 0 }
use_backend www.foo.bardc3 if www.foo.bardc1_down !www.foo.bardc3_down { var(req.selected_dc_backup) eq 1 }

# Balance if www.foo.bardc2 is down
use_backend www.foo.bardc1 if www.foo.bardc2_down !www.foo.bardc1_down { var(req.selected_dc_backup) eq 0 }
use_backend www.foo.bardc3 if www.foo.bardc2_down !www.foo.bardc3_down { var(req.selected_dc_backup) eq 1 }

# Balance if www.foo.bardc3 is down
use_backend www.foo.bardc1 if www.foo.bardc3_down !www.foo.bardc1_down { var(req.selected_dc_backup) eq 0 }
use_backend www.foo.bardc2 if www.foo.bardc3_down !www.foo.bardc2_down { var(req.selected_dc_backup) eq 1 }
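To make the hashing step concrete, here is a rough Python sketch of what
src,djb2,mod(2) computes, assuming HAProxy's djb2 converter is the classic
Bernstein hash over the binary form of the address (the helper names are mine):

```python
import socket

def djb2(data: bytes) -> int:
    # Classic Bernstein hash: h = h * 33 + byte, kept to 32 bits.
    h = 5381
    for b in data:
        h = ((h << 5) + h + b) & 0xFFFFFFFF
    return h

def backup_dc_bucket(client_ip: str) -> int:
    # Rough equivalent of 'src,djb2,mod(2)': hash the client address
    # and fold it into bucket 0 or 1. The same client always gets the
    # same bucket, so users are not flipped between data centers.
    return djb2(socket.inet_aton(client_ip)) % 2

print(backup_dc_bucket("192.0.2.1"))
```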

# If two data centers are down then, for simplicity, just use the all_dc pool.
default_backend www.foo.barall_dc
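For completeness, the simpler two-backend overflow idea Willy mentions could
look roughly like this; the names and numbers are of course made up, and
"full" is detected via the backend's free connection slots:

```
backend bk_local
    server local1 192.0.2.1:80 check maxconn 100

backend bk_all
    server local1 192.0.2.1:80 check maxconn 100
    server remote1 198.51.100.1:80 check

frontend fe_www
    bind :80
    # Prefer the local pool while it still has free connection slots,
    # otherwise overflow to the pool that also contains remote servers.
    use_backend bk_local if { connslots(bk_local) gt 0 }
    default_backend bk_all
```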

Cheers,
Pavlos
