FW: LUA and doing things

2018-09-24 Thread Franks Andy (IT Technical Architecture Manager)
Sorry to be a nag, but does anyone have any ideas on this? Or is the recommended 
approach just to regularly parse log files (which seems a bit of a hacky solution)?
Thanks!


From: Franks Andy (IT Technical Architecture Manager) 
[mailto:andy.fra...@sath.nhs.uk]
Sent: 21 September 2018 13:20
To: haproxy@formilux.org
Subject: LUA and doing things

Hi all,
  Hopefully just a really quick question: I would like to use Lua so that, when 
a connection uses a specific backend service, it does something (like writing an 
entry to a log file, for example). I realise the example here possibly has 
locking issues etc., but I'm not too worried about that at this point.
Lua seems, with my basic knowledge, to expect to do something to the traffic - 
for example I have this:

frontend test_84
  bind 0.0.0.0:84
  mode http
  default_backend bk_test_84

backend bk_test_84
  mode http
  stick on src table connections_test_84
  server localhost 127.0.0.1:80

I have a working lua script to do something like core.Alert("hello world").
The thing I would like to do is run this script without any effect on traffic - 
if I try to use 'http-request' or 'stick on' or similar keywords which can use 
Lua scripts, they want me to program in some action that decides what criteria 
to stick on or what to do with that http-request. I just want something to 
"fire", do nothing but run the Lua script, and carry on. Can I do it?
Please forgive my noobiness.

Thanks
Andy
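
For reference, a minimal sketch of one way to do this with a Lua action that 
only performs a side effect and leaves the request untouched (assuming HAProxy 
1.6 or later; the file path and action name are illustrative):

-- /etc/haproxy/hello.lua
core.register_action("hello", { "http-req" }, function(txn)
  -- Put whatever side effect you need here; returning without touching
  -- txn lets the request continue as normal.
  core.Alert("hello world")
end)

global
  lua-load /etc/haproxy/hello.lua

backend bk_test_84
  mode http
  stick on src table connections_test_84
  # Fires the Lua action on every request, then processing carries on.
  http-request lua.hello
  server localhost 127.0.0.1:80

The http-request rule takes no routing decision, so traffic is unaffected; the 
same action can also be registered for "tcp-req" if the backend is not in http 
mode.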


LUA and doing things

2018-09-21 Thread Franks Andy (IT Technical Architecture Manager)
Hi all,
  Hopefully just a really quick question: I would like to use Lua so that, when 
a connection uses a specific backend service, it does something (like writing an 
entry to a log file, for example). I realise the example here possibly has 
locking issues etc., but I'm not too worried about that at this point.
Lua seems, with my basic knowledge, to expect to do something to the traffic - 
for example I have this:

frontend test_84
  bind 0.0.0.0:84
  mode http
  default_backend bk_test_84

backend bk_test_84
  mode http
  stick on src table connections_test_84
  server localhost 127.0.0.1:80

I have a working lua script to do something like core.Alert("hello world").
The thing I would like to do is run this script without any effect on traffic - 
if I try to use 'http-request' or 'stick on' or similar keywords which can use 
Lua scripts, they want me to program in some action that decides what criteria 
to stick on or what to do with that http-request. I just want something to 
"fire", do nothing but run the Lua script, and carry on. Can I do it?
Please forgive my noobiness.

Thanks
Andy


Cookies, load balancing, stick tables.

2018-03-28 Thread Franks Andy (IT Technical Architecture Manager)
Hi all,
  Hopefully an easy one, but I can't really find the solution.
We've come up with a control system for haproxy where we can manually clear 
stick table entries from a GUI. We're also using a cookie to pin the server in 
a backend, as we're expecting to deal with clients behind a NAT device.

It's the customer's (just internal IT in another dept) request that they should 
be able to close down a stick table entry and have the client not be able to go 
to that stick-table-selected server AT ALL, even when presenting a cookie.
It seems to me that HAProxy is designed to allow these cookie-selected server 
connections irrespective of the stick table entries, so there are three ways to 
continue as I see it:


1)  Have the application remove the separate cookie we insert when the 
application gets logged off or times out (timeout happens at 15 minutes of app 
idle time).

2)  We get HAProxy to control the expiry time of the cookie we send over, 
and refresh that expiry each time a transaction happens.

3)  Live with the imbalance of clients from NATted source ip addresses and 
ditch the cookie insertion.

We would all prefer #2, since the devs don't want to spend time redeveloping, 
and HAProxy can seemingly do just about anything! #3 would work, but removing 
entries from the stick table during testing or certain maintenance may well 
remove more than just the intended target.
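
For what it's worth, a minimal sketch of how #2 might look, assuming HAProxy's 
cookie "maxidle"/"maxlife" options fit the requirement (backend and server 
details are illustrative):

backend bk_app
  mode http
  # Cookies inserted by HAProxy carry a date; maxidle makes them ignored
  # after 15 minutes of inactivity, maxlife caps their total lifetime.
  cookie SERVERID insert indirect nocache maxidle 15m maxlife 8h
  server app1 10.0.0.1:443 cookie 1 check ssl
  server app2 10.0.0.2:443 cookie 2 check ssl

Whether the idle timer is refreshed exactly the way the requirement needs would 
have to be checked against the configuration manual for the version in use.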

Any ideas?
Thanks
Andy


RE: Logging actual fetched URL after request is re-written

2018-03-28 Thread Franks Andy (IT Technical Architecture Manager)
Thanks Chad,
  I found that setting the path in the frontend (instead of the backend) broke 
the selection of the backend based on the path via ACL, presumably because the 
rewrite runs before the ACL is processed; bit of a catch-22.
It's possible I misunderstand an implied concept of ordering in the config, but 
the acl line was before we altered the path, which is where I'd imagine it 
should be for the ACL to be evaluated first. HAProxy seems to want any backend 
selection stuff later in the frontend config too, I think, so no go there - I 
couldn't select the backend before rewriting the path.
I also couldn't get the 'http-request capture path len 32' bit to work without 
using capture.req.hdr(idx) in the log display instead of just %[path] or path in 
there - it said it needed access to the HTTP headers, which it didn't have, or 
something similar.

I found that I can record the rewritten URL though - just for reference, I ended 
up realising from Cyril's example that I was using the wrong scope: setting a 
variable in the backend against the "req" scope didn't work, but the "txn" scope 
did record what was needed in the logs.


frontend ft_web_ssl
..
  acl audioserve path_beg -i /audioserve
  log-format %ci:%cp\ [id=%ID]\ [%t]\ %f\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ {%hrl}\ {%hsl}\ %{+Q}r\ %{+Q}[var(txn.filteredurl)]
  use_backend bk_audioserve_3000 if audioserve
..

backend bk_audioserve_3000
  mode http
  option httplog
  # Issue with double slashes on certain URLs - need to raise with the
  # audioserve coder - https://github.com/izderadicka/audioserve
  # Also remove the /audioserve/ virtual directory as this app works from
  # the root as a node.js webapp.
  http-request set-var(txn.filteredurl) capture.req.uri,regsub(/audioserve[/]?,/,g),regsub(//,/,g)
  http-request set-path %[var(txn.filteredurl)]
  server www.andyjfranks.uk 127.0.0.1:3000

Thanks chaps!
Andy


From: Chad Lavoie [mailto:clav...@haproxy.com]
Sent: 27 March 2018 18:04
To: haproxy@formilux.org
Cc: Franks Andy (IT Technical Architecture Manager)
Subject: Re: Logging actual fetched URL after request is re-written


Greetings,

Sorry, pressed wrong button so didn't include on CC.


On 03/27/2018 01:03 PM, Chad Lavoie wrote:

Greetings,

On 03/27/2018 12:49 PM, Franks Andy (IT Technical Architecture Manager) wrote:
Hi all,
  Logging with HTTP as standard, the %{+Q}r log variable records the requested 
URL in the logs. I'd like to also record the URL that's actually fetched after 
an http-request set-path directive is applied (for debugging purposes). It's 
linked to an application that provides next to no debugging, and tcpdump isn't 
much help either - having it in the haproxy logs would be really useful.
Can I do this or am I thinking too much outside the box? I tried setting a 
dynamic variable and then using it in the frontend log-format, but it didn't 
seem to record anything even though the variable was populated.

You should be able to add "http-request capture path len 32" at the end of a 
frontend to capture the path after all the modifications.
Variables should work too, though without knowing exactly what your variable 
rules looked like I can't guess as to why it didn't capture anything.
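
For reference, a minimal sketch of the capture-slot variant, which also works 
when the rewrite happens in a backend (assuming HAProxy 1.6 or later; names are 
illustrative):

frontend ft_web
  bind :80
  mode http
  option httplog
  # Declares capture slot 0; captured values show up in the {%hrl}/%hr
  # part of the log line.
  declare capture request len 64

backend bk_app
  mode http
  http-request set-path /rewritten%[path]
  # Records the path as seen after the rewrite into capture slot 0.
  http-request capture path id 0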

- Chad

Thanks





Logging actual fetched URL after request is re-written

2018-03-27 Thread Franks Andy (IT Technical Architecture Manager)
Hi all,
  Logging with HTTP as standard, the %{+Q}r log variable records the requested 
URL in the logs. I'd like to also record the URL that's actually fetched after 
an http-request set-path directive is applied (for debugging purposes). It's 
linked to an application that provides next to no debugging, and tcpdump isn't 
much help either - having it in the haproxy logs would be really useful.
Can I do this or am I thinking too much outside the box? I tried setting a 
dynamic variable and then using it in the frontend log-format, but it didn't 
seem to record anything even though the variable was populated.
Thanks



RE: Peer tables don't synch on clear

2018-02-13 Thread Franks Andy (IT Technical Architecture Manager)
Thanks for the update,
  Looks like I need to clear from both nodes simultaneously then, or use the 
option to shut down connections on return of the non-backup server(s).
Thanks again
Andy

-Original Message-
From: Frederic Lecaille [mailto:flecai...@haproxy.com] 
Sent: 13 February 2018 07:35
To: Franks Andy (IT Technical Architecture Manager); 'haproxy@formilux.org'
Subject: Re: Peer tables don't synch on clear

On 02/12/2018 04:28 PM, Franks Andy (IT Technical Architecture Manager) 
wrote:
> Hi Fred,

Hi Franks,

Please bottom post when you reply.

>Thanks for the reply.
> I have two peers synchronising (we use keepalived over the two to control 
> which is live).
> 
> HAProxy config:
> 
> peers lb_replication
>peer server1 10.128.176.141:1024
>peer server2 10.128.176.142:1024
> 
> backend sourceaddr
>   stick-table type ip size 10240k expire 30m peers lb_replication
> 
> frontend ft_web_ssl
>   bind 0.0.0.0:443 name https ssl crt /etc/haproxy/certs/main.pem
>   mode http
>   option httplog
> 
>  acl is_from_outside src 192.168.110.0/24
>  acl is_empty_path path /
> acl is_webmail hdr(host) -i webmail
> acl is_webmail_fqdn hdr(host) -i webmail.domain
> 
> redirect location /owa/ code 302 if is_webmail is_empty_path ! 
> is_from_outside
> redirect location /owa/ code 302 if is_webmail_fqdn is_empty_path ! 
> is_from_outside
> default_backend bk_web_ssl
> 
> backend bk_web_ssl
>   mode http
>   option httplog
>   cookie SERVERID insert nocache indirect
>   stick on src table sourceaddr
>   server server1 10.128.176.150:443 check ssl
>   server server2 10.51.0.150:443 check ssl backup
> 
> It's fine for new connections - it records the correct server1/server2 
> information. It's hard to demonstrate, but I can see when I use haproxyctl to 
> clear an entry :
> 
> Haproxyctl clear table sourceaddr key 

HAProxy stick-tables are synchronized between peers, but only to create or 
update entries. Deletions are not synchronized.

The stick-table synchronization is performed thanks to the peers protocol 
(see doc/peers* files). There is nothing in this protocol which 
synchronizes deletions.

So a clear issued with haproxyctl on one node is not propagated to its peer.

The stick-table entries are cleared when they expire (exp == 0) and when 
there is no more usage of these entries (use == 0). As the expiry values 
are synchronized, the stick-tables are supposed to be purged at almost the 
same time.
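
For reference, a minimal sketch of clearing the same entry on both peers over 
the stats socket (socket paths are illustrative; the key matches the example 
entry quoted below):

echo "clear table sourceaddr key 217.40.203.34" | socat stdio /var/run/haproxy-node1.sock
echo "clear table sourceaddr key 217.40.203.34" | socat stdio /var/run/haproxy-node2.sock

Each command has to be run on its own node, since the admin socket is local to 
each HAProxy instance.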

> .. it doesn't clear the secondary node entry. When that entry for the client 
> re-presents the expiry time on the secondary updates but the entry never 
> clears.
> 
> I can't really include pictures on these emails, but the tables are kind of 
> standard:
> 
> e.g.
> 
> 0x7fa8b247a4f4: key=217.40.203.34 use=0 exp=1574957 server_id=1
> 
> Thanks
> Andy
> 
> -Original Message-
> From: Frederic Lecaille [mailto:flecai...@haproxy.com]
> Sent: 12 February 2018 12:56
> To: Franks Andy (IT Technical Architecture Manager); 'haproxy@formilux.org'
> Subject: Re: Peer tables don't synch on clear
> 
> On 02/08/2018 11:22 AM, Franks Andy (IT Technical Architecture Manager)
> wrote:
>> Hi all,
> 
> Hello Franks,
> 
>>     Haproxy 1.6.13
>>
>>     I've checked the documentation again but can't see an option for this.
>>
>> We sometimes clear backup path server use for individual connections and
>> whilst the peers synchronisation works for new connections, it doesn't
>> clear on the secondary peer node we're using.
>>
>> Is this by design or an option I'm not seeing?
> 
> Please give us more information about your configuration. If possible,
> also provide us with the information of stick-table entries concerned
> with this issue (see "show table" CLI command).
> 
> Do not forget to obfuscate the critical data.
> 
> Regards,
> 
> Fred.
> 
> 
> 



RE: Peer tables don't synch on clear

2018-02-12 Thread Franks Andy (IT Technical Architecture Manager)
Hi Fred,
  Thanks for the reply.
I have two peers synchronising (we use keepalived over the two to control which 
is live).

HAProxy config:

peers lb_replication
  peer server1 10.128.176.141:1024
  peer server2 10.128.176.142:1024

backend sourceaddr
  stick-table type ip size 10240k expire 30m peers lb_replication

frontend ft_web_ssl
  bind 0.0.0.0:443 name https ssl crt /etc/haproxy/certs/main.pem
  mode http
  option httplog

  acl is_from_outside src 192.168.110.0/24
  acl is_empty_path path /
  acl is_webmail hdr(host) -i webmail
  acl is_webmail_fqdn hdr(host) -i webmail.domain

  redirect location /owa/ code 302 if is_webmail is_empty_path ! is_from_outside
  redirect location /owa/ code 302 if is_webmail_fqdn is_empty_path ! is_from_outside
  default_backend bk_web_ssl

backend bk_web_ssl
  mode http
  option httplog
  cookie SERVERID insert nocache indirect
  stick on src table sourceaddr
  server server1 10.128.176.150:443 check ssl
  server server2 10.51.0.150:443 check ssl backup

It's fine for new connections - it records the correct server1/server2 
information. It's hard to demonstrate, but I can see that when I use haproxyctl 
to clear an entry:

haproxyctl clear table sourceaddr key 

.. it doesn't clear the entry on the secondary node. When that client 
re-presents, the expiry time on the secondary updates but the entry never clears.

I can't really include pictures on these emails, but the tables are kind of 
standard:

e.g. 

0x7fa8b247a4f4: key=217.40.203.34 use=0 exp=1574957 server_id=1

Thanks
Andy

-Original Message-
From: Frederic Lecaille [mailto:flecai...@haproxy.com] 
Sent: 12 February 2018 12:56
To: Franks Andy (IT Technical Architecture Manager); 'haproxy@formilux.org'
Subject: Re: Peer tables don't synch on clear

On 02/08/2018 11:22 AM, Franks Andy (IT Technical Architecture Manager) 
wrote:
> Hi all,

Hello Franks,

>    Haproxy 1.6.13
> 
>    I've checked the documentation again but can't see an option for this.
> 
> We sometimes clear backup path server use for individual connections and 
> whilst the peers synchronisation works for new connections, it doesn't 
> clear on the secondary peer node we're using.
> 
> Is this by design or an option I'm not seeing?

Please give us more information about your configuration. If possible, 
also provide us with the information of stick-table entries concerned 
with this issue (see "show table" CLI command).

Do not forget to obfuscate the critical data.

Regards,

Fred.





Peer tables don't synch on clear

2018-02-08 Thread Franks Andy (IT Technical Architecture Manager)
Hi all,
  Haproxy 1.6.13
  I've checked the documentation again but can't see an option for this.
We sometimes clear backup path server use for individual connections and whilst 
the peers synchronisation works for new connections, it doesn't clear on the 
secondary peer node we're using.
Is this by design or an option I'm not seeing?
Thanks
Andy


Quick question re errorloc urls

2017-11-03 Thread Franks Andy (IT Technical Architecture Manager)
Hi all,
  We have a test haproxy instance that is using a Let's Encrypt certificate, and 
is in a DMZ zone. We have internal network servers delivering an application via 
https using an internal CA certificate, and all works fine - the client 
connects, sees the Let's Encrypt cert, and since the load balancer OS has the 
internal CA in its trusted root certs, the verification works there too.

The query I have concerns what happens when both backend servers fail and the 
client is redirected, via the errorloc directive, to an internal https website 
that delivers a "sorry, the service isn't available" page:

Here's the config:

frontend ft_web_ssl
  bind 0.0.0.0:443 name https ssl crt /usr/local/bin/dehydrated/certs/portal.fqdn/concat.pem
  mode http
  option httplog
  option forwardfor
  tcp-request connection track-sc0 src table connections
  default_backend bk_portal_ssl

backend bk_portal_ssl
  mode http
  option httplog
  option httpchk GET /Login/Heartbeat HTTP/1.0\r\nHost:\ portal.sath.nhs.uk
  http-check expect rstatus 200
  balance roundrobin
  stick on src table connections
  cookie SERVERID insert nocache indirect
  default-server inter 2s rise 10 fall 20 on-marked-down shutdown-sessions
  errorloc 503 https://intranet/clinicalportal/clinicalportalholding.htm
  server RSH-CP-IIS1 192.168.176.175:443 cookie 1 check ssl
  server RSH-CP-IIS2 192.168.176.176:443 cookie 2 check ssl

First of all, the errorloc "redirection" from a 503 works fine, but since this 
intranet page is served using an internal CA certificate, for some reason the 
client doesn't see the Let's Encrypt certificate, rather the one from the 
intranet website itself, and the client browser therefore warns if it doesn't 
have the internal CA root cert installed as trusted.

I would have thought that the client should get the Let's Encrypt frontend 
certificate in this case? How come it doesn't, and is there any way of fixing 
this? I'd rather keep ssl between the DMZ and intranet zones if possible; 
obviously using plain http for the errorloc 503 is one way around it, but I 
would rather not do that.
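
For reference, errorloc answers the 503 with a redirect, so the browser then 
connects straight to the intranet host and is shown that host's certificate. A 
minimal sketch of an alternative that keeps the response on the frontend's 
Let's Encrypt connection, assuming serving a locally stored holding page is 
acceptable (the file path is illustrative):

backend bk_portal_ssl
  ..
  # Served by HAProxy itself over the existing frontend TLS session.
  # The file must contain a complete raw HTTP response, headers included.
  errorfile 503 /etc/haproxy/errors/503-holding.http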

Thanks
Andy


FW: https status codes

2017-07-26 Thread Franks Andy (IT Technical Architecture Manager)


-Original Message-
From: Franks Andy (IT Technical Architecture Manager) 
Sent: 26 July 2017 13:52
To: 'Aleksandar Lazic'
Subject: RE: https status codes

Thanks Aleksandar.
I'd imagine that 
option httpchk GET /Login/Heartbeat HTTP/1.1\r\nHost:\ rsh-cp-iis1
presents the same rsh-cp-iis1 host name to both the iis1 and iis2 servers? It 
seems to work like that with the way I got it working, i.e. option httpchk GET 
https://rsh-cp-iis1/Login/Heartbeat, but I would need the rsh-cp-iis1 "name" to 
be presented to that server, and iis2 to the iis2 server, and so on - it could 
eventually be a list of quite a few backends.

I'll have a look at the resolver you suggested though..
Thanks again
Andy

-Original Message-
From: Aleksandar Lazic [mailto:al-hapr...@none.at] 
Sent: 26 July 2017 12:00
To: Franks Andy (IT Technical Architecture Manager)
Cc: haproxy@formilux.org
Subject: Re: https status codes

Hi Andy,

Franks Andy (IT Technical Architecture Manager) wrote on 26.07.2017:

> Hi all,
>
> HAProxy 1.7.6
>  
>   I have a hopefully easy question to answer - I'm trying to do server 
> checks against 2x IIS nodes which require sending of the destination 
> host name (virtual hosts) before delivering content. I'm trying to 
> work out how to send the backend  server name with the check request. 
> At the moment the IIS server isn't seeing the name, rather an IP address as 
> far as I can tell, and responding with a 404.
>  
> This is the config
>  
>backend bk_web_ssl
>   mode http
>   option httplog
>   option httpchk GET https://rsh-cp-iis1/Login/Heartbeat

As described in the doc you just need to add the host header.

http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-option%20httpchk

option httpchk GET /Login/Heartbeat HTTP/1.1\r\nHost:\ rsh-cp-iis1


>   http-check expect rstatus 200
>   balance roundrobin
>   stick on src table connections
>   cookie SERVERID insert nocache indirect
>   server RSH-CP-IIS1 192.168.176.175:443 cookie 1 check ssl
>   server RSH-CP-IIS2 192.168.176.176:443 cookie 2 check ssl
>  
>  
> I can sort of get it to work on one of the two by including that 
> servers name in the option httpchk line as seen:
>  
>   option httpchk GET https://rsh-cp-iis1/Login/Heartbeat
>  
> .. but would rather just do option httpchk GET /Login/Heartbeat
>  
> ..And something like 
>   server RSH-CP-IIS1 RSH-CP-IIS1:443 cookie 1 check ssl
>   server RSH-CP-IIS2 RSH-CP-IIS2:443 cookie 2 check ssl

When you want to use names you will need to add a resolver in 1.7.

http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#5.3
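
For reference, a minimal sketch of a resolvers section combined with 
hostname-based server lines (assuming HAProxy 1.7; the nameserver address and 
host names are illustrative). Note that the Host header set by "option httpchk" 
is still a single value for the whole backend:

resolvers internal_dns
  nameserver dns1 10.0.0.53:53
  resolve_retries 3
  timeout retry 1s
  hold valid 10s

backend bk_web_ssl
  mode http
  option httpchk GET /Login/Heartbeat HTTP/1.1\r\nHost:\ rsh-cp-iis1
  http-check expect rstatus 200
  # Server addresses are resolved through the resolvers section above.
  server RSH-CP-IIS1 rsh-cp-iis1:443 cookie 1 check ssl resolvers internal_dns
  server RSH-CP-IIS2 rsh-cp-iis2:443 cookie 2 check ssl resolvers internal_dns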
  
> Is there some keyword I'm missing somewhere or a better way of doing this?
>  
> Thanks
> Andy

--
Best Regards
Aleks




https status codes

2017-07-26 Thread Franks Andy (IT Technical Architecture Manager)
Hi all,

HAProxy 1.7.6

  I have a hopefully easy question to answer - I'm trying to do server checks 
against 2x IIS nodes which require sending of the destination host name 
(virtual hosts) before delivering content. I'm trying to work out how to send 
the backend server name with the check request. At the moment the IIS server
isn't seeing the name, rather an IP address as far as I can tell, and 
responding with a 404.

This is the config

backend bk_web_ssl
  mode http
  option httplog
  option httpchk GET https://rsh-cp-iis1/Login/Heartbeat
  http-check expect rstatus 200
  balance roundrobin
  stick on src table connections
  cookie SERVERID insert nocache indirect
  server RSH-CP-IIS1 192.168.176.175:443 cookie 1 check ssl
  server RSH-CP-IIS2 192.168.176.176:443 cookie 2 check ssl


I can sort of get it to work on one of the two by including that server's name 
in the option httpchk line, as seen here:

  option httpchk GET https://rsh-cp-iis1/Login/Heartbeat

.. but would rather just do option httpchk GET /Login/Heartbeat

..And something like
  server RSH-CP-IIS1 RSH-CP-IIS1:443 cookie 1 check ssl
  server RSH-CP-IIS2 RSH-CP-IIS2:443 cookie 2 check ssl

Is there some keyword I'm missing somewhere or a better way of doing this?

Thanks
Andy


Quick (hopefully) question about clearing stick table entry

2017-05-10 Thread Franks Andy (IT Technical Architecture Manager)
Hi all,
  Is there a way to clear a stick table entry (using socat, obviously) by 
referring to the individual 'reference' id given at the beginning of the entry, 
e.g. "0x7faef417d3ec"?
Looking at the manual, it seems the clearing function is based on the key (IP in 
my case) or a data field - server id etc. I could use the key, but I'm not sure 
it will always be unique - I may not always use "stick on src".
Maybe I'm confused :) and the IP key IS the best.
Thanks
Andy
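
For reference, a minimal sketch of the key- and data-based forms on the stats 
socket (socket path, table name and values are illustrative):

echo "show table connections" | socat stdio /var/run/haproxy.sock
echo "clear table connections key 192.0.2.10" | socat stdio /var/run/haproxy.sock
echo "clear table connections data.server_id eq 2" | socat stdio /var/run/haproxy.sock

The CLI only addresses entries by key or by a data field; the 0x... value shown 
by "show table" is the entry's internal reference and is not accepted as a 
selector.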


RE: Choosing servers based on IP address

2015-06-05 Thread Franks Andy (IT Technical Architecture Manager)
Ah, that's very useful information. Thank you for taking time to draw
out the diagram and explain so well, really appreciate it.

Haproxy is pretty new to me, so I didn't realise you could do this directing to
a backend from a frontend acl.

I've already done comprehensive tests with cookies, stick tables and so
on, just missed that fairly obvious routing option.

Forgive my ignorance!
Thanks again
andy

-Original Message-
From: Holger Just [mailto:hapr...@meine-er.de] 
Sent: 04 June 2015 21:29
To: Franks Andy (IT Technical Architecture Manager)
Cc: HAProxy
Subject: Re: Choosing servers based on IP address

Hi Andy,

Please always CC the mailing list so that others can help you too and
can learn from the discussion.

Franks Andy (IT Technical Architecture Manager) wrote:
 Hi Holger,
   Sorry, I will elaborate a bit more!
 We are going to implement Microsoft exchange server 2010 (sp3) over 
 two AD sites. At the moment we have two servers, one at each site.
 With a two site AD implementation with out-of-the-box settings, even 
 if the two sites are connected via a decent link, clients from site A 
 are not permitted to use the interface to the database (the CAS) at 
 site B to get to the database at site A, unless the whole site is
down.
 I would like to have 2 load balancing solutions - one at each site 
 with a primary connection to the server at same site, but then a 
 failover if that server goes down.
 That's all fine, but it would be ideal if we had a load balancing 
 solution that could take connections from site A and route them to the 
 server at site B in normal situations too, with some logic that said 
 "If client is from IP x.x.x.x, then always use server B" rather than 
 A/B depending on the hard-coded weight.
 It would open up lots more DR recovery potential for a multiple site 
 like this. Thinking about it, I can't really understand why it's not 
 done more - redirecting based on where something is coming from.. You 
 could redirect DMZ traffic one way and ordinary another without 
 complicated routing.
 Am I missing a trick?
 Thanks
 Andy

If I understood you right, you have two sites, each with an Exchange
server and some clients. You normally want the clients on Site A to only
connect to EXCH-A (the exchange server at Site A). However, if that server is
down, you want the clients on Site A to connect to the exchange server
at Site B instead.


SITE A|SITE B
--+
  |
Client-1A ---,|   ,--- Client-2A
  \   |  /
Client-1B -- HAPROXY -+ HAPROXY -- Client-2B
  /   \\  | //   \
Client-1C ---'   EXCH-A   |  EXCH-B   `--- Client-2C
  |

This is easily possible with a backend section where one server is
designated as a backup server, which will thus only be used if all
non-backup servers are down:

backend SMTP-A
  server exch-a 10.1.0.1:25 check
  server exch-b 10.2.0.1:25 check backup

With this config, the primary server (exch-a) is used for all
connections. If it is down, the backup server exch-b is used until
exch-a is up again.

Now, in order to route clients from Site B to their own exchange, even
if they arrive on the HAproxy from Site A, you can define an additional
backend with flipped roles:

backend SMTP-B
  server exch-a 10.1.0.1:25 check backup
  server exch-b 10.2.0.1:25 check

you can then route requests in the frontend to the appropriate backend
based on the source IP:

frontend smtp
  bind :25

  acl from-site-a src 10.1.0.0/16
  acl from-site-b src 10.2.0.0/16

  use_backend SMTP-A if from-site-a
  use_backend SMTP-B if from-site-b
  default_backend SMTP-A

I hope, this is clear. Please read the configuration manual regarding
additional server options which can affect stickiness and handling of
existing sessions on failover:

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2

Regards,
Holger



Choosing servers based on IP address

2015-06-02 Thread Franks Andy (IT Technical Architecture Manager)
Hi all,
  Quick question - can anyone think of a way to change a server's weight
based on some criteria, for example source IP address? It would be so
useful when dealing with a common service that has two distinct sites,
and rules in place that stop access to resources from the wrong site,
like Exchange (where you can't access your mailbox from the wrong
site-based CAS server).
I found a patch that does dynamic server weighting at a preset time for
all clients, but not a per-client weighting scheme.
If I can't do this, could I do it with LVS does anybody know?
Thanks
Andy


FW: Choosing servers based on IP address

2015-06-02 Thread Franks Andy (IT Technical Architecture Manager)
I guess not then! I did see something about the newer version having
some Lua-based choice of server, but it may have nothing to do with what
I'm after.

Not to worry.

Thanks

Andy

 

From: Franks Andy (IT Technical Architecture Manager)
[mailto:andy.fra...@sath.nhs.uk] 
Sent: 02 June 2015 09:12
To: haproxy@formilux.org
Subject: Choosing servers based on IP address

 

Hi all,

  Quick question - can anyone think of a way to change a server's weight
based on some criteria, for example source IP address? It would be so
useful when dealing with a common service that has two distinct sites,
and rules in place that stop access to resources from the wrong site,
like Exchange (where you can't access your mailbox from the wrong
site-based CAS server).

I found a patch that does dynamic server weighting at a preset time for
all clients, but not a per-client weighting scheme.

If I can't do this, could I do it with LVS does anybody know?

Thanks

Andy