On 04/05/2017 01:16 PM, Olivier Houchard wrote:
> On Thu, May 04, 2017 at 10:03:07AM +0000, Pierre Cheynier wrote:
>> Hi Olivier,
>>
>> Many thanks for that! As you know, we are very interested in this topic.
>> We'll test your patches soon for sure.
>>
>> Pierre
> 
> Hi Pierre :)
> 
> Thanks ! I'm very interested in knowing how well it works for you.
> Maybe we can talk about that around a beer sometime.
> 
> Olivier
> 

Hi,

I finally managed to find time to perform some testing.

Firstly, let me explain the environment.

The server and the traffic generator run on different machines (bare metal) with
the same spec; network interrupts are pinned to all CPUs and the irqbalance
daemon is disabled. Both nodes have 10GbE network interfaces.
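
(For anyone reproducing the setup: a minimal sketch of the kind of interrupt
pinning I mean, assuming the 10GbE interface is named eth0; the IRQ numbers and
the CPU mask will differ per machine.)

systemctl stop irqbalance
# spread the NIC queue interrupts across CPUs 0-11 (mask fff)
for irq in $(awk -F: '/eth0/ {gsub(/ /,"",$1); print $1}' /proc/interrupts); do
    echo fff > /proc/irq/$irq/smp_affinity
done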

I compared HAPEE with HAProxy using the following versions:

### HAProxy
The git SHA isn't mentioned in the output because I created the tarball
with:

git archive --format=tar --prefix="haproxy-1.8.0/" HEAD | gzip -9 > haproxy-1.8.0.tar.gz

as I had to build the RPM from a tarball, but I used the latest haproxy
at commit f494977bc1a361c26f8cc0516366ef2662ac9502.
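
(Side note: one way to keep the commit id recoverable from such a tarball is to
append it as an extra file before compressing; the VERSION.git name below is
only an illustration, not something the haproxy build consumes.)

git archive --format=tar --prefix="haproxy-1.8.0/" HEAD > haproxy-1.8.0.tar
git rev-parse HEAD > VERSION.git
tar --append -f haproxy-1.8.0.tar --transform 's,^,haproxy-1.8.0/,' VERSION.git
gzip -9 haproxy-1.8.0.tar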

/usr/sbin/haproxy -vv
HA-Proxy version 1.8-dev1 2017/04/03
Copyright 2000-2017 Willy Tarreau <wi...@haproxy.org>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -DMAX_HOSTNAME_LEN=42
  OPTIONS = USE_LINUX_TPROXY=1 USE_CPU_AFFINITY=1 USE_REGPARM=1 USE_OPENSSL=1 USE_PCRE=1 USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with network namespace support.
Built without compression support (neither USE_ZLIB nor USE_SLZ are set).
Compression algorithms supported : identity("identity")
Encrypted password support via crypt(3): yes
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
        [SPOE] spoe
        [COMP] compression
        [TRACE] trace

### HAPEE version
/opt/hapee-1.7/sbin/hapee-lb -vv
HA-Proxy version 1.7.0-1.0.0-163.180 2017/04/10
Copyright 2000-2016 Willy Tarreau <wi...@haproxy.org>

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -DMAX_SESS_STKCTR=10 -DSTKTABLE_EXTRA_DATA_TYPES=10
  OPTIONS = USE_MODULES=1 USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 USE_SLZ=1 USE_CPU_AFFINITY=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE= USE_PCRE_JIT=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes
Built with Lua version : Lua 5.3.3
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Built with network namespace support

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
        [COMP] compression
        [TRACE] trace
        [SPOE] spoe


The configuration is the same for both and is attached. As you can see, I use
nbproc > 1 and pin each process to a different CPU. We have 12 real CPUs, as
Intel Hyper-Threading is disabled, but we only use 10 of them for haproxy;
the remaining two are left for other daemons.

I experimented with the wrk2 and httpress stress tools and decided to use wrk2 for
these tests. I didn't want to use inject and the other tools provided by haproxy,
as I believe using different clients gives a better chance of spotting problems.

In my tests wrk2 reports far more read errors with HAProxy (3890) than with
HAPEE (36). I don't know the exact meaning of these read errors, and they could
be some stupidity in the wrk2 code. I say this because two years ago we spent
four weeks stress testing HAPEE and found out that all the open source HTTP
stress tools suck, and some of the errors they report are client errors rather
than server errors. But in this case wrk2 consistently reported more read
errors with HAProxy.
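
(One crude way to check whether those read errors are RSTs coming from the load
balancer during reloads, rather than a wrk2 artifact, is to run something like
the following on the client while the reload loop is active; 10.6.213.3 is the
VIP used in the tests below.)

tcpdump -ni eth0 -c 100 'src host 10.6.213.3 and tcp[tcpflags] & tcp-rst != 0'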

Below are the reports; I have run the same tests 3-4 times.
Another thing I would like to test is possible performance degradation, but
that requires building a proper stress environment and I don't have the time
to do it right now.

### HAPEE without reload

wrk2 -c 12000 -d 20s -t 12 -R 80000 http://10.6.213.3/
Running 20s test @ http://10.6.213.3/
  12 threads and 12000 connections
  Thread calibration: mean lat.: 1.966ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.012ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.096ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.435ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.985ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.506ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.047ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.058ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.980ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.927ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.957ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.195ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.28ms    2.94ms  89.47ms   95.98%
    Req/Sec     7.06k     2.48k   78.67k    88.73%
  1403057 requests in 19.99s, 305.08MB read
Requests/sec:  70187.86
Transfer/sec:     15.26MB

### HAPEE with reload
while (true); do systemctl reload hapee-1.7-lb.service;sleep 1;done


wrk2 -c 12000 -d 20s -t 12 -R 80000 http://10.6.213.3/
Running 20s test @ http://10.6.213.3/
  12 threads and 12000 connections
  Thread calibration: mean lat.: 2.734ms, rate sampling interval: 11ms
  Thread calibration: mean lat.: 2.124ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.034ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.210ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.025ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.165ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.055ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.112ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 3.358ms, rate sampling interval: 16ms
  Thread calibration: mean lat.: 2.211ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.157ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.217ms, rate sampling interval: 10ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.34ms    1.96ms  31.70ms   93.16%
    Req/Sec     7.06k     2.15k   28.10k    85.06%
  1402923 requests in 19.98s, 308.61MB read
  Socket errors: connect 0, read 36, write 0, timeout 0
Requests/sec:  70204.08
Transfer/sec:     15.44MB

### HAProxy without reload

wrk2 -c 12000 -d 20s -t 12 -R 80000 http://10.6.213.3/
Running 20s test @ http://10.6.213.3/
  12 threads and 12000 connections
  Thread calibration: mean lat.: 2.050ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.958ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.070ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.079ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 3.192ms, rate sampling interval: 15ms
  Thread calibration: mean lat.: 2.011ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.103ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 1.974ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.059ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.478ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.032ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 3.027ms, rate sampling interval: 14ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.31ms    1.95ms  33.50ms   92.14%
    Req/Sec     7.05k     2.51k   31.30k    86.44%
  1401915 requests in 19.98s, 304.83MB read
Requests/sec:  70161.32
Transfer/sec:     15.26MB

### HAProxy with reload
while (true); do systemctl reload haproxy.service;sleep 1;done


wrk2 -c 12000 -d 20s -t 12 -R 80000 http://10.6.213.3/
Running 20s test @ http://10.6.213.3/
  12 threads and 12000 connections
  Thread calibration: mean lat.: 2.135ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 3.418ms, rate sampling interval: 16ms
  Thread calibration: mean lat.: 2.166ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.283ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.057ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.164ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 3.200ms, rate sampling interval: 14ms
  Thread calibration: mean lat.: 2.232ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.206ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.212ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 2.154ms, rate sampling interval: 10ms
  Thread calibration: mean lat.: 3.431ms, rate sampling interval: 16ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.69ms    4.09ms 880.64ms   93.43%
    Req/Sec     7.06k     2.50k   27.00k    86.45%
  1402222 requests in 19.99s, 308.69MB read
  Socket errors: connect 0, read 3890, write 1, timeout 0
Requests/sec:  70147.32
Transfer/sec:     15.44MB

Cheers,
Pavlos

global
    nbproc  10
    stats   socket     /run/lb_engine/process-1.sock user lbengine group lbengine mode 660 level admin process 1
    stats   socket     /run/lb_engine/process-2.sock user lbengine group lbengine mode 660 level admin process 2
    stats   socket     /run/lb_engine/process-3.sock user lbengine group lbengine mode 660 level admin process 3
    stats   socket     /run/lb_engine/process-4.sock user lbengine group lbengine mode 660 level admin process 4
    stats   socket     /run/lb_engine/process-5.sock user lbengine group lbengine mode 660 level admin process 5
    stats   socket     /run/lb_engine/process-6.sock user lbengine group lbengine mode 660 level admin process 6
    stats   socket     /run/lb_engine/process-7.sock user lbengine group lbengine mode 660 level admin process 7
    stats   socket     /run/lb_engine/process-8.sock user lbengine group lbengine mode 660 level admin process 8
    stats   socket     /run/lb_engine/process-9.sock user lbengine group lbengine mode 660 level admin process 9
    stats   socket     /run/lb_engine/process-10.sock user lbengine group lbengine mode 660 level admin process 10
    cpu-map 1 2
    cpu-map 2 3
    cpu-map 3 4
    cpu-map 4 5
    cpu-map 5 6
    cpu-map 6 7
    cpu-map 7 8
    cpu-map 8 9
    cpu-map 9 10
    cpu-map 10 11
    user lbengine
    group lbengine
    chroot /var/empty
    daemon
    log 127.0.0.1 len 4096 local2
    maxconn 500000
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!EDH
    ssl-default-bind-options no-sslv3 no-tls-tickets
    ssl-server-verify none
    stats maxconn 100
    tune.bufsize 49152
    tune.ssl.default-dh-param 1024

defaults
    option     redispatch
    option     prefer-last-server
    log-format {\"lbgroup\":\""${LBGROUP}"\",\"dst_ip\":\"%fi\",\"dst_port\":\"%fp\",\"client_ip\":\"%ci\",\"client_port\":\"%cp\",\"timestamp\":\"%t\",\"frontend_name\":\"%ft\",\"backend_name\":\"%b\",\"server_name\":\"%s\",\"tq\":\"%Tq\",\"ta\":\"%Ta\",\"td\":\"%Td\",\"th\":\"%Th\",\"ti\":\"%Ti\",\"trf\":\"%TR\",\"tw\":\"%Tw\",\"tc\":\"%Tc\",\"tr\":\"%Tr\",\"tt\":\"%Tt\",\"status_code\":\"%ST\",\"bytes_read\":\"%B\",\"termination_state\":\"%tsc\",\"actconn\":\"%ac\",\"feconn\":\"%fc\",\"beconn\":\"%bc\",\"srv_conn\":\"%sc\",\"retries\":\"%rc\",\"srv_queue\":\"%sq\",\"backend_queue\":\"%bq\",\"toptalkers\":\"%[http_first_req]\",\"vhost\":\"%[capture.req.hdr(0),lower]\",\"ssl_ciphers\":\"%sslc\",\"ssl_version\":\"%sslv\",\"http_method\":\"%HM\",\"http_version\":\"%HV\",\"http_uri\":\"%HP\"}

    backlog 65535
    balance roundrobin
    log global
    maxconn 500000
    mode http
    no option dontlognull
    option contstats
    option http-keep-alive
    option tcp-smart-accept
    option tcp-smart-connect
    retries 2
    timeout check 5s
    timeout client 30s
    timeout connect 4s
    timeout http-request 30s
    timeout queue 1m
    timeout server 30s

frontend test.com
    bind 10.6.213.3:80 process 1
    bind 10.6.213.3:80 process 2
    bind 10.6.213.3:80 process 3
    bind 10.6.213.3:80 process 4
    bind 10.6.213.3:80 process 5
    bind 10.6.213.3:80 process 6
    bind 10.6.213.3:80 process 7
    bind 10.6.213.3:80 process 8
    bind 10.6.213.3:80 process 9
    bind 10.6.213.3:80 process 10

    default_backend robot

backend robot
    server server1 server1:80  weight 1 check

frontend test-ipv4.foo.com_https_lhr4
    bind 5.1.1.8:80 process 1
    bind 5.1.1.8:80 process 2
    bind 5.1.1.8:80 process 3
    bind 5.1.1.8:80 process 4
    bind 5.1.1.8:80 process 5
    bind 5.1.1.8:80 process 6
    bind 5.1.1.8:80 process 7
    bind 5.1.1.8:80 process 8
    bind 5.1.1.8:80 process 9
    bind 5.1.1.8:80 process 10
    bind 5.1.1.8:443 process 1 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem crt /etc/ssl/certs/www.foo.com-bundle.pem ssl
    bind 5.1.1.8:443 process 2 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem crt /etc/ssl/certs/www.foo.com-bundle.pem ssl
    bind 5.1.1.8:443 process 3 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem crt /etc/ssl/certs/www.foo.com-bundle.pem ssl
    bind 5.1.1.8:443 process 4 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem crt /etc/ssl/certs/www.foo.com-bundle.pem ssl
    bind 5.1.1.8:443 process 5 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem crt /etc/ssl/certs/www.foo.com-bundle.pem ssl
    bind 5.1.1.8:443 process 6 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem crt /etc/ssl/certs/www.foo.com-bundle.pem ssl
    bind 5.1.1.8:443 process 7 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem crt /etc/ssl/certs/www.foo.com-bundle.pem ssl
    bind 5.1.1.8:443 process 8 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem crt /etc/ssl/certs/www.foo.com-bundle.pem ssl
    bind 5.1.1.8:443 process 9 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem crt /etc/ssl/certs/www.foo.com-bundle.pem ssl
    bind 5.1.1.8:443 process 10 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem crt /etc/ssl/certs/www.foo.com-bundle.pem ssl

    mode http
    capture request header Host len 48
    acl site_dead nbsrv(test-ipv4.foo.com_https_all) lt 0
    monitor-uri   /site_alive
    monitor fail  if site_dead
    http-request add-header X-Header-Order %[req.hdr_names(:)]
    http-request add-header F5SourceIP %[src]
    http-request add-header F5Nodename %H
    http-request add-header F5-Proto https if { ssl_fc }
    http-request add-header F5-Proto http unless { ssl_fc }
    http-request add-header F5CipherName %sslc if { ssl_fc }
    http-request add-header F5CipherVersion %sslv if { ssl_fc }
    http-request add-header F5CipherBits %[ssl_fc_use_keysize] if { ssl_fc }
    http-request add-header F5TrackerID %{+X}Ts%{+X}[rand()]
    http-response set-header X-XSS-Protection "1; mode=block"

    http-request set-var(txn.lb_trace) req.hdr(X-Lb-Trace),lower if { req.hdr(X-Lb-Trace) -m found }
    acl x_lb_debug_on var(txn.lb_trace) -m str yes

    acl x_lb_header res.hdr(X-Lb) -m found
    http-response replace-header  X-Lb (^.*$) DLB,\1 if x_lb_header x_lb_debug_on
    http-response add-header      X-Lb DLB           if !x_lb_header x_lb_debug_on

    acl x_lb_node_header res.hdr(X-Lb-Node) -m found
    http-response replace-header  X-Lb-Node (^.*$) %H,\1 if x_lb_node_header x_lb_debug_on
    http-response add-header      X-Lb-Node %H           if !x_lb_node_header x_lb_debug_on


    default_backend test-ipv4.foo.com_https_all

frontend www-ipv6.foo.com_https_lhr4
    bind 2001:5040:0:f::aaaa:80 process 1
    bind 2001:5040:0:f::aaaa:80 process 2
    bind 2001:5040:0:f::aaaa:80 process 3
    bind 2001:5040:0:f::aaaa:80 process 4
    bind 2001:5040:0:f::aaaa:80 process 5
    bind 2001:5040:0:f::aaaa:80 process 6
    bind 2001:5040:0:f::aaaa:80 process 7
    bind 2001:5040:0:f::aaaa:80 process 8
    bind 2001:5040:0:f::aaaa:80 process 9
    bind 2001:5040:0:f::aaaa:80 process 10
    bind 2001:5040:0:f::aaaa:443 process 1 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem ssl
    bind 2001:5040:0:f::aaaa:443 process 2 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem ssl
    bind 2001:5040:0:f::aaaa:443 process 3 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem ssl
    bind 2001:5040:0:f::aaaa:443 process 4 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem ssl
    bind 2001:5040:0:f::aaaa:443 process 5 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem ssl
    bind 2001:5040:0:f::aaaa:443 process 6 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem ssl
    bind 2001:5040:0:f::aaaa:443 process 7 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem ssl
    bind 2001:5040:0:f::aaaa:443 process 8 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem ssl
    bind 2001:5040:0:f::aaaa:443 process 9 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem ssl
    bind 2001:5040:0:f::aaaa:443 process 10 crt /etc/ssl/certs/wildcard.foo.com-bundle.pem ssl

    mode http
    capture request header Host len 48
    acl site_dead nbsrv(www-ipv6.foo.com_http_all) lt 0
    monitor-uri   /site_alive
    monitor fail  if site_dead
    http-request add-header X-Header-Order %[req.hdr_names(:)]
    http-request add-header F5SourceIP %[src]
    http-request add-header F5Nodename %H
    http-request add-header F5-Proto https if { ssl_fc }
    http-request add-header F5-Proto http unless { ssl_fc }
    http-request add-header F5CipherName %sslc if { ssl_fc }
    http-request add-header F5CipherVersion %sslv if { ssl_fc }
    http-request add-header F5CipherBits %[ssl_fc_use_keysize] if { ssl_fc }
    http-request add-header F5TrackerID %{+X}Ts%{+X}[rand()]
    http-response set-header X-XSS-Protection "1; mode=block"

    http-request set-var(txn.lb_trace) req.hdr(X-Lb-Trace),lower if { req.hdr(X-Lb-Trace) -m found }
    acl x_lb_debug_on var(txn.lb_trace) -m str yes

    acl x_lb_header res.hdr(X-Lb) -m found
    http-response replace-header  X-Lb (^.*$) DLB,\1 if x_lb_header x_lb_debug_on
    http-response add-header      X-Lb DLB           if !x_lb_header x_lb_debug_on

    acl x_lb_node_header res.hdr(X-Lb-Node) -m found
    http-response replace-header  X-Lb-Node (^.*$) %H,\1 if x_lb_node_header x_lb_debug_on
    http-response add-header      X-Lb-Node %H           if !x_lb_node_header x_lb_debug_on

    default_backend www-ipv6.foo.com_http_all

frontend bar.foo.com_gui_tcp_lhr4
    bind 5.1.1.8:8080 process 1
    bind 5.1.1.8:8080 process 2
    bind 5.1.1.8:8080 process 3
    bind 5.1.1.8:8080 process 4
    bind 5.1.1.8:8080 process 5
    bind 5.1.1.8:8080 process 6
    bind 5.1.1.8:8080 process 7
    bind 5.1.1.8:8080 process 8
    bind 5.1.1.8:8080 process 9
    bind 5.1.1.8:8080 process 10

    log-format {\"lbgroup\":\""${LBGROUP}"\",\"dst_ip\":\"%fi\",\"dst_port\":\"%fp\",\"client_ip\":\"%ci\",\"client_port\":\"%cp\",\"timestamp\":\"%t\",\"frontend_name\":\"%ft\",\"backend_name\":\"%b\",\"server_name\":\"%s\",\"tw\":\"%Tw\",\"tc\":\"%Tc\",\"tt\":\"%Tt\",\"bytes_read\":\"%B\",\"termination_state\":\"%tsc\",\"actconn\":\"%ac\",\"feconn\":\"%fc\",\"beconn\":\"%bc\",\"srv_conn\":\"%sc\",\"retries\":\"%rc\",\"srv_queue\":\"%sq\",\"backend_queue\":\"%bq\"}
    mode tcp

    default_backend bar.foo.com_gui_tcp_all

backend bar.foo.com_gui_tcp_all
    mode tcp
    default-server inter 2s fall 2 rise 2
    no option prefer-last-server
    option  tcplog
    retries 1
    timeout  check 10s
    timeout  queue 10m
    timeout  server 10m

    server bar-101foo.com 10.1.2.33:443 weight 1 check
    server bar-102foo.com 10.1.181.38:443 weight 1 check
    server bar-103foo.com 10.1.207.3:443 weight 1 check
    server bar-104foo.com 10.1.213.14:443 weight 1 check
    server bar-105foo.com 10.1.181.25:443 weight 1 check
    server bar-106foo.com 10.1.206.28:443 weight 1 check
    server bar-107foo.com 10.1.210.10:443 weight 1 check
    server bar-108foo.com 10.3.147.32:443 weight 1 check
    server bar-109foo.com 10.1.29.61:443 weight 1 check
    server bar-110foo.com 10.1.29.39:443 weight 1 check
    server bar-111foo.com 10.3.147.22:443 weight 1 check
    server bar-112foo.com 10.3.162.24:443 weight 1 check
    server bar-113foo.com 10.1.29.55:443 weight 1 check
    server bar-114foo.com 10.3.162.11:443 weight 1 check
    server bar-115foo.com 10.1.33.14:443 weight 1 check
    server bar-116foo.com 10.1.145.31:443 weight 1 check
    server bar-117foo.com 10.1.70.8:443 weight 1 check
    server bar-118foo.com 10.1.69.2:443 weight 1 check
    server bar-201.lhr4.prod.foo.com 10.11.11.13:443 weight 1 check
    server bar-202.lhr4.prod.foo.com 10.11.3.25:443 weight 1 check
    server bar-203.lhr4.prod.foo.com 10.11.2.34:443 weight 1 check
    server bar-204.lhr4.prod.foo.com 10.11.193.20:443 weight 1 check
    server bar-205.lhr4.prod.foo.com 10.11.194.15:443 weight 1 check
    server bar-206.lhr4.prod.foo.com 10.11.178.15:443 weight 1 check
    server bar-207.lhr4.prod.foo.com 10.11.11.22:443 weight 1 check
    server bar-208.lhr4.prod.foo.com 10.11.2.29:443 disabled weight 1 check
    server bar-210.lhr4.prod.foo.com 10.11.217.30:443 weight 1 check
    server bar-211.lhr4.prod.foo.com 10.11.14.42:443 weight 1 check
    server bar-212.lhr4.prod.foo.com 10.4.100.68:443 weight 1 check
    server bar-213.lhr4.prod.foo.com 10.11.28.58:443 weight 1 check
    server bar-214.lhr4.prod.foo.com 10.4.94.76:443 weight 1 check
    server bar-215.lhr4.prod.foo.com 10.11.24.9:443 weight 1 check
    server bar-216.lhr4.prod.foo.com 10.11.24.22:443 weight 1 check

backend test-ipv4.foo.com_http_all
    mode http
    default-server inter 5s


backend test-ipv4.foo.com_https_all
    mode http
    default-server inter 5s


backend www-ipv6.foo.com_http_all
    mode http
    default-server inter 5s

    server app1foo.com 10.1.2.1:80 weight 1 check
