Re: tune.bufsize and tune.maxrewrites questions

2015-09-17 Thread John Skarbek
Certainly,

```
[~]$ haproxy -vv
HA-Proxy version 1.5.14 2015/07/02
Copyright 2000-2015 Willy Tarreau 

Build options :
  TARGET  = linux26
  CPU = generic
  CC  = gcc
  CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing
  OPTIONS = USE_ZLIB=yes USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.3
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 7.8 2008-09-05
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.
```

And the config:
```
global
  log 127.0.0.1   local0
  log 127.0.0.1   local1 notice
  maxconn 20
  tune.ssl.default-dh-param 1024
  nbproc 20

defaults
  log global
  mode http
  compression algo gzip
  compression type text/html text/plain
  retries 3
  timeout client 400s
  timeout connect 5s
  timeout server 400s
  timeout tunnel 400s
  option abortonclose
  option redispatch
  option tcpka

  option http-keep-alive
  timeout http-keep-alive 15s

  balance leastconn

listen admin
  bind 192.0.2.200:901
  mode http
  stats uri /
  stats enable

frontend main
  option httplog
  capture request header CF-Connecting-IP len 64
  capture request header CF-Ray len 64
  bind 192.0.2.100:80
  bind 192.0.2.100:443 ssl crt /etc/ssl/certs/example.com ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:ECDH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS:!DH no-sslv3
  maxconn 12

  reqidel ^x-forwarded-for:.*
  reqidel ^client-ip:.*
  acl static_asset_url url_beg /static/assets
  use_backend example_s3_static_backend if static_asset_url

  acl some_url url_beg /something
  use_backend some_backend if some_url

  redirect scheme https code 301 if !{ ssl_fc }

  acl prod_is_down nbsrv(main_backend) lt 1
  use_backend status_page if prod_is_down

  default_backend main_backend

backend some_backend
  option forwardfor
  option httplog
  reqirep ([\w:]+\s)(\/[\w\d]+)(\/.*) \1\ \3
  option httpchk GET /healthcheck
  server somenode01 192.0.2.1:8282 weight 10 slowstart 1m maxconn 8192 check
  server somenode02 192.0.2.2:8282 weight 10 slowstart 1m maxconn 8192 check

backend main_backend
  option forwardfor
  option httplog
  fullconn 132000
  http-check expect status 200
  cookie SERVERID insert indirect nocache
  option httpchk GET /healthcheck
  server mainnode01 192.0.2.11:443 weight 10 slowstart 1m maxconn 8192 check check-ssl ssl verify none cookie ID1
  server mainnode02 192.0.2.12:443 weight 10 slowstart 1m maxconn 8192 check check-ssl ssl verify none cookie ID2
  server mainnode03 192.0.2.13:443 weight 10 slowstart 1m maxconn 8192 check check-ssl ssl verify none cookie ID3

backend example_s3_static_backend
  option forwardfor
  option httplog
  reqirep  ^Host:   Host:\ example-static.s3.amazonaws.com
  reqirep ^([^\ :]*)\ (/[^/]+/[^/]+)(.*) \1\ \3
  reqidel ^Authorization:.*
  rspidel ^x-amz-id-2:.*
  rspidel ^x-amz-request-id:.*
  rspidel ^Server:.*
  server aws_s3 example-static.s3-us-west-2.amazonaws.com:443 weight 10 slowstart 1m maxconn 8192 check check-ssl ssl verify required ca-file /etc/ssl/certs/ca-bundle.crt inter 60s

backend status_page
  redirect location http://unavailable.example.com code 307
```

On Thu, Sep 17, 2015 at 12:18 AM, Aleksandar Lazic wrote:

> Hi John.
>
> Am 17-09-2015 07:03, schrieb John Skarbek:
>
>> Good Morning!
>>
>> So recently I went into battle with our CDN provider and our
>> application team over some HTTP 400s coming from somewhere.  At first
>> I never suspected haproxy to be at fault, due to the way I was grokking
>> our logs.  The end result is that I discovered haproxy doesn't log the
>> GET request, but rather only logs a `BADREQ` with a termination state of
>> `PR--`.  Based on the documentation, haproxy isn't going to log a 414
>> in that case, but a 400 instead.  I wonder whether this is due to
>> something being truncated, forcing haproxy to see a malformed request.
>>
>> Digging into the documentation, I had glossed over the fact that the
>> default buffer size isn't 16k, but actually a lower 8192.  That was my
>> fault for reading too quickly.  Reading further, though, I finally have
>> a question about the following statement, under the config item
>> tune.maxrewrite:
>>
>> "...It is genera

tune.bufsize and tune.maxrewrites questions

2015-09-16 Thread John Skarbek
Good Morning!

So recently I went into battle with our CDN provider and our application
team over some HTTP 400s coming from somewhere.  At first I never
suspected haproxy to be at fault, due to the way I was grokking our logs.
The end result is that I discovered haproxy doesn't log the GET request,
but rather only logs a `BADREQ` with a termination state of `PR--`.  Based
on the documentation, haproxy isn't going to log a 414 in that case, but a
400 instead.  I wonder whether this is due to something being truncated,
forcing haproxy to see a malformed request.

Digging into the documentation, I had glossed over the fact that the
default buffer size isn't 16k, but actually a lower 8192.  That was my
fault for reading too quickly.  Reading further, though, I finally have a
question about the following statement, under the config item
tune.maxrewrite:

"...It is generally wise to set it to about 1024. It is automatically
readjusted to half of bufsize if it is larger than that. This means you
don't have to worry about it when changing bufsize"

I do not see anything in the source code that actually supports that
statement.  We plan on mucking around with this setting, starting at 1024.
Perhaps the documentation is simply not clear to me, but if I need the
value larger, the documentation indicates it'll revert to half of
`bufsize`, which is not where I want to be: it forces me to tune `bufsize`
instead of `maxrewrite`.
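For reference, both knobs are set in the global section; a minimal sketch of the kind of tuning under discussion (the values are illustrative, not a recommendation):

```
global
  tune.bufsize    16384
  tune.maxrewrite 1024
```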

Secondly, what would happen if we blow past the maxrewrite reserve?  I did
some quick and absurd testing and was not able to force haproxy to throw
an HTTP 400; instead the request went to a backend server just fine.  But
I worry that the headers may be getting truncated.
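As a rough way to quantify the truncation risk: the space left for the incoming request line and headers is approximately `tune.bufsize - tune.maxrewrite`. A quick shell sketch using the build defaults shown above:

```shell
# With bufsize=16384 and maxrewrite=8192 (the build defaults above),
# roughly this much room remains for the request line and headers:
bufsize=16384
maxrewrite=8192
echo $(( bufsize - maxrewrite ))   # prints 8192
```

Dropping maxrewrite to 1024 would grow that room to 15360 bytes, which is the motivation for tuning it in the first place.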

Thank you much.

-- 
John


Re: How to disable backend servers without health check

2015-07-16 Thread John Skarbek
Krishna,

I've recently had to deal with this as well.  Our solution involves a
couple of aspects.  Firstly, one must configure an admin socket per
process.  In our case we run with 20 processes, so we've got a
configuration that looks similar to this in our global section:

  stats socket /var/run/haproxy_admin1.sock mode 600 level admin process 1
  stats socket /var/run/haproxy_admin2.sock mode 600 level admin process 2
  stats socket /var/run/haproxy_admin3.sock mode 600 level admin process 3
  stats socket /var/run/haproxy_admin4.sock mode 600 level admin process 4

Counting all the way up to 20...
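Rather than hand-writing all 20 lines, they can be generated; a small bash sketch (the socket path and process count mirror the example above):

```shell
# Emit one admin stats socket directive per haproxy process.
for i in $(seq 1 20); do
  printf '  stats socket /var/run/haproxy_admin%d.sock mode 600 level admin process %d\n' "$i" "$i"
done
```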

After that we can do a simple one-liner that disables a single server
across all processes, using bash:

  for i in {1..20}; do echo 'disable server the_backend/the_server' | socat /var/run/haproxy_admin$i.sock stdio; done

This loops through each admin socket and disables 'the_server' from
'the_backend'.

I hope this gets you started in looking for a solution.

I like your route of accomplishing this, though.  With our 20-proc
configuration we've decided to deal with the pain of 20 health checks,
which has caused us some issues, but nothing that's been a show-stopper.

On Thu, Jul 16, 2015 at 5:53 AM, Krishna Kumar (Engineering) <krishna...@flipkart.com> wrote:

> Hi all,
>
> We have a large set of machines running haproxy (1.5.12), and each of
> them have hundreds of backends, many of which are the same across
> systems. nbproc is set to 12 at present for our 48 core systems. We are
> planning a centralized health check, and disable the same in haproxy, to
> avoid each process on each server doing health check for the same
> backend servers.
>
> Is there any way to disable a single backend from the command line, such
> that each haproxy instance finds that this backend is disabled? Using
> socat with the socket only makes the handling process set its status of
> the backend to MAINT, but the others don't get this information.
>
> Appreciate if someone can show if this can be done.
>
> Regards,
> - Krishna Kumar
>



-- 

John T Skarbek | jskar...@rallydev.com

Infrastructure Engineer, Engineering

1101 Haynes Street, Suite 105, Raleigh, NC 27604

720.921.8126 Office


Re: haproxy upgrade strategy for primary/secondary model

2015-06-05 Thread John Skarbek
Amol,

My suggestion would be to leverage keepalived for such an operation, as it
is designed for exactly this type of problem.  If keepalived is configured
appropriately, there should be minimal impact.

Force a failover so that keepalived on the second node takes over the IP
addresses.  Assuming everything is configured the same on the second node,
it should pick up and serve or route traffic appropriately.

For additional assistance with this suggestion, the keepalived mailing
lists may be a better place to ask.
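For reference, a minimal VRRP sketch of the kind of keepalived setup being suggested; the interface name, virtual_router_id, and VIP here are placeholders, and the secondary node would use `state BACKUP` with a lower priority:

```
vrrp_instance VI_1 {
    state MASTER
    interface eth0          # placeholder interface name
    virtual_router_id 51    # must match on both nodes
    priority 101            # secondary node uses e.g. 100
    advert_int 1
    virtual_ipaddress {
        192.0.2.100/24      # the shared service IP (placeholder)
    }
}
```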

On Fri, Jun 5, 2015 at 11:44 AM, Amol  wrote:
> Hi All,
> I want to get an idea from you all about a scenario that I am facing.
> I have 2 haproxy servers as load balancers, primary and secondary. All the
> connections always go to the primary; when the primary fails, I have
> keepalived running so the connections fail over to the secondary.
>
> Now when I am upgrading, I can upgrade the secondary without any issues,
> as that server never has active connections. But my question is how I can
> upgrade the primary without causing any downtime for my users.
>
> I have 2 Apache servers running behind the load balancers.
>
> So far I have tried the following on the primary, but with no luck.
>
> echo "1" > /proc/sys/net/ipv4/ip_forward
> iptables -P FORWARD ACCEPT
> iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT
> --to-destination :80
> iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT
> --to-destination :443
> iptables -t nat -A POSTROUTING -j MASQUERADE
> iptables -t nat -L -v
>
> My website does not get redirected to the secondary even after I do this.
>
> any suggestions?