[ANNOUNCE] haproxy-1.5.5

2014-10-08 Thread Willy Tarreau
Hi,

as promised last week, here's haproxy 1.5.5. Nothing really critical,
just a collection of minor bugs and updates over the last month. People
who have been experiencing segfaults on the config parser will want to
upgrade. Also, some users were bothered by the warning about the stats
directive being used in a multi-process frontend. This is now improved:
the validity checker only emits the warning if there is a way to reach
a random process, but if the frontend relies on single-process "bind"
lines (typically one port per process), then there is no warning. This
makes it much more convenient to declare stats instances! Two people
reported an issue with the http-server-close mode being ignored in
backends. In fact, any HTTP mode was ignored when switching from the
frontend to the backend; this is now fixed. As
recently requested, it is now possible to manipulate headers on status
code 101. There was a report of tcp-check not detecting failures when
there was no rule; this was fixed as well. Ah, and one last thing: the
systemd-wrapper now supports extra signals to be compatible with other
tools (supervisord was reported to work).
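
For example, the per-process stats pattern mentioned above could look
like this (a minimal sketch; ports and process numbers are hypothetical):

    global
        nbproc 4

    listen stats
        mode http
        bind :8081 process 1
        bind :8082 process 2
        bind :8083 process 3
        bind :8084 process 4
        stats enable
        stats uri /stats

Each "bind" line is pinned to exactly one process, so the checker knows
the stats socket cannot land on a random process and stays quiet.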

I find that the code is stabilizing quickly, because the bugs that have
been reported were regressions introduced in the very last versions by
the latest changes needed to support end-to-end keep-alive, improve
multi-process mode and add the check agent. So that's a good indicator
that we're possibly fixing the tail of the bug queue.

Since there's nothing really important here, you can pick it and test it
for some time before deploying it. Those still on 1.4 who are hesitating
to move to 1.5 can start with this one, I think.

I'm appending the full changelog here (quite short) :

- DOC: Address issue where documentation is excluded due to a gitignore rule.
- MEDIUM: Improve signal handling in systemd wrapper.
- BUG/MINOR: config: don't propagate process binding for dynamic use_backend
- MINOR: Also accept SIGHUP/SIGTERM in systemd-wrapper
- DOC: clearly state that the "show sess" output format is not fixed
- MINOR: stats: fix minor typo fix in stats_dump_errors_to_buffer()
- DOC: indicate in the doc that track-sc* can wait if data are missing
- MEDIUM: http: enable header manipulation for 101 responses
- BUG/MEDIUM: config: propagate frontend to backend process binding again.
- MEDIUM: config: properly propagate process binding between proxies
- MEDIUM: config: make the frontends automatically bind to the listeners' processes
- MEDIUM: config: compute the exact bind-process before listener's maxaccept
- MEDIUM: config: only warn if stats are attached to multi-process bind directives
- MEDIUM: config: report it when tcp-request rules are misplaced
- MINOR: config: detect the case where a tcp-request content rule has no inspect-delay
- MEDIUM: systemd-wrapper: support multiple executable versions and names
- BUG/MEDIUM: remove debugging code from systemd-wrapper
- BUG/MEDIUM: http: adjust close mode when switching to backend
- BUG/MINOR: config: don't propagate process binding on fatal errors.
- BUG/MEDIUM: check: rule-less tcp-check must detect connect failures
- BUG/MINOR: tcp-check: report the correct failed step in the status
- DOC: indicate that weight zero is reported as DRAIN

 Usual URLs come below :
  Site index       : http://www.haproxy.org/
  Sources          : http://www.haproxy.org/download/1.5/src/
  Git repository   : http://git.haproxy.org/git/haproxy-1.5.git/
  Git Web browsing : http://git.haproxy.org/?p=haproxy-1.5.git
  Changelog        : http://www.haproxy.org/download/1.5/src/CHANGELOG
  Cyril's HTML doc : http://cbonte.github.com/haproxy-dconv/configuration-1.5.html
 
Willy




Re: HAproxy errorfile and HEAD request

2014-10-08 Thread Willy Tarreau
Hi Nenad,

On Sun, Oct 05, 2014 at 07:33:33PM +0200, Nenad Merdanovic wrote:
> Hello,
> 
> I accidentally noticed that HAproxy doesn't follow the HTTP standard
> when it comes to HEAD requests. Using an 'errorfile' directive will
> return whatever is inside, which often means an HTTP header + a body
> (used to display a nice error page to the end user).
> 
> The standard of course forbids this (https://tools.ietf.org/html/rfc7231):
> 4.3.2. HEAD
>The HEAD method is identical to GET except that the server MUST NOT
>send a message body in the response (i.e., the response terminates at
>the end of the header section).
> 
> 
> The easiest fix would be to just ignore errorfile directive on HEAD 
> requests, but that might cause problems for people who, like me, abuse 
> the errorfile directive to rewrite the return error code (and remove the 
> body in the process). If this is the approach we are willing to take, 
> the patch is attached.
> 
> The other option I see is to add something like 'force-send' to the 
> errorfile directive for people who are sure they want to send the 
> content on HEAD requests.

Sorry, I didn't notice your mail previously. You bring up a good point.
The thing is that the errorfiles are intentionally full HTTP responses,
so that haproxy sends all the bytes on the wire without parsing
anything. But I think that could be improved. In practice, there are
rarely huge headers in such responses, so we can imagine that the code
used to send them looks for the empty line ending the headers on the
fly while sending. Another option would be to change the error message
structs to store the header length, determined when the file is loaded.
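
For reference, an errorfile is a complete raw HTTP response, e.g. (a
typical 503 page; the exact contents are illustrative):

    HTTP/1.0 503 Service Unavailable
    Cache-Control: no-cache
    Connection: close
    Content-Type: text/html

    <html><body><h1>503 Service Unavailable</h1></body></html>

The empty line after the last header is the separator mentioned above;
everything following it is the body that a HEAD response must not carry.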

I don't know which option is best. The sending function is
http_server_error(), and it's the same one used to send redirects, so
as you can guess I'd rather not risk degrading it. But I still think
it can reasonably be done.

However we still have some direct calls to stream_int_retnclose()
with the error message as argument. This is generally on the hard
error path (eg: failed to parse a request), so that's not critical.
For example we have this in the reqdeny rules and the tarpit as well.
We also have it on the 502/504 responses, which is more annoying.

I think that the current state is not critical, in that the only way
a client could be affected is if it uses pipelining, sends multiple
requests at once, and the error message does not contain
"Connection: close", so that the client considers the body as the
start of the response to the second request, and unfortunately this
body parses as a valid HTTP response. There could be other corner
cases as well, but I think that at least for the sake of protocol
compliance, we should add it to the TODO list and try to address it
as well as we can. And anyway, for HTTP/2 we won't be able to proceed
this way anymore!

Best regards,
Willy




connection resets during transfers

2014-10-08 Thread Glenn Elliott
Hi All,

I am in the process of migrating from ultramonkey (LVS & heartbeat) to
haproxy 1.5.4 for our environment. I have been really impressed with
haproxy so far, particularly the SSL offload feature and the Layer 7
flexibility for our JBoss apps.

One of the VIPs that I have moved to haproxy is our Exchange 2013
environment, which is running in TCP mode (expecting approx. 1500
concurrent connections on this VIP). I don't have any application/user
issues yet, but I wanted to get a handle on the haproxy stats page and
particularly the 'resp errors' on the backend servers. The total 'resp
error' count for the backend is 249, but when I hover over the cell it
tells me 'connection resets during transfer: 314 client, 597 server'.
These numbers don't seem to add up?

I assume this counter is cumulative?

As a rule of thumb, what sort of percentage should I be concerned about
when looking at this figure?


(screenshot of the stats page omitted)

My config snippets are:

defaults
log global
mode    http
option  tcplog
option  dontlognull
option  redispatch
retries 3
timeout http-request    15s
timeout queue   30s
timeout connect 5s
timeout client  5m
timeout server  5m
timeout http-keep-alive 1s
timeout check   10s
timeout tarpit  1m
backlog 1
maxconn 2000


#-
# exchange vip
#-
frontend  exchange
bind 192.168.1.172:443
bind 192.168.1.172:25
bind 192.168.1.172:80
bind 192.168.1.172:587
bind 192.168.1.172:995
mode tcp
maxconn 1

default_backend exchange-backend

#-
# exchange backend
#-
backend exchange-backend
mode tcp
option ssl-hello-chk
balance roundrobin
server  exch01 exch01 maxconn 5000 check port 443 inter 15s
server  exch02 exch02 maxconn 5000 check port 443 inter 15s
server  exch03 exch03 maxconn 5000 check port 443 inter 15s
server  exch04 exch04 maxconn 5000 check port 443 inter 15s


Thanks very much for your time!

Rgds,

Glenn Elliott.


[PATCH] BUG/MEDIUM: systemd: set KillMode to 'mixed'

2014-10-08 Thread Apollon Oikonomopoulos
By default systemd will send SIGTERM to all processes in the service's
control group. In our case, this includes the wrapper, the master
process and all worker processes.

Since commit c54bdd2a the wrapper actually catches SIGTERM and survives
to see the master process getting killed by systemd, regards this as an
error, and places the unit in a failed state during "systemctl stop".

Since the wrapper now handles SIGTERM by itself, we switch the kill mode
to 'mixed', which means that systemd will deliver the initial SIGTERM to
the wrapper only, and if the actual haproxy processes don't exit after a
given amount of time (default: 90s), a SIGKILL is sent to all remaining
processes in the control group. See systemd.kill(5) for more
information.

This should also be backported to 1.5.
---
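Usage note: after installing the updated unit file, systemd needs to
re-read it for the new KillMode to take effect:

    systemctl daemon-reload
    systemctl restart haproxy
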
 contrib/systemd/haproxy.service.in | 1 +
 1 file changed, 1 insertion(+)

diff --git a/contrib/systemd/haproxy.service.in b/contrib/systemd/haproxy.service.in
index 1a3d2c0..0bc5420 100644
--- a/contrib/systemd/haproxy.service.in
+++ b/contrib/systemd/haproxy.service.in
@@ -5,6 +5,7 @@ After=network.target
 [Service]
 ExecStart=@SBINDIR@/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
 ExecReload=/bin/kill -USR2 $MAINPID
+KillMode=mixed
 Restart=always
 
 [Install]
-- 
2.1.1




Re: connection resets during transfers

2014-10-08 Thread Baptiste
On Wed, Oct 8, 2014 at 12:51 PM, Glenn Elliott
 wrote:
> [...]



Hi Glenn,

It means either the client or the server purposely closed the
connection (using an RST) during the DATA phase (after the handshake,
since you're in TCP mode).
Have a look in your logs and search for the 'SD' or 'CD' termination
flags to know on which service the problem occurred.

If you want / need to dig further, you may have to improve the
generated log line or split your configuration into one
frontend/backend pair per service.
That way, you'll know on which TCP port (hence which service) those
errors are generated, as in the sketch below.
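
For example, a minimal sketch of such a split (names are hypothetical,
reusing the addresses from your config; repeat for the other ports):

    frontend exchange-https
        bind 192.168.1.172:443
        mode tcp
        default_backend exchange-https-backend

    frontend exchange-smtp
        bind 192.168.1.172:25
        mode tcp
        default_backend exchange-smtp-backend

The logs then carry a distinct frontend name per service.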

Note that you can get some configuration templates for HAProxy and
Exchange 2013 from our appliance documentation:
http://haproxy.com/static/media/uploads/eng/resources/aloha_load_balancer_appnotes_0065_exchange_2013_deployment_guide_en.pdf

Baptiste



RE: connection resets during transfers

2014-10-08 Thread Glenn Elliott
Thanks Baptiste,

The PDF was very helpful. I have split off HTTPS into its own
frontend/backend and can see that this is the cause of a large number
of "CD" client disconnect errors, so I need to investigate this
further. It looks like the majority of the client session closures end
with a "CD" error. The users don't report any application issues,
however (e.g. OWA & Outlook perform fine).

I wonder if it's related to TOE or other offload features on the 10Gig
Emulex NIC we are using (HP 554FLR-SFP+). Alternatively, it might just
be the behaviour of the clients in combination with the
http-keep-alive timeout.
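
One way to test the offload theory is to disable the relevant features
temporarily (a sketch; interface name as below, and these runtime
settings are not persistent across reboots):

    ethtool -K eth0 tso off gso off gro off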



ethtool -k eth0

Features for eth0:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
ntuple-filters: off
receive-hashing: on

Rgds,

Glenn


-----Original Message-----
From: Baptiste [mailto:bed...@gmail.com] 
Sent: Thursday, 9 October 2014 1:21 AM
To: Glenn Elliott
Cc: haproxy@formilux.org
Subject: Re: connection resets during transfers

On Wed, Oct 8, 2014 at 12:51 PM, Glenn Elliott  
wrote:
> [...]

Freezing haproxy traffic with maxconn 0 and keepalive connections

2014-10-08 Thread Ivan Kurnosov
Since `haproxy v1.5.0` it has been possible to temporarily stop
reverse-proxying traffic to a frontend using the

    set maxconn frontend <frontend> 0

command.

I've noticed that if haproxy is configured to maintain keepalive
connections between haproxy and a client, then said connections will
continue to be served, whereas new ones will keep waiting for the
frontend to be "un-paused".

The question is: is it possible to terminate the current keepalive
connections *gracefully*, so that clients are required to establish
new connections?

I've only found `shutdown session` and `shutdown sessions` commands but
they are obviously not graceful at all.
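
For reference, this is how the command reaches haproxy, over the stats
socket (a sketch; the socket path and frontend name are hypothetical,
and it assumes a "stats socket /var/run/haproxy.sock level admin" line
in the global section):

    echo "set maxconn frontend fe_http 0" | socat stdio /var/run/haproxy.sock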

-- 
With best regards, Ivan Kurnosov