Re: Table sticky counters decrementation problem

2021-03-30 Thread Thomas SIMON

Hi Willy,

Just to confirm that sticky counter decrement is okay with your patch on 
version 2.3.8, so no objection to patching 2.3.9 either :)


thomas

Le 30/03/2021 à 15:47, Willy Tarreau a écrit :

On Tue, Mar 30, 2021 at 03:17:34PM +0200, Sander Klein wrote:

On 2021-03-30 15:13, Willy Tarreau wrote:


diff --git a/src/time.c b/src/time.c
index 0cfc9bf3c..fafe3720e 100644
--- a/src/time.c
+++ b/src/time.c
@@ -268,7 +268,7 @@ void tv_update_date(int max_wait, int interrupted)
 old_now_ms = global_now_ms;
 do {
 new_now_ms = old_now_ms;
-   if (tick_is_lt(new_now_ms, now_ms))
+   if (tick_is_lt(new_now_ms, now_ms) || !new_now_ms)
 new_now_ms = now_ms;
 }  while (!_HA_ATOMIC_CAS(&global_now_ms, &old_now_ms,
new_now_ms));

Do I need to apply this on top of the other fixes? Or should this be done on
the vanilla 2.2.11?

It's indeed on top of other fixes like those present in 2.3.8 or queued
in 2.2-maint.

Just let me know if you need some help with the patch or if you need another
one. I've mostly focused on 2.3 for now since 2.3.8 was expected to be
definitely fixed and I wanted to do 2.2.12 today based on it.

Thanks!
Willy


--
Thomas SIMON
Responsable Infrastructures
Neteven




Re: Table sticky counters decrementation problem

2021-03-30 Thread Thomas SIMON

Hi Lukas,

I'm on 2.3.8 yes

root@web12:~# haproxy -vv
HA-Proxy version 2.3.8-1~bpo10+1 2021/03/25 - https://haproxy.org/
Status: stable branch - will stop receiving fixes around Q1 2022.
Known bugs: http://www.haproxy.org/bugs/bugs-2.3.8.html
Running on: Linux 5.4.78-2-pve #1 SMP PVE 5.4.78-2 (Thu, 03 Dec 2020 
14:26:17 +0100) x86_64

Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -g -O2 -fdebug-prefix-map=/build/haproxy-2.3.8=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
-D_FORTIFY_SOURCE=2 -Wall -Wextra -Wdeclaration-after-statement -fwrapv 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type -Wtype-limits 
-Wshift-negative-value -Wshift-overflow=2 -Wduplicated-cond 
-Wnull-dereference
  OPTIONS = USE_PCRE2=1 USE_PCRE2_JIT=1 USE_OPENSSL=1 USE_LUA=1 
USE_ZLIB=1 USE_SYSTEMD=1

  DEBUG   =

Feature list : +EPOLL -KQUEUE +NETFILTER -PCRE -PCRE_JIT +PCRE2 
+PCRE2_JIT +POLL -PRIVATE_CACHE +THREAD -PTHREAD_PSHARED +BACKTRACE 
-STATIC_PCRE -STATIC_PCRE2 +TPROXY +LINUX_TPROXY +LINUX_SPLICE +LIBCRYPT 
+CRYPT_H +GETADDRINFO +OPENSSL +LUA +FUTEX +ACCEPT4 -CLOSEFROM +ZLIB 
-SLZ +CPU_AFFINITY +TFO +NS +DL +RT -DEVICEATLAS -51DEGREES -WURFL 
+SYSTEMD -OBSOLETE_LINKER +PRCTL +THREAD_DUMP -EVPORTS


Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_THREADS=64, default=8).
Built with OpenSSL version : OpenSSL 1.1.1d  10 Sep 2019
Running on OpenSSL version : OpenSSL 1.1.1d  10 Sep 2019
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
Built with Lua version : Lua 5.3.3
Built with network namespace support.
Built with the Prometheus exporter as a service
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND

Built with PCRE2 version : 10.32 2018-09-10
PCRE2 library supports JIT : yes
Encrypted password support via crypt(3): yes
Built with gcc compiler version 8.3.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
  h2 : mode=HTTP   side=FE|BE mux=H2
    fcgi : mode=HTTP   side=BE    mux=FCGI
   <default> : mode=HTTP   side=FE|BE mux=H1
   <default> : mode=TCP    side=FE|BE mux=PASS

Available services : prometheus-exporter
Available filters :
    [SPOE] spoe
    [CACHE] cache
    [FCGI] fcgi-app
    [COMP] compression
    [TRACE] trace

I'm using the buster-backports repository, and I updated the package 
yesterday morning


[11:15:44]root@web12:~# cat /etc/apt/sources.list.d/haproxy.list
deb [signed-by=/usr/share/keyrings/haproxy.debian.net.gpg] 
http://haproxy.debian.net buster-backports-2.3 main


root@web12:~# aptcp haproxy
haproxy:
  Installed: 2.3.8-1~bpo10+1
  Candidate: 2.3.8-1~bpo10+1
  Version table:
 *** 2.3.8-1~bpo10+1 100
    100 http://haproxy.debian.net buster-backports-2.3/main amd64 
Packages

    100 /var/lib/dpkg/status

root@web12:~# grep haproxy /var/log/dpkg.log
2021-03-29 09:31:56 upgrade haproxy:amd64 2.3.7-1~bpo10+1 2.3.8-1~bpo10+1
2021-03-29 09:31:56 status half-configured haproxy:amd64 2.3.7-1~bpo10+1
2021-03-29 09:31:56 status unpacked haproxy:amd64 2.3.7-1~bpo10+1
2021-03-29 09:31:56 status half-installed haproxy:amd64 2.3.7-1~bpo10+1
2021-03-29 09:31:56 status unpacked haproxy:amd64 2.3.8-1~bpo10+1
2021-03-29 09:31:57 configure haproxy:amd64 2.3.8-1~bpo10+1 
2021-03-29 09:31:57 status unpacked haproxy:amd64 2.3.8-1~bpo10+1
2021-03-29 09:32:06 conffile /etc/logrotate.d/haproxy keep
2021-03-29 09:32:06 status half-configured haproxy:amd64 2.3.8-1~bpo10+1
2021-03-29 09:32:08 status installed haproxy:amd64 2.3.8-1~bpo10+1

And I confirm that when rolling back to version 2.3.7 with a source 
build (can't do this with the repository, as only the latest version is 
available), the counters decrement well.


thanks

thomas

Le 30/03/2021 à 10:17, Lukas Tribus a écrit :

Hello Thomas,


this is a known issue in any release train other than 2.3 ...

https://github.com/haproxy/haproxy/issues/1196

However neither 2.3.7 (does not contain the offending commits), nor
2.3.8 (contains all the fixes) should be affected by this.


Are you absolutely positive that you are running 2.3.8 and not
something like 2.2 or 2.0 ? Can you provide the full output of haproxy
-vv?



Thanks,

Lukas


--
Thomas SIMON
Responsable Infrastructures
Neteven




Table sticky counters decrementation problem

2021-03-30 Thread Thomas SIMON

Hi all,

Since version 2.3.8, I've noticed a problem with some sticky counters, 
which only increment and never decrement. The behavior was OK in 2.3.7.



frontend web
    bind *:443 ssl crt /etc/ssl/certs/...
    http-request track-sc0 src table global_limits

backend global_limits
 stick-table type ip size 1m expire 1h store 
conn_cur,http_req_rate(20s),http_err_rate(1h)



Stick table

echo "show table global_limits" | socat stdio 
unix-connect:/run/haproxy/admin.sock
0x7ff0f4027d40: key=195.219.xxx.xxx use=2 exp=3599384 conn_cur=2 
http_req_rate(2)=607 http_err_rate(360)=0


One minute after :

0x7ff0f4027d40: key=195.219.250.105 use=2 exp=3599923 conn_cur=2 
http_req_rate(2)=689 http_err_rate(360)=0


conn_cur increments and decrements well, but http_req_rate and 
http_err_rate don't.



regards
thomas




Integration of modsecurity v3 with haproxy

2020-11-10 Thread Thomas SIMON
Hi all,

Is there a way to use some mechanism (SPOE or other) to use ModSecurity v3
with haproxy (2.x)?
I found documentation on ModSecurity v2 integration with SPOE, but nothing
on v3.

My goal is to protect backends with modsecurity using owasp CRS.

I've set up an nginx with ModSecurity v3 on another server, and I'd like to
proxy requests to this server for filtering before processing authorized
traffic on the backends.

best regards
thomas


Re: QUIC and the PROXY protocol

2020-10-09 Thread Simon Ser
On Friday, October 9, 2020 4:28 PM, Frederic Lecaille  
wrote:

> > > The IETF-QUIC transport protocol spec [1] hasn't been ratified, but
> > > there exists a number of QUIC deployments in the wild. I'm writing a
> > > proxy and I'd like to add support for QUIC. Are there any plans to add
> > > QUIC4/QUIC6 to the list of PROXY transport protocols?
> >
> > from what I know the ongoing work on haproxy side is visible here
> > https://github.com/haproxytech/quic-dev
> > I let haproxy team answer if they have more details but there are
> > already some preparation work done in haproxy.git mainline.
> > Best,
>
> As already mentioned by William, I confirmed we have started to work on
> QUIC support for HAProxy.
>
> Note that the last QUIC draft version is 31.

Thanks both for replying.

My question was less about haproxy and more about the PROXY protocol
specification [1]. How should proxies forward QUIC connection metadata
via the PROXY protocol? There is \x11 for TCP over IPv4, but nothing
for QUIC over IPv4.

[1]: https://www.haproxy.org/download/2.3/doc/proxy-protocol.txt



QUIC and the PROXY protocol

2020-10-09 Thread Simon Ser
Hi all,

The IETF-QUIC transport protocol spec [1] hasn't been ratified, but
there exists a number of QUIC deployments in the wild. I'm writing a
proxy and I'd like to add support for QUIC. Are there any plans to add
QUIC4/QUIC6 to the list of PROXY transport protocols?

Thanks,

Simon Ser

[1]: https://tools.ietf.org/html/draft-ietf-quic-transport-30



Re: req.body_param([])

2018-05-20 Thread Simon Schabel

Dear Jarno,

Thanks for the feedback on this.

I investigated further to find the solution to this issue. It turned out 
that not all PUT requests had a problem with extracting the 
customerId parameter we're looking for.


Some requests seem to encode the HTTP PUT body somehow or have set some 
HTTP header which prevents haproxy from extracting the parameter.


I'm still trying to find out what exactly is different.

Best
Simon

Am 14.05.2018 um 14:40 schrieb Jarno Huuskonen:

Hi Simon,

On Mon, May 14, Simon Schabel wrote:

HA-Proxy version 1.7.5-2~bpo8+1 2017/05/27

The setting for the logging was done in the /default /section as:

    log-format %Ci:%Cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %st\
%B\ %cc\ %cs\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\
%[capture.req.hdr(0)]\ %{+Q}r
     option log-separate-errors
     option log-health-checks

and in the /http /and /https /section the body parameter capturing
is activated as:

     # enable HTTP body logging
     option http-buffer-request
     declare capture request len 4
     http-request capture req.body_param(customerId) id 0

As my haproxy version differs from yours, I'm unsure where I might
have made a configuration error.

I tested with 1.8.8 and 1.7.5 and with both versions I managed to
log customerId (with simple curl -X PUT/POST).

Are the POST/PUT requests large, is it possible that the customerId doesn't
fit in haproxy buffer (default 16k (I think)) ?

Can you test with curl to see if customerId is logged then:
curl -v -X PUT -d'customerId=911' http://yourhost.yourdomain/yourpath

# bigfile is some random file much larger than 16k
and curl -v -X PUT -d@bigfile -d'customerId=912' 
http://yourhost.yourdomain/yourpath

-Jarno



--
SEMKNOX GmbH
Simon Schabel
Webergasse 1, Haus B/1 · 01067 Dresden
simon.scha...@semknox.com · +49-351-32 123 102 · www.semknox.com




Re: req.body_param([])

2018-05-14 Thread Simon Schabel

Dear Jarno,

Thanks for your message.

Ok, my /haproxy -vv/ output is as follows:

HA-Proxy version 1.7.5-2~bpo8+1 2017/05/27
Copyright 2000-2017 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -g -O2 -fPIE -fstack-protector-strong -Wformat 
-Werror=format-security -D_FORTIFY_SOURCE=2
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 
USE_LUA=1 USE_PCRE=1 USE_NS=1


Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")

Built with OpenSSL version : OpenSSL 1.0.2k  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2l  25 May 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.35 2014-04-04
Running on PCRE version : 8.35 2014-04-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with Lua version : Lua 5.3.1
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND

Built with network namespace support

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
    [COMP] compression
    [TRACE] trace
    [SPOE] spoe



The setting for the logging was done in the /default /section as:

   log-format %Ci:%Cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %st\ %B\ 
%cc\ %cs\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ 
%[capture.req.hdr(0)]\ %{+Q}r

    option log-separate-errors
    option log-health-checks

and in the /http /and /https /section the body parameter capturing is 
activated as:


    # enable HTTP body logging
    option http-buffer-request
    declare capture request len 4
    http-request capture req.body_param(customerId) id 0

As my haproxy version differs from yours, I'm unsure where I might have 
made a configuration error.


Thanks for your help,
best
Simon


Am 11.05.2018 um 12:43 schrieb Jarno Huuskonen:

Hi,

On Wed, May 09, Simon Schabel wrote:

We use the req.body_param([]) setting to retrieve body
parameter from the incoming HTTP queries and place them into the
logs.

Unfortunately this only works with HTTP POST requests. In our case
we need to extract the parameter from PUT requests as well.

Would it be an option to use req.body_param([]) on any HTTP
method type instead of restricting it to HTTP POST?

Can you share your haproxy -vv and the logging config ?

I just tested with haproxy-ss-20180507 and this minimal test
seems to get req.body_param(log) to stick table on both POST/PUT requests
(tested w/curl -X PUT|POST):
...
frontend test
 option  http-buffer-request
 bind ipv4@127.0.0.1:8080
 http-request track-sc2 req.body_param(log),lower table test_be if 
METH_POST || METH_PUT

 default_backend test_be

backend test_be
 stick-table type string len 48 size 64 expire 240s store http_req_cnt
 server wp2 some.ip.add.ress:80 id 2
...

curl -X PUT -d'log=user1' http://127.0.0.1:8080/
curl -X POST -d'log=user2' http://127.0.0.1:8080/

-Jarno



--
SEMKNOX GmbH
Simon Schabel
Webergasse 1, Haus B/1 · 01067 Dresden
simon.scha...@semknox.com · +49-351-32 123 102 · www.semknox.com



req.body_param([])

2018-05-09 Thread Simon Schabel

Hello,

We use the req.body_param([]) setting to retrieve body parameter 
from the incoming HTTP queries and place them into the logs.


Unfortunately this only works with HTTP POST requests. In our case we 
need to extract the parameter from PUT requests as well.


Would it be an option to use req.body_param([]) on any HTTP method 
type instead of restricting it to HTTP POST?


Best
Simon

--
SEMKNOX GmbH
Simon Schabel
Webergasse 1, Haus B/1 · 01067 Dresden
simon.scha...@semknox.com · +49-351-32 123 102 · www.semknox.com




Re: AW: Load balancing a single page application

2018-04-27 Thread Simon McLeish

Curl thinks it's being set:

< HTTP/1.1 200 OK
< Server: Apache-Coyote/1.1
< Set-Cookie: JSESSIONID=7B26C0CAABC706C1942ACF4187FD0608; Path=/path; 
HttpOnly
< P3P: CP='IDC DSP COR ADM DEVi TAIi PSA PSD IVAi IVDi CONi HIS OUR IND 
CNT TST'

< Cache-Control: no-store
< Pragma: no-cache
< Expires: -1
< Last-Modified: 0
< Content-Type: text/html;charset=utf-8
< Transfer-Encoding: chunked
< Vary: Accept-Encoding
< Date: Fri, 27 Apr 2018 10:20:07 GMT

So, yes, it is still being set by the application.

Simon


On 27/04/18 10:59, Maximilian Boehm wrote:

Hi,


Sessions are managed on haproxy with a stick table which stores a JSESSIONID

Just to make sure: are you sure this application still uses JSESSIONID as 
the identifier for the session? As far as I know, those kinds of applications often 
use things like web tokens instead of regular session cookies.

Max

-Original Message-
From: Simon McLeish 
Sent: Friday, 27 April 2018 11:41
To: haproxy@formilux.org
Subject: Load balancing a single page application

Hi,

We are using haproxy to load balance a vendor application across three servers. 
This has been very successful, but recently the vendor has released a new 
interface which is a single page AngularJS application, and since we went live 
on Tuesday we have been having problems with premature session timeouts.

Sessions are managed on haproxy with a stick table which stores a JSESSIONID 
cookie value which is also shared with the application session management. This 
has a timeout set to 30 minutes both in haproxy and in the application.

The way the timeouts are happening suggests that haproxy may be assuming that 
there is no activity because the page is never reloaded, though of course there 
will be JS traffic across the connection. Could you advise whether this is 
likely? And if so, are there any recommendations for settings which would work 
better?

This is the session-related part of our configuration for reference (redacted 
slightly):

backend prd-http
          stick on urlp(jsessionid) table front_jsessionid
      stick on urlp(jsessionid,;) table front_jsessionid
      stick on cookie(JSESSIONID) table front_jsessionid
      stick store-response cookie(JSESSIONID) table front_jsessionid

      http-request del-header Proxy
      option httplog
      option originalto
      balance roundrobin
      option httpchk GET /
      http-check expect status 200
      server server1
      server server2
      server server3 backup

backend front_jsessionid
      stick-table type string len 52 size 10m expire 30m

There is a further backend which is the server used for authentication; the 
session cookie is not shared with that.

Thanks,

Simon







Re: AW: Load balancing a single page application

2018-04-27 Thread Simon McLeish
Interesting - the vendor didn't notify us about such a change. I'll have 
to investigate.


Simon


On 27/04/18 10:59, Maximilian Boehm wrote:

Hi,


Sessions are managed on haproxy with a stick table which stores a JSESSIONID

Just to make sure: are you sure this application still uses JSESSIONID as 
the identifier for the session? As far as I know, those kinds of applications often 
use things like web tokens instead of regular session cookies.

Max

-Original Message-
From: Simon McLeish 
Sent: Friday, 27 April 2018 11:41
To: haproxy@formilux.org
Subject: Load balancing a single page application

Hi,

We are using haproxy to load balance a vendor application across three servers. 
This has been very successful, but recently the vendor has released a new 
interface which is a single page AngularJS application, and since we went live 
on Tuesday we have been having problems with premature session timeouts.

Sessions are managed on haproxy with a stick table which stores a JSESSIONID 
cookie value which is also shared with the application session management. This 
has a timeout set to 30 minutes both in haproxy and in the application.

The way the timeouts are happening suggests that haproxy may be assuming that 
there is no activity because the page is never reloaded, though of course there 
will be JS traffic across the connection. Could you advise whether this is 
likely? And if so, are there any recommendations for settings which would work 
better?

This is the session-related part of our configuration for reference (redacted 
slightly):

backend prd-http
          stick on urlp(jsessionid) table front_jsessionid
      stick on urlp(jsessionid,;) table front_jsessionid
      stick on cookie(JSESSIONID) table front_jsessionid
      stick store-response cookie(JSESSIONID) table front_jsessionid

      http-request del-header Proxy
      option httplog
      option originalto
      balance roundrobin
      option httpchk GET /
      http-check expect status 200
      server server1
      server server2
      server server3 backup

backend front_jsessionid
      stick-table type string len 52 size 10m expire 30m

There is a further backend which is the server used for authentication; the 
session cookie is not shared with that.

Thanks,

Simon







Load balancing a single page application

2018-04-27 Thread Simon McLeish

Hi,

We are using haproxy to load balance a vendor application across three 
servers. This has been very successful, but recently the vendor has 
released a new interface which is a single page AngularJS application, 
and since we went live on Tuesday we have been having problems with 
premature session timeouts.


Sessions are managed on haproxy with a stick table which stores a 
JSESSIONID cookie value which is also shared with the application 
session management. This has a timeout set to 30 minutes both in haproxy 
and in the application.


The way the timeouts are happening suggests that haproxy may be assuming 
that there is no activity because the page is never reloaded, though of 
course there will be JS traffic across the connection. Could you advise 
whether this is likely? And if so, are there any recommendations for 
settings which would work better?


This is the session-related part of our configuration for reference 
(redacted slightly):


backend prd-http
    stick on urlp(jsessionid) table front_jsessionid
    stick on urlp(jsessionid,;) table front_jsessionid
    stick on cookie(JSESSIONID) table front_jsessionid
    stick store-response cookie(JSESSIONID) table front_jsessionid

    http-request del-header Proxy
    option httplog
    option originalto
    balance roundrobin
    option httpchk GET /
    http-check expect status 200
    server server1
    server server2
    server server3 backup

backend front_jsessionid
    stick-table type string len 52 size 10m expire 30m

There is a further backend which is the server used for authentication; 
the session cookie is not shared with that.


Thanks,

Simon




Re: [PATCH 0/2] MEDIUM: stats: Add JSON output option to show (info|stat)

2017-03-13 Thread Simon Horman
Hi Willy,

this patchset seems to have stalled.
I'd like to find a way to revive it.

On Sun, Jan 08, 2017 at 11:52:46AM +0100, Simon Horman wrote:
> Hi Willy,
> 
> On Sun, Jan 08, 2017 at 07:37:24AM +0100, Willy Tarreau wrote:
> > Hi Simon,
> > 
> > On Wed, Jan 04, 2017 at 09:37:24AM +0100, Simon Horman wrote:
> > > Hi,
> > > 
> > > this short series is an RFC implementation of adding JSON format
> > > output to show (info|stat). It also adds a new show schema json
> > > stats command to allow retreival of the schema which describes
> > > the JSON output of show (info|stat).
> > (...)
> > 
> > Thanks for this. I'm seeing in stats_emit_json_field_tags() that
> > you have to emit the names of the various types, scopes, etc...
> > I think this is the reason why you mention in patch 1 that it needs
> > to be updated if the structure evolves. Probably that we should put
> > these fields in an array declared just next to the enums. This way
> > the declaration will be a bit more centralized.

I see how that may make things more centralised in theory.
But in practice I wonder how it might work.

enum field_origin, for example, is declared in stats.h.

However, if I put something like the following into the same file
then it will be defined multiple times as the header file is included
in the source for multiple objects.

const char *field_origin_long_name[] = {
[FO_METRIC] = "Metric",
...
};

Of course I can move the above into a .c file but then it is no
longer centralised with the enum it relates to.

> > > Some areas for possible discussion:
> > > * Use of STAT_STARTED in first patch
> > > * Possible automatic generation of (part) of schema in 2nd patch
> > > * Improved documentation
> > 
> > For now I don't see anything there which needs further discussion, and
> > nobody commented on your patches either, possibly indicating you're on
> > the right track. If you want I can merge this series, it will be easier
> > for you to update it later using incremental patches.
> > 
> > > Some discussion of the size of JSON output is included as an appendix
> > > to the changelog of the first patch.
> > 
> > I'd prefer to integrate this with your commit message because it's quite 
> > useful as-is.
> > 
> > Just let me know if you want the series to get merged or if you prefer
> > to respin it.
> 
> I'd prefer if you merged the series as-is
> and I then provided incremental updates.

FWIW, I believe the series still applies cleanly on master.



pre-connect header problem

2017-03-06 Thread Simon E. Silva Lauinger
Hello HAProxy community,
I have a problem with Chrome's pre-connect feature in conjunction with
haproxy's "choose-backend-by-acl" feature. I really hope somebody can
help me.
Here a simplified version of my config:

frontend https
bind *:443 name *:443 ssl crt /path/to/cert.pem
mode tcp

acl acl_58a704192fda06.42911106 hdr_end(host) -i .example.net
use_backend _.example.net if acl_58a704192fda06.42911106

acl acl_58ad6d70f2d8f9.76023768 hdr(host) -i example.net
use_backend example.net if acl_58ad6d70f2d8f9.76023768
   
backend _.example.net
mode http
server _.example.net 1.2.3.4:443 ssl verify none

backend example.net
mode http
server example.net 5.6.7.8:443 ssl verify none

The problem is that in Chrome, random files are not being received. I
think it has to do with Chrome's pre-connect "feature". Chrome opens an
HTTP connection without sending anything, not even an HTTP header. HAProxy
tries to assign the request to a backend using the unavailable Host
header, so it fails.
Does somebody know how to deal with this problem? Can I tell HAProxy to
wait for data before trying to assign the request to a backend?
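
One approach often suggested for this situation (offered here as an untested sketch built from standard HAProxy keywords, not an answer given in this thread) is to add an inspection delay to the frontend so that content rules only run once request data has actually arrived, and an empty pre-connect never gets routed blindly:

```haproxy
frontend https
    bind *:443 name *:443 ssl crt /path/to/cert.pem
    mode tcp

    # wait up to 5s for request data; the HTTP acl only matches once
    # actual request bytes are available, so an idle pre-connect is
    # held instead of being dispatched with no Host header
    tcp-request inspect-delay 5s
    tcp-request content accept if HTTP
```

The timeout value and placement are assumptions to illustrate the idea; the existing `use_backend` rules would stay below these lines.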

Thank you for your help!

Regards,
Simon



Rate limit by country

2017-02-27 Thread Simon Green
Hi all,

I need to rate limit users by country[1], and my Google-fu is failing
me. I know that I can use "src_conn_rate gt N"[2], but that rate limits
on a per IP basis. I want to be able to rate limit based on the total
number of connections from a country, and have different limits for
different countries.

Is this possible with HAProxy, and if so, how?

TIA for your guidance.

-- 
Simon


[1] where "country" is defined by MaxMind's GeoIP list. It's not
perfect, but better than nothing.
[2] and along with src -f TLD.lst can set different limits for different
countries.
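
One possible shape for this (an untested sketch; the map file, variable names, and limits below are invented for illustration) is to resolve the source address to a country code with a map, then track that code in a stick table so the counters aggregate per country rather than per IP:

```haproxy
frontend web
    bind *:443 ssl crt /etc/ssl/certs/site.pem
    # country.map lines look like: "1.2.3.0/24 FR"; XX is the fallback
    http-request set-var(txn.country) src,map_ip(/etc/haproxy/country.map,XX)
    http-request track-sc0 var(txn.country) table per_country
    # different limits per country (values invented)
    http-request deny if { var(txn.country) -m str FR } { sc0_conn_cur gt 500 }
    http-request deny if { var(txn.country) -m str DE } { sc0_conn_cur gt 200 }

backend per_country
    stick-table type string len 2 size 1k expire 10m store conn_cur,conn_rate(10s)
```

The idea is that the stick-table key is the two-letter country code, so all clients mapped to the same country share one entry and one set of counters.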



Re: [PATCH 0/2] MEDIUM: stats: Add JSON output option to show (info|stat)

2017-01-12 Thread Simon Horman
On Thu, Jan 12, 2017 at 03:14:27PM +, Scott McKeown wrote:
> Doh, that would do it.
> Sorry wrong git branch.
> 
> # haproxy -v
> HA-Proxy version 1.8-dev0 2016/11/25
> Copyright 2000-2016 Willy Tarreau 
> 
> # echo "show info json" | socat /tmp/haproxy.stat stdio | python -m
> json.tool
> [
> {
> "field": {
> "name": "Name",
> "pos": 0
> },
> "processNum": 1,
> "tags": {
> "nature": "Output",
> "origin": "Product",
> "scope": "Service"
> },
> "value": {
> "type": "str",
> "value": "HAProxy"
> }
> },
> {
> "field": {
> "name": "Version",
> "pos": 1
> },
> "processNum": 1,
> "tags": {
> "nature": "Output",
> "origin": "Product",
> "scope": "Service"
> },
> "value": {
> 
> etc. etc. etc.

Thanks, that looks like what I would expect.

-- 
Simon Horman  si...@horms.nl
Horms Solutions BV  www.horms.nl
Parnassusweg 819, 1082 LZ Amsterdam, Netherlands
Tel: +31 (0)20 800 6155Skype: horms7



Re: [PATCH 0/2] MEDIUM: stats: Add JSON output option to show (info|stat)

2017-01-12 Thread Simon Horman
On Thu, Jan 12, 2017 at 01:27:37PM +, Scott McKeown wrote:
> Hi Simon,
> 
> Output below:

...

Thanks

> On 12 January 2017 at 13:23, Simon Horman  wrote:
> 
> > Hi Scott,
> >
> > could you send the output of the following?
> >
> > echo "show info json" | socat /tmp/haproxy.stat stdio
> >
> > On Thu, Jan 12, 2017 at 01:18:54PM +, Scott McKeown wrote:
> > > Hi Guys,
> > > Sorry for the delay I got tied up with some other issues yesterday but
> > I've
> > > just finished with Simon's git repo pull.
> > > Simon, are you sure this is correct, as I thought this was for a 1.8-dev
> > > build, which could be why I had problems on Tuesday.
> > >
> > > # haproxy -v
> > > HA-Proxy version 1.6-dev1 2015/03/11
> > > Copyright 2000-2015 Willy Tarreau 

...

The above seems wrong. I see:

./haproxy -v
HA-Proxy version 1.8-dev0-b738a8-135 2017/01/04
Copyright 2000-2016 Willy Tarreau 

Could you verify that the head commit you have checked out is as follows?

git log --oneline -1
b738a8b596d8 MEDIUM: stats: Add show json schema



Re: [PATCH 0/2] MEDIUM: stats: Add JSON output option to show (info|stat)

2017-01-12 Thread Simon Horman
Hi Scott,

could you send the output of the following?

echo "show info json" | socat /tmp/haproxy.stat stdio

On Thu, Jan 12, 2017 at 01:18:54PM +, Scott McKeown wrote:
> Hi Guys,
> Sorry for the delay I got tied up with some other issues yesterday but I've
> just finished with Simon's git repo pull.
> Simon, are you sure this is correct, as I thought this was for a 1.8-dev
> build, which could be why I had problems on Tuesday.
> 
> # haproxy -v
> HA-Proxy version 1.6-dev1 2015/03/11
> Copyright 2000-2015 Willy Tarreau 
> 
> I'm still getting the following though:
> # echo "show info json" | socat /tmp/haproxy.stat stdio | python -m
> json.tool
> No JSON object could be decoded
> 
> Its a basic build with the following options:
> # make TARGET=linux26 USE_STATIC_PCRE=1 USE_LINUX_TPROXY=1
> 
> Oh and I'm running with your config file now.
> 
> Anything else that I can try or detail you would like, to make sure it's not
> me?
> 
> 
> ~Scott
> 
> 
> 
> On 9 January 2017 at 16:13, Scott McKeown  wrote:
> 
> > No problem I'll have another look tomorrow morning and I'll let you all
> > know how I get on.
> >
> >
> > On 9 January 2017 at 15:18, Simon Horman  wrote:
> >
> >> Hi Scott,
> >>
> >> thanks for testing.
> >>
> >> For reference the code I am using is here:
> >>
> >> https://github.com/horms/haproxy.git show-json
> >>
> >> And my minimal config file is as follows.
> >> Would it be possible for you to share your config with me (privately) ?
> >>
> >> global
> >> daemon
> >> stats socket /tmp/haproxy.stat mode 600 level admin
> >> pidfile /tmp/haproxy.pid
> >> log /dev/log local4
> >> #tune.bufsize 2048
> >> #tune.bufsize 4096
> >> tune.bufsize 8192
> >> #tune.bufsize 16384
> >> tune.maxrewrite 1024
> >>
> >> defaults
> >> mode http
> >> balance roundrobin
> >> timeout connect 4000
> >> timeout client 42000
> >> timeout server 43000
> >> log global
> >>
> >>
> >> listen VIP_Name
> >> bind 127.0.0.1:10080 transparent
> >> mode http
> >> balance leastconn
> >> cookie SERVERID insert nocache indirect
> >> server backup 127.0.0.1:9081 backup  non-stick
> >> option http-keep-alive
> >> option forwardfor
> >> option redispatch
> >> option abortonclose
> >> maxconn 4
> >> log global
> >> option httplog
> >> option log-health-checks
> >> server RIP1_Name 127.0.0.1  weight 100  cookie RIP_Name
> >> agent-check agent-port 12345 agent-inter 2000 check port 80 inter 2000 rise
> >> 2 fall 3  minconn 0 maxconn 0s on-marked-down shutdown-sessions disabled
> >> server RIP2_Name 127.0.0.1  weight 100  cookie RIP_Name
> >> agent-check agent-port 12345 agent-inter 2000 check port 80 inter 2000 rise
> >> 2 fall 3  minconn 0 maxconn 0s on-marked-down shutdown-sessions
> >>
> >
> >
> >
> > --
> > With Kind Regards.
> >
> > Scott McKeown
> > Loadbalancer.org
> > http://www.loadbalancer.org
> > Tel (UK) - +44 (0) 3303801064 <0330%20380%201064> (24x7)
> > Tel (US) - +1 888.867.9504 <+1%20888-867-9504> (Toll Free)(24x7)
> >
> 
> 
> 
> -- 
> With Kind Regards.
> 
> Scott McKeown
> Loadbalancer.org
> http://www.loadbalancer.org
> Tel (UK) - +44 (0) 3303801064 (24x7)
> Tel (US) - +1 888.867.9504 (Toll Free)(24x7)

-- 
Simon Horman  si...@horms.nl
Horms Solutions BV  www.horms.nl
Parnassusweg 819, 1082 LZ Amsterdam, Netherlands
Tel: +31 (0)20 800 6155    Skype: horms7



Re: [PATCH 0/2] MEDIUM: stats: Add JSON output option to show (info|stat)

2017-01-09 Thread Simon Horman
Hi Scott,

thanks for testing.

For reference the code I am using is here:

https://github.com/horms/haproxy.git show-json

And my minimal config file is as follows.
Would it be possible for you to share your config with me (privately)?

global
daemon
stats socket /tmp/haproxy.stat mode 600 level admin
pidfile /tmp/haproxy.pid
log /dev/log local4
#tune.bufsize 2048
#tune.bufsize 4096
tune.bufsize 8192
#tune.bufsize 16384
tune.maxrewrite 1024

defaults
mode http
balance roundrobin
timeout connect 4000
timeout client 42000
timeout server 43000
log global


listen VIP_Name
bind 127.0.0.1:10080 transparent
mode http
balance leastconn
cookie SERVERID insert nocache indirect
server backup 127.0.0.1:9081 backup  non-stick
option http-keep-alive
option forwardfor
option redispatch
option abortonclose
maxconn 4
log global
option httplog
option log-health-checks
server RIP1_Name 127.0.0.1  weight 100  cookie RIP_Name agent-check 
agent-port 12345 agent-inter 2000 check port 80 inter 2000 rise 2 fall 3  
minconn 0 maxconn 0s on-marked-down shutdown-sessions disabled
server RIP2_Name 127.0.0.1  weight 100  cookie RIP_Name agent-check 
agent-port 12345 agent-inter 2000 check port 80 inter 2000 rise 2 fall 3  
minconn 0 maxconn 0s on-marked-down shutdown-sessions



Re: [PATCH 0/2] MEDIUM: stats: Add JSON output option to show (info|stat)

2017-01-08 Thread Simon Horman
Hi Willy,

On Sun, Jan 08, 2017 at 07:37:24AM +0100, Willy Tarreau wrote:
> Hi Simon,
> 
> On Wed, Jan 04, 2017 at 09:37:24AM +0100, Simon Horman wrote:
> > Hi,
> > 
> > this short series is an RFC implementation of adding JSON format
> > output to show (info|stat). It also adds a new show schema json
> > stats command to allow retrieval of the schema which describes
> > the JSON output of show (info|stat).
> (...)
> 
> Thanks for this. I'm seeing in stats_emit_json_field_tags() that
> you have to emit the names of the various types, scopes, etc...
> I think this is the reason why you mention in patch 1 that it needs
> to be updated if the structure evolves. Probably that we should put
> these fields in an array declared just next to the enums. This way
> the declaration will be a bit more centralized.
> 
> > Some areas for possible discussion:
> > * Use of STAT_STARTED in first patch
> > * Possible automatic generation of (part) of schema in 2nd patch
> > * Improved documentation
> 
> For now I don't see anything there which needs further discussion, and
> nobody commented on your patches either, possibly indicating you're on
> the right track. If you want I can merge this series, it will be easier
> for you to update it later using incremental patches.
> 
> > Some discussion of the size of JSON output is included as an appendix
> > to the changelog of the first patch.
> 
> I'd prefer to integrate this with your commit message because it's quite 
> useful as-is.
> 
> Just let me know if you want the series to get merged or if you prefer
> to respin it.

I'd prefer if you merged the series as-is
and I then provided incremental updates.

-- 
Simon Horman  si...@horms.nl
Horms Solutions BV  www.horms.nl
Parnassusweg 819, 1082 LZ Amsterdam, Netherlands
Tel: +31 (0)20 800 6155    Skype: horms7



[PATCH 1/2] MEDIUM: stats: Add JSON output option to show (info|stat)

2017-01-04 Thread Simon Horman
Add a json parameter to show (info|stat) which will output information
in JSON format. A follow-up patch will add a JSON schema which describes
the format of the JSON output of these commands.

The JSON output is without any extra whitespace in order to reduce the
volume of output. For human consumption passing the output through a
pretty printer may be helpful.

e.g.:
$ echo "show info json" | socat /var/run/haproxy.stat stdio | \
 python -m json.tool

STAT_STARTED has been added in order to track if show output has begun or
not. This is used in order to allow the JSON output routines to only insert
a "," between elements when needed. I would value any feedback on how this
might be done better.
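[Editorial note: the separator-tracking pattern described above can be sketched in a few lines of Python. This is a hypothetical illustration of the technique, not the C code from the patch; `JsonArrayDumper` and its methods are invented names.]

```python
import io

class JsonArrayDumper:
    """Emit a JSON array element by element, writing "," only between
    elements -- the role the STAT_STARTED flag plays in the patch."""

    def __init__(self, out):
        self.out = out
        self.started = False  # analogous to STAT_STARTED
        self.out.write("[")

    def emit(self, element):
        if self.started:
            self.out.write(",")  # separator only once output has begun
        self.out.write(element)
        self.started = True

    def finish(self):
        self.out.write("]")
        return self.out.getvalue()

buf = io.StringIO()
d = JsonArrayDumper(buf)
for item in ('{"field":1}', '{"field":2}'):
    d.emit(item)
print(d.finish())  # -> [{"field":1},{"field":2}]
```

Because the flag only flips after the first element is written, empty leading
fields never produce a spurious leading comma.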

Signed-off-by: Simon Horman 
---

For the simple configuration below a comparison of the size of info
and stats output is as follows:

$ show stat   =>  1654 bytes
$ show stat typed =>  7081 bytes
$ show stat json  => 45331 bytes
$ show stat json (pretty printed[*]) => 113390 bytes

$ show info   =>  527 bytes
$ show info typed =>  937 bytes
$ show info json  => 5330 bytes
$ show info json (pretty printed[*]) => 11456 bytes

[*] pretty printed using python -m json.tool

--- begin config ---
global
daemon
stats socket /tmp/haproxy.stat mode 600 level admin
pidfile /tmp/haproxy.pid
log /dev/log local4
tune.bufsize 16384
tune.maxrewrite 1024

defaults
mode http
balance roundrobin
timeout connect 4000
timeout client 42000
timeout server 43000
log global

listen VIP_Name
bind 127.0.0.1:10080 transparent
mode http
balance leastconn
cookie SERVERID insert nocache indirect
server backup 127.0.0.1:9081 backup  non-stick
option http-keep-alive
option forwardfor
option redispatch
option abortonclose
maxconn 4
log global
option httplog
option log-health-checks
server RIP_Name 127.0.0.1  weight 100  cookie RIP_Name agent-check 
agent-port 12345 agent-inter 2000 check port 80 inter 2000 rise 2 fall 3  
minconn 0 maxconn 0s on-marked-down shutdown-sessions disabled
server RIP_Name 127.0.0.1  weight 100  cookie RIP_Name agent-check 
agent-port 12345 agent-inter 2000 check port 80 inter 2000 rise 2 fall 3  
minconn 0 maxconn 0s on-marked-down shutdown-sessions
--- end config ---

Changes since RFC:
* Handle cases where output exceeds available buffer space
* Document that consideration should be given to updating
  dump functions if struct field is updated
* Limit JSON integer values to the range [-(2**53)+1, (2**53)-1] as per
  the recommendation for interoperable integers in section 6 of RFC 7159.
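[Editorial note: the RFC 7159 section 6 limit mentioned in the last changelog
entry amounts to a clamp. The sketch below, in Python rather than the patch's
C, shows the idea; `clamp_json_int` is a hypothetical helper name.]

```python
# Interoperable integer range from RFC 7159, section 6: values outside
# [-(2**53)+1, (2**53)-1] may lose precision in IEEE-754-based parsers.
JSON_INT_MAX = (1 << 53) - 1
JSON_INT_MIN = -JSON_INT_MAX

def clamp_json_int(v):
    """Clamp a counter to the interoperable JSON integer range."""
    return max(JSON_INT_MIN, min(JSON_INT_MAX, v))

print(clamp_json_int(2**64 - 1))  # a u64 counter is capped at 2**53 - 1
print(clamp_json_int(-2**63))     # an s64 minimum is capped symmetrically
print(clamp_json_int(42))         # in-range values pass through unchanged
```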
---
 doc/management.txt|  45 +++--
 include/types/stats.h |   5 +
 src/stats.c   | 272 +-
 3 files changed, 306 insertions(+), 16 deletions(-)

diff --git a/doc/management.txt b/doc/management.txt
index 683b99790160..623ac6375552 100644
--- a/doc/management.txt
+++ b/doc/management.txt
@@ -1760,16 +1760,18 @@ show errors [|] [request|response]
 show backend
   Dump the list of backends available in the running process
 
-show info [typed]
+show info [typed|json]
   Dump info about haproxy status on current process. If "typed" is passed as an
   optional argument, field numbers, names and types are emitted as well so that
   external monitoring products can easily retrieve, possibly aggregate, then
   report information found in fields they don't know. Each field is dumped on
-  its own line. By default, the format contains only two columns delimited by a
-  colon (':'). The left one is the field name and the right one is the value.
-  It is very important to note that in typed output format, the dump for a
-  single object is contiguous so that there is no need for a consumer to store
-  everything at once.
+  its own line. If "json" is passed as an optional argument then
+  information provided by "typed" output is provided in JSON format as a
+  list of JSON objects. By default, the format contains only two columns
+  delimited by a colon (':'). The left one is the field name and the right
+  one is the value.  It is very important to note that in typed output
+  format, the dump for a single object is contiguous so that there is no
+  need for a consumer to store everything at once.
 
   When using the typed output format, each line is made of 4 columns delimited
   by colons (':'). The first column is a dot-delimited series of 3 elements. 
The
@@ -1846,6 +1848,16 @@ show info [typed]
   6.Uptime.2:MDP:str:0d 0h01m28s
   (...)
 
+  The format of JSON output is described in a schema which may be output
+  using

[PATCH 0/2] MEDIUM: stats: Add JSON output option to show (info|stat)

2017-01-04 Thread Simon Horman
Hi,

this short series is an RFC implementation of adding JSON format
output to show (info|stat). It also adds a new show schema json
stats command to allow retrieval of the schema which describes
the JSON output of show (info|stat).

Some areas for possible discussion:
* Use of STAT_STARTED in first patch
* Possible automatic generation of (part) of schema in 2nd patch
* Improved documentation

Some discussion of the size of JSON output is included as an appendix
to the changelog of the first patch.

Changes since RFC noted in per-patch changelogs.

Simon Horman (2):
  MEDIUM: stats: Add JSON output option to show (info|stat)
  MEDIUM: stats: Add show json schema

 doc/management.txt|  74 ++--
 include/types/stats.h |   6 +
 src/stats.c   | 506 +-
 3 files changed, 570 insertions(+), 16 deletions(-)

-- 
2.7.0.rc3.207.g0ac5344




[PATCH 2/2] MEDIUM: stats: Add show json schema

2017-01-04 Thread Simon Horman
This may be used to output the JSON schema which describes the output of
show info json and show stats json.

The JSON output is without any extra whitespace in order to reduce the
volume of output. For human consumption passing the output through a
pretty printer may be helpful.

e.g.:
$ echo "show schema json" | socat /var/run/haproxy.stat stdio | \
 python -m json.tool

The implementation does not generate the schema. Some consideration could
be given to integrating the output of the schema with the output of
typed and json info and stats. In particular the types (u32, s64, etc...)
and tags.

A sample verification of show info json and show stats json using
the schema is as follows. It uses the jsonschema python module:

cat > jschema.py <<  __EOF__
import json

from jsonschema import validate
from jsonschema.validators import Draft3Validator

with open('schema.txt', 'r') as f:
schema = json.load(f)
Draft3Validator.check_schema(schema)

with open('instance.txt', 'r') as f:
instance = json.load(f)
validate(instance, schema, Draft3Validator)
__EOF__

$ echo "show schema json" | socat /var/run/haproxy.stat stdio > schema.txt
$ echo "show info json" | socat /var/run/haproxy.stat stdio > instance.txt
python ./jschema.py
$ echo "show stats json" | socat /var/run/haproxy.stat stdio > instance.txt
python ./jschema.py
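[Editorial note: for readers without the jsonschema module, the core of what
the validation above enforces for "show info json" -- an array of objects each
carrying the four required keys -- can be approximated with the standard
library alone. This is a simplified stand-in, not a full Draft 3 validator.]

```python
import json

REQUIRED = ("field", "processNum", "tags", "value")

def check_info(instance_text):
    """Minimal structural check: a JSON array of objects, each carrying
    the four keys the Info schema marks as required."""
    data = json.loads(instance_text)
    if not isinstance(data, list):
        return False
    return all(isinstance(item, dict) and all(k in item for k in REQUIRED)
               for item in data)

sample = ('[{"field":{"pos":0,"name":"Name"},"processNum":1,'
          '"tags":{"origin":"Product"},"value":{"type":"str","value":"haproxy"}}]')
print(check_info(sample))           # True: all required keys present
print(check_info('[{"field":1}]'))  # False: required keys missing
```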

Signed-off-by: Simon Horman 
---

In this case the pretty printer increases the size of the output by
about 200% illustrating the value of output without whitespace.

  $ echo "show schema json" | socat /var/run/haproxy.stat stdio | wc -c
  2690
  $ echo "show schema json" | socat /var/run/haproxy.stat stdio | \
  python -m json.tool | wc -c
  8587
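[Editorial note: the size increase from pretty printing is easy to reproduce
with the standard json module. The measurement below uses dummy data, not the
haproxy schema itself, so the exact ratio will differ.]

```python
import json

# Dummy document with a shape loosely resembling typed stats output.
doc = {"info": [{"field": {"pos": i, "name": "x" * 8}, "value": i}
                for i in range(50)]}

compact = json.dumps(doc, separators=(",", ":"))  # no extra whitespace
pretty = json.dumps(doc, indent=4)                # roughly what json.tool emits

print(len(compact), len(pretty))  # pretty output is several times larger
```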

Changes since RFC:
* Add errors to schema and use in the case where output exceeds
  available buffer space
* Document that consideration should be given to updating
  schema function if struct field is updated
* Correct typos
* Register "show", "schema", "json" rather than "show", "schema".
  This allows the parse callback to be omitted and some simplification
  of the dump callback.
* Limit integer values to the range [-(2**53)+1, (2**53)-1] as
  per the recommendation for interoperable integers in
  section 6 of RFC 7159.
---
 doc/management.txt|  33 ++-
 include/types/stats.h |   5 +-
 src/stats.c   | 234 ++
 3 files changed, 268 insertions(+), 4 deletions(-)

diff --git a/doc/management.txt b/doc/management.txt
index 623ac6375552..70af03b07271 100644
--- a/doc/management.txt
+++ b/doc/management.txt
@@ -1849,7 +1849,14 @@ show info [typed|json]
   (...)
 
   The format of JSON output is described in a schema which may be output
-  using "show schema json" (to be implemented).
+  using "show schema json".
+
+  The JSON output contains no extra whitespace in order to reduce the
+  volume of output. For human consumption passing the output through a
+  pretty printer may be helpful. Example :
+
+  $ echo "show info json" | socat /var/run/haproxy.sock stdio | \
+python -m json.tool
 
   The JSON output contains no extra whitespace in order to reduce the
   volume of output. For human consumption passing the output through a
@@ -2128,7 +2135,14 @@ show stat [{|}  ] [typed|json]
 (...)
 
   The format of JSON output is described in a schema which may be output
-  using "show schema json" (to be implemented).
+  using "show schema json".
+
+  The JSON output contains no extra whitespace in order to reduce the
+  volume of output. For human consumption passing the output through a
+  pretty printer may be helpful. Example :
+
+  $ echo "show stat json" | socat /var/run/haproxy.sock stdio | \
+python -m json.tool
 
   The JSON output contains no extra whitespace in order to reduce the
   volume of output. For human consumption passing the output through a
@@ -2237,6 +2251,21 @@ show tls-keys [id|*]
   specified as parameter, it will dump the tickets, using * it will dump every
   keys from every references.
 
+show schema json
+  Dump the schema used for the output of "show info json" and "show stat json".
+
+  The schema contains no extra whitespace in order to reduce the volume of output.
+  For human consumption passing the output through a pretty printer may be
+  helpful. Example :
+
+  $ echo "show schema json" | socat /var/run/haproxy.sock stdio | \
+python -m json.tool
+
+  The schema follows "JSON Schema" (json-schema.org) and accordingly
+  verifiers may be used to verify the output of "show info json" and "show
+  stat json" against the schema.
+
+
 shutdown frontend 
   Completely delete the specified frontend. All the ports it was bound to will
   

Re: [PATCH/RFC 2/2] MEDIUM: stats: Add show json schema

2016-11-29 Thread Simon Horman
On Tue, Nov 29, 2016 at 08:12:28AM +0100, Willy Tarreau wrote:
> So here are a few comments about typos I noticed (irrelevant parts trimmed).
> 
> > diff --git a/doc/management.txt b/doc/management.txt
> > index 4934c575a543..0592fc2da287 100644
> > --- a/doc/management.txt
> > +++ b/doc/management.txt
> > @@ -2220,6 +2234,21 @@ show tls-keys [id|*]
> >specified as parameter, it will dump the tickets, using * it will dump 
> > every
> >keys from every references.
> >  
> > +show schema json
> > +  Dump the schema used for the output of "show info json" and "show stat 
> > json".
> > +
> > +  The contains no extra whitespace in order to reduce the volume of output.
> > +  For human consumption passing the output through a pretty printer may be
> > +  helpful. Example :
> > +
> > +  $ echo "show schema json" | socat /var/run/haproxy.sock stdio | \
> > +python -m json.tool
> > +
> > +  The schema follows "JSON Svhema" (json-schema.org) and accordingly
> 
> s/Svhema/Schema/
> 
> > diff --git a/src/stats.c b/src/stats.c
> > index 8c0e3d688cc8..808ac96da2d9 100644
> > --- a/src/stats.c
> > +++ b/src/stats.c
> > @@ -3262,6 +3262,218 @@ static int stats_dump_info_to_buffer(struct 
> > stream_interface *si)
> (...)
> 
> > + "\"tags\":{"
> > +  "\"type\":\"object\","
> > +  "\"origin\":{"
> > +   "\"type\":\"string\","
> > +   "\"enum\":[\"Metric\",\"Satus\",\"Key\",\"Config\","
> 
> s/Satus/Status/
> 
> > +   "\"Product\",""\"Unknown\"]"
> > +  "},"
> > +  "\"nature\":{"
> > +   "\"type\":\"string\","
> > +   "\"enum\":[\"Gague\",\"Limit\",\"Min\",\"Max\","
> 
> s/Gague/Gauge/

Thanks, I'll fix the typos you highlighted above.

> (...)
> > + "\"value\":{"
> > +  "\"type\":\"integer\","
> > +  "\"minimum\":0,"
> > +  "\"maximum\":18446744073709551615"
> 
> That reminds me and old discussion going back to the websocket design
> discussions. Someone said that in java and javascript ints are necessarily
> signed and the largest positive number that can be represented is 2^63-1.
> It might be worth checking if the value above is properly parsed by
> utilities written in such languages or if we'd rather cap it to half of
> its range (anyway we're not expecting to reach 2^63 bytes more often than
> 2^64), it requires 23 years of traffic at 100 Gbps to have that high a
> byte count instead of 46 years, so we have some margin.

I agree this could be a sticky point. I believe the limits are left up
to the implementation. But I'll do a bit of research and see if I can find
a better number.
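[Editorial note: the precision loss Willy is worried about can be demonstrated
with a float round-trip, since IEEE-754 doubles -- what JavaScript numbers
are -- stop representing every integer above 2**53.]

```python
# 2**53 is the largest power of two below which every non-negative
# integer is exactly representable as an IEEE-754 double.
exact = 2 ** 53
print(float(exact) == exact)          # True: still exact
print(float(exact + 1) == exact + 1)  # False: rounds back to 2**53
# The u64 maximum from the schema rounds up to 2**64 entirely:
print(int(float(18446744073709551615)))
```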

> > +static int cli_parse_show_schema(char **args, struct appctx *appctx, void 
> > *private)
> > +{
> > +   if (strcmp(args[2], "json") == 0) {
> > +   appctx->ctx.stats.flags |= STAT_FMT_JSON;
> > +   } else  {
> > +   appctx->ctx.cli.msg = "json parameter missing.\n";
> > +   appctx->st0 = CLI_ST_PRINT;
> > +   return 1;
> > +   }
> > +
> > +   return 0;
> > +}
> 
> You can even simplify this one by registering "show", "schema", "json" at
> once as I don't think we'll have any other schema for a while. But it's
> just a matter of taste, I personally have no preference.

Your suggestion sounds more like what I was after. I'll see if I can make
it so.

-- 
Simon Horman  si...@horms.nl
Horms Solutions BV  www.horms.nl
Parnassusweg 819, 1082 LZ Amsterdam, Netherlands
Tel: +31 (0)20 800 6155    Skype: horms7



Re: [PATCH/RFC 0/2] MEDIUM: stats: Add JSON output option to show (info|stat)

2016-11-29 Thread Simon Horman
On Tue, Nov 29, 2016 at 08:01:53AM +0100, Willy Tarreau wrote:
> Hi Simon,
> 
> On Mon, Nov 28, 2016 at 04:18:52PM +0100, Simon Horman wrote:
> > Hi,
> > 
> > this short series is an RFC implementation of adding JSON format
> > output to show (info|stat). It also adds a new show schema json
> > stats command to allow retrieval of the schema which describes
> > the JSON output of show (info|stat).
> 
> Thanks!
> 
> > Some areas for possible discussion:
> > * Use of STAT_STARTED in first patch
> 
> I don't see how you can get rid of it. You can't rely on the field
> number to know that you've already dumped something since you can
> start by dumping empty fields. The only solution would be to
> systematically append a comma after a value like we do in C enums,
> but I'm not sure most JSON parsers will like it.

Unfortunately (and very painfully IMHO) they do not.

I was mainly thinking that perhaps I was not using a free bit in the
best field of the structure. But I agree some kind of tracking is required.
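[Editorial note: that strict parsers reject the "C enum" style trailing comma
Willy mentions is easy to confirm with Python's own parser; this illustration
is not part of the original thread.]

```python
import json

valid = '[1,2,3]'
trailing = '[1,2,3,]'  # a comma after every element, C-enum style

print(json.loads(valid))

rejected = False
try:
    json.loads(trailing)
except json.JSONDecodeError:
    rejected = True  # strict JSON parsers refuse the trailing comma
print("trailing comma rejected:", rejected)
```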

> > * Possible automatic generation of (part) of schema in 2nd patch
> 
> I think this schema dump is very useful, I just don't know what level
> of maintenance it will require (not much I think). I'm not convinced
> that the amount of work needed to make it more automatic would make
> it more maintainable to be honest. Probably we just have to add a
> comment next to the type definitions, mentioning that anyone adding
> a type there must also update the dump functions and the json schema.
> 
> I noticed various typos there, that I'll comment on in a reply to
> the patch.
> 
> > * Improved documentation
> > 
> > Some discussion of the size of JSON output is included as an appendix
> > to the changelog of the first patch.
> 
> OK so the size should be around 45kB for show stats on a listener
> with two servers, meaning roughly 10kB per line (I think the first
> "show info" output was in fact "show stat").

Yes, I think so, sorry for the mix-up.

> For now we cannot dump less than a line at once so we must be cautious,
> because many of us run with 8kB buffers, and even those running at 16kB
> might hit the limit once we increase the number of fields and they
> contain large values.
> 
> In theory we could have a sub-level corresponding to the field being
> dumped, but it could cause some inconsistencies on a line and is not
> desirable (at least for now). I'm just realizing that we could very
> well replace the buffer contents with "output buffer too short (%d),
> needs %d".

Ok, I had not considered such limits (yet).
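[Editorial note: Willy's suggestion of replacing the buffer contents with an
error message when a dump line cannot fit can be sketched as follows. This is
a hypothetical illustration in Python; the real check would live in haproxy's
C dump code, and `dump_line` is an invented name.]

```python
def dump_line(obj_json, bufsize):
    """Return the serialized line, or -- when it cannot fit the output
    buffer -- an error message in its place, as suggested in the thread."""
    if len(obj_json) > bufsize:
        return 'output buffer too short (%d), needs %d' % (bufsize,
                                                           len(obj_json))
    return obj_json

print(dump_line('{"f":1}', 8192))      # fits: passed through unchanged
print(dump_line('x' * 10000, 8192))    # too large: replaced by an error
```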

-- 
Simon Horman  si...@horms.nl
Horms Solutions BV  www.horms.nl
Parnassusweg 819, 1082 LZ Amsterdam, Netherlands
Tel: +31 (0)20 800 6155    Skype: horms7



[PATCH/RFC 1/2] MEDIUM: stats: Add JSON output option to show (info|stat)

2016-11-28 Thread Simon Horman
Add a json parameter to show (info|stat) which will output information
in JSON format. A follow-up patch will add a JSON schema which describes
the format of the JSON output of these commands.

The JSON output is without any extra whitespace in order to reduce the
volume of output. For human consumption passing the output through a
pretty printer may be helpful.

e.g.:
$ echo "show info json" | socat /tmp/haproxy.stat stdio | \
 python -m json.tool

STAT_STARTED has been added in order to track if show output has begun or
not. This is used in order to allow the JSON output routines to only insert
a "," between elements when needed. I would value any feedback on how this
might be done better.

Signed-off-by: Simon Horman 
---

For the simple configuration below a comparison of the size of info
and stats output is as follows:

$ show info   =>  1654 bytes
$ show info typed =>  7081 bytes
$ show info json  => 45331 bytes
$ show info json (pretty printed[*]) => 113390 bytes

$ show info   =>  527 bytes
$ show info typed =>  937 bytes
$ show info json  => 5330 bytes
$ show info json (pretty printed[*]) => 11456 bytes

[*] pretty printed using python -m json.tool

--- begin config ---
global
daemon
stats socket /tmp/haproxy.stat mode 600 level admin
pidfile /tmp/haproxy.pid
log /dev/log local4
tune.bufsize 16384
tune.maxrewrite 1024

defaults
mode http
balance roundrobin
timeout connect 4000
timeout client 42000
timeout server 43000
log global

listen VIP_Name
bind 127.0.0.1:10080 transparent
mode http
balance leastconn
cookie SERVERID insert nocache indirect
server backup 127.0.0.1:9081 backup  non-stick
option http-keep-alive
option forwardfor
option redispatch
option abortonclose
maxconn 4
log global
option httplog
option log-health-checks
server RIP_Name 127.0.0.1  weight 100  cookie RIP_Name agent-check 
agent-port 12345 agent-inter 2000 check port 80 inter 2000 rise 2 fall 3  
minconn 0 maxconn 0s on-marked-down shutdown-sessions disabled
server RIP_Name 127.0.0.1  weight 100  cookie RIP_Name agent-check 
agent-port 12345 agent-inter 2000 check port 80 inter 2000 rise 2 fall 3  
minconn 0 maxconn 0s on-marked-down shutdown-sessions
--- end config ---

 src/stats.c | 231 
 1 file changed, 231 insertions(+)
---
 doc/management.txt|  45 ++---
 include/types/stats.h |   2 +
 src/stats.c   | 247 +-
 3 files changed, 278 insertions(+), 16 deletions(-)

diff --git a/doc/management.txt b/doc/management.txt
index c8ba0e379e4a..4934c575a543 100644
--- a/doc/management.txt
+++ b/doc/management.txt
@@ -1743,16 +1743,18 @@ show errors [|] [request|response]
 show backend
   Dump the list of backends available in the running process
 
-show info [typed]
+show info [typed|json]
   Dump info about haproxy status on current process. If "typed" is passed as an
   optional argument, field numbers, names and types are emitted as well so that
   external monitoring products can easily retrieve, possibly aggregate, then
   report information found in fields they don't know. Each field is dumped on
-  its own line. By default, the format contains only two columns delimited by a
-  colon (':'). The left one is the field name and the right one is the value.
-  It is very important to note that in typed output format, the dump for a
-  single object is contiguous so that there is no need for a consumer to store
-  everything at once.
+  its own line. If "json" is passed as an optional argument then
+  information provided by "typed" output is provided in JSON format as a
+  list of JSON objects. By default, the format contains only two columns
+  delimited by a colon (':'). The left one is the field name and the right
+  one is the value.  It is very important to note that in typed output
+  format, the dump for a single object is contiguous so that there is no
+  need for a consumer to store everything at once.
 
   When using the typed output format, each line is made of 4 columns delimited
   by colons (':'). The first column is a dot-delimited series of 3 elements. 
The
@@ -1829,6 +1831,16 @@ show info [typed]
   6.Uptime.2:MDP:str:0d 0h01m28s
   (...)
 
+  The format of JSON output is described in a schema which may be output
+  using "show schema json" (to be implemented).
+
+  The JSON output contains no extra whitespace in order to reduce the
+  volume of output. For human consumption passing the output through a
+  pretty printer 

[PATCH/RFC 2/2] MEDIUM: stats: Add show json schema

2016-11-28 Thread Simon Horman
This may be used to output the JSON schema which describes the output of
show info json and show stats json.

The JSON output is without any extra whitespace in order to reduce the
volume of output. For human consumption passing the output through a
pretty printer may be helpful.

e.g.:
$ echo "show schema json" | socat /tmp/haproxy.stat stdio | \
 python -m json.tool

The implementation does not generate the schema. Some consideration could
be given to integrating the output of the schema with the output of
typed and json info and stats. In particular the types (u32, s64, etc...)
and tags.

A sample verification of show info json and show stats json using
the schema is as follows. It uses the jsonschema python module:

cat > jschema.py <<  __EOF__
import json

from jsonschema import validate
from jsonschema.validators import Draft3Validator

with open('schema.txt', 'r') as f:
schema = json.load(f)
Draft3Validator.check_schema(schema)

with open('instance.txt', 'r') as f:
instance = json.load(f)
validate(instance, schema, Draft3Validator)
__EOF__

$ echo "show schema json" | socat /tmp/haproxy.stat stdio > schema.txt
$ echo "show info json" | socat /tmp/haproxy.stat stdio > instance.txt
python ./jschema.py
$ echo "show stats json" | socat /tmp/haproxy.stat stdio > instance.txt
python ./jschema.py

Signed-off-by: Simon Horman 
---

In this case the pretty printer increases the size of the output by
about 200% illustrating the value of output without whitespace.

  $ echo "show schema json" | socat /tmp/haproxy.stat stdio | wc -c
  2690
  $ echo "show schema json" | socat /tmp/haproxy.stat stdio | \
  python -m json.tool | wc -c
  8587
---
 doc/management.txt |  33 +++-
 src/stats.c| 231 +
 2 files changed, 262 insertions(+), 2 deletions(-)

diff --git a/doc/management.txt b/doc/management.txt
index 4934c575a543..0592fc2da287 100644
--- a/doc/management.txt
+++ b/doc/management.txt
@@ -1832,7 +1832,14 @@ show info [typed|json]
   (...)
 
   The format of JSON output is described in a schema which may be output
-  using "show schema json" (to be implemented).
+  using "show schema json".
+
+  The JSON output contains no extra whitespace in order to reduce the
+  volume of output. For human consumption passing the output through a
+  pretty printer may be helpful. Example :
+
+  $ echo "show info json" | socat /var/run/haproxy.sock stdio | \
+python -m json.tool
 
   The JSON output contains no extra whitespace in order to reduce the
   volume of output. For human consumption passing the output through a
@@ -2111,7 +2118,14 @@ show stat [{|}  ] [typed|json]
 (...)
 
   The format of JSON output is described in a schema which may be output
-  using "show schema json" (to be implemented).
+  using "show schema json".
+
+  The JSON output contains no extra whitespace in order to reduce the
+  volume of output. For human consumption passing the output through a
+  pretty printer may be helpful. Example :
+
+  $ echo "show stat json" | socat /var/run/haproxy.sock stdio | \
+python -m json.tool
 
   The JSON output contains no extra whitespace in order to reduce the
   volume of output. For human consumption passing the output through a
@@ -2220,6 +2234,21 @@ show tls-keys [id|*]
   specified as parameter, it will dump the tickets, using * it will dump every
   keys from every references.
 
+show schema json
+  Dump the schema used for the output of "show info json" and "show stat json".
+
+  The schema contains no extra whitespace in order to reduce the volume of output.
+  For human consumption passing the output through a pretty printer may be
+  helpful. Example :
+
+  $ echo "show schema json" | socat /var/run/haproxy.sock stdio | \
+python -m json.tool
+
+  The schema follows "JSON Svhema" (json-schema.org) and accordingly
+  verifiers may be used to verify the output of "show info json" and "show
+  stat json" against the schema.
+
+
 shutdown frontend 
   Completely delete the specified frontend. All the ports it was bound to will
   be released. It will not be possible to enable the frontend anymore after
diff --git a/src/stats.c b/src/stats.c
index 8c0e3d688cc8..808ac96da2d9 100644
--- a/src/stats.c
+++ b/src/stats.c
@@ -3262,6 +3262,218 @@ static int stats_dump_info_to_buffer(struct 
stream_interface *si)
return 1;
 }
 
+/* This function dumps the schema onto the stream interface's read buffer.
+ * It returns 0 as long as it does not complete, non-zero upon completion.
+ * No state is used.
+ */
+static int stats_dump_json_schema(struct chunk *out)
+{
+
+return chunk_strcat(out,
+   "{&q

[PATCH/RFC 0/2] MEDIUM: stats: Add JSON output option to show (info|stat)

2016-11-28 Thread Simon Horman
Hi,

this short series is an RFC implementation of adding JSON format
output to show (info|stat). It also adds a new show schema json
stats command to allow retrieval of the schema which describes
the JSON output of show (info|stat).

Some areas for possible discussion:
* Use of STAT_STARTED in first patch
* Possible automatic generation of (part) of schema in 2nd patch
* Improved documentation

Some discussion of the size of JSON output is included as an appendix
to the changelog of the first patch.

Simon Horman (2):
  MEDIUM: stats: Add JSON output option to show (info|stat)
  MEDIUM: stats: Add show json schema

 doc/management.txt|  74 ++--
 include/types/stats.h |   2 +
 src/stats.c   | 478 +-
 3 files changed, 538 insertions(+), 16 deletions(-)

-- 
2.7.0.rc3.207.g0ac5344




[PATCH] MINOR: stats: correct documentation of process ID for typed output

2016-11-21 Thread Simon Horman
The process ID appears at the end of the first column rather than
the line.
---
 doc/management.txt | 12 +++-
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/doc/management.txt b/doc/management.txt
index 751621c0392b..f42c0712012e 100644
--- a/doc/management.txt
+++ b/doc/management.txt
@@ -1802,8 +1802,9 @@ show info [typed]
   11.PoolFailed.1:MCP:u32:0
   (...)
 
-  In the typed format, the presence of the process ID at the end of the line
-  makes it very easy to visually aggregate outputs from multiple processes.
+  In the typed format, the presence of the process ID at the end of the
+  first column makes it very easy to visually aggregate outputs from
+  multiple processes.
   Example :
 
   $ ( echo show info typed | socat /var/run/haproxy.sock1 ;\
@@ -2067,9 +2068,10 @@ show stat [  ] [typed]
 B.3.0.55.lastsess.1:MMP:s32:-1
 (...)
 
-  In the typed format, the presence of the process ID at the end of the line
-  makes it very easy to visually aggregate outputs from multiple processes, as
-  show in the example below where each line appears for each process :
+  In the typed format, the presence of the process ID at the end of the
+  first column makes it very easy to visually aggregate outputs from
+  multiple processes, as shown in the example below where each line appears
+  for each process :
 
 $ ( echo show stat typed | socat /var/run/haproxy.sock1 - ; \
 echo show stat typed | socat /var/run/haproxy.sock2 - ) | \
-- 
2.7.0.rc3.207.g0ac5344




Re: Getting JSON encoded data from the stats socket.

2016-11-14 Thread Simon Horman
On Mon, Nov 14, 2016 at 08:50:54AM -0500, hapr...@stormcloud9.net wrote:
> Might help to see an example of what the results look like when using
> this schema, however I do have one comment below.

Yes, agreed. I plan to work on making that so.

> On 2016/11/14 03:09, Simon Horman wrote:
> > Hi Willy, Hi All,
> >
> > On Thu, Nov 10, 2016 at 04:52:56PM +0100, Willy Tarreau wrote:
> >> Hi Simon!
> >>
> >> On Thu, Nov 10, 2016 at 04:27:15PM +0100, Simon Horman wrote:
> >>> My preference is to take things calmly as TBH I am only just getting
> >>> started on this and I think the schema could take a little time to get
> >>> a consensus on.
> >> I totally agree with you. I think the most difficult thing is not to
> >> run over a few arrays and dump them but manage to make everyone agree
> >> on the schema. And that will take more than a few days I guess. Anyway
> >> I'm fine with being proven wrong :-)
> > I took a first pass at defining a schema.
> >
> > * The schema follows what is described on json-schema.org (or at least
> >   tries to). Is this a suitable approach?
> > * The schema only covers "show info" and "show stat" and the fields
> >   are based on the typed output variants of those commands.
> >   This leads me to several questions:
> >   - Is this field selection desirable? It seems to make sense to me
> > as presumably the intention of the JSON output is for it to
> > be machine readable.
> >   - Is such an approach appropriate for other show commands?
> >   - And more generally, which other show commands are desired to
> > support output in JSON (in the near term)?
> >
> > {
> > "$schema": "http://json-schema.org/draft-04/schema#";,
> > "oneOf": [
> > {
> > "title": "Info",
> > "description": "Info about HAProxy status",
> > "type": "array",
> > "items": {
> > "properties": {
> > "title": "Info Item",
> > "type": "object",
> > "field": { "$ref": "#/definitions/field" },
> > "processNum": { "$ref": "#/definitions/processNum" },
> > "tags": { "$ref": "#/definitions/tags" },
> > "value": { "$ref": "#/definitions/typedValue" }
> > },
> > "required": ["field", "processNum", "tags", "value"]
> > }
> > },
> > {
> > "title": "Stat",
> > "description": "HAProxy statistics",
> > "type": "array",
> > "items": {
> > "title": "Info Item",
> > "type": "object",
> > "properties": {
> > "objType": {
> > "enum": ["F", // Frontend
> >  "B", // Backend
> >  "L", // Listener
> >  "S"  // Server
> Do we really need to save a few bytes and abbreviate these? We're
> already far more chatty than the CSV output as you're outputting field
> names (e.g. "proxyId" and "processNum"), so abbreviating the values when
> you've got full field names seems rather contrary. And then as you've
> demonstrated, this requires defining a "sub-schema" for explaining what
> "F", "B", etc, are. Thus requiring anyone parsing the json to have to
> keep a mapping of the values (and do the translation) within their code.
> Ditto for all the other "enum" types down below.

Good point. I'm not sure why that didn't occur to me.
But it does seem like a good idea.
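
Spelled out as suggested above, the objType values might read like this (a
hypothetical revision for illustration, not part of the posted draft):

```json
"objType": {
    "enum": ["Frontend", "Backend", "Listener", "Server"]
}
```

This trades a few bytes per record for output that is self-describing, so
consumers no longer need to keep a letter-to-name mapping in their code.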

> > ]
> > },
> > "proxyId": {
> > "type": "integer",
> > "minimum": 0
> > },
> > "id": {
> > "

Re: Getting JSON encoded data from the stats socket.

2016-11-14 Thread Simon Horman
Hi Willy,

On Mon, Nov 14, 2016 at 03:10:18PM +0100, Willy Tarreau wrote:
> On Mon, Nov 14, 2016 at 11:34:18AM +0100, Simon Horman wrote:
> > > Sometimes a description like above appears in your example, is it just
> > > for a few fields or do you intend to describe all of them ? I'm asking
> > > because we don't have such descriptions right now, and while I won't
> > > deny that forcing contributors to add one when adding new stats could be
> > > reasonable (it's like doc), I fear that it would significantly inflate
> > > the output.
> > 
> > My understanding is that the description is part of the schema but would
> > not be included in a JSON instance. Or in other words, it would not
> > be included in the output of a show command.
> 
> OK. So does this mean that a schema will have to be maintained by hand in
> parallel or will it be deduced from the dump ? I'm starting to be worried
> about something not being kept up to date if we have to maintain it, or
> causing a slow down in adoption of new stats entries.

I envisage the schema being maintained in the same way that documentation
is. In the draft schema I posted it should not be necessary to update it
each time a new item is added to the output of show stat or show info.
Rather, the schema would only need to be updated if the format of the data
changes somehow: e.g. a new field is added, which would be analogous to
adding a new column to the typed output, or a new type of value, such as
u16, is introduced.

> > My intention was to add descriptions for all fields. But in many cases
> > the field name seemed sufficiently descriptive, or at least I couldn't
> > think of a better description. In such cases I omitted the description
> > to avoid being repetitive.
> 
> OK that's a good point. So we can possibly have a first implementation reusing
> the field name everywhere, and later make these descriptions mandatory in the
> code for new fields so that the output description becomes more readable.
> 
> > I do not feel strongly about the descriptions. I'm happy to remove some or
> > all of them if they are deemed unnecessary or otherwise undesirable; to add
> > them to every field for consistency; or something in between.
> 
> I think dumping only known descriptions and falling back to the name (or
> simply suggesting that the consumer just uses the same when there's no desc)
> sounds reasonable to me for now.
> 
> > > Also, do you have an idea about the verbosity of the dump here ? For
> > > example let's say you have 100 listeners with 4 servers each (which is
> > > an average sized config). I'm just looking for a rough order of magnitude,
> > > i.e. closer to 10-100k or to 1-10M. The typed output is already quite heavy
> > > for large configs so it should not be a big deal, but it's something we
> > > have to keep in mind.
> > 
> > I don't think the type, description, etc... should be included in such
> > output as they can be supplied by the schema out-of-band. But the field
> > name and values along with syntactic elements (brackets, quotes, etc...) do
> > need to be included.
> 
> OK.
> 
> > I can try and come up with an estimate if it is
> > important but my guess is the result would be several times the size of the
> > typed output (mainly owing to the size of the field names in the output).
> 
> No, don't worry, this rough estimate is enough.

-- 
Simon Horman  si...@horms.nl
Horms Solutions BV  www.horms.nl
Parnassusweg 819, 1082 LZ Amsterdam, Netherlands
Tel: +31 (0)20 800 6155Skype: horms7



Re: Getting JSON encoded data from the stats socket.

2016-11-14 Thread Simon Horman
Hi Willy,

On Mon, Nov 14, 2016 at 11:11:12AM +0100, Willy Tarreau wrote:
> Hi Simon,
> 
> On Mon, Nov 14, 2016 at 09:09:21AM +0100, Simon Horman wrote:
> > I took a first pass at defining a schema.
> > 
> > * The schema follows what is described on json-schema.org (or at least
> >   tries to). Is this a suitable approach?
> 
> I'll let others respond as I have no idea since I never need nor use JSON :-)
> 
> > * The schema only covers "show info" and "show stat" and the fields
> >   are based on the typed output variants of those commands.
> >   This leads me to several questions:
> >   - Is this field selection desirable? It seems to make sense to me
> > as presumably the intention of the JSON output is for it to
> > be machine readable.
> 
> Yes in my opinion it's the goal. And these are the two only parts that
> were converted to typed output for this reason.
> 
> >   - Is such an approach appropriate for other show commands?
> 
> At the moment I don't think so because the other ones are more related
> to state management than statistics.

Nice, I'm quite happy to limit the scope of this work to info and stat
at least for now.

> >   - And more generally, which other show commands are desired to
> > support output in JSON (in the near term)?
> 
> I can't think of any right now.
> 
> However I have a question below :
> 
> > "id": {
> > "description": "Unique identifier of object within proxy",
> > "type": "integer",
> > "minimum": 0
> > },
> 
> Sometimes a description like above appears in your example, is it just
> for a few fields or do you intend to describe all of them ? I'm asking
> because we don't have such descriptions right now, and while I won't
> deny that forcing contributors to add one when adding new stats could be
> reasonable (it's like doc), I fear that it would significantly inflate
> the output.

My understanding is that the description is part of the schema but would
not be included in a JSON instance. Or in other words, it would not
be included in the output of a show command.

My intention was to add descriptions for all fields. But in many cases
the field name seemed sufficiently descriptive, or at least I couldn't
think of a better description. In such cases I omitted the description
to avoid being repetitive.

I do not feel strongly about the descriptions. I'm happy to remove some or
all of them if they are deemed unnecessary or otherwise undesirable; to add
them to every field for consistency; or something in between.

> Also, do you have an idea about the verbosity of the dump here ? For
> example let's say you have 100 listeners with 4 servers each (which is
> an average sized config). I'm just looking for a rough order of magnitude,
> i.e. closer to 10-100k or to 1-10M. The typed output is already quite heavy
> for large configs so it should not be a big deal, but it's something we
> have to keep in mind.

I don't think the type, description, etc... should be included in such
output as they can be supplied by the schema out-of-band. But the field
name and values along with syntactic elements (brackets, quotes, etc...) do
need to be included. I can try and come up with an estimate if it is
important but my guess is the result would be several times the size of the
typed output (mainly owing to the size of the field names in the output).

> Oh BTW just to let you know, I'm working on a painful bug and possibly a
> small regression which will force me to revert some recent fixes, so you
> may still have a bit of time left :-)

Good luck with the bug and thanks for letting me know.



Re: Getting JSON encoded data from the stats socket.

2016-11-14 Thread Simon Horman
Hi Willy, Hi All,

On Thu, Nov 10, 2016 at 04:52:56PM +0100, Willy Tarreau wrote:
> Hi Simon!
> 
> On Thu, Nov 10, 2016 at 04:27:15PM +0100, Simon Horman wrote:
> > My preference is to take things calmly as TBH I am only just getting
> > started on this and I think the schema could take a little time to get
> > a consensus on.
> 
> I totally agree with you. I think the most difficult thing is not to
> run over a few arrays and dump them but manage to make everyone agree
> on the schema. And that will take more than a few days I guess. Anyway
> I'm fine with being proven wrong :-)

I took a first pass at defining a schema.

* The schema follows what is described on json-schema.org (or at least
  tries to). Is this a suitable approach?
* The schema only covers "show info" and "show stat" and the fields
  are based on the typed output variants of those commands.
  This leads me to several questions:
  - Is this field selection desirable? It seems to make sense to me
as presumably the intention of the JSON output is for it to
be machine readable.
  - Is such an approach appropriate for other show commands?
  - And more generally, which other show commands are desired to
support output in JSON (in the near term)?

{
"$schema": "http://json-schema.org/draft-04/schema#";,
"oneOf": [
{
"title": "Info",
"description": "Info about HAProxy status",
"type": "array",
"items": {
"title": "Info Item",
"type": "object",
"properties": {
"field": { "$ref": "#/definitions/field" },
"processNum": { "$ref": "#/definitions/processNum" },
"tags": { "$ref": "#/definitions/tags" },
"value": { "$ref": "#/definitions/typedValue" }
},
"required": ["field", "processNum", "tags", "value"]
}
},
{
"title": "Stat",
"description": "HAProxy statistics",
"type": "array",
"items": {
"title": "Info Item",
"type": "object",
"properties": {
"objType": {
"enum": ["F", // Frontend
 "B", // Backend
 "L", // Listener
 "S"  // Server
]
},
"proxyId": {
"type": "integer",
"minimum": 0
},
"id": {
"description": "Unique identifier of object within proxy",
"type": "integer",
"minimum": 0
},
"field": { "$ref": "#/definitions/field" },
"processNum": { "$ref": "#/definitions/processNum" },
"tags": { "$ref": "#/definitions/tags" },
"value": { "$ref": "#/definitions/typedValue" }
},
"required": ["objType", "proxyId", "id", "field", "processNum",
 "tags", "value"]
}
}
],
"definitions": {
"field": {
"type": "object",
"pos": {
"description": "Position of field",
"type": "integer",
"minimum": 0
},
"name": {
"description": "Name of field",
"type": "string"
},
"required": ["pos", "name"]
},
"processNum": {
"description": "Relative process number",
"type": "integer",
"minimum": 1
},
"tags": {
"ty
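
To make the draft concrete, a hypothetical instance of the Info array might
look like the following. The shapes of "tags" and "value" are guesses, since
those definitions are truncated above, and all field names and values are
illustrative only:

```json
[
  {
    "field": { "pos": 0, "name": "Name" },
    "processNum": 1,
    "tags": { "origin": "Product", "nature": "Output", "scope": "Service" },
    "value": { "type": "str", "value": "HAProxy" }
  },
  {
    "field": { "pos": 24, "name": "Uptime_sec" },
    "processNum": 1,
    "tags": { "origin": "Metric", "nature": "Duration", "scope": "Process" },
    "value": { "type": "u32", "value": 45000 }
  }
]
```

Each object carries the same information as one line of "show info typed",
with the field name and process number made explicit rather than encoded in
the first column.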

Re: Getting JSON encoded data from the stats socket.

2016-11-10 Thread Simon Horman
On Thu, Nov 10, 2016 at 04:12:31PM +0100, Willy Tarreau wrote:
> Hi Malcolm,
> 
> On Thu, Nov 10, 2016 at 12:53:13PM +, Malcolm Turnbull wrote:
> > Georg,
> > 
> > That's a timely reminder thanks:
> > I just had another chat with Simon Horman who has kindly offered to
> > take a look at this again.
> 
> That's cool!
> 
> The only thing is that I don't want to delay the release only for this,
> and at the same time I'm pretty sure it's possible to do something which
> will not impact existing code within a reasonable time frame. I just
> don't know how long it takes to make everyone agree on the schema. My
> intent is to release 1.7 by the end of next week *if we don't discover
> new scary bugs*. So if you think it's doable by then, that's fine. Or
> if you want to buy more time, you need to discover a big bug which will
> keep me busy and cause the release to be delayed ;-) Otherwise I think
> it will have to be in 1.8.
> 
> Note, to be clear, if many people insist on having this, we don't have an
> emergency to release by the end of next week, but it's just a policy we
> cannot pursue forever, at least out of respect for those who were pressured
> to send their stuff in time. So I think that we can negotiate one extra
> week if we're sure to have something completed, but only if people here
> insist on having it in 1.7.
> 
> Thus the first one who has a word to say is obviously Simon : if you
> think that even two weeks are not achievable, let's calmly postpone and
> avoid any stress.

My preference is to take things calmly as TBH I am only just getting
started on this and I think the schema could take a little time to get
a consensus on.



Re: HAProxy reloads lets old and outdated processes

2016-10-24 Thread Simon Dick
On 24 October 2016 at 13:46, Pierre Cheynier  wrote:

> Hi,
>
> Sorry, wrong order in the answers.
>
> > Yes it has something to do with it because it's the systemd-wrapper which
> > delivers the signal to the old processes in this mode, while in the
> normal
> > mode the processes get the signal directly from the new process. Another
> > important point is that exactly *all* users having problem with zombie
> > processes are systemd users, with no exception. And this problem has
> never
> > existed over the first 15 years where systems were using a sane init
> > instead and still do not exist on non-systemd OSes.
>
> Unfortunately, I remember we had the same issue (but less frequently) on
> CentOS6 which is init-based.
> I tried to reproduce, but didn't succeed... So let's ignore that for now,
> it was maybe related to something else.
>
>
I had similar problems in my last job when I was reloading haproxy pretty
frequently using standard init.d scripts from the consul-template program.
I even updated it to the latest 1.6 at the time without noticeable
improvements.


Re: haproxy does not correctly handle MSS on Freebsd

2016-08-21 Thread k simon

Thank you, Lukas. I would investigate it a bit more.

Simon
20160821



Re: haproxy does not correctly handle MSS on Freebsd

2016-08-19 Thread k simon

Hi Lukas,




Hi Simon,



Am 19.08.2016 um 12:41 schrieb k simon:

Hi List:
  Haproxy's throughput is much lower than nginx's or squid's on FreeBSD,
and its CPU usage is often high. When I investigated a bit more, I found
that haproxy does not correctly handle the MSS on FreeBSD.


Your kernel decides the segment size of a TCP packet. An TCP application
such as haproxy can give hints, like limiting the MSS further, but it
definitely does not segment TCP payload. I think your investigation goes
in the wrong direction ...




1. When haproxy binds to a physical interface and net.inet.tcp.mssdflt is
changed to a large value, haproxy will use this value as the effective
size of outgoing segments and ignore the advertised value.


Do you have tcpdump to show that? If your TCP segments are larger than
the negotiated MSS, then its a freebsd kernel bug, not a haproxy one.
Its not the applications job to segment TCP packets.




2. When haproxy binds to a loopback interface, the advertised value is
16344, which is correct, but haproxy sends irregular segment sizes.


What's irregular? In your loopback tcpdump capture, I don't see any
packets with a segment size larger than 16344, so no irregularity there.



Packets' segment size should be 16344, matching the advertised value. I saw
other applications work as expected.





3. MSS option is invalid on FreeBSD.


Again, can you elaborate? What does "invalid" mean?




I have tested it with MSS 1200 and found haproxy's advertised value had
not changed. The value is equal to the client's advertised value, e.g. 1460.





When path_mtu_discovery=1, it worked as expected.


Haproxy is not aware of this parameter. Your kernel is. Is your CPU
usage problem gone with this setting, or do your just don't see any "MSS
irregularities" anymore?



Please do elaborate on what you think is wrong with haproxy's behavior
*exactly*, because just saying "invalid/irregular MSS behavior" without
specifying what exactly you mean isn't helpful.



Lukas





Re: haproxy does not correctly handle MSS on Freebsd

2016-08-19 Thread k simon





Hi List:
  Haproxy's throughput is much lower than nginx's or squid's on FreeBSD,
and its CPU usage is often high. When I investigated a bit more, I found
that haproxy does not correctly handle the MSS on FreeBSD.
1. When haproxy binds to a physical interface and net.inet.tcp.mssdflt is
changed to a large value, haproxy will use this value as the effective
size of outgoing segments and ignore the advertised value.

When path_mtu_discovery=1, it worked as expected.


2. When haproxy binds to a loopback interface, the advertised value is
16344, which is correct, but haproxy sends irregular segment sizes.

Whether path_mtu_discovery is set to 0 or 1, it behaves strangely.


3. MSS option is invalid on FreeBSD.
  I'm running the haproxy instance inside a vimage jail, and it should act
the same as running on a bare box. It's really a serious problem and is
easily reproduced.


Regards
Simon






P.S.
1.
FreeBSD ha-l0-j2 10.3-STABLE FreeBSD 10.3-STABLE #0 r303988: Fri Aug 12
16:48:21 CST 2016
root@cache-farm-n2:/usr/obj/usr/src/sys/10-stable-r303988  amd64

2.
HA-Proxy version 1.6.8 2016/08/14
Copyright 2000-2016 Willy Tarreau 

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = clang37
  CFLAGS  = -O2 -pipe -fno-omit-frame-pointer -march=corei7
-fno-strict-aliasing -DFREEBSD_PORTS
  OPTIONS = USE_TPROXY=1 USE_GETADDRINFO=1 USE_ZLIB=1 USE_CPU_AFFINITY=1
USE_REGPARM=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2h  3 May 2016
Running on OpenSSL version : OpenSSL 1.0.2h  3 May 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.39 2016-06-14
PCRE library supports JIT : yes
Built without Lua support
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.



3.
frontend tcp-in
mode tcp
bind :1301

frontend virtual-frontend
mode http
bind 127.0.0.1:1000 accept-proxy





4.
17:41:36.515924 IP 127.0.0.1.12558 > 127.0.0.1.1000: Flags [S], seq
1769628266, win 65535, options [mss 16344], length 0
17:41:36.515954 IP 127.0.0.1.1000 > 127.0.0.1.12558: Flags [S.], seq
360367860, ack 1769628267, win 65535, options [mss 16344], length 0
17:41:36.515957 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq
773322:777418, ack 211, win 65535, length 4096
17:41:36.515985 IP 127.0.0.1.12558 > 127.0.0.1.1000: Flags [.], ack 1,
win 65535, length 0
17:41:36.515994 IP 127.0.0.1.12558 > 127.0.0.1.1000: Flags [P.], seq
1:49, ack 1, win 65535, length 48
17:41:36.516001 IP 127.0.0.1.12558 > 127.0.0.1.1000: Flags [P.], seq
49:914, ack 1, win 65535, length 865
17:41:36.516085 IP 127.0.0.1.1000 > 127.0.0.1.12558: Flags [.], ack 914,
win 65535, length 0
17:41:36.516095 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq
777418:778878, ack 211, win 65535, length 1460
17:41:36.516203 IP 127.0.0.1.12522 > 127.0.0.1.1000: Flags [.], ack
778878, win 65535, length 0
17:41:36.516403 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq
778878:784978, ack 211, win 65535, length 6100
17:41:36.516424 IP 127.0.0.1.12556 > 127.0.0.1.1000: Flags [F.], seq
477, ack 274, win 65535, length 0
17:41:36.516435 IP 127.0.0.1.1000 > 127.0.0.1.12556: Flags [.], ack 478,
win 65535, length 0
17:41:36.516466 IP 127.0.0.1.1000 > 127.0.0.1.12556: Flags [F.], seq
274, ack 478, win 65535, length 0
17:41:36.516487 IP 127.0.0.1.12556 > 127.0.0.1.1000: Flags [.], ack 275,
win 65534, length 0
17:41:36.516515 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq
784978:789074, ack 211, win 65535, length 4096
17:41:36.516532 IP 127.0.0.1.12522 > 127.0.0.1.1000: Flags [.], ack
789074, win 65535, length 0
17:41:36.516922 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq
789074:790534, ack 211, win 65535, length 1460
17:41:36.516960 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq
790534:793170, ack 211, win 65535, length 2636
17:41:36.516971 IP 127.0.0.1.12522 > 127.0.0.1.1000: Flags [.], ack
793170, win 65535, length 0
17:41:36.517270 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq
793170:796942, ack 211, win 65535, length 3772
17:41:36.517351 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq
796942:798402, ack 211, win 65535, length 1460
17:41:36.517368 IP 127.0.0.1.12522 > 127.0.0.1.1000: Flags [.], ack
798402, win 65535, length 0
17:41:36.517529 IP 127.0.0.1.1000 > 127.0.0.1.12405: Fla

haproxy does not correctly handle MSS on Freebsd

2016-08-19 Thread k simon

Hi List:
  Haproxy's throughput is much lower than nginx's or squid's on FreeBSD,
and its CPU usage is often high. When I investigated a bit more, I found
that haproxy does not correctly handle the MSS on FreeBSD.
1. When haproxy binds to a physical interface and net.inet.tcp.mssdflt is
changed to a large value, haproxy will use this value as the effective
size of outgoing segments and ignore the advertised value.
2. When haproxy binds to a loopback interface, the advertised value is
16344, which is correct, but haproxy sends irregular segment sizes.

3. MSS option is invalid on FreeBSD.
  I'm running the haproxy instance inside a vimage jail, and it should act
the same as running on a bare box. It's really a serious problem and is
easily reproduced.



Regards
Simon






P.S.
1.
FreeBSD ha-l0-j2 10.3-STABLE FreeBSD 10.3-STABLE #0 r303988: Fri Aug 12 
16:48:21 CST 2016 
root@cache-farm-n2:/usr/obj/usr/src/sys/10-stable-r303988  amd64


2.
HA-Proxy version 1.6.8 2016/08/14
Copyright 2000-2016 Willy Tarreau 

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = clang37
  CFLAGS  = -O2 -pipe -fno-omit-frame-pointer -march=corei7 
-fno-strict-aliasing -DFREEBSD_PORTS
  OPTIONS = USE_TPROXY=1 USE_GETADDRINFO=1 USE_ZLIB=1 
USE_CPU_AFFINITY=1 USE_REGPARM=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 
USE_PCRE_JIT=1


Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")

Built with OpenSSL version : OpenSSL 1.0.2h  3 May 2016
Running on OpenSSL version : OpenSSL 1.0.2h  3 May 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.39 2016-06-14
PCRE library supports JIT : yes
Built without Lua support
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.



3.
frontend tcp-in
mode tcp
bind :1301

frontend virtual-frontend
mode http
bind 127.0.0.1:1000 accept-proxy





4.
17:41:36.515924 IP 127.0.0.1.12558 > 127.0.0.1.1000: Flags [S], seq 
1769628266, win 65535, options [mss 16344], length 0
17:41:36.515954 IP 127.0.0.1.1000 > 127.0.0.1.12558: Flags [S.], seq 
360367860, ack 1769628267, win 65535, options [mss 16344], length 0
17:41:36.515957 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq 
773322:777418, ack 211, win 65535, length 4096
17:41:36.515985 IP 127.0.0.1.12558 > 127.0.0.1.1000: Flags [.], ack 1, 
win 65535, length 0
17:41:36.515994 IP 127.0.0.1.12558 > 127.0.0.1.1000: Flags [P.], seq 
1:49, ack 1, win 65535, length 48
17:41:36.516001 IP 127.0.0.1.12558 > 127.0.0.1.1000: Flags [P.], seq 
49:914, ack 1, win 65535, length 865
17:41:36.516085 IP 127.0.0.1.1000 > 127.0.0.1.12558: Flags [.], ack 914, 
win 65535, length 0
17:41:36.516095 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq 
777418:778878, ack 211, win 65535, length 1460
17:41:36.516203 IP 127.0.0.1.12522 > 127.0.0.1.1000: Flags [.], ack 
778878, win 65535, length 0
17:41:36.516403 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq 
778878:784978, ack 211, win 65535, length 6100
17:41:36.516424 IP 127.0.0.1.12556 > 127.0.0.1.1000: Flags [F.], seq 
477, ack 274, win 65535, length 0
17:41:36.516435 IP 127.0.0.1.1000 > 127.0.0.1.12556: Flags [.], ack 478, 
win 65535, length 0
17:41:36.516466 IP 127.0.0.1.1000 > 127.0.0.1.12556: Flags [F.], seq 
274, ack 478, win 65535, length 0
17:41:36.516487 IP 127.0.0.1.12556 > 127.0.0.1.1000: Flags [.], ack 275, 
win 65534, length 0
17:41:36.516515 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq 
784978:789074, ack 211, win 65535, length 4096
17:41:36.516532 IP 127.0.0.1.12522 > 127.0.0.1.1000: Flags [.], ack 
789074, win 65535, length 0
17:41:36.516922 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq 
789074:790534, ack 211, win 65535, length 1460
17:41:36.516960 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq 
790534:793170, ack 211, win 65535, length 2636
17:41:36.516971 IP 127.0.0.1.12522 > 127.0.0.1.1000: Flags [.], ack 
793170, win 65535, length 0
17:41:36.517270 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq 
793170:796942, ack 211, win 65535, length 3772
17:41:36.517351 IP 127.0.0.1.1000 > 127.0.0.1.12522: Flags [P.], seq 
796942:798402, ack 211, win 65535, length 1460
17:41:36.517368 IP 127.0.0.1.12522 > 127.0.0.1.1000: Flags [.], ack 
798402, win 65535, length 0
17:41:36.517529 IP 127.0.0.1.1000 > 127.0.0.1.12405: Flags [P.], seq 
482640:483712, ack 401, win 65535, length 1072
17:41:36.517536 IP 127.0

Does haproxy use regex for balance url_param lookup?

2016-06-26 Thread k simon
Hi, lists,
   I noticed that haproxy 1.6.5 hogs the CPU periodically on FreeBSD 10,
with 800K-1M syscalls. Changing the balance algorithm to "uri" and deleting
all the regular expressions works around it. There may be some bug with
PCRE on FreeBSD or some bug in haproxy, but I can't confirm it.
   And does haproxy support wildcards in ACL string matching? I could
rewrite my ACLs to avoid the PCRE library entirely.


Simon
20160626



Re: external-check stdout ends up in load-balanced traffic, destroying tcp sessions

2016-06-07 Thread Simon Horman
On Tue, Jun 07, 2016 at 08:18:21PM +0200, Willy Tarreau wrote:
> On Tue, Jun 07, 2016 at 12:01:31PM +0200, Benoit Garnier wrote:
> > You can always open /dev/null before chrooting and dup() it into FD 0 and 1 
> > after chroot() has been called.
> 
> I'd be more tempted to simply close those FDs after the fork(). That
> may improve the ability to detect faulty scripts which try to dump
> GBs of data.
> 
> A very long time ago I've seen a health check perform an LDAP search
> retrieving all the hundreds of thousands of members of a group, and
> the people in charge for the server were complaining that the health
> checks were hurting the server... Better have the script fail with a
> broken pipe in this case.
> 
> Just a suggestion.

Thanks, I think that is reasonable. I particularly like its simplicity.

Lukas, could you try this?

diff --git a/src/checks.c b/src/checks.c
index c4ac947b6051..e65d28f7c3c6 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -1836,6 +1836,12 @@ static int connect_proc_chk(struct task *t)
if (pid == 0) {
/* Child */
extern char **environ;
+
+   if ((global.mode & MODE_QUIET) && !(global.mode & MODE_VERBOSE)) {
+   close(0);
+   close(1);
+   }
+
environ = check->envp;
 extchk_setenv(check, EXTCHK_HAPROXY_SERVER_CURCONN, ultoa_r(s->cur_sess, buf, sizeof(buf)));
execvp(px->check_command, check->argv);



Re: external-check stdout ends up in load-balanced traffic, destroying tcp sessions

2016-06-06 Thread Simon Horman
Hi Cyril, Hi Lukas,

On Mon, Jun 06, 2016 at 08:21:46PM +0200, Cyril Bonté wrote:
> Hi Lukas,
> 
> I add Malcolm and Simon to the thread.
> 
> Le 06/06/2016 à 08:36, Lukas Erlacher a écrit :
> >Additional info: The output only ends up in the *first* client connection.
> >
> >Talking about this with some colleagues we're now theorizing that stdout 
> >goes to fd 1 and fd 1 is also the first client connection socket. Might be 
> >helpful for tracking this down.
> 
> After doing a quick test, I can confirm I can reproduce the issue (once
> working in daemon mode).
> 
> Simon, do you have time to work on a fix ? or should someone else do ? (I
> think I'll be available to work on this, but only by the end of the week)
> 
> Also Malcolm, as I remember you are using this feature, be aware that you
> may hit the issue too.

Looking over the code a little, the theory regarding fd 1 seems entirely
plausible, as stdout, along with stdin and stderr, may be closed
in haproxy.c:main().

It seems to me that in the case where they were closed, and thus fd 1 and 2
may be used for other purposes, it would be prudent to redirect fd 1 and 2
to "/dev/null".

One problem I see with this is that if haproxy is running in a chroot
and /dev/null is not present then open() will fail.

Lukas, would it be possible for you to test the following
(I have only compile tested it) ?


diff --git a/src/checks.c b/src/checks.c
index c4ac947b6051..eca7df62522f 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -1836,6 +1836,16 @@ static int connect_proc_chk(struct task *t)
if (pid == 0) {
/* Child */
extern char **environ;
+
+   if ((global.mode & MODE_QUIET) && !(global.mode & MODE_VERBOSE)) {
+   close(0);
+   close(1);
+
+   if (open("/dev/null", 0, O_WRONLY) || open("/dev/null", 0, O_WRONLY)) {
+   exit (-1);
+   }
+   }
+
environ = check->envp;
 extchk_setenv(check, EXTCHK_HAPROXY_SERVER_CURCONN, ultoa_r(s->cur_sess, buf, sizeof(buf)));
execvp(px->check_command, check->argv);




Re: [PATCH 1/4] mailer: increase default timeout to 10 seconds this allows the TCP connection to retry connecting when a packet is lost on the network

2016-02-16 Thread Simon Horman
Hi Pieter, Hi Willy,

On Tue, Feb 16, 2016 at 10:11:34PM +0100, P.Baauw wrote:
> Hi Willy, Simon,
> 
> Op 16-2-2016 om 21:56 schreef Willy Tarreau:
> >Simon,
> >
> >are you OK with this series from Pieter ?

Yes, they look good to me.
Thanks Pieter for the fixes/enhancements.

Acked-by: Simon Horman 

My only request, which you are free to ignore,
would be to flesh out the changelog bodies a bit and shorten
the changelog subjects in cases where they are rather long.

> >Should we backport them to 1.6 ? They look like fixes but I'm uncertain.
> Patches 1 and 4 i think should be backported to 1.6.

I agree.

> 1- allows the TCP connection to send multiple SYN packets, so one lost packet
> does not cause the mail to be lost. It changes the socket timeout from 2 to
> 10 seconds, which allows 3 SYN packets to be sent while waiting a little
> for their reply.
> 2- makes the above time configurable, but I think the 10-second default is OK
> for most people.
> 4- changes the line endings sent in an email DATA part to CRLF; this seems
> to comply better with the RFC documentation, and works better with Exchange
> 2013.
> 
> p.s.
> 3- (was a unrelated patch to resolver config parsing, discussed separately)
> 
> Regards,
> Pieter
> 



Re: [PATCH] MEDIUM: mailer: try sending a mail up to 3 times

2015-11-20 Thread Simon Horman
On Fri, Nov 20, 2015 at 11:58:19PM +0100, PiBa-NL wrote:
> Hi Willy,
> 
> Op 16-11-2015 om 19:57 schreef Willy Tarreau:
> >I agree with you since we don't know the timeout value nor what it applies
> >to (connection or anything). Thus I think that we should first find and
> >change that value, and maybe after that take your patch to permit real
> >retries in case of connection failures.
> >
> >Thanks,
> >Willy
> Alright found the timeout, which was actually marked with a rather clear
> 'warning message'.
> Perhaps attached patch could be applied to increase that timeout to 10
> seconds? Should a similar 'warning' still be added to say this allows for
> some retransmits? (In my test it sends 4 SYN packets if it cannot connect at
> all.)
> 
> Or should it be approached  in a different way? Perhaps as a configuration
> option on the mailers section?

I think it should be configurable, the mailers section sounds logical to me.
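For what such a knob might look like, here is a configuration sketch of the mailers-section option being discussed (the directive name and syntax are assumptions here):

```
mailers mymailers
    mailer smtp1 192.168.0.1:25
    # hypothetical per-mailers timeout; 10s leaves room for a few
    # SYN retransmits before the alert connection is abandoned
    timeout mail 10s
```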

> Thanks,
> PiBa-NL

> From eaf95bea0af6aa3b553a6e038997b5d339b507da Mon Sep 17 00:00:00 2001
> From: Pieter Baauw 
> Date: Fri, 20 Nov 2015 23:39:48 +0100
> Subject: [PATCH] mailer: set longer timeout on mail alert to allow for a few
>  tcp retransmits
> 
> ---
>  include/common/defaults.h | 9 +
>  src/checks.c  | 4 +---
>  2 files changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/include/common/defaults.h b/include/common/defaults.h
> index d1994e8..b0d7c07 100644
> --- a/include/common/defaults.h
> +++ b/include/common/defaults.h
> @@ -144,10 +144,11 @@
>  
>  #define CONN_RETRIES	3
>  
> -#define  CHK_CONNTIME   2000
> -#define  DEF_CHKINTR    2000
> -#define DEF_FALLTIME    3
> -#define DEF_RISETIME    2
> +#define CHK_CONNTIME      2000
> +#define DEF_CHKINTR       2000
> +#define DEF_MAILALERTTIME 10000
> +#define DEF_FALLTIME      3
> +#define DEF_RISETIME      2
>  #define DEF_AGENT_FALLTIME    1
>  #define DEF_AGENT_RISETIME    1
>  #define DEF_CHECK_REQ   "OPTIONS / HTTP/1.0\r\n"

I would prefer if the whitespace changes, that is all
changes other than the line that adds DEF_MAILALERTTIME,
were not part of this patch.

> diff --git a/src/checks.c b/src/checks.c
> index e77926a..cecd7ed 100644
> --- a/src/checks.c
> +++ b/src/checks.c
> @@ -3108,9 +3108,7 @@ static int init_email_alert_checks(struct server *s)
>  
>   LIST_INIT(&q->email_alerts);
>  
> -	check->inter = DEF_CHKINTR; /* XXX: Would like to Skip to the next alert, if any, ASAP.
> -				     * But need enough time so that timeouts don't occur
> -				     * during tcp check procssing. For now just us an arbitrary default. */
> +	check->inter = DEF_MAILALERTTIME;

I would prefer if the comment was retained, it is still valid.

>   check->rise = DEF_AGENT_RISETIME;
>   check->fall = DEF_AGENT_FALLTIME;
>   err_str = init_check(check, PR_O2_TCPCHK_CHK);
> -- 
> 1.9.5.msysgit.1
> 




[SPAM] Water Retaining Gel-Reduce Irrigation And Increase Yield

2015-11-16 Thread Simon
Good Day! Glad to know you replied to my message, and I am writing this email to give you more details about our products. SOCO Water Gel - an amazing water-absorbing material for plants! Advantages: improve seed germination and emergence to give plants an early, healthy start; save irrigation and increase crop and fruit yield; contains Potassium, Phosphorus and Nitrogen, and releases fertilizer slowly. Hope to hear your opinion, and you are welcome to visit our web: www.socochem.com.

Re: [PATCH] MEDIUM: mailer: try sending a mail up to 3 times

2015-11-09 Thread Simon Horman
On Mon, Nov 09, 2015 at 11:11:53AM +0100, Willy Tarreau wrote:
> Hi Pieter,
> 
> > Hi Ben, Willy, Simon,
> > 
> > Ben, thanks for the review.
> > Hoping 'release pressure' has cleared for Willy, I'm resending the
> > patch now with your comments incorporated.
> > 
> > CC, to Simon as maintainer of mailers part so he can give approval (or 
> > not).
> > 
> > The original reservations I had when sending this patch still apply.
> > See the "HOWEVER." part in the bottom mail.
> > 
> > Hoping it might get merged to improve mailer reliability. So no 
> > 'server down' email gets lost..
> > Thanks everyone for your time :) .
> 
> Looks good to me. Just waiting for Simon's approval.

I would slightly prefer if there was a more substantial comment in
process_email_alert() noting that retry occurs 3 times. But regardless:

Acked-by: Simon Horman 




Restarted backend not picked up

2015-06-21 Thread Xu (Simon) Chen
Hi,

I am running haproxy 1.5.4 with nbproc=8 (because a single process was
exhausting one core). For a service with 6 backends, 4 of the backend
services were restarted but never picked up by haproxy - they were
still considered down, even though I could clearly reach them from the haproxy
node with ping, telnet, etc. After restarting haproxy, those backends were
recognized as up again.

I looked briefly at the 1.5.x release notes after 1.5.4, and didn't see
anything obviously related. Has anyone had the same problem?

Thanks!
-Simon


Re: [PATCH v2 0/3] MEDIUM: Change verbosity of email alerts

2015-04-29 Thread Simon Horman
On Thu, Apr 30, 2015 at 07:31:28AM +0200, Willy Tarreau wrote:
> Hi Simon,
> 
> On Thu, Apr 30, 2015 at 01:10:32PM +0900, Simon Horman wrote:
> > Hi,
> > 
> > the aim of this series is to make haproxy send more email alerts when
> > they are likely to be useful and fewer when they are likely to be
> > unwanted.
> (...)
> 
> Whole series applied, thank you very much!

Thanks!



[PATCH v2 1/3] MEDIUM: Lower priority of email alerts for log-health-checks messages

2015-04-29 Thread Simon Horman
Lower the priority of email alerts for log-health-checks messages from
LOG_NOTICE to LOG_INFO.

This is to allow set-ups with log-health-checks enabled to disable email
for health check state changes while leaving other email alerts enabled.

In order for email alerts to be sent for health check state changes
"log-health-checks" needs to be set and "email-alert level" needs to be 'info'
or lower. "email-alert mailers" and "email-alert to" settings are also
required in order for any email alerts to be sent.

A follow-up patch will document the above.

Signed-off-by: Simon Horman 
---
 src/checks.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/checks.c b/src/checks.c
index 8a0231deb0a8..32c992195ec1 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -316,7 +316,7 @@ static void set_server_check_status(struct check *check, short status, const cha
 
Warning("%s.\n", trash.str);
send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
-   send_email_alert(s, LOG_NOTICE, "%s", trash.str);
+   send_email_alert(s, LOG_INFO, "%s", trash.str);
}
 }
 
-- 
2.1.4
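Putting the description in this patch into a concrete configuration sketch (mailers name, addresses and server are placeholders), email alerts for health-check state changes require all of the following:

```
backend app
    option log-health-checks           # log (and now email) check transitions
    email-alert mailers mymailers      # required for any alert to be sent
    email-alert from haproxy@example.com
    email-alert to admin@example.com
    email-alert level info             # info or lower for check-status mails
    server web1 10.0.0.1:80 check
```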




[PATCH v2 3/3] MEDIUM: Document when email-alerts are sent

2015-04-29 Thread Simon Horman
Document the influence of email-alert level and other configuration
parameters on when email-alerts are sent.

Signed-off-by: Simon Horman 
---
 doc/configuration.txt | 9 +
 1 file changed, 9 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index f72339a04588..780d6b505408 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -2786,6 +2786,15 @@ email-alert level 
   "email-alert to" to be set and if so sending email alerts is enabled
   for the proxy.
 
+  Alerts are sent when :
+
+  * An un-paused server is marked as down and  is alert or lower
+  * A paused server is marked as down and  is notice or lower
+  * A server is marked as up or enters the drain state and 
+is notice or lower
+  * "option log-health-checks" is enabled,  is info or lower,
+ and a health check status update occurs
+
   See also : "email-alert from", "email-alert mailers",
  "email-alert myhostname", "email-alert to",
  section 3.6 about mailers.
-- 
2.1.4




[PATCH v2 2/3] MEDIUM: Send email alerts when servers are marked as UP or enter the drain state

2015-04-29 Thread Simon Horman
This is similar to the way email alerts are sent when servers are marked as
DOWN.

Like the log messages corresponding to these state changes the messages
have log level notice. Thus they are suppressed by the default email-alert
level of 'alert'. To allow these messages the email-alert level should
be set to 'notice', 'info' or 'debug'. e.g:

email-alert level notice

"email-alert mailers" and "email-alert to" settings are also required in
order for any email alerts to be sent.

A follow-up patch will document the above.

Signed-off-by: Simon Horman 
---
 src/server.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/server.c b/src/server.c
index a50f9e123741..ee6b8508dac0 100644
--- a/src/server.c
+++ b/src/server.c
@@ -332,6 +332,7 @@ void srv_set_running(struct server *s, const char *reason)
srv_append_status(&trash, s, reason, xferred, 0);
Warning("%s.\n", trash.str);
send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+   send_email_alert(s, LOG_NOTICE, "%s", trash.str);
 
for (srv = s->trackers; srv; srv = srv->tracknext)
srv_set_running(srv, NULL);
@@ -484,6 +485,7 @@ void srv_set_admin_flag(struct server *s, enum srv_admin mode)
 
 	Warning("%s.\n", trash.str);
 	send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+	send_email_alert(s, LOG_NOTICE, "%s", trash.str);
 
 	if (prev_srv_count && s->proxy->srv_bck == 0 && s->proxy->srv_act == 0)
 		set_backend_down(s->proxy);
-- 
2.1.4




[PATCH v2 0/3] MEDIUM: Change verbosity of email alerts

2015-04-29 Thread Simon Horman
Hi,

the aim of this series is to make haproxy send more email alerts when
they are likely to be useful and fewer when they are likely to be
unwanted.

Changes in v2:

* As suggested by Willy Tarreau, lower the priority at which email alerts
  for log-health-checks messages are sent rather than never sending them
* Added documentation patch

Simon Horman (3):
  MEDIUM: Lower priority of email alerts for log-health-checks messages
  MEDIUM: Send email alerts when servers are marked as UP or enter the
drain state
  MEDIUM: Document when email-alerts are sent

 doc/configuration.txt | 9 +
 src/checks.c  | 2 +-
 src/server.c  | 2 ++
 3 files changed, 12 insertions(+), 1 deletion(-)

-- 
2.1.4




Re: [PATCH 1/2] MEDIUM: Do not send email alerts corresponding to log-health-checks messages

2015-04-29 Thread Simon Horman
On Tue, Apr 28, 2015 at 09:24:42AM +0200, Willy Tarreau wrote:
> On Tue, Apr 28, 2015 at 02:25:02PM +0900, Simon Horman wrote:
> > On Tue, Apr 28, 2015 at 06:43:38AM +0200, Willy Tarreau wrote:
> > > Hi Simon,
> > > 
> > > On Tue, Apr 28, 2015 at 10:58:56AM +0900, Simon Horman wrote:
> > > > This seems only to lead to excessive verbosity which seems
> > > > much more appropriate for logs than email.
> > > > 
> > > > Signed-off-by: Simon Horman 
> > > > ---
> > > >  src/checks.c | 1 -
> > > >  1 file changed, 1 deletion(-)
> > > > 
> > > > diff --git a/src/checks.c b/src/checks.c
> > > > index 3702d9a4b0fe..efcaff20219b 100644
> > > > --- a/src/checks.c
> > > > +++ b/src/checks.c
> > > > @@ -316,7 +316,6 @@ static void set_server_check_status(struct check 
> > > > *check, short status, const cha
> > > >  
> > > > Warning("%s.\n", trash.str);
> > > > send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
> > > > -   send_email_alert(s, LOG_NOTICE, "%s", trash.str);
> > > 
> > > Just a question, shouldn't we keep it and send it as LOG_INFO instead ?
> > > That way users can choose whether to have them or not. Just a suggestion,
> > > otherwise I'm fine with this as well.
> > 
> > Good idea, I'll re-spin.
> > 
> > In the mean time could you look at the second patch of the series?
> > It is (currently) independent of this one.
> 
> Sorry, I wasn't clear, I did so and found it fine. I can merge it
> if you want but just like you I know that merging only parts of
> series causes more trouble than they solve.

Understood, I'll resubmit the entire series.



Re: [PATCH 1/2] MEDIUM: Do not send email alerts corresponding to log-health-checks messages

2015-04-27 Thread Simon Horman
On Tue, Apr 28, 2015 at 06:43:38AM +0200, Willy Tarreau wrote:
> Hi Simon,
> 
> On Tue, Apr 28, 2015 at 10:58:56AM +0900, Simon Horman wrote:
> > This seems only to lead to excessive verbosity which seems
> > much more appropriate for logs than email.
> > 
> > Signed-off-by: Simon Horman 
> > ---
> >  src/checks.c | 1 -
> >  1 file changed, 1 deletion(-)
> > 
> > diff --git a/src/checks.c b/src/checks.c
> > index 3702d9a4b0fe..efcaff20219b 100644
> > --- a/src/checks.c
> > +++ b/src/checks.c
> > @@ -316,7 +316,6 @@ static void set_server_check_status(struct check 
> > *check, short status, const cha
> >  
> > Warning("%s.\n", trash.str);
> > send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
> > -   send_email_alert(s, LOG_NOTICE, "%s", trash.str);
> 
> Just a question, shouldn't we keep it and send it as LOG_INFO instead ?
> That way users can choose whether to have them or not. Just a suggestion,
> otherwise I'm fine with this as well.

Good idea, I'll re-spin.

In the mean time could you look at the second patch of the series?
It is (currently) independent of this one.



[PATCH 0/2] MEDIUM: Change verbosity of email alerts

2015-04-27 Thread Simon Horman
Hi,

the aim of this series is to make haproxy send more email alerts when
they are likely to be useful and fewer when they are likely to be
unwanted.

Simon Horman (2):
  MEDIUM: Do not send email alerts corresponding to log-health-checks
messages
  MEDIUM: Send email alerts when servers are marked as UP or enter the
drain state

 src/checks.c | 1 -
 src/server.c | 2 ++
 2 files changed, 2 insertions(+), 1 deletion(-)

-- 
2.1.4




[PATCH 2/2] MEDIUM: Send email alerts when servers are marked as UP or enter the drain state

2015-04-27 Thread Simon Horman
This is similar to the way email alerts are sent when servers are marked as
DOWN.

Like the log messages corresponding to these state changes the messages
have log level notice. Thus they are suppressed by the default email-alert
level of 'alert'. To allow these messages the email-alert level should
be set to 'notice', 'info' or 'debug'. e.g:

email-alert level notice

"email-alert mailers" and "email-alert to" settings are also required in
order for any email alerts to be sent.

Signed-off-by: Simon Horman 
---
 src/server.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/server.c b/src/server.c
index a50f9e123741..ee6b8508dac0 100644
--- a/src/server.c
+++ b/src/server.c
@@ -332,6 +332,7 @@ void srv_set_running(struct server *s, const char *reason)
srv_append_status(&trash, s, reason, xferred, 0);
Warning("%s.\n", trash.str);
send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+   send_email_alert(s, LOG_NOTICE, "%s", trash.str);
 
for (srv = s->trackers; srv; srv = srv->tracknext)
srv_set_running(srv, NULL);
@@ -484,6 +485,7 @@ void srv_set_admin_flag(struct server *s, enum srv_admin mode)
 
 	Warning("%s.\n", trash.str);
 	send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+	send_email_alert(s, LOG_NOTICE, "%s", trash.str);
 
 	if (prev_srv_count && s->proxy->srv_bck == 0 && s->proxy->srv_act == 0)
 		set_backend_down(s->proxy);
-- 
2.1.4




[PATCH 1/2] MEDIUM: Do not send email alerts corresponding to log-health-checks messages

2015-04-27 Thread Simon Horman
This seems only to lead to excessive verbosity which seems
much more appropriate for logs than email.

Signed-off-by: Simon Horman 
---
 src/checks.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/checks.c b/src/checks.c
index 3702d9a4b0fe..efcaff20219b 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -316,7 +316,6 @@ static void set_server_check_status(struct check *check, 
short status, const cha
 
Warning("%s.\n", trash.str);
send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
-   send_email_alert(s, LOG_NOTICE, "%s", trash.str);
}
 }
 
-- 
2.1.4




Re: [PATCH v2 0/4] MEDIUM: Enhancements to reporting of drain in stats

2015-04-23 Thread Simon Horman
On Thu, Apr 23, 2015 at 10:01:19AM +0200, Willy Tarreau wrote:
> Hi Simon,
> 
> On Thu, Apr 23, 2015 at 02:51:25PM +0900, Simon Horman wrote:
> (...)
> > This series attempts to address that problem by first disentangling the
> > state and colour of servers in the first two patches, which are new -
> > arguably a worthwhile clean-up in its own right. The remaining two patches
> > implement the changes to reporting of the drain state, as described above.
> > 
> > Patches have been lightly tested.
> 
> I reviewed the whole series and it looked pretty good, so I've merged it.
> Ran a few tests as well, no problem identified so I'm pretty confident.
> I'm glad you got rid of this correlation between colors and states, now
> it will be easier to extend this if needed.
> 
> Thanks for this work!

Great, thanks.



[PATCH v2 3/4] MEDIUM: Only report drain state in stats if server has SRV_ADMF_DRAIN set

2015-04-22 Thread Simon Horman
There are some similarities between a weight of zero and the
administratively set drain state: both allow existing connections
to continue while not accepting any new ones.

However, when reporting a server state generally a distinction is made
between state=UP,weight=0 and state=DRAIN,weight=*. This patch makes
stats reporting consistent in this regard.

This patch does not alter the behaviour that if a server's weight
is zero then its stats row is blue when accessed via HTML. This remains
the case regardless of if the state is UP or DRAIN.

Signed-off-by: Simon Horman 

---
v2
* Reworked to use SRV_*
* Keep blue for zero weight regardless of state
---
 src/dumpstats.c | 11 ++-
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/src/dumpstats.c b/src/dumpstats.c
index b505d4e2e172..5704fe4fdcf4 100644
--- a/src/dumpstats.c
+++ b/src/dumpstats.c
@@ -3906,13 +3906,14 @@ static int stats_dump_proxy_to_buffer(struct stream_interface *si, struct proxy
 					sv_colour = SRV_STATS_COLOUR_UP;
 				}
 
-				if (server_is_draining(sv)) {
-					if (sv_state == SRV_STATS_STATE_UP_GOING_DOWN) {
+				if (sv_state == SRV_STATS_STATE_UP && !svs->uweight)
+					sv_colour = SRV_STATS_COLOUR_DRAINING;
+
+				if (sv->admin & SRV_ADMF_DRAIN) {
+					if (sv_state == SRV_STATS_STATE_UP_GOING_DOWN)
 						sv_state = SRV_STATS_STATE_DRAIN_GOING_DOWN;
-					} else {
+					else
 						sv_state = SRV_STATS_STATE_DRAIN;
-						sv_colour = SRV_STATS_COLOUR_DRAINING;
-					}
 				}
 
 				if (sv_state == SRV_STATS_STATE_UP && !(svs->check.state & CHK_ST_ENABLED)) {
-- 
2.1.4




[PATCH v2 2/4] MEDIUM: Separate server state and colour in stats

2015-04-22 Thread Simon Horman
There is a relationship between the state and colour of a server in
stats, however, it is not a one-to-one relationship and the current
implementation has proved fragile.

This patch attempts to address that problem by clearly separating
state and colour.

A follow-up patch will further distinguish between DRAIN states
and DRAINING colours.

Signed-off-by: Simon Horman 

---
v2
* First post
---
 src/dumpstats.c | 134 +++-
 1 file changed, 83 insertions(+), 51 deletions(-)

diff --git a/src/dumpstats.c b/src/dumpstats.c
index 402fb0ae98ad..b505d4e2e172 100644
--- a/src/dumpstats.c
+++ b/src/dumpstats.c
@@ -2934,13 +2934,35 @@ enum srv_stats_state {
SRV_STATS_STATE_COUNT, /* Must be last */
 };
 
+enum srv_stats_colour {
+   SRV_STATS_COLOUR_DOWN = 0,
+   SRV_STATS_COLOUR_GOING_UP,
+   SRV_STATS_COLOUR_GOING_DOWN,
+   SRV_STATS_COLOUR_UP,
+   SRV_STATS_COLOUR_NOLB,
+   SRV_STATS_COLOUR_DRAINING,
+   SRV_STATS_COLOUR_NO_CHECK,
+
+   SRV_STATS_COLOUR_COUNT, /* Must be last */
+};
+
+static const char *srv_stats_colour_st[SRV_STATS_COLOUR_COUNT] = {
+   [SRV_STATS_COLOUR_DOWN] = "down",
+   [SRV_STATS_COLOUR_GOING_UP] = "going_up",
+   [SRV_STATS_COLOUR_GOING_DOWN]   = "going_down",
+   [SRV_STATS_COLOUR_UP]   = "up",
+   [SRV_STATS_COLOUR_NOLB] = "nolb",
+   [SRV_STATS_COLOUR_DRAINING] = "draining",
+   [SRV_STATS_COLOUR_NO_CHECK] = "no_check",
+};
+
 /* Dumps a line for server <sv> and proxy <px> to the trash and uses the state
  * from stream interface <si>, stats flags <flags>, and server state <state>.
  * The caller is responsible for clearing the trash if needed. Returns non-zero
  * if it emits anything, zero otherwise.
  */
 static int stats_dump_sv_stats(struct stream_interface *si, struct proxy *px, int flags, struct server *sv,
-			       enum srv_stats_state state)
+			       enum srv_stats_state state, enum srv_stats_colour colour)
 {
struct appctx *appctx = __objt_appctx(si->end);
struct server *via, *ref;
@@ -2974,8 +2996,8 @@ static int stats_dump_sv_stats(struct stream_interface *si, struct proxy *px, in
chunk_appendf(&trash, "");
else
chunk_appendf(&trash,
- "",
- (sv->flags & SRV_F_BACKUP) ? "backup" : 
"active", state);
+ "",
+ (sv->flags & SRV_F_BACKUP) ? "backup" : 
"active", srv_stats_colour_st[colour]);
 
if ((px->cap & PR_CAP_BE) && px->srv && 
(appctx->ctx.stats.flags & STAT_ADMIN))
chunk_appendf(&trash,
@@ -3853,6 +3875,7 @@ static int stats_dump_proxy_to_buffer(struct stream_interface *si, struct proxy
 		/* stats.sv has been initialized above */
 		for (; appctx->ctx.stats.sv != NULL; appctx->ctx.stats.sv = sv->next) {
 			enum srv_stats_state sv_state;
+			enum srv_stats_colour sv_colour;
 
 			if (buffer_almost_full(rep->buf)) {
 				si->flags |= SI_FL_WAIT_ROOM;
@@ -3875,37 +3898,52 @@ static int stats_dump_proxy_to_buffer(struct stream_interface *si, struct proxy
 
 			if (sv->state == SRV_ST_RUNNING || sv->state == SRV_ST_STARTING) {
 				if ((svs->check.state & CHK_ST_ENABLED) &&
-				    (svs->check.health < svs->check.rise + svs->check.fall - 1))
+				    (svs->check.health < svs->check.rise + svs->check.fall - 1)) {
 					sv_state = SRV_STATS_STATE_UP_GOING_DOWN;
-				else
+					sv_colour = SRV_STATS_COLOUR_GOING_DOWN;
+				} else {
 					sv_state = SRV_STATS_STATE_UP;
+					sv_colour = SRV_STATS_COLOUR_UP;
+				}
 
 				if (server_is_draining(sv)) {
-					if (sv_state == SRV_STATS_STATE_UP_GOING_DOWN)
+					if (sv_state == SRV_STATS_STATE_UP_GOING_DOWN) {
 						sv_state = SRV_STATS_STATE_DRAIN_GOING_DOWN;
-					else
+					} else {
 						sv_state = SRV_STATS_STATE_DRAIN;
+

[PATCH v2 1/4] MEDIUM: Add enum srv_stats_state

2015-04-22 Thread Simon Horman
Add an enumeration to make the handling of the states of servers
in status messages somewhat clearer.

This is the first of a two-step attempt to disentangle the state and
colour of status information. A subsequent patch will separate state
colours from the states themselves.

This patch should not make any functional changes.

Signed-off-by: Simon Horman 

---
v2
* First post
---
 src/dumpstats.c | 104 +++-
 1 file changed, 57 insertions(+), 47 deletions(-)

diff --git a/src/dumpstats.c b/src/dumpstats.c
index d82ce8538841..402fb0ae98ad 100644
--- a/src/dumpstats.c
+++ b/src/dumpstats.c
@@ -2919,14 +2919,28 @@ static int stats_dump_li_stats(struct stream_interface *si, struct proxy *px, st
return 1;
 }
 
+enum srv_stats_state {
+   SRV_STATS_STATE_DOWN = 0,
+   SRV_STATS_STATE_DOWN_AGENT,
+   SRV_STATS_STATE_GOING_UP,
+   SRV_STATS_STATE_UP_GOING_DOWN,
+   SRV_STATS_STATE_UP,
+   SRV_STATS_STATE_NOLB_GOING_DOWN,
+   SRV_STATS_STATE_NOLB,
+   SRV_STATS_STATE_DRAIN_GOING_DOWN,
+   SRV_STATS_STATE_DRAIN,
+   SRV_STATS_STATE_NO_CHECK,
+
+   SRV_STATS_STATE_COUNT, /* Must be last */
+};
+
 /* Dumps a line for server <sv> and proxy <px> to the trash and uses the state
  * from stream interface <si>, stats flags <flags>, and server state <state>.
  * The caller is responsible for clearing the trash if needed. Returns non-zero
- * if it emits anything, zero otherwise. The <state> parameter can take the
- * following values : 0=DOWN, 1=DOWN(agent) 2=going up, 3=going down, 4=UP, 5,6=NOLB,
- * 7,8=DRAIN, 9=unchecked.
+ * if it emits anything, zero otherwise.
  */
-static int stats_dump_sv_stats(struct stream_interface *si, struct proxy *px, int flags, struct server *sv, int state)
+static int stats_dump_sv_stats(struct stream_interface *si, struct proxy *px, int flags, struct server *sv,
+			       enum srv_stats_state state)
 {
struct appctx *appctx = __objt_appctx(si->end);
struct server *via, *ref;
@@ -2943,17 +2957,17 @@ static int stats_dump_sv_stats(struct stream_interface *si, struct proxy *px, in
 	ref = ref->track;
 
 	if (appctx->ctx.stats.flags & STAT_FMT_HTML) {
-		static char *srv_hlt_st[10] = {
-			"DOWN",
-			"DOWN (agent)",
-			"DN %d/%d ↑",
-			"UP %d/%d ↓",
-			"UP",
-			"NOLB %d/%d ↓",
-			"NOLB",
-			"DRAIN %d/%d ↓",
-			"DRAIN",
-			"no check"
+		static char *srv_hlt_st[SRV_STATS_STATE_COUNT] = {
+			[SRV_STATS_STATE_DOWN]			= "DOWN",
+			[SRV_STATS_STATE_DOWN_AGENT]		= "DOWN (agent)",
+			[SRV_STATS_STATE_GOING_UP]		= "DN %d/%d ↑",
+			[SRV_STATS_STATE_UP_GOING_DOWN]		= "UP %d/%d ↓",
+			[SRV_STATS_STATE_UP]			= "UP",
+			[SRV_STATS_STATE_NOLB_GOING_DOWN]	= "NOLB %d/%d ↓",
+			[SRV_STATS_STATE_NOLB]			= "NOLB",
+			[SRV_STATS_STATE_DRAIN_GOING_DOWN]	= "DRAIN %d/%d ↓",
+			[SRV_STATS_STATE_DRAIN]			= "DRAIN",
+			[SRV_STATS_STATE_NO_CHECK]		= "no check",
 		};
 
if (sv->admin & SRV_ADMF_MAINT)
@@ -3197,17 +3211,17 @@ static int stats_dump_sv_stats(struct stream_interface *si, struct proxy *px, in
 		chunk_appendf(&trash, "-\n");
 	}
 	else { /* CSV mode */
-		static char *srv_hlt_st[10] = {
-			"DOWN,",
-			"DOWN (agent),",
-			"DOWN %d/%d,",
-			"UP %d/%d,",
-			"UP,",
-			"NOLB %d/%d,",
-			"NOLB,",
-			"DRAIN %d/%d,",
-			"DRAIN,",
-			"no check,"
+		static char *srv_hlt_st[SRV_STATS_STATE_COUNT] = {
+			[SRV_STATS_STATE_DOWN]			= "DOWN,",
+			[SRV_STATS_STATE_DOWN_AGENT]		= "DOWN (agent),",
+			[SRV_STATS_STATE_GOING_UP]		= "DOWN %d/%d,",
+			[SRV_STATS_STATE_UP_GOING_DOWN]		= "UP %d/%d,",
+			[SRV_STATS_STATE_UP]

[PATCH v2 0/4] MEDIUM: Enhancements to reporting of drain in stats

2015-04-22 Thread Simon Horman
Hi,

the motivation for this series is to enhance reporting of the drain state
to:

1. Only report drain state in stats if server has SRV_ADMF_DRAIN set

   The motivation is to consistently differentiate between
   state=UP,weight=0 and state=DRAIN,weight=* when reporting stats.

2. Differentiate between DRAIN and DRAIN (agent)

   The motivation here is to make DRAIN consistent with DOWN.
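For illustration, the two situations being distinguished can be produced at runtime; on versions with a runtime API the commands might look like this (socket path and backend/server names are placeholders):

```
# administratively drain: reported as state=DRAIN
echo "set server app/web1 state drain" | socat stdio /var/run/haproxy.sock

# merely zero the weight: reported as state=UP with weight 0
echo "set weight app/web1 0" | socat stdio /var/run/haproxy.sock
```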

A simpler version of this series was previously posted as "[PATCH 0/2]
Minor enhancements to drain state".  Thanks to Willy it became apparent to
me that series had some side effects in relation to the colour used to
report servers using HTML.

This series attempts to address that problem by first disentangling the
state and colour of servers in the first two patches, which are new -
arguably a worthwhile clean-up in its own right. The remaining two patches
implement the changes to reporting of the drain state, as described above.

Patches have been lightly tested.

Simon Horman (4):
  MEDIUM: Add enum srv_stats_state
  MEDIUM: Separate server state and colour in stats
  MEDIUM: Only report drain state in stats if server has SRV_ADMF_DRAIN
set
  MEDIUM: Differentiate between DRAIN and DRAIN (agent)

 src/dumpstats.c | 236 ++--
 1 file changed, 142 insertions(+), 94 deletions(-)

-- 
2.1.4




[PATCH v2 4/4] MEDIUM: Differentiate between DRAIN and DRAIN (agent)

2015-04-22 Thread Simon Horman
Differentiate between DRAIN and DRAIN (agent) when reporting stats.
This is consistent with the distinction made between DOWN and DOWN (agent).

Signed-off-by: Simon Horman 

---
v2
* Reworked to use SRV_STATS_STATE_*
---
 src/dumpstats.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/src/dumpstats.c b/src/dumpstats.c
index 5704fe4fdcf4..095371fd4ae3 100644
--- a/src/dumpstats.c
+++ b/src/dumpstats.c
@@ -2929,6 +2929,7 @@ enum srv_stats_state {
SRV_STATS_STATE_NOLB,
SRV_STATS_STATE_DRAIN_GOING_DOWN,
SRV_STATS_STATE_DRAIN,
+   SRV_STATS_STATE_DRAIN_AGENT,
SRV_STATS_STATE_NO_CHECK,
 
SRV_STATS_STATE_COUNT, /* Must be last */
@@ -2989,6 +2990,7 @@ static int stats_dump_sv_stats(struct stream_interface *si, struct proxy *px, in
 			[SRV_STATS_STATE_NOLB]			= "NOLB",
 			[SRV_STATS_STATE_DRAIN_GOING_DOWN]	= "DRAIN %d/%d ↓",
 			[SRV_STATS_STATE_DRAIN]			= "DRAIN",
+			[SRV_STATS_STATE_DRAIN_AGENT]		= "DRAIN (agent)",
 			[SRV_STATS_STATE_NO_CHECK]		= "no check",
 		};
 
@@ -3243,6 +3245,7 @@ static int stats_dump_sv_stats(struct stream_interface *si, struct proxy *px, in
 			[SRV_STATS_STATE_NOLB]			= "NOLB,",
 			[SRV_STATS_STATE_DRAIN_GOING_DOWN]	= "DRAIN %d/%d,",
 			[SRV_STATS_STATE_DRAIN]			= "DRAIN,",
+			[SRV_STATS_STATE_DRAIN_AGENT]		= "DRAIN (agent),",
 			[SRV_STATS_STATE_NO_CHECK]		= "no check,"
 		};
 
@@ -3910,7 +3913,9 @@ static int stats_dump_proxy_to_buffer(struct stream_interface *si, struct proxy
 					sv_colour = SRV_STATS_COLOUR_DRAINING;
 
 				if (sv->admin & SRV_ADMF_DRAIN) {
-					if (sv_state == SRV_STATS_STATE_UP_GOING_DOWN)
+					if (svs->agent.state & CHK_ST_ENABLED)
+						sv_state = SRV_STATS_STATE_DRAIN_AGENT;
+					else if (sv_state == SRV_STATS_STATE_UP_GOING_DOWN)
 						sv_state = SRV_STATS_STATE_DRAIN_GOING_DOWN;
 					else
 						sv_state = SRV_STATS_STATE_DRAIN;
-- 
2.1.4




[PATCH 2/2] MEDIUM: Differentiate between DRAIN and DRAIN (agent)

2015-04-08 Thread Simon Horman
Differentiate between DRAIN and DRAIN (agent) when reporting stats.
This is consistent with the distinction made between DOWN and DOWN (agent).

Signed-off-by: Simon Horman 
---
 src/dumpstats.c | 21 ++---
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/src/dumpstats.c b/src/dumpstats.c
index 9f5a4349d231..efc08ab38764 100644
--- a/src/dumpstats.c
+++ b/src/dumpstats.c
@@ -2924,7 +2924,7 @@ static int stats_dump_li_stats(struct stream_interface *si, struct proxy *px, st
  * The caller is responsible for clearing the trash if needed. Returns non-zero
  * if it emits anything, zero otherwise. The <state> parameter can take the
  * following values : 0=DOWN, 1=DOWN(agent) 2=going up, 3=going down, 4=UP, 5,6=NOLB,
- * 7,8=DRAIN, 9=unchecked.
+ * 7,8=DRAIN, 9=DRAIN(agent), 10=unchecked.
  */
 static int stats_dump_sv_stats(struct stream_interface *si, struct proxy *px, 
int flags, struct server *sv, int state)
 {
@@ -2943,7 +2943,7 @@ static int stats_dump_sv_stats(struct stream_interface *si, struct proxy *px, in
 	ref = ref->track;
 
 	if (appctx->ctx.stats.flags & STAT_FMT_HTML) {
-		static char *srv_hlt_st[10] = {
+		static char *srv_hlt_st[11] = {
 			"DOWN",
 			"DOWN (agent)",
 			"DN %d/%d ↑",
@@ -2953,6 +2953,7 @@ static int stats_dump_sv_stats(struct stream_interface *si, struct proxy *px, in
 			"NOLB",
 			"DRAIN %d/%d ↓",
 			"DRAIN",
+			"DRAIN (agent)",
 			"no check"
 		};
 
@@ -3197,7 +3198,7 @@ static int stats_dump_sv_stats(struct stream_interface *si, struct proxy *px, in
 		chunk_appendf(&trash, "-\n");
 	}
 	else { /* CSV mode */
-		static char *srv_hlt_st[10] = {
+		static char *srv_hlt_st[11] = {
 			"DOWN,",
 			"DOWN (agent),",
 			"DOWN %d/%d,",
@@ -3207,6 +3208,7 @@ static int stats_dump_sv_stats(struct stream_interface *si, struct proxy *px, in
 			"NOLB,",
 			"DRAIN %d/%d,",
 			"DRAIN,",
+			"DRAIN (agent),",
 			"no check,"
 		};
 
@@ -3864,8 +3866,9 @@ static int stats_dump_proxy_to_buffer(struct stream_interface *si, struct proxy
 *   - UP, draining, going down	=> state = 7
 *   - UP, going down		=> state = 3
 *   - UP, draining		=> state = 8
+*   - UP, draining (agent)	=> state = 9
 *   - UP, checked		=> state = 4
-*   - UP, not checked nor tracked	=> state = 9
+*   - UP, not checked nor tracked	=> state = 10
 */
 
@@ -3874,11 +3877,15 @@ static int stats_dump_proxy_to_buffer(struct stream_interface *si, struct proxy
 			else
 				sv_state = 4;
 
-			if (sv->admin & SRV_ADMF_DRAIN)
-				sv_state += 4;
+			if (sv->admin & SRV_ADMF_DRAIN) {
+				if (svs->agent.state & CHK_ST_ENABLED)
+					sv_state = 9;
+				else
+					sv_state += 4;
+			}
 
 			if (sv_state == 4 && !(svs->check.state & CHK_ST_ENABLED))
-				sv_state = 9; /* unchecked UP */
+				sv_state = 10; /* unchecked UP */
 			}
 			else if (sv->state == SRV_ST_STOPPING) {
 				if ((!(sv->check.state & CHK_ST_ENABLED) && !sv->track) ||
-- 
2.1.4




[PATCH 1/2] MEDIUM: Only report drain state in stats if server has SRV_ADMF_DRAIN set

2015-04-08 Thread Simon Horman
There are some similarities between a weight of zero and the
administratively set drain state: both allow existing connections
to continue while not accepting any new ones.

However, when reporting a server state generally a distinction is made
between state=UP,weight=0 and state=DRAIN,weight=*. This patch makes
stats reporting consistent in this regard.

Signed-off-by: Simon Horman 
---
 src/dumpstats.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/dumpstats.c b/src/dumpstats.c
index d82ce8538841..9f5a4349d231 100644
--- a/src/dumpstats.c
+++ b/src/dumpstats.c
@@ -3874,7 +3874,7 @@ static int stats_dump_proxy_to_buffer(struct 
stream_interface *si, struct proxy
else
sv_state = 4;
 
-   if (server_is_draining(sv))
+   if (sv->admin & SRV_ADMF_DRAIN)
sv_state += 4;
 
if (sv_state == 4 && !(svs->check.state & 
CHK_ST_ENABLED))
-- 
2.1.4




[PATCH 0/2] Minor enhancements to drain state

2015-04-08 Thread Simon Horman
Hi,

these two patches suggest the following enhancements to Drain.
The second patch has context dependencies on the first one.

1. Only report drain state in stats if server has SRV_ADMF_DRAIN set

   The motivation is to consistently differentiate between
   state=UP,weight=0 and state=DRAIN,weight=* when reporting stats.

2. Differentiate between DRAIN and DRAIN (agent)

   The motivation here is to make DRAIN consistent with DOWN.


Patches have been lightly tested.


Simon Horman (2):
  MEDIUM: Only report drain state in stats if server has SRV_ADMF_DRAIN
set
  MEDIUM: Differentiate between DRAIN and DRAIN (agent)

 src/dumpstats.c | 21 ++---
 1 file changed, 14 insertions(+), 7 deletions(-)

-- 
2.1.4




Re: Availability of HAProxy on Windows Server

2015-03-27 Thread Simon Dick
I'm afraid Windows isn't a supported platform, please see
http://www.haproxy.org/#plat

On 26 March 2015 at 21:38, Abhijit Damle  wrote:
> Hi,
>
>
>
> Do you have any version of HAProxy supported on Windows Server
> editions (server 2008, server 2012 etc). if so from where can I download it?
>
>
>
> Thanks and regards,
>
> Abhijit Damle
> Senior Software Engineer
> Beca
> www.beca.com
>
>
>
>
> ---
>
> NOTICE: This email, if it relates to a specific contract, is sent on behalf
> of the Beca company which entered into the contract. Please contact the
> sender if you are unsure of the contracting Beca company or visit our web
> page http://www.beca.com for further information on the Beca Group. If this
> email relates to a specific contract, by responding you agree that,
> regardless of its terms, this email and the response by you will be a valid
> communication for the purposes of that contract, and may bind the parties
> accordingly.
> This e-mail together with any attachments is confidential, may be subject to
> legal privilege and may contain proprietary information, including
> information protected by copyright. If you are not the intended recipient,
> please do not copy, use or disclose this e-mail; please notify us
> immediately by return e-mail and then delete this e-mail.
>
> ---



Re: Public Key for free Open souce install

2015-02-28 Thread Simon Dick
On 28 February 2015 at 18:28,   wrote:
> Hi,
> Where can I get a Key for free Open source installation, please let me have
> a link to it.
> (Of course, replace the tag  by the key that has been delivered
> to you.)
>
> HAProxy Technologies HAPEE public key import
>
> Packages provided by HAProxy Technologies are signed. In order to be able to
> install them, you first must import the public key:
>
> rpm --import
> http://www.haproxy.com/download/hapee/key/-common/RPM-GPG-KEY-HAProxy

At a rough guess I'd say that you get that key once you pay for the
commercial HAPEE licence; if you want free open source then you use
the haproxy from http://www.haproxy.org/ instead.



Re: [PATCH] BUG/MEDIUM: Do not consider an agent check as failed on L7 error

2015-02-25 Thread Simon Horman
On Thu, Feb 26, 2015 at 07:09:25AM +0100, Willy Tarreau wrote:
> Hi Simon,
> 
> On Thu, Feb 26, 2015 at 11:26:17AM +0900, Simon Horman wrote:
> > As failure to connect to the agent check is not sufficient to mark it as
> > failed it stands to reason that an L7 error shouldn't either.
> > 
> > Without this fix if an L7 error occurs, for example if connectivity to the
> > agent is lost immediately after establishing a connection to it, then the
> > agent check will be considered to have failed and thus may end up with zero
> > health. Once this has occurred if the primary health check also reaches
> > zero health, which is likely if connectivity to the server is lost, then
> > the server will be marked as down and not be marked as up again until a
> > successful agent check occurs regardless of the success of any primary
> > health checks.
> > 
> > This behaviour is not correct as a failed agent check should never cause a
> > server to be marked as down or by extension continue to be marked as down.
> > 
> > Signed-off-by: Simon Horman 
> 
> Makes sense, thanks. Applied to both 1.6 and 1.5.

Thanks. It seems that at the very least this problem was reproducible
by rebooting backend servers.



[PATCH] BUG/MEDIUM: Do not consider an agent check as failed on L7 error

2015-02-25 Thread Simon Horman
As failure to connect to the agent check is not sufficient to mark it as
failed it stands to reason that an L7 error shouldn't either.

Without this fix if an L7 error occurs, for example if connectivity to the
agent is lost immediately after establishing a connection to it, then the
agent check will be considered to have failed and thus may end up with zero
health. Once this has occurred if the primary health check also reaches
zero health, which is likely if connectivity to the server is lost, then
the server will be marked as down and not be marked as up again until a
successful agent check occurs regardless of the success of any primary
health checks.

This behaviour is not correct as a failed agent check should never cause a
server to be marked as down or by extension continue to be marked as down.

Signed-off-by: Simon Horman 

---

I believe that an easy way to reproduce this problem is using
a badly behaved agent that disconnects once a connection is established.
e.g. while true; do nc -l -p 12345 -q 0; done &
---
 src/checks.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/checks.c b/src/checks.c
index ea167de..f592ee5 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -268,7 +268,7 @@ static void set_server_check_status(struct check *check, 
short status, const cha
 * cause the server to be marked down.
 */
if ((!(check->state & CHK_ST_AGENT) ||
-   (check->status >= HCHK_STATUS_L7TOUT)) &&
+   (check->status >= HCHK_STATUS_L57DATA)) &&
(check->health >= check->rise)) {
s->counters.failed_checks++;
report = 1;
-- 
2.1.4




[PATCH 0/2] Allow suppression of email alerts by log level

2015-02-05 Thread Simon Horman
Hi,

This series adds a new option which allows configuration of the maximum
log level of messages for which email alerts will be sent.

The default is alert, which is more restrictive than
the current code, which sends email alerts for all priorities.
That behaviour may be configured using the new configuration
option to set the maximum level to notice or greater.

email-alert level notice


The first patch in the series provides context on which the
second patch depends.

Simon Horman (2):
  LOW: Remove trailing '.' from email alert messages
  MEDIUM: Allow suppression of email alerts by log level

 doc/configuration.txt  | 36 ++--
 include/proto/checks.h |  4 ++--
 include/types/proxy.h  |  3 +++
 src/cfgparse.c | 20 +---
 src/checks.c   |  7 ---
 src/server.c   |  6 --
 6 files changed, 60 insertions(+), 16 deletions(-)

-- 
2.1.4




[PATCH 1/2] LOW: Remove trailing '.' from email alert messages

2015-02-05 Thread Simon Horman
This removes the trailing '.' from both the header and the body of email
alerts.

The main motivation for this change is to make the format of email alerts
generated from srv_set_stopped() consistent with those generated from
set_server_check_status().

Signed-off-by: Simon Horman 
---
 src/server.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/server.c b/src/server.c
index 6acd332..4458642 100644
--- a/src/server.c
+++ b/src/server.c
@@ -255,7 +255,7 @@ void srv_set_stopped(struct server *s, const char *reason)
 "%sServer %s/%s is DOWN", s->flags & SRV_F_BACKUP ? 
"Backup " : "",
 s->proxy->id, s->id);
 
-   send_email_alert(s, "%s.", trash.str);
+   send_email_alert(s, "%s", trash.str);
srv_append_status(&trash, s, reason, xferred, 0);
Warning("%s.\n", trash.str);
 
-- 
2.1.4




[PATCH 2/2] MEDIUM: Allow suppression of email alerts by log level

2015-02-05 Thread Simon Horman
This patch adds a new option which allows configuration of the maximum
log level of messages for which email alerts will be sent.

The default is alert, which is more restrictive than
the current code, which sends email alerts for all priorities.
That behaviour may be configured using the new configuration
option to set the maximum level to notice or greater.

email-alert level notice

Signed-off-by: Simon Horman 
---
 doc/configuration.txt  | 36 ++--
 include/proto/checks.h |  4 ++--
 include/types/proxy.h  |  3 +++
 src/cfgparse.c | 20 +---
 src/checks.c   |  7 ---
 src/server.c   |  6 --
 6 files changed, 60 insertions(+), 16 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index bd2de33..9a50ef4 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -1375,6 +1375,7 @@ description   -  X
 X X
 disabled  X  X X X
 dispatch  -  - X X
 email-alert from  X  X X X
+email-alert level X  X X X
 email-alert mailers   X  X X X
 email-alert myhostnameX  X X X
 email-alert toX  X X X
@@ -2697,7 +2698,30 @@ email-alert from 
   Also requires "email-alert mailers" and "email-alert to" to be set
   and if so sending email alerts is enabled for the proxy.
 
-  See also : "email-alert mailers", "email-alert myhostname", "email-alert to",
+  See also : "email-alert level", "email-alert mailers",
+ "email-alert myhostname", "email-alert to", section 3.6 about 
mailers.
+
+
+email-alert level 
+  Declare the maximum log level of messages for which email alerts will be
+  sent. This acts as a filter on the sending of email alerts.
+  May be used in sections:defaults | frontend | listen | backend
+ yes   |yes   |   yes  |   yes
+
+  Arguments :
+
+ One of the 8 syslog levels:
+  emerg alert crit err warning notice info  debug
+The above syslog levels are ordered from lowest to highest.
+
+  By default level is alert
+
+  Also requires "email-alert from", "email-alert mailers" and
+  "email-alert to" to be set and if so sending email alerts is enabled
+  for the proxy.
+
+  See also : "email-alert from", "email-alert mailers",
+ "email-alert myhostname", "email-alert to",
  section 3.6 about mailers.
 
 
@@ -2713,8 +2737,8 @@ email-alert mailers 
   Also requires "email-alert from" and "email-alert to" to be set
   and if so sending email alerts is enabled for the proxy.
 
-  See also : "email-alert from", "email-alert myhostname", "email-alert to",
- section 3.6 about mailers.
+  See also : "email-alert from", "email-alert level", "email-alert myhostname",
+ "email-alert to", section 3.6 about mailers.
 
 
 email-alert myhostname 
@@ -2733,8 +2757,8 @@ email-alert myhostname 
   "email-alert to" to be set and if so sending email alerts is enabled
   for the proxy.
 
-  See also : "email-alert from", "email-alert mailers", "email-alert to",
- section 3.6 about mailers.
+  See also : "email-alert from", "email-alert level", "email-alert mailers",
+ "email-alert to", section 3.6 about mailers.
 
 
 email-alert to 
@@ -2750,7 +2774,7 @@ email-alert to 
   Also requires "email-alert mailers" and "email-alert to" to be set
   and if so sending email alerts is enabled for the proxy.
 
-  See also : "email-alert from", "email-alert mailers",
+  See also : "email-alert from", "email-alert level", "email-alert mailers",
  "email-alert myhostname", section 3.6 about mailers.
 
 
diff --git a/include/proto/checks.h b/include/proto/checks.h
index b4faed0..67d659f 100644
--- a/include/proto/checks.h
+++ b/include/proto/checks.h
@@ -47,8 +47,8 @@ static inline void health_adjust(struct server *s, short 
status)
 const char *init_check(struct check *check, int type);
 void free_check(struct check *check);
 
-void send_email_alert(struct server *s, const char *format, ...)
-   __attribute__ ((format(printf, 2, 3)));
+void send_email_alert(struct server *s, int priority, const char *format, ...)
+   __attribute__ ((format(printf, 3, 4)));
 #endif /* _PROTO_

Re: [PATCH/RFC 0/8] Email Alerts

2015-02-03 Thread Simon Horman
On Tue, Feb 03, 2015 at 05:13:02PM +0100, Baptiste wrote:
> On Tue, Feb 3, 2015 at 4:59 PM, Pavlos Parissis
>  wrote:
> > On 01/02/2015 03:15 μμ, Willy Tarreau wrote:
> >> Hi Simon,
> >>
> >> On Fri, Jan 30, 2015 at 11:22:52AM +0900, Simon Horman wrote:
> >>> Hi Willy, Hi All,
> >>>
> >>> the purpose of this email is to solicit feedback on an implementation
> >>> of email alerts for haproxy the design of which is based on a discussion
> >>> in this forum some months ago.
> >
> >
> > It would be great if we could use something like this
> > acl low_capacity nbsrv(foo_backend) lt 2
> > mail alert if low_capacity
> >
> > In some environments you only care to wake up the on-call sysadmin if you 
> > are
> > real troubles and not because 1-2 servers failed.
> >
> > Nice work,
> > Pavlos
> >
> 
> 
> 
> This might be doable using monitor-uri and monitor fail directives in
> a dedicated listen section which would fail if number of server in a
> monitored farm goes below a threshold.
> 
> That said, this is a dirty hack.

I agree entirely that there is a lot to be said for providing a facility
for alert suppression and escalation. To my mind the current implementation,
which internally works with a queue, lends itself to these kinds of
extensions. The key question in my mind is how to design advanced features
such as the one you have suggested in such a way that they can be useful in a
wide range of use-cases.

So far there seem to be three semi-related ideas circulating
on this list. I have added a fourth:

1. Suppressing alerts based on priority.
   e.g. Only send alerts for events whose priority is > x.

2. Combining alerts into a single message.
   e.g. If n alerts are queued up to be sent within time t
then send them in one message rather than n.

3. Escalate alerts
   e.g. Only send alerts of priority x if more than n have occurred within
time t.
   This seems to be a combination of 1 and 2.
   This may or may not involve raising the priority of the resulting combined
   alert (internally or otherwise)

   An extra qualification may be that the events need to relate to something
   common:
   e.g. servers of the same proxy
Losing one may not be bad; losing all of them I may wish
to get out of bed for.

4. Suppressing transient alerts
   e.g. I may not care if server s goes down then comes back up again
within time t.
   But I may if it keeps happening. This part seems like a variant of 3.


I expect we can grow this list of use-cases. I also think things
may become quite complex quite quickly. But it would be nice to implement
something not overly convoluted yet useful.



[PATCH] MEDIUM: Document email alerts

2015-02-02 Thread Simon Horman
Signed-off-by: Simon Horman 
---
 doc/configuration.txt | 104 ++
 1 file changed, 104 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index c829590..aa3f30f 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -1224,6 +1224,36 @@ peer  :
 server srv2 192.168.0.31:80
 
 
+3.6. Mailers
+
+It is possible to send email alerts when the state of servers changes.
+If configured email alerts are sent to each mailer that is configured
+in a mailers section. Email is sent to mailers using SMTP.
+
+mailer 
+  Creates a new mailer list with the name . It is an
+  independent section which is referenced by one or more proxies.
+
+mailer  :
+  Defines a mailer inside a mailers section.
+
+  Example:
+mailers mymailers
+mailer smtp1 192.168.0.1:587
+mailer smtp2 192.168.0.2:587
+
+backend mybackend
+mode tcp
+balance roundrobin
+
+email-alert mailers mymailers
+email-alert from te...@horms.org
+email-alert to te...@horms.org
+
+server srv1 192.168.0.30:80
+server srv2 192.168.0.31:80
+
+
 4. Proxies
 --
 
@@ -1344,6 +1374,10 @@ default_backend   X  X   
  X -
 description   -  X X X
 disabled  X  X X X
 dispatch  -  - X X
+email-alert from  X  X X X
+email-alert mailers   X  X X X
+email-alert myhostnameX  X X X
+email-alert toX  X X X
 enabled   X  X X X
 errorfile X  X X X
 errorloc  X  X X X
@@ -2650,6 +2684,76 @@ errorloc303  
   See also : "errorfile", "errorloc", "errorloc302"
 
 
+email-alert from 
+  Declare the from email address to be used in both the envelope and header
+  of email alerts.  This is the address that email alerts are sent from.
+  May be used in sections:defaults | frontend | listen | backend
+ yes   |yes   |   yes  |   yes
+
+  Arguments :
+
+ is the from email address to use when sending email alerts
+
+  Also requires "email-alert mailers" and "email-alert to" to be set
+  and if so sending email alerts is enabled for the proxy.
+
+  See also : "email-alert mailers", "email-alert myhostname", "email-alert to",
+ section 3.6 about mailers.
+
+
+email-alert mailers 
+  Declare the mailers to be used when sending email alerts
+  May be used in sections:defaults | frontend | listen | backend
+ yes   |yes   |   yes  |   yes
+
+  Arguments :
+
+ is the name of the mailers section to send email alerts.
+
+  Also requires "email-alert from" and "email-alert to" to be set
+  and if so sending email alerts is enabled for the proxy.
+
+  See also : "email-alert from", "email-alert myhostname", "email-alert to",
+ section 3.6 about mailers.
+
+
+email-alert myhostname 
+  Declare the to hostname address to be used when communicating with
+  mailers.
+  May be used in sections:defaults | frontend | listen | backend
+ yes   |yes   |   yes  |   yes
+
+  Arguments :
+
+ is the to email address to use when sending email alerts
+
+  By default the systems hostname is used.
+
+  Also requires "email-alert from", "email-alert mailers" and
+  "email-alert to" to be set and if so sending email alerts is enabled
+  for the proxy.
+
+  See also : "email-alert from", "email-alert mailers", "email-alert to",
+ section 3.6 about mailers.
+
+
+email-alert to 
+  Declare both the recipient address in the envelope and to address in the
+  header of email alerts. This is the address that email alerts are sent to.
+  May be used in sections:defaults | frontend | listen | backend
+ yes   |yes   |   yes  |   yes
+
+  Arguments :
+
+ is the to email address to use when sending email alerts
+
+  Also requires "email-alert mailers" and "email-alert to" to be set
+  and if so sending email alerts is enabled for the proxy.
+
+  See also : "email-alert from", "email-alert mailers",
+ "email-alert myhostname", section 3.6 about mailers.
+
+
 force-persist { if | unless } 
   Declare a condition to force persistence on down servers
   May be used in sections:defaults | frontend | listen | backend
-- 
2.1.4




Re: [PATCH/RFC 0/8] Email Alerts

2015-02-01 Thread Simon Horman
Hi Willy,

On Sun, Feb 01, 2015 at 03:15:27PM +0100, Willy Tarreau wrote:
> Hi Simon,
> 
> On Fri, Jan 30, 2015 at 11:22:52AM +0900, Simon Horman wrote:
> > Hi Willy, Hi All,
> > 
> > the purpose of this email is to solicit feedback on an implementation
> > of email alerts for haproxy the design of which is based on a discussion
> > in this forum some months ago.
> 
> Thanks for working on this!
> 
> (...)
> > Internally the code makes use of a dummy check using the tcpcheck
> > code to set up a simple sequence of expect/send rules. This,
> > along with the notion of mailers, is my interpretation of the
> > earlier discussion in this forum.
> 
> Yes I have some memories about this discussion.
> 
> > The current code has only been lightly tested and has a number of
> > limitations including:
> > 
> > * No options to control sending email alerts based on the event
> >   that has occurred. That is, its probably way to noisy for most
> >   people's tastes.
> 
> Most likely, yes. I think that having a minimum level, just like
> we do for logs, would solve that problem, given that messages are
> arranged by log levels to avoid that verbosity as well. Thus maybe
> having a statement like "email-alert level " would
> elegantly solve this.

Yes, I agree. That seems like a nice solution.

> > * No options to configure the format of the email alerts
> 
> You know, even if we make this format very flexible, some users
> will complain that they cannot send it in html and attach graphs :-)

Haha, yes indeed.

One reason I chose not to tackle that problem at all was
that it seemed to have some similarity to Pandora's box.

The best idea I have had so far is to allow a template, with some
meta-fields that are filled in at run-time; e.g. %p might mean "insert the
name of the proxy here", and all other text would be treated as literals.
This may be flexible enough for many use-cases, though as you point out,
it's hard to cover all the bases.

I'd also be happy to let this be and see what requests people have
to enhance or configure the messages.

> > * No options to configure delays associated with the checks
> >   used to send alerts.
> 
> Maybe an improvement could be to later implement a small queue
> and group messages that are too close together. It would require
> a timeout as well so that queued messages are sent once the max
> aggregation delay is elapsed.

Do you mean include multiple alerts in a single email message?

If so, yes that sounds reasonable to me and it is something that
had crossed my mind while working on this series. I suspect it would
not be terribly difficult to implement on top of this series.

> > * No Documentation

This one is not so difficult to fix and I will see about doing so.

> > * No support for STLS. This one will be a little tricky.
> 
> I don't think it is a big problem. Users may very well route to
> the local relay if they absolutely don't want to send a clear-text
> alert over a shared infrastructure. But given that they are the
> same people who read their e-mails from their smartphone without
> ever entering a password, I'd claim that there are a lot of things
> to improve on their side before STLS becomes their first reasonable
> concern.

Yes, I agree. I think it would be nice to have. But I also think
it is best classified as future work.

> > Again the purpose is to solicit feedback on the code so far,
> > in particular the design, before delving into further implementation 
> > details.
> 
> I've reviewed the whole patch set and have nothing to say about
> it, it's clean and does what you explained here. I feel like you're
> not very satisfied with abusing the check infrastructure to send
> an e-mail, but I wouldn't worry about that now given that nowhere
> in the configuration these checks are visible, so once we find it
> easier to handle the mailers using a regular task managing its own
> session, it shouldn't be that hard to convert the code.

I was a little apprehensive about reusing checks in this way.
But the code turned out to be a lot cleaner than I expected,
and I am now comfortable with this approach.

> So for now I'm totally positive about your work. Please tell me
> if you want me to merge it now in case that makes things easier
> for you to continue. Some breakage is expected to happen soon on
> the buffer management, requiring some painful rebases, so at least
> we'll have to sync on this to limit the painful work.

Thanks for your positive review.

I would like to take you up on your offer and ask for the code to be merged.
I'll see about working on some of the lower hanging fruit discussed
above. Especially providing some documentation.



[PATCH/RFC 8/8] MEDIUM: Support sending email alerts

2015-01-31 Thread Simon Horman
Signed-off-by: Simon Horman 
---
 include/proto/checks.h |   2 +
 include/types/checks.h |   2 +-
 include/types/proxy.h  |  18 ++-
 src/cfgparse.c |  26 ++--
 src/checks.c   | 321 +
 src/server.c   |   1 +
 6 files changed, 356 insertions(+), 14 deletions(-)

diff --git a/include/proto/checks.h b/include/proto/checks.h
index 24dec79..b4faed0 100644
--- a/include/proto/checks.h
+++ b/include/proto/checks.h
@@ -47,6 +47,8 @@ static inline void health_adjust(struct server *s, short 
status)
 const char *init_check(struct check *check, int type);
 void free_check(struct check *check);
 
+void send_email_alert(struct server *s, const char *format, ...)
+   __attribute__ ((format(printf, 2, 3)));
 #endif /* _PROTO_CHECKS_H */
 
 /*
diff --git a/include/types/checks.h b/include/types/checks.h
index 8162a06..4b35d30 100644
--- a/include/types/checks.h
+++ b/include/types/checks.h
@@ -181,7 +181,7 @@ struct check {
char **envp;/* the environment to use if 
running a process-based check */
struct pid_list *curpid;/* entry in pid_list used for 
current process-based test, or -1 if not in test */
struct protocol *proto; /* server address protocol for 
health checks */
-   struct sockaddr_storage addr;   /* the address to check, if 
different from  */
+   struct sockaddr_storage addr;   /* the address to check */
 };
 
 struct check_status {
diff --git a/include/types/proxy.h b/include/types/proxy.h
index 72d1024..230b804 100644
--- a/include/types/proxy.h
+++ b/include/types/proxy.h
@@ -208,6 +208,19 @@ struct error_snapshot {
char buf[BUFSIZE];  /* copy of the beginning of the message 
*/
 };
 
+struct email_alert {
+   struct list list;
+   struct list tcpcheck_rules;
+};
+
+struct email_alertq {
+   struct list email_alerts;
+   struct check check; /* Email alerts are implemented using 
existing check
+* code even though they are not 
checks. This structure
+* is as a parameter to the check code.
+* Each check corresponds to a mailer */
+};
+
 struct proxy {
enum obj_type obj_type; /* object type == 
OBJ_TYPE_PROXY */
enum pr_state state;/* proxy state, one of PR_* */
@@ -386,9 +399,10 @@ struct proxy {
struct mailers *m;  /* Mailer to send email alerts 
via */
char *name;
} mailers;
-   char *from; /* Address to send email 
allerts from */
-   char *to;   /* Address(es) to send email 
allerts to */
+   char *from; /* Address to send email alerts 
from */
+   char *to;   /* Address(es) to send email 
alerts to */
char *myhostname;   /* Identity to use in HELO 
command sent to mailer */
+   struct email_alertq *queues;/* per-mailer alerts queues */
} email_alert;
 };
 
diff --git a/src/cfgparse.c b/src/cfgparse.c
index de94074..3af0449 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -2019,8 +2019,8 @@ int cfg_parse_mailers(const char *file, int linenum, char 
**args, int kwm)
}
 
proto = protocol_by_family(sk->ss_family);
-   if (!proto || !proto->connect) {
-   Alert("parsing [%s:%d] : '%s %s' : connect() not 
supported for this address family.\n",
+   if (!proto || !proto->connect || proto->sock_prot != 
IPPROTO_TCP) {
+   Alert("parsing [%s:%d] : '%s %s' : TCP not supported 
for this address family.\n",
  file, linenum, args[0], args[1]);
err_code |= ERR_ALERT | ERR_FATAL;
goto out;
@@ -6607,15 +6607,19 @@ int check_config_validity()
}
}
 
-   if (
-   (curproxy->email_alert.mailers.name || 
curproxy->email_alert.from || curproxy->email_alert.myhostname || 
curproxy->email_alert.to) &&
-   !(curproxy->email_alert.mailers.name && 
curproxy->email_alert.from && curproxy->email_alert.to)) {
-   Warning("config : 'email-alert' will be ignored for %s 
'%s' (the presence any of "
-   "'email-alert from', 'email-alert mailer', 
'email-alert hostname' or 'email-alert to' requrires each of"
-   "'email-alert from', 'email-alert mailer' and 


[PATCH/RFC 0/8] Email Alerts

2015-01-31 Thread Simon Horman
Hi Willy, Hi All,

the purpose of this email is to solicit feedback on an implementation
of email alerts for haproxy, the design of which is based on a discussion
in this forum some months ago.


This patchset allows configuration of mailers. These are like
the existing haproxy concept of peers. But for sending email alerts.

It also allows proxies to configure sending email alerts to mailers.

An example haproxy.cfg snippet is as follows:

listen VIP_Name
...
server RIP_Name ...
email-alert mailers some_mailers
email-alert from te...@horms.org
email-alert to te...@horms.org

mailers some_mailers
mailer us 127.0.0.1:25
mailer them 192.168.0.1:587


And an example email alert is as follows:

---- start ----
Date: Wed, 28 Jan 2015 17:23:21 +0900 (JST)
From: te...@horms.org
To: te...@horms.org
Subject: [HAproxy Alert] Server VIP_Name/RIP_Name is DOWN.

Server VIP_Name/RIP_Name is DOWN.
---- end ----


Internally the code makes use of a dummy check using the tcpcheck
code to set up a simple sequence of expect/send rules. This,
along with the notion of mailers, is my interpretation of the
earlier discussion in this forum.

The current code has only been lightly tested and has a number of
limitations including:

* No options to control sending email alerts based on the event
  that has occurred. That is, it's probably way too noisy for most
  people's tastes.
* No options to configure the format of the email alerts
* No options to configure delays associated with the checks
  used to send alerts.
* No Documentation
* No support for STLS. This one will be a little tricky.

Again the purpose is to solicit feedback on the code so far,
in particular the design, before delving into further implementation details.


For reference this patchset is available in git
https://github.com/horms/haproxy devel/email-alert

This patchset is based on the current mainline master branch:
the 1.6 development branch. The current head commit there is
32602d2 ("BUG/MINOR: checks: prevent http keep-alive with http-check expect").


Simon Horman (8):
  MEDIUM: Remove connect_chk
  MEDIUM: Refactor init_check and move to checks.c
  MEDIUM: Add free_check() helper
  MEDIUM: Move proto and addr fields struct check
  MEDIUM: Attach tcpcheck_rules to check
  MEDIUM: Add parsing of mailers section
  MEDIUM: Allow configuration of email alerts
  MEDIUM: Support sending email alerts

 Makefile|   2 +-
 include/proto/checks.h  |   5 +
 include/types/checks.h  |   3 +
 include/types/mailers.h |  65 
 include/types/proxy.h   |  24 +++
 include/types/server.h  |   5 -
 src/cfgparse.c  | 290 
 src/checks.c| 427 ++--
 src/mailers.c   |  17 ++
 src/server.c|  60 ++-
 10 files changed, 803 insertions(+), 95 deletions(-)
 create mode 100644 include/types/mailers.h
 create mode 100644 src/mailers.c

-- 
2.1.4




[PATCH/RFC 1/8] MEDIUM: Remove connect_chk

2015-01-31 Thread Simon Horman
Remove connect_chk and instead call connect_proc_chk()
and connect_conn_chk(). There no longer seems to be any
value in having a wrapper function here.

Signed-off-by: Simon Horman 
---
 src/checks.c | 25 ++---
 1 file changed, 2 insertions(+), 23 deletions(-)

diff --git a/src/checks.c b/src/checks.c
index 1b5b731..321fe34 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -1824,27 +1824,6 @@ out:
 }
 
 /*
- * establish a server health-check.
- *
- * It can return one of :
- *  - SN_ERR_NONE if everything's OK
- *  - SN_ERR_SRVTO if there are no more servers
- *  - SN_ERR_SRVCL if the connection was refused by the server
- *  - SN_ERR_PRXCOND if the connection has been limited by the proxy (maxconn)
- *  - SN_ERR_RESOURCE if a system resource is lacking (eg: fd limits, ports, 
...)
- *  - SN_ERR_INTERNAL for any other purely internal errors
- * Additionnally, in the case of SN_ERR_RESOURCE, an emergency log will be 
emitted.
- */
-static int connect_chk(struct task *t)
-{
-   struct check *check = t->context;
-
-   if (check->type == PR_O2_EXT_CHK)
-   return connect_proc_chk(t);
-   return connect_conn_chk(t);
-}
-
-/*
  * manages a server health-check that uses a process. Returns
  * the time the task accepts to wait, or TIME_ETERNITY for infinity.
  */
@@ -1875,7 +1854,7 @@ static struct task *process_chk_proc(struct task *t)
 
check->state |= CHK_ST_INPROGRESS;
 
-   ret = connect_chk(t);
+   ret = connect_proc_chk(t);
switch (ret) {
case SN_ERR_UP:
return t;
@@ -2018,7 +1997,7 @@ static struct task *process_chk_conn(struct task *t)
check->bo->p = check->bo->data;
check->bo->o = 0;
 
-   ret = connect_chk(t);
+   ret = connect_conn_chk(t);
switch (ret) {
case SN_ERR_UP:
return t;
-- 
2.1.4




[PATCH/RFC 3/8] MEDIUM: Add free_check() helper

2015-01-31 Thread Simon Horman
Add free_check() helper to free the memory allocated by init_check().

Signed-off-by: Simon Horman 
---
 include/proto/checks.h | 1 +
 src/checks.c   | 7 +++
 2 files changed, 8 insertions(+)

diff --git a/include/proto/checks.h b/include/proto/checks.h
index 1e65652..24dec79 100644
--- a/include/proto/checks.h
+++ b/include/proto/checks.h
@@ -45,6 +45,7 @@ static inline void health_adjust(struct server *s, short 
status)
 }
 
 const char *init_check(struct check *check, int type);
+void free_check(struct check *check);
 
 #endif /* _PROTO_CHECKS_H */
 
diff --git a/src/checks.c b/src/checks.c
index ae981f8..b2f89a5 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -2807,6 +2807,13 @@ const char *init_check(struct check *check, int type)
return NULL;
 }
 
+void free_check(struct check *check)
+{
+   free(check->bi);
+   free(check->bo);
+   free(check->conn);
+}
+
 
 /*
  * Local variables:
-- 
2.1.4




[PATCH/RFC 4/8] MEDIUM: Move proto and addr fields struct check

2015-01-31 Thread Simon Horman
The motivation for this is to make checks more independent of each
other to allow further reuse of their infrastructure.

For now server->check and server->agent still always use the same values
for the addr and proto fields, so this patch should not introduce any
behavioural changes.

Signed-off-by: Simon Horman 
---
 include/types/checks.h |  2 ++
 include/types/server.h |  5 -
 src/checks.c   | 14 +++---
 src/server.c   | 14 +++---
 4 files changed, 16 insertions(+), 19 deletions(-)

diff --git a/include/types/checks.h b/include/types/checks.h
index 831166e..04d79c4 100644
--- a/include/types/checks.h
+++ b/include/types/checks.h
@@ -179,6 +179,8 @@ struct check {
char **argv;/* the arguments to use if 
running a process-based check */
char **envp;/* the environment to use if 
running a process-based check */
struct pid_list *curpid;/* entry in pid_list used for 
current process-based test, or -1 if not in test */
+   struct protocol *proto; /* server address protocol for 
health checks */
+   struct sockaddr_storage addr;   /* the address to check, if 
different from <addr> */
 };
 
 struct check_status {
diff --git a/include/types/server.h b/include/types/server.h
index 1cabb83..4f97e17 100644
--- a/include/types/server.h
+++ b/include/types/server.h
@@ -202,11 +202,6 @@ struct server {
 
int puid;   /* proxy-unique server ID, used 
for SNMP, and "first" LB algo */
 
-   struct {/* configuration  used by 
health-check and agent-check */
-   struct protocol *proto; /* server address protocol for 
health checks */
-   struct sockaddr_storage addr;   /* the address to check, if 
different from <addr> */
-   } check_common;
-
struct check check; /* health-check specific 
configuration */
struct check agent; /* agent specific configuration 
*/
 
diff --git a/src/checks.c b/src/checks.c
index b2f89a5..6624714 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -1437,18 +1437,18 @@ static int connect_conn_chk(struct task *t)
 
/* prepare a new connection */
conn_init(conn);
-   conn_prepare(conn, s->check_common.proto, check->xprt);
+   conn_prepare(conn, check->proto, check->xprt);
conn_attach(conn, check, &check_conn_cb);
conn->target = &s->obj_type;
 
/* no client address */
clear_addr(&conn->addr.from);
 
-   if (is_addr(&s->check_common.addr)) {
+   if (is_addr(&check->addr)) {
 
/* we'll connect to the check addr specified on the server */
-   conn->addr.to = s->check_common.addr;
-   proto = s->check_common.proto;
+   conn->addr.to = check->addr;
+   proto = check->proto;
}
else {
/* we'll connect to the addr on the server */
@@ -2498,10 +2498,10 @@ static void tcpcheck_main(struct connection *conn)
/* no client address */
clear_addr(&conn->addr.from);
 
-   if (is_addr(&s->check_common.addr)) {
+   if (is_addr(&check->addr)) {
/* we'll connect to the check addr specified on 
the server */
-   conn->addr.to = s->check_common.addr;
-   proto = s->check_common.proto;
+   conn->addr.to = check->addr;
+   proto = check->proto;
}
else {
/* we'll connect to the addr on the server */
diff --git a/src/server.c b/src/server.c
index 8554e75..31f319b 100644
--- a/src/server.c
+++ b/src/server.c
@@ -901,7 +901,7 @@ int parse_server(const char *file, int linenum, char 
**args, struct proxy *curpr
}
 
newsrv->addr = *sk;
-   newsrv->proto = newsrv->check_common.proto = 
protocol_by_family(newsrv->addr.ss_family);
+   newsrv->proto = newsrv->check.proto = 
newsrv->agent.proto = protocol_by_family(newsrv->addr.ss_family);
newsrv->xprt  = newsrv->check.xprt = newsrv->agent.xprt 
= &raw_sock;
 
if (!newsrv->proto) {
@@ -1109,8 +1109,8 @@ int parse_server(const char *file, int linenum, char 
**args, struct proxy *curpr
goto out;
}
 
-   newsrv->check_common.addr = *sk;
-   newsrv->check_common.proto = 
p

[PATCH/RFC 6/8] MEDIUM: Add parsing of mailers section

2015-01-31 Thread Simon Horman
Add mailer and mailers structures and allow parsing of
a mailers section into those structures.

These structures will subsequently be freed as it is
not yet possible to reference them from the configuration.

Signed-off-by: Simon Horman 
---
 Makefile|   2 +-
 include/types/mailers.h |  65 ++
 src/cfgparse.c  | 177 
 src/mailers.c   |  17 +
 4 files changed, 260 insertions(+), 1 deletion(-)
 create mode 100644 include/types/mailers.h
 create mode 100644 src/mailers.c

diff --git a/Makefile b/Makefile
index 4671759..4e3e166 100644
--- a/Makefile
+++ b/Makefile
@@ -664,7 +664,7 @@ OBJS = src/haproxy.o src/sessionhash.o src/base64.o 
src/protocol.o \
src/session.o src/hdr_idx.o src/ev_select.o src/signal.o \
src/acl.o src/sample.o src/memory.o src/freq_ctr.o src/auth.o \
src/compression.o src/payload.o src/hash.o src/pattern.o src/map.o \
-   src/namespace.o
+   src/namespace.o src/mailers.o
 
 EBTREE_OBJS = $(EBTREE_DIR)/ebtree.o \
   $(EBTREE_DIR)/eb32tree.o $(EBTREE_DIR)/eb64tree.o \
diff --git a/include/types/mailers.h b/include/types/mailers.h
new file mode 100644
index 000..582bb94
--- /dev/null
+++ b/include/types/mailers.h
@@ -0,0 +1,65 @@
+/*
+ * include/types/mailer.h
+ * This file defines everything related to mailer.
+ *
+ * Copyright 2015 Horms Solutions Ltd., Simon Horman 
+ *
+ * Based on include/types/peers.h
+ *
+ * Copyright 2010 EXCELIANCE, Emeric Brun 
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation, version 2.1
+ * exclusively.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  
USA
+ */
+
+#ifndef _TYPES_EMAIL_ALERT_H
+#define _TYPES_EMAIL_ALERT_H
+
+#include 
+#include 
+#include 
+#include 
+
+struct mailer {
+   char *id;
+   struct mailers *mailers;
+   struct {
+   const char *file;   /* file where the section appears */
+   int line;   /* line where the section appears */
+   } conf; /* config information */
+   struct sockaddr_storage addr;   /* SMTP server address */
+   struct protocol *proto; /* SMTP server address's protocol */
+   struct xprt_ops *xprt;  /* SMTP server socket operations at 
transport layer */
+   void *sock_init_arg;/* socket operations's opaque init 
argument if needed */
+   struct mailer *next;/* next mailer in the list */
+};
+
+
+struct mailers {
+   char *id;   /* mailers section name */
+   struct mailer *mailer_list; /* mailers in this mailers section */
+   struct {
+   const char *file;   /* file where the section appears */
+   int line;   /* line where the section appears */
+   } conf; /* config information */
+   struct mailers *next;   /* next mailers section */
+   int count;  /* total number of mailers in this 
mailers section */
+   int users;  /* number of users of this mailers 
section */
+};
+
+
+extern struct mailers *mailers;
+
+#endif /* _TYPES_EMAIL_ALERT_H */
+
diff --git a/src/cfgparse.c b/src/cfgparse.c
index c5f20a3..2db5ed1 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -48,6 +48,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1916,6 +1917,145 @@ out:
return err_code;
 }
 
+
+/*
+ * Parse a line in a , ,  or  section.
+ * Returns the error code, 0 if OK, or any combination of :
+ *  - ERR_ABORT: must abort ASAP
+ *  - ERR_FATAL: we can continue parsing but not start the service
+ *  - ERR_WARN: a warning has been emitted
+ *  - ERR_ALERT: an alert has been emitted
+ * Only the two first ones can stop processing, the two others are just
+ * indicators.
+ */
+int cfg_parse_mailers(const char *file, int linenum, char **args, int kwm)
+{
+   static struct mailers *curmailers = NULL;
+   struct mailer *newmailer = NULL;
+   const char *err;
+   int err_code = 0;
+   char *errmsg = NULL;
+
+   if (strcmp(args[0], "mailers") == 0) { /* new mailers section */
+   if (!*args[1]) {
+   Alert("parsing [%s:%d] : missing name for mailers 
section.\n", file, linenum);
+   err

[PATCH/RFC 5/8] MEDIUM: Attach tcpcheck_rules to check

2015-01-31 Thread Simon Horman
This is to allow checks to be established whose tcpcheck_rules
are not those of its proxy.

Signed-off-by: Simon Horman 
---
 include/types/checks.h |  1 +
 src/checks.c   | 34 +-
 src/server.c   |  1 +
 3 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/include/types/checks.h b/include/types/checks.h
index 04d79c4..8162a06 100644
--- a/include/types/checks.h
+++ b/include/types/checks.h
@@ -166,6 +166,7 @@ struct check {
char desc[HCHK_DESC_LEN];   /* health check description */
int use_ssl;/* use SSL for health checks */
int send_proxy; /* send a PROXY protocol header 
with checks */
+   struct list *tcpcheck_rules;/* tcp-check send / expect 
rules */
struct tcpcheck_rule *current_step; /* current step when using 
tcpcheck */
struct tcpcheck_rule *last_started_step;/* pointer to latest tcpcheck 
rule started */
int inter, fastinter, downinter;/* checks: time in milliseconds 
*/
diff --git a/src/checks.c b/src/checks.c
index 6624714..0f99d47 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -60,7 +60,7 @@
 #include 
 
 static int httpchk_expect(struct server *s, int done);
-static int tcpcheck_get_step_id(struct server *);
+static int tcpcheck_get_step_id(struct check *);
 static void tcpcheck_main(struct connection *);
 
 static const struct check_status check_statuses[HCHK_STATUS_SIZE] = {
@@ -620,7 +620,7 @@ static void chk_report_conn_err(struct connection *conn, 
int errno_bck, int expi
chk = get_trash_chunk();
 
if (check->type == PR_O2_TCPCHK_CHK) {
-   step = tcpcheck_get_step_id(check->server);
+   step = tcpcheck_get_step_id(check);
if (!step)
chunk_printf(chk, " at initial connection step of 
tcp-check");
else {
@@ -1463,8 +1463,8 @@ static int connect_conn_chk(struct task *t)
/* only plain tcp-check supports quick ACK */
quickack = check->type == 0 || check->type == PR_O2_TCPCHK_CHK;
 
-   if (check->type == PR_O2_TCPCHK_CHK && 
!LIST_ISEMPTY(&s->proxy->tcpcheck_rules)) {
-   struct tcpcheck_rule *r = (struct tcpcheck_rule *) 
s->proxy->tcpcheck_rules.n;
+   if (check->type == PR_O2_TCPCHK_CHK && 
!LIST_ISEMPTY(check->tcpcheck_rules)) {
+   struct tcpcheck_rule *r = (struct tcpcheck_rule *) 
check->tcpcheck_rules->n;
/* if first step is a 'connect', then tcpcheck_main must run it 
*/
if (r->action == TCPCHK_ACT_CONNECT) {
tcpcheck_main(conn);
@@ -2351,23 +2351,23 @@ static int httpchk_expect(struct server *s, int done)
 /*
  * return the id of a step in a send/expect session
  */
-static int tcpcheck_get_step_id(struct server *s)
+static int tcpcheck_get_step_id(struct check *check)
 {
struct tcpcheck_rule *cur = NULL, *next = NULL;
int i = 0;
 
/* not even started anything yet => step 0 = initial connect */
-   if (!s->check.current_step)
+   if (!check->current_step)
return 0;
 
-   cur = s->check.last_started_step;
+   cur = check->last_started_step;
 
/* no step => first step */
if (cur == NULL)
return 1;
 
/* increment i until current step */
-   list_for_each_entry(next, &s->proxy->tcpcheck_rules, list) {
+   list_for_each_entry(next, check->tcpcheck_rules, list) {
if (next->list.p == &cur->list)
break;
++i;
@@ -2384,7 +2384,7 @@ static void tcpcheck_main(struct connection *conn)
struct check *check = conn->owner;
struct server *s = check->server;
struct task *t = check->task;
-   struct list *head = &s->proxy->tcpcheck_rules;
+   struct list *head = check->tcpcheck_rules;
 
/* here, we know that the check is complete or that it failed */
if (check->result != CHK_RES_UNKNOWN)
@@ -2565,14 +2565,14 @@ static void tcpcheck_main(struct connection *conn)
case SN_ERR_SRVTO: /* ETIMEDOUT */
case SN_ERR_SRVCL: /* ECONNREFUSED, ENETUNREACH, ... */
chunk_printf(&trash, "TCPCHK error establishing 
connection at step %d: %s",
-   tcpcheck_get_step_id(s), 
strerror(errno));
+   tcpcheck_get_step_id(check), 
strerror(errno));
set_server_check_status(check, 
HCHK_STATUS_L4CON, trash.str);
goto out_end_tcpcheck;
case SN_ERR_PRXCOND:
case SN_

[PATCH/RFC 2/8] MEDIUM: Refactor init_check and move to checks.c

2015-01-31 Thread Simon Horman
Refactor init_check so that an error string is returned
rather than alerts being printed by it. Also move
init_check to checks.c and provide a prototype to allow
it to be used from multiple C files.

Signed-off-by: Simon Horman 
---
 include/proto/checks.h |  2 ++
 src/checks.c   | 26 ++
 src/server.c   | 44 +---
 3 files changed, 37 insertions(+), 35 deletions(-)

diff --git a/include/proto/checks.h b/include/proto/checks.h
index f3d4fa6..1e65652 100644
--- a/include/proto/checks.h
+++ b/include/proto/checks.h
@@ -44,6 +44,8 @@ static inline void health_adjust(struct server *s, short 
status)
return __health_adjust(s, status);
 }
 
+const char *init_check(struct check *check, int type);
+
 #endif /* _PROTO_CHECKS_H */
 
 /*
diff --git a/src/checks.c b/src/checks.c
index 321fe34..ae981f8 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -2781,6 +2781,32 @@ static void tcpcheck_main(struct connection *conn)
return;
 }
 
+const char *init_check(struct check *check, int type)
+{
+   check->type = type;
+
+   /* Allocate buffer for requests... */
+   if ((check->bi = calloc(sizeof(struct buffer) + global.tune.chksize, 
sizeof(char))) == NULL) {
+   return "out of memory while allocating check buffer";
+   }
+   check->bi->size = global.tune.chksize;
+
+   /* Allocate buffer for responses... */
+   if ((check->bo = calloc(sizeof(struct buffer) + global.tune.chksize, 
sizeof(char))) == NULL) {
+   return "out of memory while allocating check buffer";
+   }
+   check->bo->size = global.tune.chksize;
+
+   /* Allocate buffer for partial results... */
+   if ((check->conn = calloc(1, sizeof(struct connection))) == NULL) {
+   return "out of memory while allocating check connection";
+   }
+
+   check->conn->t.sock.fd = -1; /* no agent in progress yet */
+
+   return NULL;
+}
+
 
 /*
  * Local variables:
diff --git a/src/server.c b/src/server.c
index b19ebbe..8554e75 100644
--- a/src/server.c
+++ b/src/server.c
@@ -21,6 +21,7 @@
 
 #include 
 
+#include 
 #include 
 #include 
 #include 
@@ -796,35 +797,6 @@ const char *server_parse_weight_change_request(struct 
server *sv,
return NULL;
 }
 
-static int init_check(struct check *check, int type, const char * file, int 
linenum)
-{
-   check->type = type;
-
-   /* Allocate buffer for requests... */
-   if ((check->bi = calloc(sizeof(struct buffer) + global.tune.chksize, 
sizeof(char))) == NULL) {
-   Alert("parsing [%s:%d] : out of memory while allocating check 
buffer.\n", file, linenum);
-   return ERR_ALERT | ERR_ABORT;
-   }
-   check->bi->size = global.tune.chksize;
-
-   /* Allocate buffer for responses... */
-   if ((check->bo = calloc(sizeof(struct buffer) + global.tune.chksize, 
sizeof(char))) == NULL) {
-   Alert("parsing [%s:%d] : out of memory while allocating check 
buffer.\n", file, linenum);
-   return ERR_ALERT | ERR_ABORT;
-   }
-   check->bo->size = global.tune.chksize;
-
-   /* Allocate buffer for partial results... */
-   if ((check->conn = calloc(1, sizeof(struct connection))) == NULL) {
-   Alert("parsing [%s:%d] : out of memory while allocating check 
connection.\n", file, linenum);
-   return ERR_ALERT | ERR_ABORT;
-   }
-
-   check->conn->t.sock.fd = -1; /* no agent in progress yet */
-
-   return 0;
-}
-
 int parse_server(const char *file, int linenum, char **args, struct proxy 
*curproxy, struct proxy *defproxy)
 {
struct server *newsrv = NULL;
@@ -1592,7 +1564,7 @@ int parse_server(const char *file, int linenum, char 
**args, struct proxy *curpr
}
 
if (do_check) {
-   int ret;
+   const char *ret;
 
if (newsrv->trackit) {
Alert("parsing [%s:%d]: unable to enable checks 
and tracking at the same time!\n",
@@ -1671,9 +1643,10 @@ int parse_server(const char *file, int linenum, char 
**args, struct proxy *curpr
}
 
/* note: check type will be set during the config 
review phase */
-   ret = init_check(&newsrv->check, 0, file, linenum);
+   ret = init_check(&newsrv->check, 0);
if (ret) {
-   err_code |= ret;
+   Alert("parsing [%s:%d] : %s.\n", file, linenum, 
ret);
+   err_code |= ERR_ALERT | ERR_ABORT;
goto out;
}
 
@@ -1681,7 +1654,7 @@ int parse_server(const char *file, int linenum, c

[PATCH/RFC 7/8] MEDIUM: Allow configuration of email alerts

2015-01-31 Thread Simon Horman
This currently does nothing beyond parsing the configuration
and storing it in the proxy, as there is no implementation of email alerts yet.

Signed-off-by: Simon Horman 
---
 include/types/mailers.h |   6 +--
 include/types/proxy.h   |  10 +
 src/cfgparse.c  | 109 
 3 files changed, 122 insertions(+), 3 deletions(-)

diff --git a/include/types/mailers.h b/include/types/mailers.h
index 582bb94..07374a7 100644
--- a/include/types/mailers.h
+++ b/include/types/mailers.h
@@ -23,8 +23,8 @@
  * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  
USA
  */
 
-#ifndef _TYPES_EMAIL_ALERT_H
-#define _TYPES_EMAIL_ALERT_H
+#ifndef _TYPES_MAILERS_H
+#define _TYPES_MAILERS_H
 
 #include 
 #include 
@@ -61,5 +61,5 @@ struct mailers {
 
 extern struct mailers *mailers;
 
-#endif /* _TYPES_EMAIL_ALERT_H */
+#endif /* _TYPES_MAILERS_H */
 
diff --git a/include/types/proxy.h b/include/types/proxy.h
index d67fe88..72d1024 100644
--- a/include/types/proxy.h
+++ b/include/types/proxy.h
@@ -380,6 +380,16 @@ struct proxy {
} conf; /* config information */
void *parent;   /* parent of the proxy when 
applicable */
struct comp *comp;  /* http compression */
+
+   struct {
+   union {
+   struct mailers *m;  /* Mailer to send email alerts 
via */
+   char *name;
+   } mailers;
+   char *from; /* Address to send email 
alerts from */
+   char *to;   /* Address(es) to send email 
alerts to */
+   char *myhostname;   /* Identity to use in HELO 
command sent to mailer */
+   } email_alert;
 };
 
 struct switching_rule {
diff --git a/src/cfgparse.c b/src/cfgparse.c
index 2db5ed1..de94074 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -2056,6 +2056,18 @@ out:
return err_code;
 }
 
+static void free_email_alert(struct proxy *p)
+{
+   free(p->email_alert.mailers.name);
+   p->email_alert.mailers.name = NULL;
+   free(p->email_alert.from);
+   p->email_alert.from = NULL;
+   free(p->email_alert.to);
+   p->email_alert.to = NULL;
+   free(p->email_alert.myhostname);
+   p->email_alert.myhostname = NULL;
+}
+
 int cfg_parse_listen(const char *file, int linenum, char **args, int kwm)
 {
static struct proxy *curproxy = NULL;
@@ -2352,6 +2364,15 @@ int cfg_parse_listen(const char *file, int linenum, char 
**args, int kwm)
if (defproxy.check_command)
curproxy->check_command = 
strdup(defproxy.check_command);
 
+   if (defproxy.email_alert.mailers.name)
+   curproxy->email_alert.mailers.name = 
strdup(defproxy.email_alert.mailers.name);
+   if (defproxy.email_alert.from)
+   curproxy->email_alert.from = 
strdup(defproxy.email_alert.from);
+   if (defproxy.email_alert.to)
+   curproxy->email_alert.to = 
strdup(defproxy.email_alert.to);
+   if (defproxy.email_alert.myhostname)
+   curproxy->email_alert.myhostname = 
strdup(defproxy.email_alert.myhostname);
+
goto out;
}
else if (!strcmp(args[0], "defaults")) {  /* use this one to assign 
default values */
@@ -2393,6 +2414,7 @@ int cfg_parse_listen(const char *file, int linenum, char 
**args, int kwm)
free(defproxy.conf.lfs_file);
free(defproxy.conf.uif_file);
free(defproxy.log_tag);
+   free_email_alert(&defproxy);
 
for (rc = 0; rc < HTTP_ERR_SIZE; rc++)
chunk_destroy(&defproxy.errmsg[rc]);
@@ -2870,6 +2892,61 @@ int cfg_parse_listen(const char *file, int linenum, char 
**args, int kwm)
err_code |= ERR_ALERT | ERR_FATAL;
}
}/* end else if (!strcmp(args[0], "cookie"))  */
+   else if (!strcmp(args[0], "email-alert")) {
+   if (*(args[1]) == 0) {
+   Alert("parsing [%s:%d] : missing argument after 
'%s'.\n",
+ file, linenum, args[0]);
+   err_code |= ERR_ALERT | ERR_FATAL;
+   goto out;
+}
+
+   if (!strcmp(args[1], "from")) {
+   if (*(args[2]) == 0) {
+   Alert("parsing [%s:%d] : missing argument after 
'%s'.\n",
+ file, linenum, args[1]);
+   err_code |= ERR_ALERT | ERR_FATAL;
+   goto out;
+   }
+   free(curproxy->email_alert.f

Re: [PATCH 1/2] BUG/MEDIUM: Do not set agent health to zero if server is disabled in config

2015-01-22 Thread Simon Horman
On Thu, Jan 22, 2015 at 08:24:57PM +0100, Willy Tarreau wrote:
> Hi Simon,
> 
> On Tue, Jan 20, 2015 at 01:22:38PM +0900, Simon Horman wrote:
> (...)
> > This seems to have slipped through the cracks.
> > I'm wondering if we could revisit it and the other patch in this series.
> 
> Hmmm sorry for that. You previously said that both patches were
> right, do you simply want me to pick them for 1.5 and 1.6 ?
> Just let me know and I'll do it.

From my point of view that would be ideal.



Re: [PATCH 1/2] BUG/MEDIUM: Do not set agent health to zero if server is disabled in config

2015-01-19 Thread Simon Horman
Hi Willy,

On Mon, Dec 01, 2014 at 09:18:05AM +0900, Simon Horman wrote:
> On Wed, Nov 12, 2014 at 05:11:27PM +0900, Simon Horman wrote:
> > On Wed, Nov 12, 2014 at 08:22:05AM +0100, Willy Tarreau wrote:
> > > Hi Simon,
> > > 
> > > On Wed, Nov 12, 2014 at 03:55:53PM +0900, Simon Horman wrote:
> > > > disable starts a server in the disabled state, however setting the 
> > > > health
> > > > of an agent implies that the agent is disabled as well as the server.
> > > > 
> > > > This is a problem because the state of the agent is not restored if
> > > > the state of the server is subsequently updated leading to an
> > > > unexpected state.
> > > > 
> > > > For example, if a server is started disabled and then the server
> > > > state is set to ready then without this change show stat indicates
> > > > that the server is "DOWN (agent)" when it is expected that the server
> > > > would be UP if its (non-agent) health check passes.
> > > 
> > > Interesting case. I believe I caused it myself while trying to address
> > > a different case : health checks are disabled, only agent checks are
> > > enabled, and the server is disabled in the configuration. Could you
> > > please check that this use case still works properly with your patch ?
> > > I'd rather avoid to see the server continue to show up!
> > 
> > Thanks, will do.
> > 
> > I was aware you had done some work in this area but I wasn't entirely
> > sure what case you were trying to fix. Thanks for filling in that gap
> > in my knowledge.
> 
> Hi Willy,
> 
> I have tested the following scenario which I hope matches the one that you
> describe:
> 
> 1. Start haproxy with server disabled in config
> 2. Disable health checks using:
>echo "disable health VIP_Name/RIP_Name" | socat ..
> 2. Enable server using:
>echo "set server VIP_Name/RIP_Name state ready" | socat ...
> 
> 
> The results are as follows:
> 
> 1. With this patch applied and agent-check enabled
> 
>Server status is reported as ""
> 
> 2. With this patch applied and agent-check not configured in config
> 
>Server status is reported as ""
> 
> 3. Without this patch applied and agent-check enabled
> 
>Server status is reported as "DOWN (agent)"
> 
> 4. With this patch applied and agent-check not configured in config
> 
>Server status is reported as ""
> 
> 
> My working assumption is that unless the agent-check explicitly
> marks a backend as down then the backend should not be considered
> down due to the agent-check. This includes this scenario when the
> agent-check is not responding. This seems to be reflected in
> the implementation other than the area of HTTP statistics.
> 
> As both 1) and 2) are consistent with 4) it seems to me that this
> patch is correct in the context of the scenario you describe (assuming
> my test is correct).

This seems to have slipped through the cracks.
I'm wondering if we could revisit it and the other patch in this series.



Re: [PATCH 1/2] BUG/MEDIUM: Do not set agent health to zero if server is disabled in config

2014-11-30 Thread Simon Horman
On Wed, Nov 12, 2014 at 05:11:27PM +0900, Simon Horman wrote:
> On Wed, Nov 12, 2014 at 08:22:05AM +0100, Willy Tarreau wrote:
> > Hi Simon,
> > 
> > On Wed, Nov 12, 2014 at 03:55:53PM +0900, Simon Horman wrote:
> > > disable starts a server in the disabled state, however setting the health
> > > of an agent implies that the agent is disabled as well as the server.
> > > 
> > > This is a problem because the state of the agent is not restored if
> > > the state of the server is subsequently updated leading to an
> > > unexpected state.
> > > 
> > > For example, if a server is started disabled and then the server
> > > state is set to ready then without this change show stat indicates
> > > that the server is "DOWN (agent)" when it is expected that the server
> > > would be UP if its (non-agent) health check passes.
> > 
> > Interesting case. I believe I caused it myself while trying to address
> > a different case : health checks are disabled, only agent checks are
> > enabled, and the server is disabled in the configuration. Could you
> > please check that this use case still works properly with your patch ?
> > I'd rather avoid to see the server continue to show up!
> 
> Thanks, will do.
> 
> I was aware you had done some work in this area but I wasn't entirely
> sure what case you were trying to fix. Thanks for filling in that gap
> in my knowledge.

Hi Willy,

I have tested the following scenario which I hope matches the one that you
describe:

1. Start haproxy with server disabled in config
2. Disable health checks using:
   echo "disable health VIP_Name/RIP_Name" | socat ..
2. Enable server using:
   echo "set server VIP_Name/RIP_Name state ready" | socat ...


The results are as follows:

1. With this patch applied and agent-check enabled

   Server status is reported as ""

2. With this patch applied and agent-check not configured in config

   Server status is reported as ""

3. Without this patch applied and agent-check enabled

   Server status is reported as "DOWN (agent)"

4. With this patch applied and agent-check not configured in config

   Server status is reported as ""


My working assumption is that unless the agent-check explicitly
marks a backend as down then the backend should not be considered
down due to the agent-check. This includes this scenario when the
agent-check is not responding. This seems to be reflected in
the implementation other than the area of HTTP statistics.

As both 1) and 2) are consistent with 4) it seems to me that this
patch is correct in the context of the scenario you describe (assuming
my test is correct).



Re: [PATCH 1/2] BUG/MEDIUM: Do not set agent health to zero if server is disabled in config

2014-11-12 Thread Simon Horman
On Wed, Nov 12, 2014 at 08:22:05AM +0100, Willy Tarreau wrote:
> Hi Simon,
> 
> On Wed, Nov 12, 2014 at 03:55:53PM +0900, Simon Horman wrote:
> > disable starts a server in the disabled state, however setting the health
> > of an agent implies that the agent is disabled as well as the server.
> > 
> > This is a problem because the state of the agent is not restored if
> > the state of the server is subsequently updated leading to an
> > unexpected state.
> > 
> > For example, if a server is started disabled and then the server
> > state is set to ready then without this change show stat indicates
> > that the server is "DOWN (agent)" when it is expected that the server
> > would be UP if its (non-agent) health check passes.
> 
> Interesting case. I believe I caused it myself while trying to address
> a different case : health checks are disabled, only agent checks are
> enabled, and the server is disabled in the configuration. Could you
> please check that this use case still works properly with your patch ?
> I'd rather avoid to see the server continue to show up!

Thanks, will do.

I was aware you had done some work in this area but I wasn't entirely
sure what case you were trying to fix. Thanks for filling in that gap
in my knowledge.



Re: Testing with servers disabled in config and feedback agent check enabled 1.6dev0

2014-11-11 Thread Simon Horman
Hi Mark, Hi All,

On Tue, Sep 02, 2014 at 02:59:32PM +0100, Mark Brooks wrote:
> Hello All,
> 
> Im testing starting the backend servers as disabled in the config
> according to the manual -
> "disabled
>   The "disabled" keyword starts the server in the "disabled" state. That means
>   that it is marked down in maintenance mode, and no connection other than the
>   ones allowed by persist mode will reach it. It is very well suited to setup
>   new servers, because normal traffic will never reach them, while it is still
>   possible to test the service by making use of the force-persist mechanism."
> 
> my config line looks like
> server RIP_Name 192.168.66.50  weight 100  cookie RIP_Name
> agent-check agent-port 12345 check port 80 inter 2000  rise 2  fall 3
> minconn 0  maxconn 0  on-marked-down shutdown-sessions disabled
> 
> The server comes up in maintenance mode as you would expect, but then
> using the unix socket commands I am unable to get the server into the
> up state.
> 
> I have tried
> 
> 'set server VIP_Name/RIP_Name state ready'
> 
> and
> 
>  'enable server VIP_Name/RIP_Name'
> 
> and the stats page reports the server as 'DOWN(agent)'
> 
> If I remove the feedback agent configuration so my server line looks
> like the following -
> 
> server RIP_Name 192.168.66.50  weight 100  cookie RIP_Name check port
> 80 inter 2000  rise 2  fall 3  minconn 0  maxconn 0  on-marked-down
> shutdown-sessions disabled
> 
> it's possible to get the service to come up using the aforementioned
> socket commands.

For the record, the following patch, which I posted a few moments ago,
is intended to address this problem.

[PATCH 1/2] BUG/MEDIUM: Do not set agent health to zero if server is disabled in config




[PATCH 1/2] BUG/MEDIUM: Do not set agent health to zero if server is disabled in config

2014-11-11 Thread Simon Horman
disable starts a server in the disabled state, however setting the health
of an agent implies that the agent is disabled as well as the server.

This is a problem because the state of the agent is not restored if
the state of the server is subsequently updated leading to an
unexpected state.

For example, if a server is started disabled and then the server
state is set to ready then without this change show stat indicates
that the server is "DOWN (agent)" when it is expected that the server
would be UP if its (non-agent) health check passes.

Reported-by: Mark Brooks 
Signed-off-by: Simon Horman 
---
 src/cfgparse.c | 1 -
 src/server.c   | 1 -
 2 files changed, 2 deletions(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index 09a0fd3..820fbf4 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -6982,7 +6982,6 @@ out_uri_auth_compat:
newsrv->admin |= SRV_ADMF_IMAINT;
newsrv->state = SRV_ST_STOPPED;
newsrv->check.health = 0;
-   newsrv->agent.health = 0;
}
 
newsrv->track = srv;
diff --git a/src/server.c b/src/server.c
index fdb63cc..ea6e3aa 100644
--- a/src/server.c
+++ b/src/server.c
@@ -1223,7 +1223,6 @@ int parse_server(const char *file, int linenum, char **args, struct proxy *curpr
newsrv->state = SRV_ST_STOPPED;
newsrv->check.state |= CHK_ST_PAUSED;
newsrv->check.health = 0;
-   newsrv->agent.health = 0;
cur_arg += 1;
}
else if (!defsrv && !strcmp(args[cur_arg], "observe")) {
-- 
2.1.1




[PATCH 2/2] MEDIUM/BUG: Only explicitly report "DOWN (agent)" if the agent health is zero

2014-11-11 Thread Simon Horman
Make the check used to explicitly report "DOWN (agent)" slightly
more restrictive, such that it only triggers if the agent health is zero.

This avoids the following problem.

1. Backend is started disabled, agent check is enabled
2. Backend is enabled using "set server vip/rip state ready"
3. Health is marked as down using "set server vip/rip health down"

   At this point the http stats page will report "DOWN (agent)"
   but the backend being down has nothing to do with the agent check

This problem appears to have been introduced by cf2924bc2537bb08c
("MEDIUM: stats: report down caused by agent prior to reporting up").

Note that "DOWN (agent)" may also be reported by a more generic conditional
which immediately follows the code changed by this patch.

Reported-by: Mark Brooks 
Signed-off-by: Simon Horman 
---
 src/dumpstats.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/dumpstats.c b/src/dumpstats.c
index 26b0a9f..b110fcc 100644
--- a/src/dumpstats.c
+++ b/src/dumpstats.c
@@ -3110,7 +3110,7 @@ static int stats_dump_sv_stats(struct stream_interface *si, struct proxy *px, in
 chunk_appendf(&trash, "%s ", human_time(now.tv_sec - sv->last_change, 1));
 chunk_appendf(&trash, "MAINT");
 }
-   else if ((ref->agent.state & CHK_ST_ENABLED) && (ref->state == SRV_ST_STOPPED)) {
+   else if ((ref->agent.state & CHK_ST_ENABLED) && !(sv->agent.health) && (ref->state == SRV_ST_STOPPED)) {
 chunk_appendf(&trash, "%s ", human_time(now.tv_sec - ref->last_change, 1));
 /* DOWN (agent) */
 chunk_appendf(&trash, srv_hlt_st[1], "GCC: your -Werror=format-security is bogus, annoying, and hides real bugs, I don't thank you, really!");
-- 
2.1.1




[PATCH 0/2] Do not incorrectly report status as "DOWN (agent)"

2014-11-11 Thread Simon Horman
Hi,

this short series is intended to address two cases where the
status of a backend is incorrectly reported as "DOWN (agent)".

Simon Horman (2):
  BUG/MEDIUM: Do not set agent health to zero if server is disabled in
config
  MEDIUM/BUG: Only explicitly report "DOWN (agent)" if the agent health
is zero

 src/cfgparse.c  | 1 -
 src/dumpstats.c | 2 +-
 src/server.c| 1 -
 3 files changed, 1 insertion(+), 3 deletions(-)

-- 
2.1.1




Re: Problems with agent check

2014-11-11 Thread Simon Horman
On Mon, Nov 10, 2014 at 01:45:45PM +0100, Cyril Bonté wrote:
> Hi,
> 
> Le 10/11/2014 12:54, Lasse Birnbaum Jensen a écrit :
> >Hi all
> >
> >I have a problem with agent checks on an ssl backend (I cannot change the 
> >backend to http). This configuration forces the agent-check port to expect 
> >ssl, which seems like a bug.
> >
> >backend test
> >  server b1 1.2.3.4:443 ssl check verify none agent-check agent-port
> >
> >
> >Agent check fails with:
> >
> >Agent check for server test/b1 failed, reason: Layer6 invalid response, 
> >info: "Connection error during SSL handshake (Connection reset by peer)", 
> >check duration: 20ms, status: 1/1 UP
> >
> >Agent-check is defined to do a cleartext request to agent-port and parse the 
> >result.
> >
> >I cannot find anything in the doc about overwriting such that ssl only 
> >applies to health check and not agent check. Can anyone help ?
> 
> Indeed, I think there's a bug introduced by commit 6618300e13 [1]
> Both checks and agent checks are using a common transport layer (xprt in
> struct check_common from this commit).
> 
> IMHO, we should split the transport layer for each check, and enforce
> "raw_sock" for agent checks.
> 
> I add Simon to the discussion, as he worked on it.
> Simon, do you agree with that ?
> 
> [1] 
> http://www.haproxy.org/git?p=haproxy.git;a=commit;h=6618300e13587cac63c34b974cd54d52fc129fde

Yes, that sounds reasonable to me.



Re: Problem with external healthchecks and haproxy-ss-20140720

2014-08-15 Thread Simon Horman
On Fri, Aug 15, 2014 at 09:12:56AM +0100, Malcolm Turnbull wrote:
> I agree as well.. :-).
> 
> Our original specification was to match the way that ldirectord does its
> external health checks (so that the customer scripts are compatible).
> We could just change ldirectord to be compatible with the new style as it
> sounds more extensible (leaving it backwards compatible as well would be
> nice.)

That makes sense. I suppose it was me who came up with the original API as
used by ldirectord.

Yes, I agree ldirectord could be updated to support this new method.



Re: Problem with external healthchecks and haproxy-ss-20140720

2014-08-15 Thread Simon Horman
[Cc Malcolm Turnbull]

On Fri, Aug 15, 2014 at 12:29:36AM +0200, Willy Tarreau wrote:
> Hi Cyril!
> 
> On Thu, Aug 14, 2014 at 10:30:52PM +0200, Cyril Bonté wrote:
> > Hi all,
> > 
> > Le 07/08/2014 01:16, Cyril Bonté a écrit :
> > >Hi Bjoern,
> > >
> > >Le 06/08/2014 22:16, bjun...@gmail.com a écrit :
> > (...)
> > >>[ALERT] 216/205611 (1316) : Starting [be_test:node01] check: no listener.
> > >>Segmentation fault (core dumped)
> > >
> > >OK, I could reproduce it. This is happening for several reasons :
> > >1. External checks can only be used in "listen" sections.
> > >This is not clearly documented but it can be guessed by the arguments
> > >passed to the command : the proxy address and port are required (see [1]).
> > >I think this is annoying because it's only usable in some specific use
> > >cases. Maybe we should rework this part of the implementation : I see
> > >that for unix sockets, the port argument is set to "NOT_USED", we could
> > >do the same for checks in a backend section. Willy, Simon, is it OK for
> > >you ?
> > 
> > After some thoughts, I'd like to suggest a new implementation.
> > 
> > I tried to imagine in which use case sysadmins in my company would need 
> > the first listener address/port of the proxy but I couldn't find one, or 
> > when I almost found one, other information could be more useful.
> > It's really too specific to be used by everyone, and I'm afraid it will 
> > be error-prone in a configuration lifecycle.
> > Based on the current implementation, it imposes an artificial order in 
> > the bind list, which didn't exist before. If the external healthcheck 
> > checks those values but someone modifies the bind order (by 
> > adding/removing one, or because an automated script generates the 
> > configuration without any guarantee on the order, ...), it will 
> > mysteriously fail until someone remembers to re-read the healthcheck script.
> > 
> > Also, some years ago, I developed a tool which called external scripts 
> > which were (supposed to be) simple with only 2-3 arguments. Over the 
> > time, users wanted new arguments, some of them were optional, some 
> > others not. After several years, some calls got nearly 20 arguments, I 
> > let you imagine how a nightmare it is to maintain this. I fear this will 
> > happen to the haproxy external healthcheck (someone will want to 
> > retrieve the backend name, someone else will want to retrieve the 
> > current number of sessions, ...).
> > 
> > So, I came to another solution : let's use environment variables 
> > instead. This will make it easy to add new ones in the future.
> 
> That's a really clever idea!
> 
> > And currently, instead of providing the bind address/port, we could pass 
> > the backend name and id.
> > 
> > For example, as a current minimal implementation :
> > - Backend variables
> > HAPROXY_BACKEND_NAME=my_backend
> > HAPROXY_BACKEND_ID=2
> > - Server variables
> > HAPROXY_SERVER_ADDR=203.0.113.1
> > HAPROXY_SERVER_PORT=80
> > or maybe in a single variable, such as HAPROXY_SERVER=203.0.113.1:80
> > 
> > And if we still want to provide the listener address, why not adding 
> > optional variables :
> > HAPROXY_BIND_ADDR=127.0.0.1
> > HAPROXY_BIND_PORT=80
> > or as the server address, we could use a single variable (HAPROXY_BIND)
> > 
> > For an unix socket :
> > HAPROXY_BIND_ADDR=/path/to/haproxy.socket (without HAPROXY_BIND_PORT 
> > variable)
> > 
> > If we want to provide all the listeners, we can add a new environment 
> > variable HAPROXY_BIND_COUNT=<count> and list the listeners in environment 
> > variables suffixed by <n>, where <n> = [1..number of listeners].
> > Example :
> > HAPROXY_BIND_COUNT=2
> > HAPROXY_BIND_ADDR_1=127.0.0.1
> > HAPROXY_BIND_PORT_1=80
> > HAPROXY_BIND_ADDR_2=/path/to/haproxy.socket
> > 
> > Any suggestion ?
> > If it's ok, I can work on a patch.
> 
> I have to say I like it. It's by far much more extensible and will not
> require users to upgrade their scripts each time a new variable is added.

I agree that is quite a clever idea, and I have no objections to it.

However, I'd like to allow Malcolm Turnbull to comment as I wrote
the current code to meet his needs.
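To make the quoted proposal concrete, here is a sketch of what an external check script consuming those variables could look like. The HAPROXY_* names are the ones suggested in the thread (not from any merged patch), and the probe logic is purely illustrative:

```shell
# Hypothetical external healthcheck driven by the proposed environment
# variables; a real script would probe the server and exit 0 (up) or
# non-zero (down) instead of just printing.
run_check() {
    : "${HAPROXY_BACKEND_NAME:?HAPROXY_BACKEND_NAME not set}"
    : "${HAPROXY_SERVER_ADDR:?HAPROXY_SERVER_ADDR not set}"
    : "${HAPROXY_SERVER_PORT:?HAPROXY_SERVER_PORT not set}"
    echo "check ${HAPROXY_BACKEND_NAME}: ${HAPROXY_SERVER_ADDR}:${HAPROXY_SERVER_PORT}"
}

# haproxy would export these before invoking the script.
export HAPROXY_BACKEND_NAME=my_backend
export HAPROXY_SERVER_ADDR=203.0.113.1
export HAPROXY_SERVER_PORT=80
run_check
```

Because the script only reads named variables, adding new ones later (bind addresses, session counts, ...) cannot break existing scripts, which is exactly the extensibility argument made above.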



single or many haproxy instances

2014-06-30 Thread Xu (Simon) Chen
Hi folks,

I am writing a simple load balancer as a service to automate haproxy
configuration while providing a simple API to users, who only need to give
a few simple specifications of the load balancer they want.

I am trying to decide whether to run multiple haproxy instances or a single
instance on a particular node. I currently use jinja2 template to combine
all services into a single haproxy configuration file and run a single
instance of haproxy. Every time a service spec is changed, I run the
config check mode, and only reload the config if the check passes. But I
fear that a single incorrect service spec would prevent everyone else from
updating their services, unless I maintain some last-known good config for
every service.

Managing one haproxy instance for every service solves this problem, but I
might end up with too many processes on a single box.

Any recommendations on which way to go? Is there a recommended max number
of haproxy instances per node/core?

Thanks.
-Simon
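The "check config, then reload" flow described above can be sketched as a small wrapper. The validator command is injectable here purely so the logic can be exercised standalone; with haproxy it would be `haproxy -c -f`, and the commented reload line uses haproxy's soft-stop option (paths and names are illustrative):

```shell
# Validate a candidate config and promote it to the live path only if the
# check passes, so the last-known-good config is kept automatically.
safe_reload() {
    check_cmd=$1 candidate=$2 live=$3
    if $check_cmd "$candidate"; then
        cp "$candidate" "$live"
        # A real deployment would then soft-reload, e.g.:
        # haproxy -f "$live" -sf "$(cat /var/run/haproxy.pid)"
        return 0
    fi
    echo "config rejected, keeping last-known-good" >&2
    return 1
}

# With haproxy: safe_reload "haproxy -c -f" haproxy.cfg.new haproxy.cfg
```

This is the single-instance variant: one broken service spec is rejected at validation time instead of blocking reloads for everyone.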


Re: Email Alert Proposal

2014-06-24 Thread Simon Horman
On Tue, Jun 24, 2014 at 07:29:15AM +0200, Willy Tarreau wrote:
> Hi Simon,
> 
> On Tue, Jun 24, 2014 at 09:15:13AM +0900, Simon Horman wrote:
> > Hi Willy,
> > 
> > Malcolm has asked me to open a discussion with you regarding adding
> > email alerts to haproxy and that is the purpose of this email.
> > 
> > In essence the motivation is to provide a lightweight email alert
> > feature that may be used in situations where a full-blown monitoring
> > system is not in use.
> > 
> > There is some discussion of this topic and several solutions,
> > including patches to haproxy, on the loadbalancer.org blog.
> > 
> > http://blog.loadbalancer.org/3-ways-to-send-haproxy-health-check-email-alerts/
> > 
> > Would you be open to including such a feature in haproxy?
> > 
> > If so I had it in mind to have haproxy send emails using the sendmail 
> > command,
> > a variation of the mailx implementation at the link above, avoiding the
> > need to implement an SMTP client.
> > 
> > I was thinking it could be configured using directives like the following,
> > borrowing ideas from my recent external-agent patch.
> > 
> > global
> > email-alert
> > 
> > listen ...
> > 
> > option email-alert
> > email-alert command sendmail
> > email-alert path    /usr/sbin:/usr/lib
> > email-alert from    from@x.y.z
> > email-alert to  to@x.y.z
> > email-alert cc  cc1@x.y.z, cc2@x.y.z
> > email-alert bcc bcc@x.y.z
> > email-alert subject Loadbalancer alert
> > email-alert custom-header X-Custom: foo
> > 
> > It might be nice to allow the use of printf style directives in
> > the subject to allow it to include the name of the proxy and other
> > useful information. I expect that different users have different needs
> > there.
> 
> We had such an idea in the past, however the principle was to use the
> address of a smart relay host. We cannot use a command because the process
> is supposed to be chrooted.

Thanks, if that is the direction you wish to take things then I'm happy to
do so. I guess a simple SMTP client is not an insurmountable challenge. But
I wonder if there is any infrastructure in haproxy that might make such an
implementation easier. If so, could you point me at it?

> Also, in my opinion the SMTP relay should be
> per section (ie: supported in the defaults section) because in shared
> environments, customers want to use a different gateway and e-mail
> settings.

Yes, I agree that sounds like a good idea.

> In fact in the ALOHA we have implemented a daemon which watches
> the unix socket to send e-mails because by then it was too much work to
> implement it natively. Now it should be much simpler.

I'm glad to hear it will be simpler though I'm not sure that I understand
why this is so.

> I was just wondering whether we should not go slightly further and support
> "mailer" sections just like we have peers. I'd like to do the same for DNS
> resolving later and I think it makes the configs more understandable, and
> more flexible (eg: later we can think about having multiple mail relays).

I also agree that sounds reasonable. I'll see about making it so.

> We also want to have the ability to define what events can result in an
> e-mail alert. For example, I know some hosting providers who absolutely
> want to know when a service is operational again (so that they can drive
> back home), while others absolutely refuse to be spammed with such info.

Thanks. I meant to include that in my proposal. I agree it is important.

> Sections also make it possible to add extra settings for the mail relays,
> for example SSL + a certificate. Or maybe some default from/to/...

Agreed.

I would prefer to only handle plain-text to start with.
To allow a working prototype to be slightly closer to hand.
But I agree that SSL support, presumably in the form of STARTTLS,
is an important feature.



Email Alert Proposal

2014-06-23 Thread Simon Horman
Hi Willy,

Malcolm has asked me to open a discussion with you regarding adding
email alerts to haproxy and that is the purpose of this email.

In essence the motivation is to provide a lightweight email alert
feature that may be used in situations where a full-blown monitoring
system is not in use.

There is some discussion of this topic and several solutions,
including patches to haproxy, on the loadbalancer.org blog.

http://blog.loadbalancer.org/3-ways-to-send-haproxy-health-check-email-alerts/

Would you be open to including such a feature in haproxy?

If so I had it in mind to have haproxy send emails using the sendmail command,
a variation of the mailx implementation at the link above, avoiding the
need to implement an SMTP client.

I was thinking it could be configured using directives like the following,
borrowing ideas from my recent external-agent patch.

global
email-alert

listen ...

option email-alert
email-alert command sendmail
email-alert path    /usr/sbin:/usr/lib
email-alert from    from@x.y.z
email-alert to  to@x.y.z
email-alert cc  cc1@x.y.z, cc2@x.y.z
email-alert bcc bcc@x.y.z
email-alert subject Loadbalancer alert
email-alert custom-header X-Custom: foo

It might be nice to allow the use of printf style directives in
the subject to allow it to include the name of the proxy and other
useful information. I expect that different users have different needs
there.
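For illustration, the message handed to the sendmail command could be assembled along these lines. The helper name, argument order, and the body text are hypothetical; the addresses and subject reuse the example values from the directives above:

```shell
# Build an RFC 2822 message suitable for piping into "sendmail -t",
# which reads the recipients from the To:/Cc:/Bcc: headers.
format_alert() {
    # $1 = from, $2 = to, $3 = subject, $4 = body
    printf 'From: %s\nTo: %s\nSubject: %s\n\n%s\n' "$1" "$2" "$3" "$4"
}

# Delivery would then be roughly:
#   format_alert from@x.y.z to@x.y.z "Loadbalancer alert" "$msg" | /usr/sbin/sendmail -t
```

Printf-style directives in the subject, as suggested above, would then simply be expanded before the subject reaches this formatting step.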


