Re: [RFC][PATCHES] seamless reload

2017-04-11 Thread Olivier Houchard
On Tue, Apr 11, 2017 at 08:16:48PM +0200, Willy Tarreau wrote:
> Hi guys,
> 
> On Mon, Apr 10, 2017 at 11:08:56PM +0200, Pavlos Parissis wrote:
> > IMHO: a better name would be 'stats nounsedsockets', as it is referring to a
> > generic functionality of the UNIX stats socket, rather than to a very
> > specific functionality.
> 
> Just two things on this :
>   - it's not directly related to the stats socket but it's a global
> behaviour of the process so it should not be under "stats" ;
> 
>   - please guys, don't stick words without a '-' as a delimiter
> anymore, we've made this mistake in the past with "forceclose",
> "httpclose" or "dontlognull" or whatever and some of them are
> really hard to read. That's why we now split the words using
> dashes, so that would be "no-unused-sockets" or I don't remember
> the other name Olivier proposed, but please use the same principle
> which leaves no ambiguity on how to parse it.

Fine, just because it's you, I'll change that.

Olivier



Re: simgle ?

2017-04-11 Thread Willy Tarreau
On Mon, Apr 10, 2017 at 03:48:45PM -0600, Jim Freeman wrote:
> https://github.com/haproxy/haproxy/search?q=simgle
> 
> single ?
> simple ?

Looks like "single". Care to send a patch ? Please keep in mind that this
github project is neither up to date nor official so please make your patch
on top of the official tree.

Thanks,
Willy



Re: [RFC][PATCHES] seamless reload

2017-04-11 Thread Willy Tarreau
Hi guys,

On Mon, Apr 10, 2017 at 11:08:56PM +0200, Pavlos Parissis wrote:
> IMHO: a better name would be 'stats nounsedsockets', as it is referring to a
> generic functionality of the UNIX stats socket, rather than to a very
> specific functionality.

Just two things on this :
  - it's not directly related to the stats socket but it's a global
behaviour of the process so it should not be under "stats" ;

  - please guys, don't stick words without a '-' as a delimiter
anymore, we've made this mistake in the past with "forceclose",
"httpclose" or "dontlognull" or whatever and some of them are
really hard to read. That's why we now split the words using
dashes, so that would be "no-unused-sockets" or I don't remember
the other name Olivier proposed, but please use the same principle
which leaves no ambiguity on how to parse it.

Thanks!
Willy



Re: Admin socket server state and MAINT flag issues

2017-04-11 Thread Willy Tarreau
Hi Dennis,

On Mon, Apr 10, 2017 at 09:05:13PM +0200, Dennis Jacobfeuerborn wrote:
> Hi,
> I'm currently playing with the values that the admin socket returns when
> the "show servers state" command is issued, and I noticed two things:
> 
> 1. When using an abstract namespace socket as the address on a server line,
> the srv_addr "field" will be empty, which technically isn't a
> problem, but the documentation doesn't seem to mention how exactly fields
> are delimited. Example:
> 
> ...
> 5 sites-front-ssl 1 clear  2 0 1 1 63797 1 0 2 0 0 0 0
> ...
> (notice the two spaces after "clear")

Hmmm that's not cool, we need to address this.

> In my first attempt at parsing the output I used white-space as a
> delimiter, which failed for this particular server line. Once I used a
> single space character as the delimiter, the parsing worked fine.
> It would probably be good to document this explicitly to make parsers
> more robust. Alternatively one could use white-space as a delimiter and
> output fields that have no content as "-".

Yes I agree, I was thinking the same. You may have the same problem when
the address uses the unix family. There are some places in the code (logs
and/or "show sess") where we simply report "unix@" or "abns@"; I think that
would be even better, since it would make the current address family known
and still be compatible with the format that needs to be reinjected.
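For what it's worth, the single-space parsing described above is easy to demonstrate; a minimal sketch (using the line quoted in the report, with Python standing in for whatever language the parser is written in):

```python
# The "show servers state" line quoted above: srv_addr is empty for the
# abstract-namespace socket, so two consecutive spaces appear after "clear".
line = "5 sites-front-ssl 1 clear  2 0 1 1 63797 1 0 2 0 0 0 0"

# Splitting on a single space preserves the empty srv_addr field...
fields = line.split(" ")
assert fields[4] == ""

# ...while a generic whitespace split silently swallows it, shifting
# every later field one position to the left.
assert len(line.split()) == len(fields) - 1
```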

> 2. The server administrative state flags are defined like this:
> 
> enum srv_admin {
>   SRV_ADMF_FMAINT= 0x01,
>   SRV_ADMF_IMAINT= 0x02,
>   SRV_ADMF_MAINT = 0x23,
>   SRV_ADMF_CMAINT= 0x04,
>   SRV_ADMF_FDRAIN= 0x08,
>   SRV_ADMF_IDRAIN= 0x10,
>   SRV_ADMF_DRAIN = 0x18,
>   SRV_ADMF_RMAINT= 0x20,
> };
> 
> Shouldn't the SRV_ADMF_MAINT value include the CMAINT flag and thus
> actually have a value of 0x27?

I don't know, it depends on your config, and I don't know the output format
from memory to be honest :-)  CMAINT normally stands for "set to maintenance
by configuration" which is the "disabled" directive on the server. Some of
these flags are needed to let the new process detect what part of the config
has changed and what part differs between the old config and the state file
in order to decide which one to use (the last config change has precedence
since you want to support reloads to update your config). I seem to remember
that not all flags are dumped, but I could say bullshit here.
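The bit arithmetic behind the question can be checked directly (constants copied from the enum quoted above):

```python
# Constants copied from the srv_admin enum quoted above.
SRV_ADMF_FMAINT = 0x01   # maintenance forced from the CLI
SRV_ADMF_IMAINT = 0x02   # maintenance inherited from a tracked server
SRV_ADMF_CMAINT = 0x04   # maintenance from the configuration ("disabled")
SRV_ADMF_RMAINT = 0x20   # the remaining maintenance flag from the enum

# 0x23 is exactly FMAINT | IMAINT | RMAINT:
assert SRV_ADMF_FMAINT | SRV_ADMF_IMAINT | SRV_ADMF_RMAINT == 0x23

# Folding CMAINT in as well would indeed give the 0x27 the question expects:
assert 0x23 | SRV_ADMF_CMAINT == 0x27
```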

Regards,
Willy



Re: IPv6 resolvers seems not works

2017-04-11 Thread Willy Tarreau
On Tue, Apr 11, 2017 at 03:55:25PM +0200, Baptiste wrote:
> Hi all,
> 
> Thank you Frederic!!!
> Willy, can you merge (and backport to 1.6) Frederic's patch, please?

Done, thanks guys!
Willy



trying to understand sticky counters

2017-04-11 Thread Adam Spiers

Hi all,

I've pored over the Configuration Manual again and again, and I'm
still struggling to fully understand sticky counters.  This paragraph
seems to hold some important information:

   Once a "track-sc*" rule is executed, the key is looked up in the table
   and if it is not found, an entry is allocated for it. Then a pointer to
   that entry is kept during all the session's life, and this entry's
   counters are updated as often as possible, every time the session's
   counters are updated, and also systematically when the session ends.
   Counters are only updated for events that happen after the tracking has
   been started. As an exception, connection counters and request counters
   are systematically updated so that they reflect useful
   information.

It seems that one of the key concepts here is "session".  I'm assuming
that this actually means "TCP session", as in layer 5 of the OSI
model; is that correct?  Unfortunately there is nowhere in the manual
which explicitly states this definition, despite countless uses of the
term, but there are some hints scattered around, e.g. in the
"tcp-request session" section:

   Once a session is validated, (ie. after all handshakes have been completed),

and in the "reject" part of the "tcp-request connection" section.

It seems that each session can have a maximum of three entries
associated with it in stick-tables, because there is a maximum of 3
sets of sticky counters per connection.  And these entries could
potentially be in 1, 2, or 3 different stick-tables, depending on
where and how the track-scX directive is written, right?
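To make that concrete, here is a hypothetical sketch of the layout I mean (all names invented), with the three counters pointing at three different tables:

```
backend st_conn
    stick-table type ip size 100k store conn_rate(30s)

backend st_sess
    stick-table type ip size 100k store sess_rate(30s)

backend st_http
    stick-table type ip size 100k store http_req_rate(30s)

frontend fe
    mode http
    bind :80
    tcp-request connection track-sc0 src table st_conn
    tcp-request content    track-sc1 src table st_sess
    http-request           track-sc2 src table st_http
```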

Thirdly, I'm struggling to understand these examples:

 Example: accept all connections from white-listed hosts, reject too fast
  connection without counting them, and track accepted connections.
  This results in connection rate being capped from abusive sources.

   tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
   tcp-request connection reject if { src_conn_rate gt 10 }
   tcp-request connection track-sc0 src

 Example: accept all connections from white-listed hosts, count all other
  connections and reject too fast ones. This results in abusive ones
  being blocked as long as they don't slow down.

   tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
   tcp-request connection track-sc0 src
   tcp-request connection reject if { sc0_conn_rate gt 10 }

The stick-table directives are missing, but my experiments suggest
that not only are they mandatory, but they must also track conn_rate
samples, otherwise HAProxy has no way to know the duration of the
sliding time window which the connection rate relates to, and nothing
will get rejected.  So I think the examples should include those
directives for clarity.  When I added this, it worked for me:

stick-table type ip size 100k store conn_rate(30s)

Furthermore, I don't understand the explanation text which says
"without counting them".  If they're not counted, how can the
connection rate be measured?  So what is the real difference between
these two examples?
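For the record, the second example combined with the directive that made it work for me, as one self-contained frontend (the bind line is mine):

```
frontend fe
    bind :80
    stick-table type ip size 100k store conn_rate(30s)
    tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
    tcp-request connection track-sc0 src
    tcp-request connection reject if { sc0_conn_rate gt 10 }
```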

I'd be really grateful for any light which can be shed here.  I'm
normally pretty good at inhaling large, complex technical manuals, but
I've really been struggling with HAProxy's for some reason :-/

Thanks!
Adam



Re: ACL with dynamic pattern

2017-04-11 Thread Holger Just
Hi Alexander,

Alexander Lebedev wrote:
> I want to implement a CSRF check with haproxy.
> I want to check that the cookie value matches the header value and deny
> the request if they're not equal.

The ACLs are only equipped to compare a dynamic value (e.g. from a
fetch) with a statically defined value. It is not possible to compare two
dynamic values.

There is a workaround for this, however. It consists of building a
specific dynamic value and checking whether it suits your requirements.

The following example shows how this can work. First, we build
a new temporary header containing the cookie value and the header
value concatenated. We have to use a header here since the
concatenation is a log-format expression, which is not allowed for
`http-request set-var`.

In the ACL, we then check this concatenated value using a specific
regular expression. It checks whether the matched string contains the exact
value twice, separated by the equal signs we added in the header value.

When I built this some time ago, we had to declare a back-reference in
the regex for this to work. I'm not sure it is still required, but it
shouldn't hurt in any case.

# Build the temporary header
http-request set-header X-CHECK %[req.cook(token)]==%[req.hdr(token)]

# Deny the request if the header value is not valid
acl valid_csrf_token hdr(X-CHECK) -m reg ^(.+)==\1$
http-request deny unless valid_csrf_token

# Cleanup
http-request del-header X-CHECK

This should more-or-less work for your use case.
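The back-reference trick is easy to sanity-check outside HAProxy; for example, in Python:

```python
import re

# Same idea as the ACL above: the matched string must contain the exact
# same token twice, separated by the '==' added in the temporary header.
pattern = re.compile(r"^(.+)==\1$")

assert pattern.match("s3cr3t==s3cr3t")       # cookie and header agree
assert not pattern.match("s3cr3t==other")    # mismatch is rejected
```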

Two caveats apply:

(1) Your CSRF scheme seems a bit unconventional. For this to be secure,
please also ensure that the cookie is set to HttpOnly and ensure that
the value your client sets in the header cannot be gathered from
outside your origin (and is esp. not indirectly gathered from the cookie
by the client).

(2) The comparison of the values in the ACL is not done in constant
time. That means checking the values takes more or less time depending
on the inputs. An attacker can use this to check a lot of values and
measure the response time of HAProxy. With some statistical checks, an
attacker can learn the value of the user's cookie by triggering several
(probably a lot of) requests and measuring these tiny timing differences.
To actually be secure, you should instead perform the CSRF check in your
application and ensure you use a constant-time comparison algorithm there.
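As an illustration of that application-side fix (a sketch, not HAProxy-specific; the function name is invented):

```python
import hmac

def csrf_token_valid(cookie_token: str, header_token: str) -> bool:
    # hmac.compare_digest takes time independent of where the inputs
    # first differ, which defeats the timing side channel described above.
    return hmac.compare_digest(cookie_token.encode(), header_token.encode())

assert csrf_token_valid("s3cr3t", "s3cr3t")
assert not csrf_token_valid("s3cr3t", "s3cr4t")
```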

Best,
Holger



Re: Propagating agent-check weight change to tracking servers

2017-04-11 Thread Willy Tarreau
Hi Michal,

On Tue, Apr 11, 2017 at 04:41:25PM +0200, Michal wrote:
> Hello Willy,
> 
> So I'm fighting with dba97077 made by Frédéric Lécaille - it broke many
> things.

That's the principle of distributed development : better get your changes
in shape first so that you don't have to adapt to others' changes.

> E.g. you can't use "source", because that patch broke it. I'm curious how
> much other stuff got broken with those patches around default-server.

Well, having reviewed this patch set in depth, I didn't spot anything
obvious. That doesn't mean there's no bug, it's just that I didn't notice
any.

> We need some CI (even if it will only build haproxy) and IMHO people with
> @haproxy.com mails should test their code before posting and merging :(

Well, all people test their code, but there are countless combinations
of possible breakage. There's a reason why this version is still in
development so your comment suggesting that people don't work the way
you'd do it is totally inappropriate.

And in case you're wondering, we're still in development and there will
be more breakage to come. That's the principle of a development branch.

> Attached is the patch, fixed after your (Willy) review. Sorry for the loop;
> I was focused on fixing all those problems with Frédéric's patch and just
> didn't think about how to replace the do..while (which obviously works) with
> this cleaner version - thanks for this. The patch got even simpler.

If you don't mind I'll retag it "MEDIUM" because it will definitely impact
some users since it's a behaviour change. I'll review the patch ASAP.

Thanks,
Willy



Re: Propagating agent-check weight change to tracking servers

2017-04-11 Thread Michał
Hello Willy,

So I'm fighting with dba97077 made by Frédéric Lécaille - it broke many
things.
E.g. you can't use "source", because that patch broke it. I'm curious how
much other stuff got broken with those patches around default-server.

We need some CI (even if it will only build haproxy) and IMHO people with
@haproxy.com mails should test their code before posting and merging :(

Attached is the patch, fixed after your (Willy) review. Sorry for the loop;
I was focused on fixing all those problems with Frédéric's patch and just
didn't think about how to replace the do..while (which obviously works) with
this cleaner version - thanks for this. The patch got even simpler.

Thanks again,
Michał

2017-03-27 15:34 GMT+02:00 Willy Tarreau :

> Hi Michal,
>
> On Mon, Mar 27, 2017 at 03:18:10PM +0200, Michal wrote:
> > Hello,
> > Thank you for your response. I agree with part about backward
> compatibility
> > and of course I don't want to break working things
> >
> > I prepared patch with described functionality and with two notes in doc
> to
> > let users know about this behaviour.
>
> Thanks. Just a few points :
>
> 1) cosmetic cleanups below :
>
> > diff --git a/doc/configuration.txt b/doc/configuration.txt
> > index fb3e691..7f22782 100644
> > --- a/doc/configuration.txt
> > +++ b/doc/configuration.txt
> > @@ -11370,6 +11370,10 @@ track [<proxy>/]<server>
> >enabled. If <proxy> is omitted the current one is used. If
> disable-on-404 is
> >used, it has to be enabled on both proxies.
> >
> > +  Note:
> > +Relative weight changes are propagated to all tracking servers. Each
> > +tracking server will have it's weight recalculated separately.
>
> s/it's/its/
>
> > +
> >Supported in default-server: No
> >
> >  verify [none|required]
> > diff --git a/doc/management.txt b/doc/management.txt
> > index 1d34f84..3f3730a 100644
> > --- a/doc/management.txt
> > +++ b/doc/management.txt
> > @@ -1694,6 +1694,10 @@ set weight <backend>/<server> <weight>[%]
> >"admin". Both the backend and the server may be specified either by
> their
> >name or by their numeric ID, prefixed with a sharp ('#').
> >
> > +  Note:
> > +Relative weight changes are propagated to all tracking servers. Each
> > +tracking server will have it's weight recalculated separately.
>
> s/it's/its/
>
> > +
> >  show cli sockets
> >List CLI sockets. The output format is composed of 3 fields separated
> by
> >spaces. The first field is the socket address, it can be a unix
> socket, a
> > diff --git a/src/server.c b/src/server.c
> > index 5589723..9462a16 100644
> > --- a/src/server.c
> > +++ b/src/server.c
> > @@ -45,6 +45,9 @@
> >  static void srv_update_state(struct server *srv, int version, char
> **params);
> >  static int srv_apply_lastaddr(struct server *srv, int *err_code);
> >
> > +const char *server_propagate_weight_change_request(struct server *sv,
> > + const char *weight_str);
> > +
> >  /* List head of all known server keywords */
> >  static struct srv_kw_list srv_keywords = {
> >   .list = LIST_HEAD_INIT(srv_keywords.list)
> > @@ -912,6 +915,8 @@ const char *server_parse_weight_change_request(struct
> server *sv,
> >   struct proxy *px;
> >   long int w;
> >   char *end;
> > + const char *msg;
> > + int relative = 0;
> >
> >   px = sv->proxy;
> >
> > @@ -933,6 +938,8 @@ const char *server_parse_weight_change_request(struct
> server *sv,
> >   w = sv->iweight * w / 100;
> >   if (w > 256)
> >   w = 256;
> > +
> > + relative = 1;
> >   }
> >   else if (w < 0 || w > 256)
> >   return "Absolute weight can only be between 0 and 256 inclusive.\n";
> > @@ -945,6 +952,34 @@ const char *server_parse_weight_change_request(struct
> server *sv,
> >   sv->uweight = w;
> >   server_recalc_eweight(sv);
> >
> > + if (relative) {
> > + msg = server_propagate_weight_change_request(sv,
> weight_str);
> > + if (msg != NULL) {
> > + return msg;
> > + }
> > + }
> > +
> > + return NULL;
> > +}
> > +
> > +const char *server_propagate_weight_change_request(struct server *sv,
> > + const char *weight_str)
> > +{
> > + if (sv->trackers == NULL)
> > + return NULL;
> > +
> > + struct server *_propagated_server = sv->trackers;
> > + const char *msg;
>
> Please don't mix code and declarations, building this emits a warning.
> Also please avoid starting your variable name with an underscore, it
> makes it look "special".
>
> > +
> > + do {
> > + msg = server_parse_weight_change_request(_propagated_server,
> > + weight_str);
> > +
> > + if (msg != NULL) {
> > + return msg;
> > + }
> > + } while ((_propagated_server = _propagated_server->tracknext) != NULL);
>
> The test at the beginning of the function and the loop combined into
> this quite simplified form :
>
> 

Re: IPv6 resolvers seems not works

2017-04-11 Thread Baptiste
Hi all,

Thank you Frederic!!!
Willy, can you merge (and backport to 1.6) Frederic's patch, please?

Baptiste


On Tue, Apr 11, 2017 at 10:45 AM, Павел Знаменский 
wrote:

> Frederic,
> Your patch fixes this issue.
>
> Thank you!
>
>
> 2017-04-11 9:53 GMT+03:00 Frederic Lecaille :
>
>> On 04/10/2017 04:17 PM, Frederic Lecaille wrote:
>>
>>> On 04/10/2017 01:42 PM, Павел Знаменский wrote:
>>>
 Hello,

>>>
>>> Hello,
>>>
>>> I'm trying to add an IPv6 address as a nameserver to be able to resolve
 addresses in an IPv6-only environment:

 resolvers google_dns_10m
 nameserver google_dns1 2001:4860:4860:::53
 nameserver google_dns2 2001:4860:4860::8844:53
 hold valid 10m
 resolve_retries 2

 But I'm getting this error:
 [ALERT] 099/133733 (10412) : Starting [google_dns_10m/google_dns1]
 nameserver: can't connect socket.

 As I understood it, the resolver uses AF_INET when connecting to the nameserver
 and that's why IPv6 doesn't work.

>>>
>>> In fact, the address families used during socket() and connect() syscall
>>> are different.
>>>
>>> This is revealed by strace:
>>>
>>> socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 4
>>> connect(4, {sa_family=AF_INET6, sin6_port=htons(53), inet_pton(AF_INET6,
>>> "2001:4860:4860::8844", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0},
>>> 28) = -1 EAFNOSUPPORT (Address family not supported by protocol)
>>>
>>> should be:
>>>
>>> socket(PF_INET6, ...)
>>>
>>
>> This patch fixes the issue.
>>
>>
>>
>>
>
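As an aside, the family mismatch that Frederic's strace shows is easy to demonstrate from any language; a Python sketch of the kind of fix involved (derive the socket family from the nameserver's address family) follows:

```python
import socket

# The bug in the report above: a PF_INET (IPv4) datagram socket cannot
# connect() to an IPv6 nameserver; the kernel rejects the family mismatch
# with EAFNOSUPPORT. The fix is to pick the socket family from the
# nameserver's address family instead of hardcoding AF_INET.
nameserver = "2001:4860:4860::8844"  # address taken from the strace above

family = socket.AF_INET6 if ":" in nameserver else socket.AF_INET
sock = socket.socket(family, socket.SOCK_DGRAM)
assert sock.family == socket.AF_INET6  # now matches the destination family
sock.close()
```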


Re: [RFC][PATCHES] seamless reload

2017-04-11 Thread Olivier Houchard
On Tue, Apr 11, 2017 at 01:23:42PM +0200, Pavlos Parissis wrote:
> On 10/04/2017 11:52 μμ, Olivier Houchard wrote:
> > On Mon, Apr 10, 2017 at 11:08:56PM +0200, Pavlos Parissis wrote:
> >> On 10/04/2017 08:09 , Olivier Houchard wrote:
> >>>
> >>> Hi,
> >>>
> >>> On top of those patches, here are 3 more patches.
> >>> The first one makes the systemd wrapper check for a HAPROXY_STATS_SOCKET
> >>> environment variable. If set, it will use that as an argument to -x, when
> >>> reloading the process.
> >>
> >> I see you want to introduce a specific environment variable for this 
> >> functionality
> >> and then fetch it in the code with getenv(). This is a way to do it.
> >>
> >> IMHO: I prefer to pass a value to an argument, for instance -x. It is also
> >> consistent with the haproxy binary, where the -x argument is used as well.
> >>
> > 
> > I'm not sure what you mean by this ?
> > It is supposed to be directly the value given to -x as an argument.
> > 
> 
> Ignore what I wrote above as I was under the wrong impression that we need to
> pass -x to the systemd wrapper during the reload, but the systemd wrapper is not
> invoked during the reload.
> 
> 
> >>> The second one sends listening unix sockets, as well as IPv4/v6 sockets. 
> >>> I see no reason not to, and that means we no longer have to wait until
> >>> the old process closes the socket before being able to accept new 
> >>> connections
> >>> on it.
> >>
> >>> The third one adds a new global option, nosockettransfer; if set, we 
> >>> assume
> >>> we will never try to transfer listening sockets through the stats socket,
> >>> and close any socket not bound to our process, to save a few file
> >>> descriptors.
> >>>
> >>
> >> IMHO: a better name would be 'stats nounsedsockets', as it is referring 
> >> to a
> >> generic functionality of the UNIX stats socket, rather than to a very 
> >> specific functionality.
> >>
> > 
> > Well really it is a global setting, maybe my wording isn't great, but it
> > makes haproxy close all sockets not bound to this process, as it used to,
> > instead of keeping them around in case somebody asks for them. 
> > 
> >> I hope tomorrow I will find some time to test your patches.
> >>
> > 
> > Thanks a lot ! This is greatly appreciated.
> > 
> 
> Do you have a particular way to test it?
> My plan is to have a stress environment where I fire up X thousand TCP
> connections and produce a graph which contains the number of TCP RSTs received from
> the client during a soft-reload of haproxy. I'll run this test against haproxy
> 1.8 with and without your patches. So, I will have a clear indication if your
> patches solved the issue.
> 

That sounds good to me. My testing involved an haproxy running with 4
processes, 2 listeners, 1000 ports for each, and 2 processes bound to each
listener, 2 injectors doing http requests (that merely got a redirect as an
answer from haproxy), one on each listener, and reloading haproxy in a loop.

Regards,

Olivier



Re: [RFC][PATCHES] seamless reload

2017-04-11 Thread Pavlos Parissis
On 10/04/2017 11:52 μμ, Olivier Houchard wrote:
> On Mon, Apr 10, 2017 at 11:08:56PM +0200, Pavlos Parissis wrote:
>> On 10/04/2017 08:09 , Olivier Houchard wrote:
>>>
>>> Hi,
>>>
>>> On top of those patches, here are 3 more patches.
>>> The first one makes the systemd wrapper check for a HAPROXY_STATS_SOCKET
>>> environment variable. If set, it will use that as an argument to -x, when
>>> reloading the process.
>>
>> I see you want to introduce a specific environment variable for this 
>> functionality
>> and then fetch it in the code with getenv(). This is a way to do it.
>>
>> IMHO: I prefer to pass a value to an argument, for instance -x. It is also
>> consistent with the haproxy binary, where the -x argument is used as well.
>>
> 
> I'm not sure what you mean by this ?
> It is supposed to be directly the value given to -x as an argument.
> 

Ignore what I wrote above as I was under the wrong impression that we need to
pass -x to the systemd wrapper during the reload, but the systemd wrapper is not
invoked during the reload.


>>> The second one sends listening unix sockets, as well as IPv4/v6 sockets. 
>>> I see no reason not to, and that means we no longer have to wait until
>>> the old process closes the socket before being able to accept new connections
>>> on it.
>>
>>> The third one adds a new global option, nosockettransfer; if set, we assume
>>> we will never try to transfer listening sockets through the stats socket,
>>> and close any socket not bound to our process, to save a few file
>>> descriptors.
>>>
>>
>> IMHO: a better name would be 'stats nounsedsockets', as it is referring to a
>> generic functionality of the UNIX stats socket, rather than to a very 
>> specific functionality.
>>
> 
> Well really it is a global setting, maybe my wording isn't great, but it
> makes haproxy close all sockets not bound to this process, as it used to,
> instead of keeping them around in case somebody asks for them. 
> 
>> I hope tomorrow I will find some time to test your patches.
>>
> 
> Thanks a lot ! This is greatly appreciated.
> 

Do you have a particular way to test it?
My plan is to have a stress environment where I fire up X thousand TCP
connections and produce a graph which contains the number of TCP RSTs received from
the client during a soft-reload of haproxy. I'll run this test against haproxy
1.8 with and without your patches. So, I will have a clear indication if your
patches solved the issue.


Any ideas?

Cheers,
Pavlos





Re: [RFC][PATCHES] seamless reload

2017-04-11 Thread Pavlos Parissis
On 10/04/2017 11:48 μμ, Olivier Houchard wrote:
> On Mon, Apr 10, 2017 at 10:49:21PM +0200, Pavlos Parissis wrote:
>> On 07/04/2017 11:17 , Olivier Houchard wrote:
>>> On Fri, Apr 07, 2017 at 09:58:57PM +0200, Pavlos Parissis wrote:
 On 06/04/2017 04:57 , Olivier Houchard wrote:
> On Thu, Apr 06, 2017 at 04:56:47PM +0200, Pavlos Parissis wrote:
>> On 06/04/2017 04:25 , Olivier Houchard wrote:
>>> Hi,
>>>
>>> The attached patchset is the first cut at an attempt to work around the
>>> linux issues with SOREUSEPORT that makes haproxy refuse a few new 
>>> connections
>>> under heavy load.
>>> This works by transferring the existing sockets to the new process via 
>>> the
>>> stats socket. A new command-line flag has been added, -x, that takes the
>>> path to the unix socket as an argument, and if set, will attempt to 
>>> retrieve
>>> all the listening sockets;
>>> Right now, any error, either while connecting to the socket, or 
>>> retrieving
>>> the file descriptors, is fatal, but maybe it'd be better to just fall 
>>> back
>>> to the previous behavior instead of opening any missing socket ? I'm 
>>> still
>>> undecided about that.
>>>
>>> Any testing, comments, etc would be greatly appreciated.
>>>
>>
>> Does this patch set support HAProxy in multiprocess mode (nbproc > 1) ?
>>
>
> Hi Pavlos,
>
> If it does not, it's a bug :)
> In my few tests, it seemed to work.
>
> Olivier
>


 I run systems with systemd and I can't see how I can test the seamless 
 reload
 functionality (thanks for that) without proper support for the UNIX 
 socket
 file argument (-x) in the haproxy-systemd-wrapper.

 I believe you need to modify the wrapper to accept -x argument for a single
 UNIX socket file or -X for a directory path with multiple UNIX socket 
 files,
 when HAProxy runs in multi-process mode.

 What do you think?

 Cheers,
 Pavlos



>>>
>>>
>>> Hi Pavlos,
>>>
>>> I didn't consider systemd, so it may be I have to investigate there.
>>> You don't need to talk to all the processes to get the sockets, in the new
>>> world order, each process does have all the sockets, although it will accept
>>> connections only for those for which it is configured to do so (I plan to 
>>> add
>>> a configuration option to restore the old behavior, for those who don't 
>>> need 
>>> that, and want to save file descriptors).
>>> Reading the haproxy-systemd-wrapper code, it should be trivial.
>>> I just need to figure out how to properly provide the socket path to the
>>>  wrapper.
>>> I see that you already made use of a few environment variables in
>>> haproxy.service. Would that be reasonnable to add a new one, that could
>>> be set in haproxy.service.d/overwrite.conf ? I'm not super-used to systemd,
>>> and I'm not sure of how people are doing that kind of things.
>>>
>>
>> I believe you are referring to the $CONFIG and $PIDFILE environment
>> variables. Those two variables are passed to the two arguments, which were
>> present before but whose input was impossible to adjust; switching to
>> variables allowed people to overwrite their input.
>>
>> In this case, we are talking about a new functionality. I guess the best 
>> approach
>> would be to have ExecStart use EXTRAOPTS:
>> ExecStart=/usr/sbin/haproxy-systemd-wrapper -f $CONFIG -p $PIDFILE $EXTRAOPTS
>>
>> This will allow users to set a value to the new argument and any other 
>> argument
>> they want:
>> cat /etc/systemd/system/haproxy.service.d/overwrite.conf
>> [Service]
>> Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid"
>> "EXTRAOPTS=-x /foobar"
>>
>> or using default configuration file /etc/default/haproxy
>>
>> [Service]
>> Environment="CONFIG=/etc/haproxy/haproxy.cfg" "PIDFILE=/run/haproxy.pid"
>> EnvironmentFile=-/etc/default/haproxy
>> ExecStart=/usr/sbin/haproxy-systemd-wrapper -f $CONFIG -p $PIDFILE $EXTRAOPTS
>>
>> cat /etc/default/haproxy
>> EXTRAOPTS="-x /foobar"
>>
>> I hope it helps,
>> Cheers,
>>
> 
> 
> 
> Hi Pavlos,
> 
> Yeah I see what you mean, it is certainly doable, though -x is a bit special,
> because you don't use it the first time you run haproxy, only for reloading,
> so that would mean the wrapper has special knowledge about it, and removes it
> from the user-supplied command line the first time it's called. I'm a bit
> uneasy about that, but if it's felt the best way to do it, I'll go ahead.
> 

I see, I forgot that the -x argument only has meaning during the reload phase and
ExecReload uses haproxy and not the haproxy-systemd-wrapper.
So, it makes sense to pass the environment variable and let
haproxy-systemd-wrapper do its thing only on the reload.

Please ignore what I wrote above for EXTRAOPTS, sorry for the confusion.

Cheers,
Pavlos





Re: ACL with dynamic pattern

2017-04-11 Thread Alexander Lebedev
Aleksandar, Hello!

No, it does not help.

haproxy -vv output:

HA-Proxy version 1.7.4 2017/03/27

Copyright 2000-2017 Willy Tarreau 



Build options :

  TARGET  = solaris

  CPU = generic

  CC  = /pub/site/opt/bin/gcc

  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
-fomit-frame-pointer -DFD_SETSIZE=65536 -D_REENTRANT

  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1



Default settings :

  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200



Encrypted password support via crypt(3): yes

Built with zlib version : 1.2.3

Running on zlib version : 1.2.3

Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")

Built with OpenSSL version : OpenSSL 1.0.2h  3 May 2016

Running on OpenSSL version : OpenSSL 1.0.2e 3 Dec 2015

OpenSSL library supports TLS extensions : yes

OpenSSL library supports SNI : yes

OpenSSL library supports prefer-server-ciphers : yes

Built with PCRE version : 8.32 2012-11-30

Running on PCRE version : 8.32 2012-11-30

PCRE library supports JIT : no (libpcre build without JIT?)

Built without Lua support



Available polling systems :

   poll : pref=200,  test result OK

 select : pref=150,  test result OK

Total: 2 (2 usable), will use poll.



Available filters :

[COMP] compression

[TRACE] trace

[SPOE] spoe
Alexander Lebedev


On 11 Apr 2017 at 1:41, "Aleksandar Lazic" 
wrote:

> On 10-04-2017 10:55, Alexander Lebedev wrote:
>
> Hello!
>>
>> I want to implement a CSRF check with haproxy.
>> I want to check that the cookie value matches the header value and deny the
>> request if they're not equal.
>>
>> Something like this:
>> acl token_valid req.cook(token) %[req.hdr(token)]
>> http-request deny unless token_valid
>>
>
> and when you add -m, does this help?
>
> acl token_valid req.cook(token) -m %[req.hdr(token)]
>
> or
>
> acl token_valid %[req.hdr(token)] -m req.cook(token)
>
> from
> http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#7.1.3
>
> Btw.: what's the output of
> haproxy -vv
>
> But I can't find the way to perform this check.
>> Is it really impossible?
>>
>> Alexander Lebedev
>>
>
> Regards
> aleks
>


Re: ModSecurity: First integration patches

2017-04-11 Thread Thierry Fournier

> On 11 Apr 2017, at 11:24, Olivier Doucet  wrote:
> 
> Hi Thierry,
> 
> 
> 
> 2017-04-11 10:49 GMT+02:00 Thierry Fournier :
> Hi list
> 
> I'm attaching one usage of HAProxy / SPOE: WAF offloading.
> 
> These patches are a first version; they have some limitations described
> in the README file in the contrib/modsecurity directory.
> 
>  - Christopher, please check the patch "BUG/MINOR", it is about spoe
>functions.
> 
>  - The exemple of ModSecurity compilation can be improved. It is based
>on my local distro.
> 
> The feedback are welcome.
> 
> Nice work for a first version! I can definitely see the great production
> use that could be made of it.


thanks.


> Your README file references several issues in ModSecurity itself; this is
> not something I would expect from such a widely used piece of software …


If you’re talking about the “ModSecurity bugs” section, you must keep in mind
that ModSecurity is written for Apache. Its integration in HAProxy or NGINX is
not “natural” for this software. (I hope that V3 will embed a more compatible
API.)

The first bug may be due to my build system. At first I compiled my own
version of APR, and the bug may have been caused by a bad compile flag. I then
used the system APR and hit the same bug.

I have found some posts on the Internet about this bug, but no way to fix it.
I suppose this bug does not occur with Apache. The configuration workaround
for this bug is: “SecAuditLogType Concurrent”

The second bug occurs only with HAProxy and NGINX; ModSecurity with Apache
doesn’t have it. It can be avoided by not using wildcards, i.e. by listing
each included configuration file explicitly.


> What happens if, for some reason, ModSecurity does not answer within the
> defined timeout?


Good question. I suppose the answer lies in the SPOE timeouts.


> What happens if ModSecurity throws an error?


The variable “txn.modsec.code” is simply not set. In other words, the variable
is set to 0 if all is good, left unset if an error occurs, and otherwise it
contains the recommended HTTP return code.
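
[Editor's note] To make that contract concrete, here is a hedged sketch of how a frontend could act on the variable. The filter/engine names follow the contrib/modsecurity README shipped with these patches; the frontend/backend names and the blocking policy are assumptions for illustration:

```haproxy
frontend fe_waf
    bind :8080
    # SPOE filter that offloads each request to the ModSecurity agent.
    filter spoe engine modsecurity config spoe-modsecurity.conf
    # txn.modsec.code: 0 = clean, unset = agent error,
    # > 0 = HTTP status recommended by ModSecurity.
    http-request deny if { var(txn.modsec.code) -m int gt 0 }
    default_backend be_app
```

With this policy an agent error (variable unset) fails open; a stricter setup could also deny when the variable is missing.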

Thierry


> Olivier
> 




Re: ModSecurity: First integration patches

2017-04-11 Thread Olivier Doucet
Hi Thierry,



2017-04-11 10:49 GMT+02:00 Thierry Fournier :

> Hi list
>
> Attached is one use case for HAProxy / SPOE: WAF offloading.
>
> These patches are a first version; they have some limitations, described
> in the README file in the directory contrib/modsecurity.
>
>  - Christopher, please check the patch "BUG/MINOR", it is about spoe
>    functions.
>
>  - The example ModSecurity build can be improved. It is based
>    on my local distro.
>
> Feedback is welcome.
>

Nice work for a first version! I can definitely see the great production
use that could be made of it.

Your README file references several issues in ModSecurity itself; this is
not something I would expect from such a widely used piece of software ...

What happens if, for some reason, ModSecurity does not answer within the
defined timeout? What happens if ModSecurity throws an error?


Olivier


ModSecurity: First integration patches

2017-04-11 Thread Thierry Fournier
Hi list

Attached is one use case for HAProxy / SPOE: WAF offloading.

These patches are a first version; they have some limitations, described
in the README file in the directory contrib/modsecurity.

 - Christopher, please check the patch "BUG/MINOR", it is about spoe
   functions.

 - The example ModSecurity build can be improved. It is based
   on my local distro.

Feedback is welcome.

Thierry
>From 55702d5b7b3aa72f1e2befaa3edb5b5ccbb302f5 Mon Sep 17 00:00:00 2001
From: Thierry FOURNIER 
Date: Sun, 9 Apr 2017 05:41:27 +0200
Subject: [PATCH 1/3] BUG/MINOR: change header-declared function to static
 inline

When we include the header proto/spoe.h in other files in the same
project, the compiler complains that the symbols have multiple definitions:

   src/flt_spoe.o: In function `spoe_encode_varint':
   ~/git/haproxy/include/proto/spoe.h:45: multiple definition of `spoe_encode_varint'
   src/proto_http.o:~/git/haproxy/include/proto/spoe.h:45: first defined here
---
 include/proto/spoe.h |   16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/proto/spoe.h b/include/proto/spoe.h
index 06fb52d..da19db1 100644
--- a/include/proto/spoe.h
+++ b/include/proto/spoe.h
@@ -39,7 +39,7 @@
  *
  * On success, it returns the number of written bytes and <*buf> is moved after
  * the encoded value. Otherwise, it returns -1. */
-int
+static inline int
 spoe_encode_varint(uint64_t i, char **buf, char *end)
 {
 	unsigned char *p = (unsigned char *)*buf;
@@ -76,7 +76,7 @@ spoe_encode_varint(uint64_t i, char **buf, char *end)
  * 'spoe_encode_varint' for details about varint.
  * On success, it returns the number of read bytes and <*buf> is moved after the
  * varint. Otherwise, it returns -1. */
-int
+static inline int
 spoe_decode_varint(char **buf, char *end, uint64_t *i)
 {
 	unsigned char *p = (unsigned char *)*buf;
@@ -109,7 +109,7 @@ spoe_decode_varint(char **buf, char *end, uint64_t *i)
  * error is triggered.
 * On success, it returns <len> and <*buf> is moved after the encoded value. If
  * an error occurred, it returns -1. */
-int
+static inline int
 spoe_encode_buffer(const char *str, size_t len, char **buf, char *end)
 {
 	char *p = *buf;
@@ -137,7 +137,7 @@ spoe_encode_buffer(const char *str, size_t len, char **buf, char *end)
  * 'spoe_encode_buffer', but if there is not enough space, it does not fail.
  * On success, it returns the number of copied bytes and <*buf> is moved after
  * the encoded value. If an error occured, it returns -1. */
-int
+static inline int
 spoe_encode_frag_buffer(const char *str, size_t len, char **buf, char *end)
 {
 	char *p = *buf;
@@ -166,7 +166,7 @@ spoe_encode_frag_buffer(const char *str, size_t len, char **buf, char *end)
  * points on the first byte of the buffer.
  * On success, it returns the buffer length and <*buf> is moved after the
  * encoded buffer. Otherwise, it returns -1. */
-int
+static inline int
 spoe_decode_buffer(char **buf, char *end, char **str, size_t *len)
 {
 	char*p = *buf;
@@ -195,7 +195,7 @@ spoe_decode_buffer(char **buf, char *end, char **str, size_t *len)
  * partially encoded. In this case, the offset <*off> is updated to known how
  * many bytes has been encoded. If <*off> is zero at the end, it means that all
  * data has been encoded. */
-int
+static inline int
 spoe_encode_data(struct sample *smp, unsigned int *off, char **buf, char *end)
 {
 	char *p = *buf;
@@ -318,7 +318,7 @@ spoe_encode_data(struct sample *smp, unsigned int *off, char **buf, char *end)
  *  - ipv6: 16 bytes
  *  - binary and string: a buffer prefixed by its size, a variable-length
  *integer (see spoe_decode_buffer) */
-int
+static inline int
 spoe_skip_data(char **buf, char *end)
 {
 	char*str, *p = *buf;
@@ -366,7 +366,7 @@ spoe_skip_data(char **buf, char *end)
 /* Decode a typed data and fill . If an error occurred, -1 is returned,
  * otherwise the number of read bytes is returned and <*buf> is moved after the
  * decoded data. See spoe_skip_data for details. */
-int
+static inline int
 spoe_decode_data(char **buf, char *end, struct sample *smp)
 {
 	char  *str, *p = *buf;
-- 
1.7.10.4

>From 41bc8016bb0a4ed6ce5970d8b4ed6055737b9dad Mon Sep 17 00:00:00 2001
From: Thierry FOURNIER 
Date: Sun, 9 Apr 2017 05:38:19 +0200
Subject: [PATCH 2/3] MINOR: Add binary encoding request sample fetch

This sample fetch encodes the full HTTP request available in the buffer.
The request may or may not contain the body, and the body may be partial.
This sample fetch is useful with SPOE.
---
 doc/configuration.txt |   15 
 src/proto_http.c  |  240 +
 2 files changed, 255 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 81b641e..3229ee1 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -14116,6 +14116,21 @@ payload_lv(<offset1>,<length>[,<offset2>]) : binary (deprecated)
   (eg: "stick on", "stick match"), and for 

Re: IPv6 resolvers seems not works

2017-04-11 Thread Павел Знаменский
Frederic,
Your patch fixes this issue.

Thank you!


2017-04-11 9:53 GMT+03:00 Frederic Lecaille :

> On 04/10/2017 04:17 PM, Frederic Lecaille wrote:
>
>> On 04/10/2017 01:42 PM, Павел Знаменский wrote:
>>
>>> Hello,
>>>
>>
>> Hello,
>>
>> I'm trying to add an IPv6 address as a nameserver to be able to resolve
>>> addresses in an IPv6-only environment:
>>>
>>> resolvers google_dns_10m
>>> nameserver google_dns1 2001:4860:4860::8888:53
>>> nameserver google_dns2 2001:4860:4860::8844:53
>>> hold valid 10m
>>> resolve_retries 2
>>>
>>> But I get an error:
>>> [ALERT] 099/133733 (10412) : Starting [google_dns_10m/google_dns1]
>>> nameserver: can't connect socket.
>>>
>>> As I understand it, the resolver uses AF_INET when connecting to the
>>> nameserver, and that's why IPv6 doesn't work.
>>>
>>
>> In fact, the address families used during the socket() and connect()
>> syscalls are different.
>>
>> This is revealed by strace:
>>
>> socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 4
>> connect(4, {sa_family=AF_INET6, sin6_port=htons(53), inet_pton(AF_INET6,
>> "2001:4860:4860::8844", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0},
>> 28) = -1 EAFNOSUPPORT (Address family not supported by protocol)
>>
>> should be:
>>
>> socket(PF_INET6, ...)
>>
>
> This patch fixes the issue.
>
>
>
>


Re: IPv6 resolvers seems not works

2017-04-11 Thread Frederic Lecaille

On 04/10/2017 04:17 PM, Frederic Lecaille wrote:

On 04/10/2017 01:42 PM, Павел Знаменский wrote:

Hello,


Hello,


I'm trying to add an IPv6 address as a nameserver to be able to resolve
addresses in an IPv6-only environment:

resolvers google_dns_10m
nameserver google_dns1 2001:4860:4860::8888:53
nameserver google_dns2 2001:4860:4860::8844:53
hold valid 10m
resolve_retries 2

But I get an error:
[ALERT] 099/133733 (10412) : Starting [google_dns_10m/google_dns1]
nameserver: can't connect socket.

As I understand it, the resolver uses AF_INET when connecting to the
nameserver, and that's why IPv6 doesn't work.


In fact, the address families used during the socket() and connect()
syscalls are different.

This is revealed by strace:

socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 4
connect(4, {sa_family=AF_INET6, sin6_port=htons(53), inet_pton(AF_INET6,
"2001:4860:4860::8844", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0},
28) = -1 EAFNOSUPPORT (Address family not supported by protocol)

should be:

socket(PF_INET6, ...)


This patch fixes the issue.



>From 09fbee7c67dea87761165c35e8a1c0450575504c Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20L=C3=A9caille?= 
Date: Tue, 11 Apr 2017 08:46:37 +0200
Subject: [PATCH] MINOR: dns: Wrong address family used when creating IPv6
 sockets.

AF_INET address family was always used to create sockets to connect
to name servers. This prevented any connection over IPv6 from working.
---
 src/dns.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/dns.c b/src/dns.c
index 075a701..a118598 100644
--- a/src/dns.c
+++ b/src/dns.c
@@ -1022,7 +1022,7 @@ int dns_init_resolvers(int close_socket)
 			dgram->data = _dgram_cb;
 
 			/* create network UDP socket for this nameserver */
-			if ((fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) == -1) {
+			if ((fd = socket(curnameserver->addr.ss_family, SOCK_DGRAM, IPPROTO_UDP)) == -1) {
 Alert("Starting [%s/%s] nameserver: can't create socket.\n", curr_resolvers->id,
 		curnameserver->id);
 free(dgram);
-- 
2.1.4