stable-bot: Bugfixes waiting for a release 2.1 (4), 2.0 (1)

2020-04-21 Thread stable-bot
Hi,

This is a friendly bot that watches fixes pending for the next haproxy-stable 
release!  One such e-mail is sent periodically once patches are waiting in the 
last maintenance branch, and an ideal release date is computed based on the 
severity of these fixes and their merge date.  Responses to this mail must be 
sent to the mailing list.


Last release 2.1.4 was issued on 2020-04-02.  There are currently 4 patches in 
the queue cut down this way:
- 4 MINOR, first one merged on 2020-04-02

Thus the computed ideal release date for 2.1.5 would be 2020-04-30, which is in 
one week or less.

Last release 2.0.14 was issued on 2020-04-02.  There is currently 1 patch in 
the queue cut down this way:
- 1 MINOR, first one merged on 2020-04-02

Thus the computed ideal release date for 2.0.15 would be 2020-04-30, which is 
in one week or less.

The current list of patches in the queue is:
 - 2.1       - MINOR   : connection: always send address-less LOCAL PROXY connections
 - 2.0, 2.1  - MINOR   : protocol_buffer: Wrong maximum shifting.
 - 2.1       - MINOR   : ssl: memleak of the struct cert_key_and_chain
 - 2.1       - MINOR   : ssl/cli: memory leak in 'set ssl cert'

-- 
The haproxy stable-bot is freely provided by HAProxy Technologies to help 
improve the quality of each HAProxy release.  If you have any issue with these 
emails or if you want to suggest some improvements, please post them on the 
list so that the solutions suiting the most users can be found.



Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Илья Шипицин
Wed, 22 Apr 2020 at 00:06, Tim Düsterhus :

> Ilya,
>
> On 21.04.20 at 20:49, Илья Шипицин wrote:
> > I thought of some more high level fuzzing without intercepting code path.
> > for example, we know about range queries
> >
> > Range: bytes=0-1023
> >
> >
> > i.e. bytes=(integer)-(integer)
> >
> >
> > what if we send
> >
> > Range: bytes=1023-0
> >
> > or
> > Range: bytes=1023
> >
> > or
> >
> > Range: bytes=abc-def
> >
> > and so on.
> > it does not require any code modification. but proper workload generator
> > should be chosen
> >
>
> That would not be the job of a fuzzer, but that of an HTTP compliance
> checker, because that deals with business logic. Someone would need to
> encode all the rules and edge cases laid out in the RFC into a program,
> like someone did for h2spec. You don't need to have any smartness within
> that checker, sending static requests and reading the responses is
> sufficient there.
>

I heard of "level 2" fuzzing
https://blog.tox.chat/2015/09/fuzzing-the-new-groupchats/

i.e. fuzzing on top of protocol implementation


>
> A fuzzer attempts to generate data that trips over the input parsers in
> a way a human would not think of, because it's not an "obvious" edge
> case. For CVE-2018-14645 the bug would trigger when receiving values
> exceeding the range of an int, which might be an obvious edge case for a
> C developer, but is not something that's specifically acknowledged
> within the H2 specification. Negative values however are clearly invalid
> when talking about a byte range.
>
> Best regards
> Tim Düsterhus
>


Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Tim Düsterhus
Ilya,

On 21.04.20 at 20:49, Илья Шипицин wrote:
> I thought of some more high level fuzzing without intercepting code path.
> for example, we know about range queries
> 
> Range: bytes=0-1023
> 
> 
> i.e. bytes=(integer)-(integer)
> 
> 
> what if we send
> 
> Range: bytes=1023-0
> 
> or
> Range: bytes=1023
> 
> or
> 
> Range: bytes=abc-def
> 
> and so on.
> it does not require any code modification. but proper workload generator
> should be chosen
> 

That would not be the job of a fuzzer, but that of an HTTP compliance
checker, because that deals with business logic. Someone would need to
encode all the rules and edge cases laid out in the RFC into a program,
like someone did for h2spec. You don't need to have any smartness within
that checker, sending static requests and reading the responses is
sufficient there.

A fuzzer attempts to generate data that trips over the input parsers in
a way a human would not think of, because it's not an "obvious" edge
case. For CVE-2018-14645 the bug would trigger when receiving values
exceeding the range of an int, which might be an obvious edge case for a
C developer, but is not something that's specifically acknowledged
within the H2 specification. Negative values however are clearly invalid
when talking about a byte range.

Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Илья Шипицин
Tue, 21 Apr 2020 at 20:24, Tim Düsterhus :

> Ilya,
>
> On 21.04.20 at 17:02, Илья Шипицин wrote:
> >> The two CVEs I mentioned were bugs *I* found using afl-fuzz. The biggest
> >> hurdle back when I attempted fuzzing was not getting an appropriate
> >> workload (I've just created a few basic requests using nghttp), but
> >> instead getting the requests into HAProxy in a way so that afl is able
> >> to detect branches that change based on input changes. This branch
> >> detection is *the* main selling point of afl. Just sending random
> >> garbage is not going to turn up interesting stuff, if anything.
> >>
> >
> >
> > I really believe that people who can perform fuzzing are smarter than me.
> > But I hope
> > to be able to run fuzzing some day :)
> >
> > what are "branches"? are they git branches? do you have any setup
> > step-by-step
>
> Branches refer to branches within the generated machine code (i.e.
> conditional jumps). AFL works similarly to ASAN in that it adds some
> additional code to the executable to detect whether a branch was taken
> (i.e. a jump happened) or not.
>
> As a super simplified example consider the following code:
>
> if (buf[0] == 'b') {
>   if (buf[1] == '1') {
> crash();
>   }
>   // do something (1)
> }
> else {
>   // do something (2)
> }
>
> I would then use the following as the initial payload:
>
> buf = "a0"
>
> AFL would then execute the "(2)" line. Afterwards it might try the
> following (increase the first byte by 1):
>
> buf = "b0"
>


I thought of some more high level fuzzing without intercepting code path.
for example, we know about range queries

Range: bytes=0-1023


i.e. bytes=(integer)-(integer)


what if we send

Range: bytes=1023-0

or
Range: bytes=1023

or

Range: bytes=abc-def

and so on.
it does not require any code modification. but proper workload generator
should be chosen
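As a rough illustration of such a workload generator (hypothetical code, not an existing tool; the value list simply enumerates edge cases around the `bytes=(integer)-(integer)` grammar, including the examples above):

```python
# Sketch of a "level 2" workload generator for the Range header:
# semi-valid values built around the bytes=(integer)-(integer) grammar.
def range_header_candidates():
    cases = [
        "bytes=0-1023",                  # valid baseline
        "bytes=1023-0",                  # reversed bounds
        "bytes=1023",                    # missing dash
        "bytes=abc-def",                 # non-numeric bounds
        "bytes=-1",                      # minimal suffix form
        "bytes=0-18446744073709551616",  # beyond 64-bit range
        "bytes=0-1023,5-10,",            # trailing comma in a multi-range
    ]
    return [("Range", value) for value in cases]

for name, value in range_header_candidates():
    print(f"{name}: {value}")
```

Each pair could then be fed to any HTTP client against the proxy under test; no code modification of the target is needed, matching the idea above.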




> AFL would then detect that something changed: Instead of jumping to the
> 'else' it would continue executing the second 'if'. Now AFL knows that
> the first byte being 'b' is special (or at least different to 'a').
> Instead of attempting 'c' it might then proceed to modify the second
> byte. By incrementing it from '0' to '1' it notices that again something
> changed: The program crashes.
>
> This "intelligent" processing could find the bug with just 3 inputs
> instead of having to randomly test 256*256 combinations for the two
> bytes. In reality the results are even more impressive: AFL was able to
> generate a valid JPG image based on the starting input 'hello'.
>
> See this blog post:
> https://lcamtuf.blogspot.com/2014/11/pulling-jpegs-out-of-thin-air.html
>
> > described how those CVEs were detected?
> >
>
> I intended to write up a blog post after my initial find, but never got
> around to it. For the first one it basically went like this:
>
> 1. Compile the standalone hpack decoder.
> 2. Start afl-fuzz with a single input on the decoder.
> 3. Wait 2 minutes.
> 4. Report security issue.
>
> Not joking, it literally took 2 minutes of throwing data at the decoder
> to find the issue on a single core cloud server. I believe you'll be
> able to figure it out yourself. To reproduce the bug you need to check
> out commit f7db9305aa8642cb5145bba6f8948400c52396af (that's one before
> the fix).
>
> The second one was more involved and less reliable. I used desock.so
> from https://github.com/zardus/preeny to receive "network" input from
> stdin and patched HAProxy to exit after serving a single request. Then I
> used a simplistic configuration pointing to an nginx and seeded AFL
> using some HTTP/2 requests I generated using nghttp against `nc -l >
> request`. However that dirty hackery resulted in AFL not reliably
> detecting whether something changed because the input changed or whether
> it just randomly changed.
>

thank you, I'll try next weekend


>
> Best regards
> Tim Düsterhus
>


Re: [*EXT*] Re: Question about demo website

2020-04-21 Thread Ionel GARDAIS
Hi Willy,

Thanks for your feedback: I forgot the "option socket-stats" in the frontend.

It's all pretty now :)

-- 
Ionel GARDAIS
Tech'Advantage CIO - IT Team manager

- Original message -
From: "Willy Tarreau" 
To: "Ionel GARDAIS" 
Cc: "William Lallemand" , "haproxy" 

Sent: Tuesday 21 April 2020 16:26:33
Subject: Re: [*EXT*] Re: Question about demo website

Hi Ionel,

On Tue, Apr 21, 2020 at 10:51:24AM +0200, Ionel GARDAIS wrote:
> thanks William,
> 
> My frontend definition is:
> frontend ft-public
> bind ip.v.4.addr:80 name web-v4
> bind [ip:v:6:addr]:80 name web-v6
> 
> and I'm still seeing only a Frontend entry in the table
> 
> 
> I also tried to add 
> 
> stats show-desc
> stats show-legends
> stats show-node
> 
> to the dedicated stats listener with no luck.

That's what I'm having:

   frontend http-in
        option socket-stats
        bind 10.x.x.x:60080 ... name IPv4-direct
        bind 10.x.x.x:60081 ... name IPv4-cached
        bind :::80 ... v6only name IPv6-direct
        bind 127.0.0.1:60080 name local
        bind 127.0.0.1:65443 name local-https accept-proxy ssl ...
        mode http

   (...)
   backend demo
        mode http
        stats enable
        stats show-node 1wt.eu
        #stats show-legends     # shows detailed info (ip, cookies, ...)
        stats uri /
        stats scope http-in
        stats scope www
        stats scope git
        stats scope demo

As William mentioned, the socket names are those on the "bind" lines.
The "option socket-stats" in the frontend is what allows one stats
entry per bind line.

Hoping this helps,
Willy
--
232 avenue Napoleon BONAPARTE 92500 RUEIL MALMAISON
Capital EUR 219 300,00 - RCS Nanterre B 408 832 301 - TVA FR 09 408 832 301




Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Tim Düsterhus
Ilya,

On 21.04.20 at 17:02, Илья Шипицин wrote:
>> The two CVEs I mentioned were bugs *I* found using afl-fuzz. The biggest
>> hurdle back when I attempted fuzzing was not getting an appropriate
>> workload (I've just created a few basic requests using nghttp), but
>> instead getting the requests into HAProxy in a way so that afl is able
>> to detect branches that change based on input changes. This branch
>> detection is *the* main selling point of afl. Just sending random
>> garbage is not going to turn up interesting stuff, if anything.
>>
> 
> 
> I really believe that people who can perform fuzzing are smarter than me.
> But I hope
> to be able to run fuzzing some day :)
> 
> what are "branches"? are they git branches? do you have any setup
> step-by-step

Branches refer to branches within the generated machine code (i.e.
conditional jumps). AFL works similarly to ASAN in that it adds some
additional code to the executable to detect whether a branch was taken
(i.e. a jump happened) or not.

As a super simplified example consider the following code:

if (buf[0] == 'b') {
  if (buf[1] == '1') {
crash();
  }
  // do something (1)
}
else {
  // do something (2)
}

I would then use the following as the initial payload:

buf = "a0"

AFL would then execute the "(2)" line. Afterwards it might try the
following (increase the first byte by 1):

buf = "b0"

AFL would then detect that something changed: Instead of jumping to the
'else' it would continue executing the second 'if'. Now AFL knows that
the first byte being 'b' is special (or at least different to 'a').
Instead of attempting 'c' it might then proceed to modify the second
byte. By incrementing it from '0' to '1' it notices that again something
changed: The program crashes.

This "intelligent" processing could find the bug with just 3 inputs
instead of having to randomly test 256*256 combinations for the two
bytes. In reality the results are even more impressive: AFL was able to
generate a valid JPG image based on the starting input 'hello'.

See this blog post:
https://lcamtuf.blogspot.com/2014/11/pulling-jpegs-out-of-thin-air.html
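The feedback loop described above can be mirrored by a toy coverage-guided fuzzer (an illustration of the principle only; real AFL instruments branches in compiled code rather than tracing a Python target):

```python
# Toy coverage-guided fuzzer: keep a mutated input only when it changes
# which branches the target executes, as AFL does with compiled code.
def target(buf):
    trace = []                      # records which branches were taken
    if buf[0] == "b":
        trace.append("outer-if")
        if buf[1] == "1":
            raise RuntimeError("crash()")
        trace.append("something-1")
    else:
        trace.append("something-2")
    return trace

def fuzz(seed):
    corpus = {seed: tuple(target(seed))}
    queue = [seed]
    attempts = 0
    while queue:
        base = queue.pop(0)
        for pos in range(len(base)):
            # minimal mutation: bump one byte by one, like AFL's increments
            mutated = base[:pos] + chr(ord(base[pos]) + 1) + base[pos + 1:]
            attempts += 1
            try:
                trace = tuple(target(mutated))
            except RuntimeError:
                return mutated, attempts   # crashing input found
            if trace not in corpus.values():
                corpus[mutated] = trace    # new coverage: keep this input
                queue.append(mutated)
    return None, attempts

crasher, tries = fuzz("a0")
print(crasher, tries)   # prints "b1 4": the crasher, found in 4 attempts
```

Starting from "a0", the loop keeps "b0" because it opens a new branch, then reaches the crash on "b1" — a handful of attempts instead of 256*256 blind combinations, exactly the effect described above.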

> described how those CVEs were detected?
> 

I intended to write up a blog post after my initial find, but never got
around to it. For the first one it basically went like this:

1. Compile the standalone hpack decoder.
2. Start afl-fuzz with a single input on the decoder.
3. Wait 2 minutes.
4. Report security issue.

Not joking, it literally took 2 minutes of throwing data at the decoder
to find the issue on a single core cloud server. I believe you'll be
able to figure it out yourself. To reproduce the bug you need to check
out commit f7db9305aa8642cb5145bba6f8948400c52396af (that's one before
the fix).

The second one was more involved and less reliable. I used desock.so
from https://github.com/zardus/preeny to receive "network" input from
stdin and patched HAProxy to exit after serving a single request. Then I
used a simplistic configuration pointing to an nginx and seeded AFL
using some HTTP/2 requests I generated using nghttp against `nc -l >
request`. However that dirty hackery resulted in AFL not reliably
detecting whether something changed because the input changed or whether
it just randomly changed.

Best regards
Tim Düsterhus



Re: [PATCH] Minor improvements to doc "http-request set-src"

2020-04-21 Thread Willy Tarreau
On Tue, Apr 21, 2020 at 04:36:55PM +0200, Tim Düsterhus wrote:
> Olivier,
> 
> Am 21.04.20 um 16:34 schrieb Olivier D:
> > ;)
> > Patch updated attached.
> > 
> 
> Now LGTM.
> 
> Reviewed-by: Tim Duesterhus 

Thanks guys, now applied.

Olivier, I noticed something strange, your patch was produced without
the usual a/ b/ prefixes and the file started at doc/. It looks as if
you had produced the patches using "--no-prefix", which then fails to
apply, so it's worth having a look at your setup. I didn't have problems
applying it with "patch -p0 <" though, so the rest was OK.

Thanks,
Willy



Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Илья Шипицин
Tue, 21 Apr 2020 at 19:13, Tim Düsterhus :

> Ilya,
>
> On 21.04.20 at 15:47, Илья Шипицин wrote:
> >> The write-up is available now:
> >> https://bugs.chromium.org/p/project-zero/issues/detail?id=2023
> >>
> >> It has a "Methodology-Fuzzing" label, so after CVE-2018-14645 and
> >> CVE-2018-20615 this is the 3rd CVE within H2 found using fuzzing that
> >> I'm aware of. It probably won't be the last. Can we please allocate some
> >> resources on making HAProxy more fuzzer friendly after 2.2 is out?
> >>
> >> I would also be interested in how Felix Wilhelm performed the fuzzing,
> >> do you happen to have details about that?
> >>
> >
> > h2spec is very close to fuzzing. so, we just fire numerous requests and
> > see what's going on.
> >
> > ok, couple of things missing - core dump catch and address sanitizing. not
> > hard to add.
> >
> > the question is "how to generate h2 fuzzing workload"
> >
>
> The two CVEs I mentioned were bugs *I* found using afl-fuzz. The biggest
> hurdle back when I attempted fuzzing was not getting an appropriate
> workload (I've just created a few basic requests using nghttp), but
> instead getting the requests into HAProxy in a way so that afl is able
> to detect branches that change based on input changes. This branch
> detection is *the* main selling point of afl. Just sending random
> garbage is not going to turn up interesting stuff, if anything.
>


I really believe that people who can perform fuzzing are smarter than me.
But I hope
to be able to run fuzzing some day :)

what are "branches"? are they git branches? do you have any setup
step-by-step
described how those CVEs were detected?


>
> For CVE-2018-14645 this worked well, because I could use the standalone
> hpack decoder. For CVE-2018-20615 I worked with preeny/desock and saw
> that issues with branches being non-deterministic (I assume slight
> timing issues or packets being cut differently or something like that).
>
> Best regards
> Tim Düsterhus
>


Re: [PATCH] Minor improvements to doc "http-request set-src"

2020-04-21 Thread Willy Tarreau
On Tue, Apr 21, 2020 at 12:56:51PM +0200, Tim Düsterhus wrote:
> PS: Personal opinion, but I prefer quotes in replies to be shortened as
> much as possible, while still providing context. I don't want to scroll
> through kilobytes of stuff I've already seen :-)

Rest assured it's a shared opinion, as I also hate having to scroll
far away, and sometimes even miss isolated responses!

:-)

Willy



Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Willy Tarreau
Hi Tim,

On Tue, Apr 21, 2020 at 03:18:43PM +0200, Tim Düsterhus wrote:
> Willy,
> 
> On 02.04.20 at 15:03, Willy Tarreau wrote:
> > The main driver for this release is that it contains a fix for a serious
> > vulnerability that was responsibly reported last week by Felix Wilhelm
> > from Google Project Zero, affecting the HPACK decoder used for HTTP/2.
> > CVE-2020-11100 was assigned to this issue.
> > 
> > There is no configuration-based workaround for 2.1 and above.
> > 
> > This vulnerability makes it possible under certain circumstances to write
> > to a wide range of memory locations within the process' heap, with the
> > limitation that the attacker doesn't control the absolute address, so the
> > most likely result and by a far margin will be a process crash, but it is
> > not possible to completely rule out the faint possibility of a remote code
> > execution, at least in a lab-controlled environment. Felix was kind enough
> > to agree to delay the publication of his findings to the 20th of this month
> > in order to leave enough time to haproxy users to apply updates. But please
> > do not wait, as it is not very difficult to figure how to exploit the bug
> > based on the fix. Distros were notified and will also have fixes available
> > very shortly.
> > 
> 
> The write-up is available now:
> https://bugs.chromium.org/p/project-zero/issues/detail?id=2023
> 
> It has a "Methodology-Fuzzing" label, so after CVE-2018-14645 and
> CVE-2018-20615 this is the 3rd CVE within H2 found using fuzzing that
> I'm aware of. It probably won't be the last. Can we please allocate some
> resources on making HAProxy more fuzzer friendly after 2.2 is out?

Well, at the risk of sounding annoying I'm afraid not on my side. I mean,
it's already extremely hard for all of us to invest enough time on the
features that people want, to review contributed code and to fix bugs to
keep the code in a stable state. As in any other open-source project,
simply asking for something to be done doesn't make time suddenly
appear out of nowhere.

Making the code "more fuzzer friendly" means everything and nothing at
the same time. It's already getting more fuzzer friendly thanks to a
much better layering and modularization that allows certain parts to
be more easily tested (hence the example you gave on how you could test
hpack). On the other hand, it also comes with some limits, and the
ability to develop, extend and maintain it is the most important aspect
that will always prevail when a choice needs to be made. And quite frankly
trying to untangle a layer7 proxy so that dynamic parts can be run out of
context will drive us nowhere just because by design that doesn't correspond
to what the code needs to do. Testing proxy code is very hard. It's no
surprise that varnishtest (now vtest) was purposely written from scratch
for this and is only used for testing proxies. Maybe new external tools are
needed and we'd need a better way to interface with them, I don't know.

There certainly are some parts that could be improved regarding fuzzing, I
honestly don't know. But I can't guess it by myself either. However I'm
willing to accept some patches if:
  - they don't affect maintainability/development
  - they don't affect performance

Last, it's important to keep in mind that the number of issues that are
really subject to such tools and methodologies is extremely low. Looking
since 1.8 (where the bug mentioned here was introduced 2.5 years ago),
no less than 520 bugs were fixed, 4 of which were tagged as critical and
required a coordinated fix (and all 4 in code I wrote myself). Half of
them were found using fuzzing, it's not even certain the two others could
have been found this way. However I don't want the time invested in
improving fuzzing to result in less efficiency at spotting and fixing
all the other ones, because in the end each bug affects some users.

I'd personally see more value in investing time to write Coccinelle
scripts to spot coding mistakes that happen all the time and especially
when developers are tired or disturbed, and which often result in the
same issues as those detected through fuzzing. That doesn't mean I'm not
interested in fuzzing, it's just that I don't see this main goal as the
most valuable way to invest time for all those already deeply involved
in the project, but I'm happy to be proven wrong.

> I would also be interested in how Felix Wilhelm performed the fuzzing,
> do you happen to have details about that?

No, I only got the information that was just made public. But do not
hesitate to contact Felix about this, I'm sure he will happily share some
extra information to help us improve our side.

Regards,
Willy



Re: [PATCH] Minor improvements to doc "http-request set-src"

2020-04-21 Thread Olivier D
Hi,
On Tue, 21 Apr 2020 at 12:56, Tim Düsterhus wrote:

> Olivier,
>


> PS: Personal opinion, but I prefer quotes in replies to be shortened as
> much as possible, while still providing context. I don't want to scroll
> through kilobytes of stuff I've already seen :-)
>

;)
Patch updated attached.
From e6b11f3a795ec40c8b802d9d1190f3f6bbd15f5d Mon Sep 17 00:00:00 2001
From: Olivier Doucet 
Date: Tue, 21 Apr 2020 09:32:56 +0200
Subject: [PATCH] DOC: Improve documentation on http-request set-src

This patch adds more explanation on how to use "http-request set-src"
and a link to "option forwardfor".

This patch can be applied to all previous versions starting at 1.6
---
 doc/configuration.txt | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git doc/configuration.txt doc/configuration.txt
index 5d01835d7..e695ab7f5 100644
--- doc/configuration.txt
+++ doc/configuration.txt
@@ -5114,16 +5114,23 @@ http-request set-src  [ { if | unless } 
 ]
   This is used to set the source IP address to the value of specified
   expression. Useful when a proxy in front of HAProxy rewrites source IP, but
   provides the correct IP in a HTTP header; or you want to mask source IP for
-  privacy.
+  privacy. All subsequent calls to "src" fetch will return this value
+  (see example).
 
   Arguments :
   Is a standard HAProxy expression formed by a sample-fetch followed
 by some converters.
 
+  See also "option forwardfor".
+
   Example:
 http-request set-src hdr(x-forwarded-for)
 http-request set-src src,ipmask(24)
 
+# After the masking this will track connections
+# based on the IP address with the last byte zeroed out.
+http-request track-sc0 src
+
   When possible, set-src preserves the original source port as long as the
   address family allows it, otherwise the source port is set to 0.
 
-- 
2.18.0.windows.1



Re: [*EXT*] Re: Question about demo website

2020-04-21 Thread Willy Tarreau
Hi Ionel,

On Tue, Apr 21, 2020 at 10:51:24AM +0200, Ionel GARDAIS wrote:
> thanks William,
> 
> My frontend definition is:
> frontend ft-public
> bind ip.v.4.addr:80 name web-v4
> bind [ip:v:6:addr]:80 name web-v6
> 
> and I'm still seeing only a Frontend entry in the table
> 
> 
> I also tried to add 
> 
> stats show-desc
> stats show-legends
> stats show-node
> 
> to the dedicated stats listener with no luck.

That's what I'm having:

   frontend http-in
        option socket-stats
        bind 10.x.x.x:60080 ... name IPv4-direct
        bind 10.x.x.x:60081 ... name IPv4-cached
        bind :::80 ... v6only name IPv6-direct
        bind 127.0.0.1:60080 name local
        bind 127.0.0.1:65443 name local-https accept-proxy ssl ...
        mode http

   (...)
   backend demo
        mode http
        stats enable
        stats show-node 1wt.eu
        #stats show-legends     # shows detailed info (ip, cookies, ...)
        stats uri /
        stats scope http-in
        stats scope www
        stats scope git
        stats scope demo

As William mentioned, the socket names are those on the "bind" lines.
The "option socket-stats" in the frontend is what allows one stats
entry per bind line.

Hoping this helps,
Willy



Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Tim Düsterhus
Ilya,

On 21.04.20 at 15:47, Илья Шипицин wrote:
>> The write-up is available now:
>> https://bugs.chromium.org/p/project-zero/issues/detail?id=2023
>>
>> It has a "Methodology-Fuzzing" label, so after CVE-2018-14645 and
>> CVE-2018-20615 this is the 3rd CVE within H2 found using fuzzing that
>> I'm aware of. It probably won't be the last. Can we please allocate some
>> resources on making HAProxy more fuzzer friendly after 2.2 is out?
>>
>> I would also be interested in how Felix Wilhelm performed the fuzzing,
>> do you happen to have details about that?
>>
> 
> h2spec is very close to fuzzing. so, we just fire numerous requests and see
> what's going on.
> 
> ok, couple of things missing - core dump catch and address sanitizing. not
> hard to add.
> 
> the question is "how to generate h2 fuzzing workload"
> 

The two CVEs I mentioned were bugs *I* found using afl-fuzz. The biggest
hurdle back when I attempted fuzzing was not getting an appropriate
workload (I've just created a few basic requests using nghttp), but
instead getting the requests into HAProxy in a way so that afl is able
to detect branches that change based on input changes. This branch
detection is *the* main selling point of afl. Just sending random
garbage is not going to turn up interesting stuff, if anything.

For CVE-2018-14645 this worked well, because I could use the standalone
hpack decoder. For CVE-2018-20615 I worked with preeny/desock and saw
issues with branches being non-deterministic (I assume slight
timing issues or packets being cut differently or something like that).

Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Илья Шипицин
another option would be to enlist the project at HackerOne and wait until
Guido Vranken fuzzes it :)

he already fuzzed dozens of projects, including openssl, openvpn, ...

https://guidovranken.com/

Tue, 21 Apr 2020 at 18:21, Tim Düsterhus :

> Willy,
>
> On 02.04.20 at 15:03, Willy Tarreau wrote:
> > The main driver for this release is that it contains a fix for a serious
> > vulnerability that was responsibly reported last week by Felix Wilhelm
> > from Google Project Zero, affecting the HPACK decoder used for HTTP/2.
> > CVE-2020-11100 was assigned to this issue.
> >
> > There is no configuration-based workaround for 2.1 and above.
> >
> > This vulnerability makes it possible under certain circumstances to write
> > to a wide range of memory locations within the process' heap, with the
> > limitation that the attacker doesn't control the absolute address, so the
> > most likely result and by a far margin will be a process crash, but it is
> > not possible to completely rule out the faint possibility of a remote code
> > execution, at least in a lab-controlled environment. Felix was kind enough
> > to agree to delay the publication of his findings to the 20th of this month
> > in order to leave enough time to haproxy users to apply updates. But please
> > do not wait, as it is not very difficult to figure how to exploit the bug
> > based on the fix. Distros were notified and will also have fixes available
> > very shortly.
> >
>
> The write-up is available now:
> https://bugs.chromium.org/p/project-zero/issues/detail?id=2023
>
> It has a "Methodology-Fuzzing" label, so after CVE-2018-14645 and
> CVE-2018-20615 this is the 3rd CVE within H2 found using fuzzing that
> I'm aware of. It probably won't be the last. Can we please allocate some
> resources on making HAProxy more fuzzer friendly after 2.2 is out?
>
> I would also be interested in how Felix Wilhelm performed the fuzzing,
> do you happen to have details about that?
>
> Best regards
> Tim Düsterhus
>
>


Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Илья Шипицин
Tue, 21 Apr 2020 at 18:21, Tim Düsterhus :

> Willy,
>
> On 02.04.20 at 15:03, Willy Tarreau wrote:
> > The main driver for this release is that it contains a fix for a serious
> > vulnerability that was responsibly reported last week by Felix Wilhelm
> > from Google Project Zero, affecting the HPACK decoder used for HTTP/2.
> > CVE-2020-11100 was assigned to this issue.
> >
> > There is no configuration-based workaround for 2.1 and above.
> >
> > This vulnerability makes it possible under certain circumstances to write
> > to a wide range of memory locations within the process' heap, with the
> > limitation that the attacker doesn't control the absolute address, so the
> > most likely result and by a far margin will be a process crash, but it is
> > not possible to completely rule out the faint possibility of a remote code
> > execution, at least in a lab-controlled environment. Felix was kind enough
> > to agree to delay the publication of his findings to the 20th of this month
> > in order to leave enough time to haproxy users to apply updates. But please
> > do not wait, as it is not very difficult to figure how to exploit the bug
> > based on the fix. Distros were notified and will also have fixes available
> > very shortly.
> >
>
> The write-up is available now:
> https://bugs.chromium.org/p/project-zero/issues/detail?id=2023
>
> It has a "Methodology-Fuzzing" label, so after CVE-2018-14645 and
> CVE-2018-20615 this is the 3rd CVE within H2 found using fuzzing that
> I'm aware of. It probably won't be the last. Can we please allocate some
> resources on making HAProxy more fuzzer friendly after 2.2 is out?
>
> I would also be interested in how Felix Wilhelm performed the fuzzing,
> do you happen to have details about that?
>

h2spec is very close to fuzzing. so, we just fire numerous requests and see
what's going on.

ok, couple of things missing - core dump catch and address sanitizing. not
hard to add.

the question is "how to generate h2 fuzzing workload"
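One possible answer to that question (a sketch under the assumption that seeds are built by hand from the RFC 7540 framing layer, which is simple enough to assemble directly; this is illustrative code, not an existing tool):

```python
import os
import struct

H2_PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

def h2_frame(frame_type, flags, stream_id, payload):
    # HTTP/2 frame header (RFC 7540, section 4.1): 24-bit length,
    # 8-bit type, 8-bit flags, 31-bit stream identifier.
    header = struct.pack(">I", len(payload))[1:]      # 3-byte length
    header += struct.pack(">BBI", frame_type, flags, stream_id & 0x7FFFFFFF)
    return header + payload

def seed_input():
    # A minimal seed: connection preface + an empty SETTINGS frame (type 0x4).
    return H2_PREFACE + h2_frame(0x4, 0, 0, b"")

def mutate(data, rng=os.urandom):
    # Flip one random byte; a fuzzer would repeat this millions of times
    # while watching the target for crashes or protocol violations.
    pos = rng(1)[0] % len(data)
    return data[:pos] + bytes([data[pos] ^ 0xFF]) + data[pos + 1:]

seed = seed_input()
print(len(seed))   # 33 bytes: 24-byte preface + 9-byte frame header
```

The seeds (and their mutations) can then be written to files for afl-fuzz or sent over a raw socket, giving an h2 workload without touching the target's code.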


>
> Best regards
> Tim Düsterhus
>
>


Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Tim Düsterhus
Willy,

On 02.04.20 at 15:03, Willy Tarreau wrote:
> The main driver for this release is that it contains a fix for a serious
> vulnerability that was responsibly reported last week by Felix Wilhelm
> from Google Project Zero, affecting the HPACK decoder used for HTTP/2.
> CVE-2020-11100 was assigned to this issue.
> 
> There is no configuration-based workaround for 2.1 and above.
> 
> This vulnerability makes it possible under certain circumstances to write
> to a wide range of memory locations within the process' heap, with the
> limitation that the attacker doesn't control the absolute address, so the
> most likely result and by a far margin will be a process crash, but it is
> not possible to completely rule out the faint possibility of a remote code
> execution, at least in a lab-controlled environment. Felix was kind enough
> to agree to delay the publication of his findings to the 20th of this month
> in order to leave enough time to haproxy users to apply updates. But please
> do not wait, as it is not very difficult to figure how to exploit the bug
> based on the fix. Distros were notified and will also have fixes available
> very shortly.
> 

The write-up is available now:
https://bugs.chromium.org/p/project-zero/issues/detail?id=2023

It has a "Methodology-Fuzzing" label, so after CVE-2018-14645 and
CVE-2018-20615 this is the 3rd CVE within H2 found using fuzzing that
I'm aware of. It probably won't be the last. Can we please allocate some
resources on making HAProxy more fuzzer friendly after 2.2 is out?

I would also be interested in how Felix Wilhelm performed the fuzzing,
do you happen to have details about that?

Best regards
Tim Düsterhus



Re: [*EXT*] Re: Question about demo website

2020-04-21 Thread William Lallemand
CCing Willy because he probably has the configuration of the demo
website.

On Tue, Apr 21, 2020 at 10:51:24AM +0200, Ionel GARDAIS wrote:
> thanks William,
> 
> My fronted definition is :
> frontend ft-public
> bind ip.v.4.addr:80 name web-v4
> bind [ip:v:6:addr]:80 name web-v6
> 
> and I'm still seeing only a Frontend entry in the table
> 
> 
> I also tried to add 
> 
> stats show-desc
> stats show-legends
> stats show-node
> 
> to the dedicated stats listener with no luck.
> 


Hm right, I thought it was showing the listeners by default but I can't
display them either. I don't know if that's a regression in master or if
I forgot the keyword that enables it in the configuration.

-- 
William Lallemand



Re: [PATCH] Minor improvements to doc "http-request set-src"

2020-04-21 Thread Tim Düsterhus
Olivier,

Am 21.04.20 um 09:37 schrieb Olivier D:
> Thank you for your valuable feedback. Find attached a new patch with all
> your comments taken into account.
> 

I've missed two more little things during my initial review:

1. The Subject of the patch should start with "DOC:" instead of "[DOC]".
2. All subsequent calls to src field will return this value (see example).
   -> It's not "field", but "fetch". Not sure whether "src" should also
be quoted in there.

Other than that it looks good to me now.

Best regards
Tim Düsterhus

PS: Personal opinion, but I prefer quotes in replies to be shortened as
much as possible, while still providing context. I don't want to scroll
through kilobytes of stuff I've already seen :-)



Distance Learning Package: Bid Writing

2020-04-21 Thread NFP Workshops


NFP WORKSHOPS
18 Blake Street, York YO1 8QG
Affordable Training Courses for Charities, Schools & Public Sector 
Organisations 




This email has been sent to haproxy@formilux.org
CLICK TO UNSUBSCRIBE FROM LIST
Alternatively send a blank e-mail to unsubscr...@nfpmail1902.co.uk quoting 
haproxy@formilux.org in the subject line.
Unsubscribe requests will take effect within seven days. 



Bid Writing: Distance Learning Package

 Learn at your home or office. No need to travel anywhere. Delivered by e-mail. 
The package includes all the topics from our popular Bid Writing: The Basics 
and Bid Writing: Advanced live workshops plus eight sample funding bids. Once 
you have covered all the materials you can submit up to five questions by email.

TOPICS COVERED

Do you know the most common reasons for rejection? Are you gathering the right 
evidence? Are you making the right arguments? Are you using the right 
terminology? Are your numbers right? Are you learning from rejections? Are you 
assembling the right documents? Do you know how to create a clear and concise 
standard funding bid?

Are you communicating with people or just excluding them? Do you know your own 
organisation well enough? Are you thinking through your projects carefully 
enough? Do you know enough about your competitors? Are you answering the 
questions funders will ask themselves about your application? Are you 
submitting applications correctly?

Are you applying to the right trusts? Are you applying to enough trusts? Are 
you asking for the right amount of money? Are you applying in the right ways? 
Are your projects the most fundable projects? Are you carrying out trust 
fundraising in a professional way? Are you delegating enough work?

Are you highly productive or just very busy? Are you looking for trusts in all 
the right places? How do you compare with your competitors for funding? Is the 
rest of your fundraising hampering your bids to trusts? Do you understand what 
trusts are ideally looking for?

TRAINEES

Staff members, volunteers, trustees or board members of charities, schools or 
public sector organisations who intend to submit grant funding applications to 
charitable grant making trusts and foundations. People who provide advice to 
these organisations may also order.

FORMAT

 The distance learning package consists of a 201 page text document in PDF 
format plus 8 real successful bids totalling 239 pages in PDF format. There is 
no audio or video content. This is not an online course. 

TIME COMMITMENT

Trainees should expect to spend around eight hours reading through all the 
materials, preparing their "to do" list for the months ahead, writing or 
revising their standard funding bid, submitting questions by email and 
processing responses.

TERMS

Training materials are for use only by the trainee named on the invoice. 
Training materials may not be copied, circulated or published. 

ORDER YOUR PACKAGE NOW

The cost of the Bid Writing: Distance Learning Package is £190 per trainee. 

To order please email ord...@nfpmail1902.co.uk with 
1) The name of the trainee.
2) The email address to send the materials to.
3) The name of your organisation.
4) The postal address of your organisation.
5) A purchase order number if required.

We will send you an invoice within 24 hours containing BACS electronic payment 
details. Once we receive payment the materials will be emailed to the specified 
email address within 24 hours. Please check your spam folder to ensure you 
receive everything.

QUESTIONS

If you have a question please e-mail questi...@nfpmail1902.co.uk. You will 
usually receive a response within 24 hours. We are unable to accept questions 
by phone. 


FEEDBACK FROM PAST ATTENDEES AT OUR LIVE WORKSHOPS
I must say I was really impressed with the course and the content. My knowledge 
and confidence has increased hugely. I got a lot from your course and a lot of 
pointers! 
I can say after years of fundraising I learnt so much from your bid writing 
course. It was a very informative day and for someone who has not written bids 
before I am definitely more confident to get involved with them. 
I found the workshops very helpful. It is a whole new area for me but the 
information you imparted has given me a lot of confidence with the direction I 
need to take and for that I am very grateful.  
I found the day very informative and it gave me confidence to take on this 
aspect of work that I had been apprehensive of.  I enjoyed the session and 
found it valuable. 
So much relevant, practical information all passed on in a way which I was able 
to follow. All greatly enhanced by your sense of humour. 
It was a useful course and your examples real or otherwise helped to make it 
practical. Many thanks. The morning just flew by - always a good sign! I 
enjoyed the course and learnt a lot. I will begin putting this into practice.  


 



Re: [PATCH] MINOR: ssl: skip self issued CA in cert chain for ssl_ctx

2020-04-21 Thread William Lallemand
On Fri, Apr 03, 2020 at 10:34:12AM +0200, Emmanuel Hocdet wrote:
> 
> > Le 31 mars 2020 à 18:40, William Lallemand  a écrit 
> > :
> > 
> > On Thu, Mar 26, 2020 at 06:29:48PM +0100, William Lallemand wrote:
> >> 
> >> After some thinking and discussing with people involved in this part of
> >> HAProxy. I'm not feeling very confortable with setting this behavior by
> >> default, on top on that the next version is an LTS so its not a good
> >> idea to change this behavior yet. I think in most case it won't be a
> >> problem but it would be better if it's enabled by an option in the
> >> global section.
> >> 
> > 
> > Hi Manu,
> > 
> > Could you take a look at this? Because I already merged your first
> > patch, so if we don't do anything about it we may revert it before the
> > release.
> > 
> > Thanks a lot!
> 
> Hi William,
> 
> It's really safe because a self-issued CA is the end of the X509 chain
> by definition, but yes, it changes the behaviour.
> Why not an option in the global section?
> 
> ++
> Manu
> 
Hello Manu,

I hope you are doing well during this confinement period.

Did you have time to work on the documentation patch and the global
option?


Thanks,
-- 
William Lallemand



Re: [PATCH] fix function comment

2020-04-21 Thread William Lallemand
On Sat, Apr 04, 2020 at 01:02:13PM +0500, Илья Шипицин wrote:
> Hello,
> 
> small fix attached.
> 
> Ilya Shipitcin

> From 2cf4b1a3baab84e420dcbbdf084c8138b2f8bd25 Mon Sep 17 00:00:00 2001
> From: Ilya Shipitsin 
> Date: Sat, 4 Apr 2020 12:59:53 +0500
> Subject: [PATCH] CLEANUP: src/log.c: fix comment
> 
> "fmt" is passed to parse_logformat_string, adjust comment
> accordingly

Thanks, merged.


-- 
William Lallemand



Re: [PATCH] CI: special purpose build, testing compatibility against "no-deprecated" openssl

2020-04-21 Thread Илья Шипицин
nice, I finished all the CI stuff :)

I'll focus on copr / rpm next

Tue, 21 Apr 2020 at 13:29, William Lallemand :

> On Mon, Apr 20, 2020 at 07:12:41PM +0500, Илья Шипицин wrote:
> > Lukas, Willy ?
> >
> > Thu, 16 Apr 2020 at 23:16, Илья Шипицин :
> >
> > > Hello,
> > >
> > > I added a weekly build for detecting incompatibilities against
> > > "no-deprecated" openssl.
> > >
> > > (well, I first thought to add those options to travis, but it became
> > > over-engineered from my point of view)
> > >
> > > Lukas, if you have suggestions how to add to travis, I can try.
> > >
> > > Cheers,
> > > Ilya Shipitsin
> > >
>
> Thanks Ilya, I merged it.
>
> --
> William Lallemand
>


Re: New color on www.haproxy.org

2020-04-21 Thread William Lallemand
On Sat, Apr 18, 2020 at 10:42:46PM +0200, Aleksandar Lazic wrote:
> Hi.
> 
> I like the new table on https://www.haproxy.org/ . The colors now show much 
> more clearly which version is in which state ;-)
> 
> Regards
> 
> Aleks
> 

Thanks for the feedback Aleks, I find that more readable too!

-- 
William Lallemand



Re: [PATCH] CI: special purpose build, testing compatibility against "no-deprecated" openssl

2020-04-21 Thread William Lallemand
On Mon, Apr 20, 2020 at 07:12:41PM +0500, Илья Шипицин wrote:
> Lukas, Willy ?
> 
> Thu, 16 Apr 2020 at 23:16, Илья Шипицин :
> 
> > Hello,
> >
> > I added a weekly build for detecting incompatibilities against
> > "no-deprecated" openssl.
> >
> > (well, I first thought to add those options to travis, but it became
> > over-engineered from my point of view)
> >
> > Lukas, if you have suggestions how to add to travis, I can try.
> >
> > Cheers,
> > Ilya Shipitsin
> >

Thanks Ilya, I merged it.

-- 
William Lallemand



Re: Problem with crl certificate

2020-04-21 Thread Domenico Briganti
Wow, many thanks! I'll implement these configurations and will keep you
updated!
Best Regards,
Domenico

On Tue, 21/04/2020 at 10.19 +0200, William Lallemand wrote:
> On Tue, Apr 21, 2020 at 10:07:27AM +0200, Domenico Briganti wrote:
> > Thanks William, yes, the reload of haproxy is a feasible way, I
> > hadn't noticed. I have just one doubt: since I update the crl every
> > day and I have mqtt connections that can stay connected for days, in
> > the end I can have many haproxy processes running, one a day, until
> > all old connections (of that day) terminate. I think that with ps
> > and netstat I can see how many they are and how many old connections
> > each process manages. However I can afford a complete restart of
> > haproxy once every two/three weeks.
> > Regards, Domenico
> 
> If you configure the master CLI (haproxy -S binary argument), you will
> be able to access the CLI of the previous process and monitor the
> remaining connections. The previous process won't leave until the
> connections are closed.
> You can force a process to leave even if there are still some
> connections with the directive "hard-stop-after".
> https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#3.1-hard-stop-after
> 
> You can also limit the number of workers with the directive
> "mworker-max-reloads".
> https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#3.1-mworker-max-reloads
> 
> Regards,


Re: Problem with crl certificate

2020-04-21 Thread William Lallemand
On Tue, Apr 21, 2020 at 10:07:27AM +0200, Domenico Briganti wrote:
> Thanks William, yes, the reload of haproxy is a feasible way, I hadn't
> noticed. I have just one doubt: since I update the crl every day and I
> have mqtt connections that can stay connected for days, in the end I
> can have many haproxy processes running, one a day, until all old
> connections (of that day) terminate. I think that with ps and netstat
> I can see how many they are and how many old connections each process
> manages. However I can afford a complete restart of haproxy once every
> two/three weeks.
> Regards, Domenico


If you configure the master CLI (haproxy -S binary argument), you will
be able to access the CLI of the previous process and monitor the
remaining connections. The previous process won't leave until the
connections are closed.

You can force a process to leave even if there are still some
connections with the directive "hard-stop-after".
https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#3.1-hard-stop-after

You can also limit the number of workers with the directive
"mworker-max-reloads".
https://cbonte.github.io/haproxy-dconv/2.2/configuration.html#3.1-mworker-max-reloads
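
For illustration, both directives go in the global section; a minimal
sketch (the values are hypothetical, pick what matches your CRL
rotation):

```
global
    # force an old worker still holding connections to exit
    # 30 minutes after the reload that obsoleted it
    hard-stop-after 30m

    # in master-worker mode, kill any worker that has survived
    # more than 5 reloads
    mworker-max-reloads 5
```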

Regards,

-- 
William Lallemand



Re: Question about demo website

2020-04-21 Thread William Lallemand
Hello,

On Sun, Apr 19, 2020 at 11:17:41AM +0200, Ionel GARDAIS wrote:
> Hi list, 
> 
> On [ http://demo.haproxy.org/ | http://demo.haproxy.org ] , what does 
> IPv4-Direct, IPv4-cached, IPv6-direct, local, local-https represents in 
> regard to http-in ? 
> 

They are listeners (bind lines) in the http-in frontend.


> http-in looks like a frontend, are the other just "listen" directives ? 
> 

"Listen" directives would appear with a "Frontend" and a "Backend" line
in their table.

> How do they refer to http-in ? 

I don't know the configuration of this page but it's probably just a
"use_backend" line in the frontend configuration.
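
To illustrate the distinction, a frontend with several named bind lines
versus a "listen" section might look like this (addresses and names are
hypothetical, not the actual demo configuration):

```
frontend http-in
    # each bind line appears as a listener row under the frontend
    bind 192.0.2.1:80     name IPv4-direct
    bind [2001:db8::1]:80 name IPv6-direct
    default_backend servers

# a "listen" section combines a frontend and a backend,
# so its stats table shows both a Frontend and a Backend line
listen stats-site
    bind :8080
    server local 127.0.0.1:8000
```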


> Thanks, 
> Ionel 
> 


-- 
William Lallemand



Re: Problem with crl certificate

2020-04-21 Thread Domenico Briganti
Thanks William, yes, the reload of haproxy is a feasible way, I hadn't
noticed. I have just one doubt: since I update the crl every day and I
have mqtt connections that can stay connected for days, in the end I
can have many haproxy processes running, one a day, until all old
connections (of that day) terminate. I think that with ps and netstat
I can see how many they are and how many old connections each process
manages. However I can afford a complete restart of haproxy once every
two/three weeks.
Regards, Domenico

On Tue, 21/04/2020 at 08.54 +0200, William Lallemand wrote:
> Hello,
> On Mon, Apr 20, 2020 at 03:15:57PM +0200, Domenico Briganti wrote:
> > Ciao Marco, thanks for your help. We've found the problem: we also
> > need the CRL from the ROOT CA on top of the file passed to the
> > crl-file parameter, which already contains the intermediate crl.
> > But now we have another challenge, and we're going to lose this
> > time, as already discussed in [1] and [2]. We proxy MQTT
> > connections, and we can't afford a restart of haproxy every day to
> > force haproxy to take the updated CRL... Any help?
> > Regards, Domenico
> > [1] https://discourse.haproxy.org/t/crl-reload-and-long-life-tcp-connections/2645/2
> > [2] https://discourse.haproxy.org/t/ssl-termination-fails-when-crl-is-published/2336
> 
> Indeed a reload of HAProxy is still required, but that shouldn't be a
> problem. With the reload, active connections won't be killed.
> You just need to configure the seamless reload by adding the option
> "expose-fd listeners" to your stats socket line; this way you won't
> have an impact on your service.
> There is currently some active development on the CLI for pushing
> certificates on-the-fly; the CRL is not available for this yet, but
> could be added in the future.
> Regards,


Re: [PATCH] Minor improvements to doc "http-request set-src"

2020-04-21 Thread Olivier D
Hello,

Le lun. 20 avr. 2020 à 20:37, Tim Düsterhus  a écrit :

> Olivier,
>
> Am 20.04.20 um 20:03 schrieb Olivier D:
> > I'm using gmail so I had to attach patches and was not able to send them
> > directly. If format is wrong, tell me :)
> >
>
> Format looks good to me. Your commit message however does not (fully)
> follow the instructions within the CONTRIBUTING file
> (
> https://github.com/haproxy/haproxy/blob/dfad6a41ad9f012671b703788dd679cf24eb8c5a/CONTRIBUTING#L562-L567
> ):
>
> >As a rule of thumb, your patch MUST NEVER be made only of a subject
> line,
> >it *must* contain a description. Even one or two lines, or indicating
> >whether a backport is desired or not. It turns out that single-line
> commits
> >are so rare in the Git world that they require special manual (hence
> >painful) handling when they are backported, and at least for this
> reason
> >it's important to keep this in mind.
>
> Regarding the patch itself:
>
> > diff --git doc/configuration.txt doc/configuration.txt
> > index 5d01835d7..ddfabcd92 100644
> > --- doc/configuration.txt
> > +++ doc/configuration.txt
> > @@ -6735,7 +6735,8 @@ option forwardfor [ except  ] [ header
>  ] [ if-none ]
> >header for a known source address or network by adding the "except"
> keyword
> >followed by the network address. In this case, any source IP matching
> the
> >network will not cause an addition of this header. Most common uses
> are with
> > -  private networks or 127.0.0.1.
> > +  private networks or 127.0.0.1. Another way to do it is to tell
> HAProxy to
> > +  trust a custom header with "http-request set-src".
>
> This change looks incorrect to me. "option forwardfor" is for sending,
> not "receiving" IP addresses.
>
> >Alternatively, the keyword "if-none" states that the header will only
> be
> >added if it is not present. This should only be used in perfectly
> trusted
> > @@ -6760,6 +6761,14 @@ option forwardfor [ except  ] [ header
>  ] [ if-none ]
> >  mode http
> >  option forwardfor header X-Client
> >
> > +  Example :
> > +# Trust a specific header and use it as origin IP.
> > +# If not found, source IP will be used.
> > +frontend www
> > +mode http
> > +http-request set-src CF-Connecting-IP
>
> I believe this should read `http-request set-src
> %[req.hdr(CF-Connecting-IP)]`. However:
>
> 1. I don't like having company specific headers in there. Especially
> since Cloudflare supports the standard XFF.
> 2. I don't consider that a useful addition.
>
> > +option forwardfor
> > +
> >See also : "option httpclose", "option http-server-close",
> >   "option http-keep-alive"
> >
>
> Patch 2:
>
> > diff --git doc/configuration.txt doc/configuration.txt
> > index ddfabcd92..49324fa53 100644
> > --- doc/configuration.txt
> > +++ doc/configuration.txt
> > @@ -5114,7 +5114,8 @@ http-request set-src  [ { if | unless }
>  ]
> >This is used to set the source IP address to the value of specified
> >expression. Useful when a proxy in front of HAProxy rewrites source
> IP, but
> >provides the correct IP in a HTTP header; or you want to mask source
> IP for
> > -  privacy.
> > +  privacy. All subsequent calls to src field will return this value
> > +  (see example).
>
> This change looks good to me.
>
> >Arguments :
> >Is a standard HAProxy expression formed by a sample-fetch
> followed
> > @@ -5124,6 +5125,11 @@ http-request set-src  [ { if | unless }
>  ]
> >  http-request set-src hdr(x-forwarded-for)
> >  http-request set-src src,ipmask(24)
> >
> > +  Example:
>
> Only a single "Example:" heading is used throughout the documentation.
> As the first line can be shared with the previous example you could
> write something like: # After the masking this will track connections
> based on the IP address with the last octet zeroed out.
>
> > +# This will track connection based on header IP
> > +http-request set-src hdr(x-forwarded-for)
> > +http-request track-sc0 src
> > +
> >When possible, set-src preserves the original source port as long as
> the
> >address family allows it, otherwise the source port is set to 0.
>

Thank you for your valuable feedback. Find attached a new patch with all
your comments taken into account.

Olivier


>
> Best regards
> Tim Düsterhus
>
From e6b11f3a795ec40c8b802d9d1190f3f6bbd15f5d Mon Sep 17 00:00:00 2001
From: Olivier Doucet 
Date: Tue, 21 Apr 2020 09:32:56 +0200
Subject: [PATCH] [DOC] Improve documentation on http-request set-src

This patch adds more explanation on how to use "http-request set-src"
and a link to "option forwardfor".

This patch can be applied to all previous versions starting at 1.6
---
 doc/configuration.txt | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git doc/configuration.txt doc/configuration.txt
index 5d01835d7..e695ab7f5 100644
--- doc/configuration.txt
+++ doc/configuration.txt
@@ -5114,16 +5114,23 @@ 

Re: Problem with crl certificate

2020-04-21 Thread William Lallemand


Hello,

On Mon, Apr 20, 2020 at 03:15:57PM +0200, Domenico Briganti wrote:
> Ciao Marco, thanks for your help.
> We've found the problem: we also need the CRL from the ROOT CA on top
> of the file passed to the crl-file parameter, which already contains
> the intermediate crl.
> But now we have another challenge, and we're going to lose this time,
> as already discussed in [1] and [2].
> We proxy MQTT connections, and we can't afford a restart of haproxy
> every day to force haproxy to take the updated CRL...
> Any help?
> Regards, Domenico
> [1] https://discourse.haproxy.org/t/crl-reload-and-long-life-tcp-connections/2645/2
> [2] https://discourse.haproxy.org/t/ssl-termination-fails-when-crl-is-published/2336

Indeed a reload of HAProxy is still required, but that shouldn't be a
problem. With the reload, active connections won't be killed. 

You just need to configure the seamless reload by adding the option
"expose-fd listeners" to your stats socket line; this way you won't have
an impact on your service.

There is currently some active development on the CLI for pushing
certificates on-the-fly, the CRL is not available for this yet, but
could be added in the future.
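
A minimal sketch of the seamless-reload setup (the socket path is
hypothetical):

```
global
    # run in master-worker mode (equivalent to -W on the command line)
    master-worker
    # pass the bound listening sockets to the new process over this
    # socket on reload, so no connection is dropped
    stats socket /var/run/haproxy.sock mode 600 level admin expose-fd listeners
```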

Regards,

-- 
William Lallemand