Re: [ANNOUNCE] haproxy-2.1.4

2020-04-22 Thread Tim Düsterhus
Willy,

Am 21.04.20 um 16:58 schrieb Willy Tarreau:
>> I would also be interested in how Felix Wilhelm performed the fuzzing,
>> do you happen to have details about that?
> 
> No, I only got the information that was just made public. But do not
> hesitate to contact Felix about this, I'm sure he will happily share some
> extra information to help us improve our side.
> 

I did and received a reply:
https://bugs.chromium.org/p/project-zero/issues/detail?id=2023#c6

Felix Wilhelm used contrib/hpack/decode.c as the basis for the fuzz
driver, like I did for my first CVE. The difference, to my understanding,
is that his version is more efficient because it does not fork+exec()
new processes all the time and instead just uses function calls.
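To make the in-process idea concrete, here is a minimal sketch of such a driver. This is an illustration only: `hpack_decode_stub` is a hypothetical stand-in for the real decoder entry point; an actual driver would link against the HAProxy sources and call into contrib/hpack/decode.c instead.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the real HPACK decoder entry point in
 * contrib/hpack/decode.c. It only pretends to decode. */
int hpack_decode_stub(const uint8_t *buf, size_t len)
{
    (void)buf;
    return len > 0 ? 0 : -1;
}

/* libFuzzer-style entry point: the fuzzer calls this function once per
 * generated input, all within a single process, which avoids the
 * fork+exec overhead of a file-based driver. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    hpack_decode_stub(data, size);
    return 0; /* return values other than 0 are reserved by libFuzzer */
}
```

Built with `clang -fsanitize=fuzzer,address`, libFuzzer supplies the main loop and feeds this function thousands of inputs per second in-process.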

Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Илья Шипицин
Wed, 22 Apr 2020 at 00:06, Tim Düsterhus :

> Ilya,
>
> Am 21.04.20 um 20:49 schrieb Илья Шипицин:
> > I thought of some more high-level fuzzing without intercepting the code path.
> > for example, we know about range queries
> >
> > Range: bytes=0-1023
> >
> >
> > i.e. bytes=(integer)-(integer)
> >
> >
> > what if we send
> >
> > Range: bytes=1023-0
> >
> > or
> > Range: bytes=1023
> >
> > or
> >
> > Range: bytes=abc-def
> >
> > and so on.
> > It does not require any code modification, but a proper workload
> > generator should be chosen.
> >
>
> That would not be the job of a fuzzer, but that of an HTTP compliance
> checker, because that deals with business logic. Someone would need to
> encode all the rules and edge cases laid out in the RFC into a program,
> like someone did for h2spec. You don't need to have any smartness within
> that checker, sending static requests and reading the responses is
> sufficient there.
>

I have heard of "level 2" fuzzing:
https://blog.tox.chat/2015/09/fuzzing-the-new-groupchats/

i.e. fuzzing on top of a protocol implementation.


>
> A fuzzer attempts to generate data that trips over the input parsers in
> a way a human would not think of, because it's not an "obvious" edge
> case. For CVE-2018-14645 the bug would trigger when receiving values
> exceeding the range of an int, which might be an obvious edge case for a
> C developer, but is not something that's specifically acknowledged
> within the H2 specification. Negative values however are clearly invalid
> when talking about a byte range.
>
> Best regards
> Tim Düsterhus
>


Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Tim Düsterhus
Ilya,

Am 21.04.20 um 20:49 schrieb Илья Шипицин:
> I thought of some more high-level fuzzing without intercepting the code path.
> for example, we know about range queries
> 
> Range: bytes=0-1023
> 
> 
> i.e. bytes=(integer)-(integer)
> 
> 
> what if we send
> 
> Range: bytes=1023-0
> 
> or
> Range: bytes=1023
> 
> or
> 
> Range: bytes=abc-def
> 
> and so on.
> It does not require any code modification, but a proper workload
> generator should be chosen.
> 

That would not be the job of a fuzzer, but that of an HTTP compliance
checker, because that deals with business logic. Someone would need to
encode all the rules and edge cases laid out in the RFC into a program,
like someone did for h2spec. You don't need to have any smartness within
that checker, sending static requests and reading the responses is
sufficient there.

A fuzzer attempts to generate data that trips over the input parsers in
a way a human would not think of, because it's not an "obvious" edge
case. For CVE-2018-14645 the bug would trigger when receiving values
exceeding the range of an int, which might be an obvious edge case for a
C developer, but is not something that's specifically acknowledged
within the H2 specification. Negative values however are clearly invalid
when talking about a byte range.
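Such a static-request checker really can be this dumb. A sketch of the idea (illustrative only: `build_range_request` and the variant list are made up for this example, not an existing tool):

```c
#include <stdio.h>
#include <string.h>

/* Build a complete (possibly invalid) request carrying a Range header.
 * A compliance checker would send each variant verbatim and compare the
 * response status against the rules of RFC 7233 (206, 416, or plain 200
 * when the header is ignored as malformed). */
static int build_range_request(const char *spec, char *out, size_t outlen)
{
    return snprintf(out, outlen,
        "GET / HTTP/1.1\r\nHost: example\r\nRange: bytes=%s\r\n\r\n",
        spec);
}

/* Static edge cases, taken straight from the discussion above. */
static const char *const range_variants[] = {
    "0-1023",  /* valid */
    "1023-0",  /* reversed range */
    "1023",    /* missing dash */
    "abc-def", /* non-numeric */
};
```

A checker would simply loop over `range_variants`, write each built request to a socket, and read back the status line; no feedback-driven smartness is involved.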

Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Илья Шипицин
Tue, 21 Apr 2020 at 20:24, Tim Düsterhus :

> Ilya,
>
> Am 21.04.20 um 17:02 schrieb Илья Шипицин:
> >> The two CVEs I mentioned were bugs *I* found using afl-fuzz. The biggest
> >> hurdle back when I attempted fuzzing was not getting an appropriate
> >> workload (I've just created a few basic requests using nghttp), but
> >> instead getting the requests into HAProxy in a way so that afl is able
> >> to detect branches that change based on input changes. This branch
> >> detection is *the* main selling point of afl. Just sending random
> >> garbage is not going to turn up interesting stuff, if anything.
> >>
> >
> >
> > I really believe that people who can perform fuzzing are smarter than me.
> > But I hope
> > to be able to run fuzzing some day :)
> >
> > what are "branches"? Are they git branches? Do you have any setup
> > step-by-step
>
> Branches refer to branches within the generated machine code (i.e.
> conditional jumps). AFL works similarly to ASAN in that it adds some
> additional code to the executable to detect whether a branch was taken
> (i.e. a jump happened) or not.
>
> As a super simplified example consider the following code:
>
> if (buf[0] == 'b') {
>   if (buf[1] == '1') {
>     crash();
>   }
>   // do something (1)
> }
> else {
>   // do something (2)
> }
>
> I would then use the following as the initial payload:
>
> buf = "a0"
>
> AFL would then execute the "(2)" line. Afterwards it might try the
> following (increase the first byte by 1):
>
> buf = "b0"
>


I thought of some more high-level fuzzing without intercepting the code path.
for example, we know about range queries

Range: bytes=0-1023


i.e. bytes=(integer)-(integer)


what if we send

Range: bytes=1023-0

or
Range: bytes=1023

or

Range: bytes=abc-def

and so on.
It does not require any code modification, but a proper workload generator
should be chosen.




> AFL would then detect that something changed: Instead of jumping to the
> 'else' it would continue executing the second 'if'. Now AFL knows that
> the first byte being 'b' is special (or at least different to 'a').
> Instead of attempting 'c' it might then proceed to modify the second
> byte. By incrementing it from '0' to '1' it notices that again something
> changed: The program crashes.
>
> This "intelligent" processing could find the bug with just 3 inputs
> instead of having to randomly test 256*256 combinations for the two
> bytes. In reality the results are even more impressive: AFL was able to
> generate a valid JPG image based on the starting input 'hello'.
>
> See this blog post:
> https://lcamtuf.blogspot.com/2014/11/pulling-jpegs-out-of-thin-air.html
>
> > described how those CVEs were detected?
> >
>
> I intended to write up a blog post after my initial find, but never got
> around to it. For the first one it basically went like this:
>
> 1. Compile the standalone hpack decoder.
> 2. Start afl-fuzz with a single input on the decoder.
> 3. Wait 2 minutes.
> 4. Report security issue.
>
> Not joking, it literally took 2 minutes of throwing data at the decoder
> to find the issue on a single core cloud server. I believe you'll be
> able to figure it out yourself. To reproduce the bug you need to check
> out commit f7db9305aa8642cb5145bba6f8948400c52396af (that's one before
> the fix).
>
> The second one was more involved and less reliable. I used desock.so
> from https://github.com/zardus/preeny to receive "network" input from
> stdin and patched HAProxy to exit after serving a single request. Then I
> used a simplistic configuration pointing to an nginx and seeded AFL
> using some HTTP/2 requests I generated using nghttp against `nc -l >
> request`. However that dirty hackery resulted in AFL not reliably
> detecting whether something changed because the input changed or whether
> it just randomly changed.
>

Thank you, I'll try next weekend.


>
> Best regards
> Tim Düsterhus
>


Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Tim Düsterhus
Ilya,

Am 21.04.20 um 17:02 schrieb Илья Шипицин:
>> The two CVEs I mentioned were bugs *I* found using afl-fuzz. The biggest
>> hurdle back when I attempted fuzzing was not getting an appropriate
>> workload (I've just created a few basic requests using nghttp), but
>> instead getting the requests into HAProxy in a way so that afl is able
>> to detect branches that change based on input changes. This branch
>> detection is *the* main selling point of afl. Just sending random
>> garbage is not going to turn up interesting stuff, if anything.
>>
> 
> 
> I really believe that people who can perform fuzzing are smarter than me.
> But I hope
> to be able to run fuzzing some day :)
> 
> what are "branches"? Are they git branches? Do you have any setup
> step-by-step

Branches refer to branches within the generated machine code (i.e.
conditional jumps). AFL works similarly to ASAN in that it adds some
additional code to the executable to detect whether a branch was taken
(i.e. a jump happened) or not.

As a super simplified example consider the following code:

if (buf[0] == 'b') {
  if (buf[1] == '1') {
    crash();
  }
  // do something (1)
}
else {
  // do something (2)
}

I would then use the following as the initial payload:

buf = "a0"

AFL would then execute the "(2)" line. Afterwards it might try the
following (increase the first byte by 1):

buf = "b0"

AFL would then detect that something changed: Instead of jumping to the
'else' it would continue executing the second 'if'. Now AFL knows that
the first byte being 'b' is special (or at least different to 'a').
Instead of attempting 'c' it might then proceed to modify the second
byte. By incrementing it from '0' to '1' it notices that again something
changed: The program crashes.

This "intelligent" processing could find the bug with just 3 inputs
instead of having to randomly test 256*256 combinations for the two
bytes. In reality the results are even more impressive: AFL was able to
generate a valid JPG image based on the starting input 'hello'.

See this blog post:
https://lcamtuf.blogspot.com/2014/11/pulling-jpegs-out-of-thin-air.html
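For completeness, here is a runnable version of the toy target above. The "crash" branch returns a marker instead of actually crashing so the function can be exercised safely; in a real fuzz target that path would abort() and the fuzzer would catch the signal.

```c
/* Classify which branch an input takes, mirroring the toy example.
 * AFL's instrumentation records each conditional jump taken, which is
 * how it notices that "b0" behaves differently from "a0" even though
 * neither input crashes. */
int classify(const char *buf)
{
    if (buf[0] == 'b') {
        if (buf[1] == '1')
            return -1;  /* the crash() path the fuzzer is hunting for */
        return 1;       /* do something (1) */
    }
    return 2;           /* do something (2) */
}
```

Starting from "a0", the three-step mutation path "a0" → "b0" → "b1" walks through return values 2 → 1 → -1, which is exactly the branch trail AFL follows.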

> described how those CVEs were detected?
> 

I intended to write up a blog post after my initial find, but never got
around to it. For the first one it basically went like this:

1. Compile the standalone hpack decoder.
2. Start afl-fuzz with a single input on the decoder.
3. Wait 2 minutes.
4. Report security issue.

Not joking, it literally took 2 minutes of throwing data at the decoder
to find the issue on a single core cloud server. I believe you'll be
able to figure it out yourself. To reproduce the bug you need to check
out commit f7db9305aa8642cb5145bba6f8948400c52396af (that's one before
the fix).
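The four steps above boil down to a short command sketch. Note this is a reconstruction, not taken from the thread: the make invocation, seed bytes, and the `@@` file-argument convention are assumptions, and how the `decode` binary actually consumes input needs to be checked against contrib/hpack/decode.c.

```shell
# Check out the tree one commit before the fix (vulnerable state).
git checkout f7db9305aa8642cb5145bba6f8948400c52396af

# 1. Compile the standalone hpack decoder with afl instrumentation
#    (illustrative flags; the real contrib Makefile may differ).
cd contrib/hpack
make CC=afl-gcc decode

# 2./3. Seed afl-fuzz with a single small input and wait a couple of
#       minutes; "@@" is replaced by the generated input file.
mkdir -p in out
printf '\x82\x86\x84' > in/seed
afl-fuzz -i in -o out -- ./decode @@
```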

The second one was more involved and less reliable. I used desock.so
from https://github.com/zardus/preeny to receive "network" input from
stdin and patched HAProxy to exit after serving a single request. Then I
used a simplistic configuration pointing to an nginx and seeded AFL
using some HTTP/2 requests I generated using nghttp against `nc -l >
request`. However that dirty hackery resulted in AFL not reliably
detecting whether something changed because the input changed or whether
it just randomly changed.
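A sketch of that setup, with the caveat that file names, the preeny build step, and the HAProxy config are illustrative assumptions rather than the exact commands used:

```shell
# Build preeny's desock.so, which replaces the listening socket with
# stdin/stdout so afl can feed "network" input directly.
git clone https://github.com/zardus/preeny
make -C preeny/src desock.so

# Seed corpus: capture one HTTP/2 request generated by nghttp.
mkdir -p in out
nc -l 8080 > in/request &
nghttp http://localhost:8080/ || true

# Run the patched (exit-after-one-request) haproxy under afl, with the
# socket layer preloaded away. simple.cfg points at a local nginx.
AFL_PRELOAD=./preeny/src/desock.so \
  afl-fuzz -i in -o out -- ./haproxy -d -f simple.cfg
```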

Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Илья Шипицин
вт, 21 апр. 2020 г. в 19:13, Tim Düsterhus :

> Ilya,
>
> Am 21.04.20 um 15:47 schrieb Илья Шипицин:
> >> The write-up is available now:
> >> https://bugs.chromium.org/p/project-zero/issues/detail?id=2023
> >>
> >> It has a "Methodology-Fuzzing" label, so after CVE-2018-14645 and
> >> CVE-2018-20615 this is the 3rd CVE within H2 found using fuzzing that
> >> I'm aware of. It probably won't be the last. Can we please allocate some
> >> resources on making HAProxy more fuzzer friendly after 2.2 is out?
> >>
> >> I would also be interested in how Felix Wilhelm performed the fuzzing,
> >> do you happen to have details about that?
> >>
> >
> > h2spec is very close to fuzzing. so, we just fire numerous requests and
> see
> > what's going on.
> >
> > ok, couple of things missing - core dump catch and address sanitizing.
> not
> > hard to add.
> >
> > the question is "how to generate h2 fuzzing workload"
> >
>
> The two CVEs I mentioned were bugs *I* found using afl-fuzz. The biggest
> hurdle back when I attempted fuzzing was not getting an appropriate
> workload (I've just created a few basic requests using nghttp), but
> instead getting the requests into HAProxy in a way so that afl is able
> to detect branches that change based on input changes. This branch
> detection is *the* main selling point of afl. Just sending random
> garbage is not going to turn up interesting stuff, if anything.
>


I really believe that people who can perform fuzzing are smarter than me.
But I hope to be able to run fuzzing some day :)

What are "branches"? Are they git branches? Do you have any step-by-step
setup describing how those CVEs were detected?


>
> For CVE-2018-14645 this worked well, because I could use the standalone
> hpack decoder. For CVE-2018-20615 I worked with preeny/desock and saw
> issues with branches being non-deterministic (I assume slight
> timing issues or packets being cut differently or something like that).
>
> Best regards
> Tim Düsterhus
>


Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Willy Tarreau
Hi Tim,

On Tue, Apr 21, 2020 at 03:18:43PM +0200, Tim Düsterhus wrote:
> Willy,
> 
> Am 02.04.20 um 15:03 schrieb Willy Tarreau:
> > The main driver for this release is that it contains a fix for a serious
> > vulnerability that was responsibly reported last week by Felix Wilhelm
> > from Google Project Zero, affecting the HPACK decoder used for HTTP/2.
> > CVE-2020-11100 was assigned to this issue.
> > 
> > There is no configuration-based workaround for 2.1 and above.
> > 
> > This vulnerability makes it possible under certain circumstances to write
> > to a wide range of memory locations within the process' heap, with the
> > limitation that the attacker doesn't control the absolute address, so the
> > most likely result and by a far margin will be a process crash, but it is
> > not possible to completely rule out the faint possibility of a remote code
> > execution, at least in a lab-controlled environment. Felix was kind enough
> > to agree to delay the publication of his findings to the 20th of this month
> > in order to leave enough time to haproxy users to apply updates. But please
> > do not wait, as it is not very difficult to figure how to exploit the bug
> > based on the fix. Distros were notified and will also have fixes available
> > very shortly.
> > 
> 
> The write-up is available now:
> https://bugs.chromium.org/p/project-zero/issues/detail?id=2023
> 
> It has a "Methodology-Fuzzing" label, so after CVE-2018-14645 and
> CVE-2018-20615 this is the 3rd CVE within H2 found using fuzzing that
> I'm aware of. It probably won't be the last. Can we please allocate some
> resources on making HAProxy more fuzzer friendly after 2.2 is out?

Well, at the risk of sounding annoying I'm afraid not on my side. I mean,
it's already extremely hard for all of us to invest enough time on the
features that people want, to review contributed code and to fix bugs to
keep the code in a stable state. It's like in any other opensource project,
it's simply not possible to ask for something to be done to see time
suddenly appear out of nowhere.

Making the code "more fuzzer friendly" means everything and nothing at
the same time. It's already getting more fuzzer friendly thanks to a
much better layering and modularization that allows certain parts to
be more easily tested (hence the example you gave on how you could test
hpack). On the other hand, it also comes with some limits, and the
ability to develop, extend and maintain it is the most important aspect
that will always prevail when a choice needs to be made. And quite frankly
trying to untangle a layer7 proxy so that dynamic parts can be run out of
context will drive us nowhere just because by design that doesn't correspond
to what the code needs to do. Testing proxy code is very hard. It's no
surprise that varnishtest (now vtest) was purposely written from scratch
for this and is only used for testing proxies. Maybe new external tools are
needed and we'd need a better way to interface with them, I don't know.

There certainly are some parts that could be improved regarding fuzzing, I
honestly don't know. But I can't guess it by myself either. However I'm
willing to accept some patches if:
  - they don't affect maintainability/development
  - they don't affect performance

Last, it's important to keep in mind that the number of issues that are
really subject to such tools and methodologies is extremely low. Looking
since 1.8 (where the bug mentioned here was introduced 2.5 years ago),
no less than 520 bugs were fixed, 4 of which were tagged as critical and
required a coordinated fix (and all 4 in code I wrote myself). Half of
them were found using fuzzing, it's not even certain the two others could
have been found this way. However I don't want to see that the time
invested to improve fuzzing results in less efficiency at spotting and
fixing all the other ones because in the end each bug affects some users.

I'd personally see more value in investing time to write Coccinelle
scripts to spot coding mistakes that happen all the time and especially
when developers are tired or disturbed, and which often result in the
same issues as those detected through fuzzing. That doesn't mean I'm not
interested in fuzzing, it's just that I don't see this main goal as the
most valuable way to invest time for all those already deeply involved
in the project, but I'm happy to be proven wrong.

> I would also be interested in how Felix Wilhelm performed the fuzzing,
> do you happen to have details about that?

No, I only got the information that was just made public. But do not
hesitate to contact Felix about this, I'm sure he will happily share some
extra information to help us improve our side.

Regards,
Willy



Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Tim Düsterhus
Ilya,

Am 21.04.20 um 15:47 schrieb Илья Шипицин:
>> The write-up is available now:
>> https://bugs.chromium.org/p/project-zero/issues/detail?id=2023
>>
>> It has a "Methodology-Fuzzing" label, so after CVE-2018-14645 and
>> CVE-2018-20615 this is the 3rd CVE within H2 found using fuzzing that
>> I'm aware of. It probably won't be the last. Can we please allocate some
>> resources on making HAProxy more fuzzer friendly after 2.2 is out?
>>
>> I would also be interested in how Felix Wilhelm performed the fuzzing,
>> do you happen to have details about that?
>>
> 
> h2spec is very close to fuzzing. so, we just fire numerous requests and see
> what's going on.
> 
> OK, a couple of things are missing - core dump catching and address
> sanitizing. Not hard to add.
> 
> The question is "how to generate an h2 fuzzing workload"
> 

The two CVEs I mentioned were bugs *I* found using afl-fuzz. The biggest
hurdle back when I attempted fuzzing was not getting an appropriate
workload (I've just created a few basic requests using nghttp), but
instead getting the requests into HAProxy in a way so that afl is able
to detect branches that change based on input changes. This branch
detection is *the* main selling point of afl. Just sending random
garbage is not going to turn up interesting stuff, if anything.

For CVE-2018-14645 this worked well, because I could use the standalone
hpack decoder. For CVE-2018-20615 I worked with preeny/desock and saw
issues with branches being non-deterministic (I assume slight
timing issues or packets being cut differently or something like that).

Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Илья Шипицин
another option would be to enlist the project at HackerOne and wait until
Guido Vranken fuzzes it :)

he has already fuzzed dozens of projects, including openssl, openvpn, ...

https://guidovranken.com/

Tue, 21 Apr 2020 at 18:21, Tim Düsterhus :

> Willy,
>
> Am 02.04.20 um 15:03 schrieb Willy Tarreau:
> > The main driver for this release is that it contains a fix for a serious
> > vulnerability that was responsibly reported last week by Felix Wilhelm
> > from Google Project Zero, affecting the HPACK decoder used for HTTP/2.
> > CVE-2020-11100 was assigned to this issue.
> >
> > There is no configuration-based workaround for 2.1 and above.
> >
> > This vulnerability makes it possible under certain circumstances to write
> > to a wide range of memory locations within the process' heap, with the
> > limitation that the attacker doesn't control the absolute address, so the
> > most likely result and by a far margin will be a process crash, but it is
> > not possible to completely rule out the faint possibility of a remote
> code
> > execution, at least in a lab-controlled environment. Felix was kind
> enough
> > to agree to delay the publication of his findings to the 20th of this
> month
> > in order to leave enough time to haproxy users to apply updates. But
> please
> > do not wait, as it is not very difficult to figure how to exploit the bug
> > based on the fix. Distros were notified and will also have fixes
> available
> > very shortly.
> >
>
> The write-up is available now:
> https://bugs.chromium.org/p/project-zero/issues/detail?id=2023
>
> It has a "Methodology-Fuzzing" label, so after CVE-2018-14645 and
> CVE-2018-20615 this is the 3rd CVE within H2 found using fuzzing that
> I'm aware of. It probably won't be the last. Can we please allocate some
> resources on making HAProxy more fuzzer friendly after 2.2 is out?
>
> I would also be interested in how Felix Wilhelm performed the fuzzing,
> do you happen to have details about that?
>
> Best regards
> Tim Düsterhus
>
>


Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Илья Шипицин
Tue, 21 Apr 2020 at 18:21, Tim Düsterhus :

> Willy,
>
> Am 02.04.20 um 15:03 schrieb Willy Tarreau:
> > The main driver for this release is that it contains a fix for a serious
> > vulnerability that was responsibly reported last week by Felix Wilhelm
> > from Google Project Zero, affecting the HPACK decoder used for HTTP/2.
> > CVE-2020-11100 was assigned to this issue.
> >
> > There is no configuration-based workaround for 2.1 and above.
> >
> > This vulnerability makes it possible under certain circumstances to write
> > to a wide range of memory locations within the process' heap, with the
> > limitation that the attacker doesn't control the absolute address, so the
> > most likely result and by a far margin will be a process crash, but it is
> > not possible to completely rule out the faint possibility of a remote
> code
> > execution, at least in a lab-controlled environment. Felix was kind
> enough
> > to agree to delay the publication of his findings to the 20th of this
> month
> > in order to leave enough time to haproxy users to apply updates. But
> please
> > do not wait, as it is not very difficult to figure how to exploit the bug
> > based on the fix. Distros were notified and will also have fixes
> available
> > very shortly.
> >
>
> The write-up is available now:
> https://bugs.chromium.org/p/project-zero/issues/detail?id=2023
>
> It has a "Methodology-Fuzzing" label, so after CVE-2018-14645 and
> CVE-2018-20615 this is the 3rd CVE within H2 found using fuzzing that
> I'm aware of. It probably won't be the last. Can we please allocate some
> resources on making HAProxy more fuzzer friendly after 2.2 is out?
>
> I would also be interested in how Felix Wilhelm performed the fuzzing,
> do you happen to have details about that?
>

h2spec is very close to fuzzing. So, we just fire numerous requests and see
what's going on.

OK, a couple of things are missing - core dump catching and address
sanitizing. Not hard to add.

The question is "how to generate an h2 fuzzing workload"


>
> Best regards
> Tim Düsterhus
>
>


Re: [ANNOUNCE] haproxy-2.1.4

2020-04-21 Thread Tim Düsterhus
Willy,

Am 02.04.20 um 15:03 schrieb Willy Tarreau:
> The main driver for this release is that it contains a fix for a serious
> vulnerability that was responsibly reported last week by Felix Wilhelm
> from Google Project Zero, affecting the HPACK decoder used for HTTP/2.
> CVE-2020-11100 was assigned to this issue.
> 
> There is no configuration-based workaround for 2.1 and above.
> 
> This vulnerability makes it possible under certain circumstances to write
> to a wide range of memory locations within the process' heap, with the
> limitation that the attacker doesn't control the absolute address, so the
> most likely result and by a far margin will be a process crash, but it is
> not possible to completely rule out the faint possibility of a remote code
> execution, at least in a lab-controlled environment. Felix was kind enough
> to agree to delay the publication of his findings to the 20th of this month
> in order to leave enough time to haproxy users to apply updates. But please
> do not wait, as it is not very difficult to figure how to exploit the bug
> based on the fix. Distros were notified and will also have fixes available
> very shortly.
> 

The write-up is available now:
https://bugs.chromium.org/p/project-zero/issues/detail?id=2023

It has a "Methodology-Fuzzing" label, so after CVE-2018-14645 and
CVE-2018-20615 this is the 3rd CVE within H2 found using fuzzing that
I'm aware of. It probably won't be the last. Can we please allocate some
resources on making HAProxy more fuzzer friendly after 2.2 is out?

I would also be interested in how Felix Wilhelm performed the fuzzing,
do you happen to have details about that?

Best regards
Tim Düsterhus



Re: [ANNOUNCE] haproxy-2.1.4

2020-04-02 Thread Julien Pivotto
On 02 Apr 15:27, Julien Pivotto wrote:
> On 02 Apr 15:03, Willy Tarreau wrote:
> > Hi,
> > 
> > HAProxy 2.1.4 was released on 2020/04/02. It added 99 new commits
> > after version 2.1.3.
> > 
> > The main driver for this release is that it contains a fix for a serious
> > vulnerability that was responsibly reported last week by Felix Wilhelm
> > from Google Project Zero, affecting the HPACK decoder used for HTTP/2.
> > CVE-2020-11100 was assigned to this issue.
> > 
> > There is no configuration-based workaround for 2.1 and above.
> 
> 
> Is disabling HTTP2 a workaround?
> 
> Thanks.

Sorry, I have only read the 2.1 mail.

Thanks

> 
> > 
> > This vulnerability makes it possible under certain circumstances to write
> > to a wide range of memory locations within the process' heap, with the
> > limitation that the attacker doesn't control the absolute address, so the
> > most likely result and by a far margin will be a process crash, but it is
> > not possible to completely rule out the faint possibility of a remote code
> > execution, at least in a lab-controlled environment. Felix was kind enough
> > to agree to delay the publication of his findings to the 20th of this month
> > in order to leave enough time to haproxy users to apply updates. But please
> > do not wait, as it is not very difficult to figure how to exploit the bug
> > based on the fix. Distros were notified and will also have fixes available
> > very shortly.
> > 
> > Three other important fixes are present in this version:
> >   - a non-portable way of calculating a list pointer that breaks with
> > gcc 10 unless using -fno-tree-pta. This bug results in infinite loops
> > at random places in the code depending how the compiler decides to
> > optimize the code.
> > 
> >   - a bug in the way TLV fields are extracted from the PROXY protocol, as
> > they could be mistakenly looked up in the subsequent payload, even
> > though these would have limited effects since these ones would generally
> > be meaningless for the transported protocol, but could be used to hide a
> > source address from logging for example.
> > 
> >   - the "tarpit" rules were partially broken in that since 1.9 they wouldn't
> > prevent a connection from being sent to a server while the 500 response
> > is delivered to the client. Given that they are often used to block
> > suspicious activity it's problematic.
> > 
> > The rest is less important, but still relevant to some users. Among those
> > noticeable I can enumerate:
> >   - the O(N^2) ACL unique-id allocator that could take several minutes to
> > boot on certain very large configs was reworked to follow O(NlogN)
> > instead.
> > 
> >   - the default global maxconn setting when not set in the configuration was
> > incorrectly set to the process' soft limit instead of the hard limit,
> > resulting in much lower connection counts on some setups after upgrade
> > from 1.x to 2.x. It now properly follows the hard limit.
> > 
> >   - a new thread-safe random number generator that will avoid the risk that
> > the "uuid" sample fetch function returns the exact same UUID in several
> > threads.
> > 
> >   - issues in HTX mode affecting filters, namely cache and compression, that
> > could lead to data corruption.
> > 
> >   - alignment issues causing bus error on Sparc64 were addressed
> > 
> >   - fixed a rare case of possible segfault on soft-stop when a finishing
> > thread flushes its pools while another one is freeing some elements.
> > 
> > 
> > Please have a look at the changelog below for a more detailed list of fixes,
> > and do not forget to update, either from the sources or from your regular
> > distro channels.
> > 
> > Please find the usual URLs below :
> >Site index   : http://www.haproxy.org/
> >Discourse: http://discourse.haproxy.org/
> >Slack channel: https://slack.haproxy.org/
> >Issue tracker: https://github.com/haproxy/haproxy/issues
> >Sources  : http://www.haproxy.org/download/2.1/src/
> >Git repository   : http://git.haproxy.org/git/haproxy-2.1.git/
> >Git Web browsing : http://git.haproxy.org/?p=haproxy-2.1.git
> >Changelog: http://www.haproxy.org/download/2.1/src/CHANGELOG
> >Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/
> > 
> > Willy
> > ---
> > Complete changelog :
> > Balvinder Singh Rawat (1):
> >   DOC: correct typo in alert message about rspirep
> > 
> > Bjoern Jacke (1):
> >   DOC: fix typo about no-tls-tickets
> > 
> > Björn Jacke (1):
> >   DOC: improve description of no-tls-tickets
> > 
> > Carl Henrik Lunde (1):
> >   OPTIM: startup: fast unique_id allocation for acl.
> > 
> > Christopher Faulet (26):
> >   BUG/MINOR: mux-fcgi: Forbid special characters when matching PATH_INFO param
> >   MINOR: mux-fcgi: Make the capture of the path-info optional in pathinfo regex
> >   MINOR: http-htx: 

Re: [ANNOUNCE] haproxy-2.1.4

2020-04-02 Thread Willy Tarreau
On Thu, Apr 02, 2020 at 03:27:07PM +0200, Julien Pivotto wrote:
> On 02 Apr 15:03, Willy Tarreau wrote:
> > Hi,
> > 
> > HAProxy 2.1.4 was released on 2020/04/02. It added 99 new commits
> > after version 2.1.3.
> > 
> > The main driver for this release is that it contains a fix for a serious
> > vulnerability that was responsibly reported last week by Felix Wilhelm
> > from Google Project Zero, affecting the HPACK decoder used for HTTP/2.
> > CVE-2020-11100 was assigned to this issue.
> > 
> > There is no configuration-based workaround for 2.1 and above.
> 
> 
> Is disabling HTTP2 a workaround?

When possible yes, but in 2.1 and above you cannot as it's native,
hence "no config workaround" :-(

Willy



Re: [ANNOUNCE] haproxy-2.1.4

2020-04-02 Thread Julien Pivotto
On 02 Apr 15:03, Willy Tarreau wrote:
> Hi,
> 
> HAProxy 2.1.4 was released on 2020/04/02. It added 99 new commits
> after version 2.1.3.
> 
> The main driver for this release is that it contains a fix for a serious
> vulnerability that was responsibly reported last week by Felix Wilhelm
> from Google Project Zero, affecting the HPACK decoder used for HTTP/2.
> CVE-2020-11100 was assigned to this issue.
> 
> There is no configuration-based workaround for 2.1 and above.


Is disabling HTTP2 a workaround?

Thanks.

> 
> This vulnerability makes it possible under certain circumstances to write
> to a wide range of memory locations within the process' heap, with the
> limitation that the attacker doesn't control the absolute address, so the
> most likely result and by a far margin will be a process crash, but it is
> not possible to completely rule out the faint possibility of a remote code
> execution, at least in a lab-controlled environment. Felix was kind enough
> to agree to delay the publication of his findings to the 20th of this month
> in order to leave enough time to haproxy users to apply updates. But please
> do not wait, as it is not very difficult to figure out how to exploit the bug
> based on the fix. Distros were notified and will also have fixes available
> very shortly.
> 
> Three other important fixes are present in this version:
>   - a non-portable way of calculating a list pointer that breaks with
> gcc 10 unless using -fno-tree-pta. This bug results in infinite loops
> at random places in the code depending on how the compiler decides to
> optimize the code.
> 
>   - a bug in the way TLV fields are extracted from the PROXY protocol:
> they could mistakenly be looked up in the subsequent payload. The
> effects are limited, since such fields would generally be meaningless
> to the transported protocol, but they could be used, for example, to
> hide a source address from logging.
> 
>   - the "tarpit" rules have been partially broken since 1.9: they wouldn't
> prevent a connection from being sent to a server while the 500 response
> was delivered to the client. Given that they are often used to block
> suspicious activity, this is problematic.
> 
> The rest is less important, but still relevant to some users. Among the
> most noticeable:
>   - the O(N^2) ACL unique-id allocator, which could take several minutes to
> boot on certain very large configs, was reworked to run in O(N log N)
> instead.
> 
>   - the default global maxconn setting, when not set in the configuration,
> was incorrectly taken from the process' soft limit instead of the hard
> limit, resulting in much lower connection counts on some setups after an
> upgrade from 1.x to 2.x. It now properly follows the hard limit.
> 
>   - a new thread-safe random number generator that will avoid the risk that
> the "uuid" sample fetch function returns the exact same UUID in several
> threads.
> 
>   - issues in HTX mode affecting filters, namely cache and compression, that
> could lead to data corruption.
> 
>   - alignment issues causing bus errors on Sparc64 were addressed
> 
>   - fixed a rare case of possible segfault on soft-stop, when a finishing
> thread flushes its pools while another one is freeing some elements.
> 
> 
> Please have a look at the changelog below for a more detailed list of fixes,
> and do not forget to update, either from the sources or from your regular
> distro channels.
> 
> Please find the usual URLs below :
>Site index   : http://www.haproxy.org/
>Discourse: http://discourse.haproxy.org/
>Slack channel: https://slack.haproxy.org/
>Issue tracker: https://github.com/haproxy/haproxy/issues
>Sources  : http://www.haproxy.org/download/2.1/src/
>Git repository   : http://git.haproxy.org/git/haproxy-2.1.git/
>Git Web browsing : http://git.haproxy.org/?p=haproxy-2.1.git
>Changelog: http://www.haproxy.org/download/2.1/src/CHANGELOG
>Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/
> 
> Willy
> ---
> Complete changelog :
> Balvinder Singh Rawat (1):
>   DOC: correct typo in alert message about rspirep
> 
> Bjoern Jacke (1):
>   DOC: fix typo about no-tls-tickets
> 
> Björn Jacke (1):
>   DOC: improve description of no-tls-tickets
> 
> Carl Henrik Lunde (1):
>   OPTIM: startup: fast unique_id allocation for acl.
> 
> Christopher Faulet (26):
>   BUG/MINOR: mux-fcgi: Forbid special characters when matching PATH_INFO param
>   MINOR: mux-fcgi: Make the capture of the path-info optional in pathinfo regex
>   MINOR: http-htx: Add a function to retrieve the headers size of an HTX message
>   MINOR: filters: Forward data only if the last filter forwards something
>   BUG/MINOR: filters: Count HTTP headers as filtered data but don't forward them
>   BUG/MINOR: http-htx: Don't return error if authority is 

[ANNOUNCE] haproxy-2.1.4

2020-04-02 Thread Willy Tarreau
Hi,

HAProxy 2.1.4 was released on 2020/04/02. It added 99 new commits
after version 2.1.3.

The main driver for this release is that it contains a fix for a serious
vulnerability that was responsibly reported last week by Felix Wilhelm
from Google Project Zero, affecting the HPACK decoder used for HTTP/2.
CVE-2020-11100 was assigned to this issue.

There is no configuration-based workaround for 2.1 and above.

This vulnerability makes it possible under certain circumstances to write
to a wide range of memory locations within the process' heap, with the
limitation that the attacker doesn't control the absolute address, so the
most likely result and by a far margin will be a process crash, but it is
not possible to completely rule out the faint possibility of a remote code
execution, at least in a lab-controlled environment. Felix was kind enough
to agree to delay the publication of his findings to the 20th of this month
in order to leave enough time to haproxy users to apply updates. But please
do not wait, as it is not very difficult to figure out how to exploit the bug
based on the fix. Distros were notified and will also have fixes available
very shortly.

Three other important fixes are present in this version:
  - a non-portable way of calculating a list pointer that breaks with
gcc 10 unless using -fno-tree-pta. This bug results in infinite loops
at random places in the code depending on how the compiler decides to
optimize the code.

  - a bug in the way TLV fields are extracted from the PROXY protocol:
they could mistakenly be looked up in the subsequent payload. The
effects are limited, since such fields would generally be meaningless
to the transported protocol, but they could be used, for example, to
hide a source address from logging.

  - the "tarpit" rules have been partially broken since 1.9: they wouldn't
prevent a connection from being sent to a server while the 500 response
was delivered to the client. Given that they are often used to block
suspicious activity, this is problematic.
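The TLV bug class above comes down to bounding the TLV scan to the PROXY
header's declared length rather than to the whole receive buffer. A minimal
Python sketch of the idea (illustrative only, not HAProxy's actual C code;
the function name is made up):

```python
def parse_tlvs(buf: bytes, header_len: int):
    """Scan type-length-value records within a PROXY v2 header.

    The scan is bounded by the header's declared length -- using
    len(buf) instead would let it walk into the transported payload,
    which is exactly the bug class fixed in this release.
    """
    tlvs = []
    pos = 0
    end = header_len  # never scan past the header into the payload
    while pos + 3 <= end:
        tlv_type = buf[pos]
        length = int.from_bytes(buf[pos + 1:pos + 3], "big")
        if pos + 3 + length > end:
            raise ValueError("truncated TLV")
        tlvs.append((tlv_type, buf[pos + 3:pos + 3 + length]))
        pos += 3 + length
    return tlvs
```

With the bound in place, payload bytes that happen to look like a TLV are
simply never examined.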

The rest is less important, but still relevant to some users. Among the
most noticeable:
  - the O(N^2) ACL unique-id allocator, which could take several minutes to
boot on certain very large configs, was reworked to run in O(N log N)
instead.

  - the default global maxconn setting, when not set in the configuration,
was incorrectly taken from the process' soft limit instead of the hard
limit, resulting in much lower connection counts on some setups after an
upgrade from 1.x to 2.x. It now properly follows the hard limit.

  - a new thread-safe random number generator that will avoid the risk that
the "uuid" sample fetch function returns the exact same UUID in several
threads.

  - issues in HTX mode affecting filters, namely cache and compression, that
could lead to data corruption.

  - alignment issues causing bus errors on Sparc64 were addressed

  - fixed a rare case of possible segfault on soft-stop, when a finishing
thread flushes its pools while another one is freeing some elements.
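The unique-id rework mentioned above is essentially the classic move from
"linearly probe all used ids for each allocation" (O(N^2) overall) to sorting
the used ids once and walking them. A toy Python sketch of that idea only
(hypothetical function, not HAProxy's implementation):

```python
def assign_unique_ids(used_ids, count):
    """Assign `count` fresh ids not present in `used_ids`.

    Re-scanning every used id for each new id is O(N^2); sorting the
    used ids once and advancing a single cursor through them keeps
    the whole allocation pass at O(N log N).
    """
    result = []
    taken = sorted(set(used_ids))  # one O(N log N) sort up front
    candidate = 1
    i = 0
    while len(result) < count:
        # skip over any already-taken ids at the cursor position
        while i < len(taken) and taken[i] == candidate:
            candidate += 1
            i += 1
        result.append(candidate)
        candidate += 1
    return result
```

The cursor never moves backwards, so each used id is examined at most once
across all allocations.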


Please have a look at the changelog below for a more detailed list of fixes,
and do not forget to update, either from the sources or from your regular
distro channels.

Please find the usual URLs below :
   Site index   : http://www.haproxy.org/
   Discourse: http://discourse.haproxy.org/
   Slack channel: https://slack.haproxy.org/
   Issue tracker: https://github.com/haproxy/haproxy/issues
   Sources  : http://www.haproxy.org/download/2.1/src/
   Git repository   : http://git.haproxy.org/git/haproxy-2.1.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-2.1.git
   Changelog: http://www.haproxy.org/download/2.1/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy
---
Complete changelog :
Balvinder Singh Rawat (1):
  DOC: correct typo in alert message about rspirep

Bjoern Jacke (1):
  DOC: fix typo about no-tls-tickets

Björn Jacke (1):
  DOC: improve description of no-tls-tickets

Carl Henrik Lunde (1):
  OPTIM: startup: fast unique_id allocation for acl.

Christopher Faulet (26):
  BUG/MINOR: mux-fcgi: Forbid special characters when matching PATH_INFO param
  MINOR: mux-fcgi: Make the capture of the path-info optional in pathinfo regex
  MINOR: http-htx: Add a function to retrieve the headers size of an HTX message
  MINOR: filters: Forward data only if the last filter forwards something
  BUG/MINOR: filters: Count HTTP headers as filtered data but don't forward them
  BUG/MINOR: http-htx: Don't return error if authority is updated without changes
  BUG/MINOR: http-ana: Matching on monitor-uri should be case-sensitive
  MINOR: http-ana: Match on the path if the monitor-uri starts by a /
  BUG/MAJOR: http-ana: Always abort the request when a tarpit is triggered
  BUG/MINOR: http-htx: Do case-insensive