Question: How to not reset the TTL on a stick table entry?

2020-11-02 Thread Nick Ramirez

Hello,

In my HAProxy config, I would like to ban people for a certain amount of 
time by setting a general-purpose counter from 0 to 1, where 1 = banned, 
in a stick table. When the stick table entry expires, the counter is 
reset to 0 and the person is un-banned. This works fine. However, I 
would like to ignore this person's requests while they're banned. That 
way, as they make requests, they are not continuously banning 
themselves.


Consider if I use this ACL and "track" line:

```
acl is_banned sc_get_gpc1(0) gt 0
http-request track-sc0 be_name unless is_banned
```

This ACL uses `sc_get_gpc1(0)` to read the value of the general-purpose 
counter. When this ACL is used by the `track-sc0` line, it *resets the 
TTL* on the stick table entry, which means that a person will be banned 
forever unless they stop making requests. I don't want this. I want to 
ban them for only 10 seconds. So, instead, I use this ACL:


```
acl is_banned be_name,table_gpc1 gt 0
http-request track-sc0 be_name unless is_banned
```

By using the `table_gpc1` converter, the TTL is *not* reset when the ACL 
is used, which is good.
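
For reference, here is a minimal sketch of the whole setup (the table 
definition, the 429 status, and the request-rate trigger below are 
illustrative assumptions, not my exact config):

```
backend be_name
    # gpc1 holds the ban flag; the entry expires (= un-bans) after 10s
    stick-table type string size 100 expire 10s store gpc1,http_req_rate(10s)

    # read gpc1 through the converter so the lookup does not touch the TTL
    acl is_banned be_name,table_gpc1 gt 0
    # illustrative trigger: more than 20 requests over 10 seconds
    acl abuser sc_http_req_rate(0) gt 20

    # refuse banned clients before any tracking happens
    http-request deny deny_status 429 if is_banned
    # track only while not banned, so requests don't refresh the entry
    http-request track-sc0 be_name unless is_banned
    # flip gpc1 from 0 to 1 (= banned) when the trigger fires
    http-request sc-inc-gpc1(0) if abuser
```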


My question is: is this an undocumented feature? A bug that may one day 
be "fixed"? Why is there a difference between `sc_get_gpc1(0)` and 
`table_gpc1`, where the former resets the TTL on the stick table entry, 
but the latter does not?


Also, if this is a bug, would it be helpful to have a parameter on the 
track-sc0 line that allows me to opt in to not resetting the TTL?
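
Something like this, for example (purely hypothetical syntax, no such 
keyword exists today):

```
http-request track-sc0 be_name no-refresh unless is_banned
```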


Thank you,
Nick Ramirez


Re: DNS Load balancing needs feedback and advice.

2020-11-02 Thread Emeric Brun
Hi Lukas,
> I find this a little surprising given that there already is a great
> DNS load-balancer out there (dnsdist) from the folks at powerdns and
> when I look at the status of the haproxy resolver, I don't feel like
> DNS sparks a huge amount of developer interest. Load balancing DNS
> will certainly require more attention and enthusiasm than what the
> resolver code gets today, and even more important: long-term
> maintenance.

Thanks for this comment :)

>> Reading the RFCs, I noticed multiple fallback cases (if a server does not 
>> support eDNS we should retry the request without eDNS
> 
> The edns fallback should be obsolete and has been disabled on the
> large public resolver on flagday 2019.
> 
> https://dnsflagday.net/2019/
> 
Good to know. It confirms basic DNS is from the stone age.

 
>> or if the response is truncated we should retry over TCP
> 
> This is and always will be very necessary. Deploying the haproxy
> resolver feature would be a lot less dangerous if we supported
> this (or made all requests over TCP in the first place).
> 
> 
>> So we decided to make the assumption that nowadays all modern DNS servers 
>> support both TCP (with pipelined requests as defined in RFC 7766) and eDNS. 
>> In this case the DNS load balancer will forward messages received from 
>> clients over UDP or TCP (supporting eDNS or not) to servers via pipelined 
>> TCP connections.
> 
> That's probably a good idea. You still have to handle all the UDP pain
> on the frontend though.

Exactly, though not all the UDP pain: fallbacks will be handled by clients :) 
(handling this on the backend side would be the worst pain)
 
> 
>> In addition, I had a more technical question: eDNS's first purpose is clearly 
>> to bypass the 512-byte limitation of standard DNS over UDP, but I didn't find 
>> details about the usage of eDNS over TCP, which seems mandatory if we want to 
>> perform DNSsec (since DNSsec exploits some eDNS pseudo-header fields). The 
>> main question is how to handle the payload size field of the eDNS 
>> pseudo-header if messages are exchanged over TCP.
> 
> I'm not sure what the RFC specifically says, but I'd say don't send
> the "UDP payload size" field if the transport is TCP and ignore/filter
> it when received over TCP.


The payload size field is part of the pseudo-header definition, so we can't wipe 
it, but I suppose that over TCP it would act as a kind of "maximum message size".
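
For reference, the fixed part of the OPT pseudo-RR (RFC 6891) looks like this 
on the wire, after the single zero byte of the root NAME (a sketch; the struct 
and field names are mine):

```c
#include <stdint.h>

/* The "requestor's UDP payload size" reuses the CLASS slot of a normal
 * RR, so it cannot simply be dropped without breaking the record layout;
 * over TCP it can at best be reinterpreted as a maximum message size hint.
 */
struct edns_opt_fixed {
    uint16_t type;        /* 41 = OPT */
    uint16_t udp_payload; /* requestor's UDP payload size (CLASS slot) */
    uint8_t  ext_rcode;   /* upper 8 bits of the extended RCODE */
    uint8_t  version;     /* eDNS version (0) */
    uint16_t flags;       /* DO bit + reserved zero bits */
    uint16_t rdlen;       /* length of the options that follow */
} __attribute__((packed));
```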

 
> not a dns expert here though,
Really appreciated.

R,
Emeric



Re: DNS Load balancing needs feedback and advice.

2020-11-02 Thread Lukas Tribus
Hello Emeric,


On Mon, 2 Nov 2020 at 15:41, Emeric Brun  wrote:
>
> Hi All,
>
> We are currently studying the development of a DNS message load balancer
> (in the haproxy core)

I find this a little surprising given that there already is a great
DNS load-balancer out there (dnsdist) from the folks at powerdns and
when I look at the status of the haproxy resolver, I don't feel like
DNS sparks a huge amount of developer interest. Load balancing DNS
will certainly require more attention and enthusiasm than what the
resolver code gets today, and even more important: long-term
maintenance.


> After a global pass on the RFCs (DNS, DNS over TCP, eDNS, DNSsec ...) we
> noticed that practices on DNS have largely evolved since the stone age.
>
> Since the last brainstorm meeting I had with Baptiste Assmann and Willy
> Tarreau, we were tempted to make some assumptions and choices, and we want
> to submit them to the community to get your thoughts.
>
> Reading the RFCs, I noticed multiple fallback cases (if a server does not
> support eDNS we should retry the request without eDNS

The edns fallback should be obsolete and has been disabled on the
large public resolver on flagday 2019.

https://dnsflagday.net/2019/


> or if the response is truncated we should retry over TCP

This is and always will be very necessary. Deploying the haproxy
resolver feature would be a lot less dangerous if we supported
this (or made all requests over TCP in the first place).


> So we decided to make the assumption that nowadays all modern DNS servers
> support both TCP (with pipelined requests as defined in RFC 7766) and eDNS.
> In this case the DNS load balancer will forward messages received from
> clients over UDP or TCP (supporting eDNS or not) to servers via pipelined
> TCP connections.

That's probably a good idea. You still have to handle all the UDP pain
on the frontend though.


> In addition, I had a more technical question: eDNS's first purpose is clearly
> to bypass the 512-byte limitation of standard DNS over UDP, but I didn't find
> details about the usage of eDNS over TCP, which seems mandatory if we want to
> perform DNSsec (since DNSsec exploits some eDNS pseudo-header fields). The
> main question is how to handle the payload size field of the eDNS
> pseudo-header if messages are exchanged over TCP.

I'm not sure what the RFC specifically says, but I'd say don't send
the "UDP payload size" field if the transport is TCP and ignore/filter
it when received over TCP.



not a dns expert here though,

lukas



Re: [2.0.17] crash with coredump

2020-11-02 Thread Maciej Zdeb
I'm wondering: the corrupted address was always at "wait_event" in the h2s
struct; after its removal in:
http://git.haproxy.org/?p=haproxy-2.2.git;a=commitdiff;h=5723f295d85febf5505f8aef6afabb6b23d6fdec;hp=f11be0ea1e8e571234cb41a2fcdde2cf2161df37
the crashes went away.

But with the above patch and after altering the h2s struct into:
struct h2s {
[...]
struct tasklet *shut_tl;
struct wait_event *recv_wait; /* recv wait_event the conn_stream
associated is waiting on (via h2_subscribe) */
struct wait_event *send_wait; /* send wait_event the conn_stream
associated is waiting on (via h2_subscribe) */
struct list list; /* To be used when adding in h2c->send_list or
h2c->fctl_lsit */
};

the crash returned.

However, after recv_wait and send_wait were merged in:
http://git.haproxy.org/?p=haproxy-2.2.git;a=commit;h=f96508aae6b49277dcf142caa35042678cf8e2ca
crashes went away again.

In my opinion shut_tl should be corrupted again, but it is not. Maybe the
last patch fixed it?

Mon, 2 Nov 2020 at 15:37, Kirill A. Korinsky wrote:

> Maciej,
>
> Looks like the memory corruption is still here, but it corrupts just some
> other place.
>
> Willy, do you agree?
>
> --
> wbr, Kirill
>
> On 2. Nov 2020, at 15:34, Maciej Zdeb  wrote:
>
> So after Kirill's suggestion to modify the h2s struct in a way that the
> tasklet "shut_tl" is before recv_wait, I verified whether in 2.2.4 the same
> crash would occur and it did not!
>
> After the patch that merges recv_wait and send_wait:
> http://git.haproxy.org/?p=haproxy-2.2.git;a=commit;h=f96508aae6b49277dcf142caa35042678cf8e2ca
> and with such an h2s (tasklet shut_tl before the wait_event subs) the crashes
> are gone:
>
> struct h2s {
> [...]
> struct buffer rxbuf; /* receive buffer, always valid (buf_empty or
> real buffer) */
> struct tasklet *shut_tl;  /* deferred shutdown tasklet, to retry
> to send an RST after we failed to,
>* in case there's no other subscription
> to do it */
> struct wait_event *subs;  /* recv wait_event the conn_stream
> associated is waiting on (via h2_subscribe) */
> struct list list; /* To be used when adding in h2c->send_list or
> h2c->fctl_lsit */
> };
>
>
>
Mon, 2 Nov 2020 at 12:42, Maciej Zdeb wrote:
>
>> Great idea, Kirill,
>>
>> With such modification:
>>
>> struct h2s {
>> [...]
>> struct tasklet *shut_tl;
>> struct wait_event *recv_wait; /* recv wait_event the conn_stream
>> associated is waiting on (via h2_subscribe) */
>> struct wait_event *send_wait; /* send wait_event the conn_stream
>> associated is waiting on (via h2_subscribe) */
>> struct list list; /* To be used when adding in h2c->send_list or
>> h2c->fctl_lsit */
>> };
>>
>> it crashed just like before.
>>
>> Mon, 2 Nov 2020 at 11:12, Kirill A. Korinsky wrote:
>>
>>> Hi,
>>>
>>> Thanks for the update.
>>>
>>> After reading Willy's recommendation and the provided commit that fixed the
>>> issue, I'm curious: can you "edit" this commit a bit and move `shut_tl`
>>> before `recv_wait` instead of the removed `wait_event`?
>>>
>>> It is quite a dumb way to confirm that the memory corruption has gone, and
>>> not just moved somewhere else.
>>>
>>> --
>>> wbr, Kirill
>>>
>>> On 2. Nov 2020, at 10:58, Maciej Zdeb  wrote:
>>>
>>> Hi,
>>>
>>> Update for people on the list that might be interested in the issue,
>>> because part of the discussion was private.
>>>
>>> I wanted to check Willy's suggestion and modified the h2s struct (added
>>> dummy fields):
>>>
>>> struct h2s {
>>> [...]
>>> uint16_t status; /* HTTP response status */
>>> unsigned long long body_len; /* remaining body length according
>>> to content-length if H2_SF_DATA_CLEN */
>>> struct buffer rxbuf; /* receive buffer, always valid (buf_empty
>>> or real buffer) */
>>> int dummy0;
>>> struct wait_event wait_event; /* Wait list, when we're
>>> attempting to send a RST but we can't send */
>>> int dummy1;
>>> struct wait_event *recv_wait; /* recv wait_event the conn_stream
>>> associated is waiting on (via h2_subscribe) */
>>> int dummy2;
>>> struct wait_event *send_wait; /* send wait_event the conn_stream
>>> associated is waiting on (via h2_subscribe) */
>>> int dummy3;
>>> struct list list; /* To be used when adding in h2c->send_list or
>>> h2c->fctl_lsit */
>>> struct list sending_list; /* To be used when adding in
>>> h2c->sending_list */
>>> };
>>>
>>> With such modified h2s struct, the crash did not occur.
>>>
>>> I've checked HAProxy 2.1, it crashes like 2.0.
>>>
>>> I've also checked 2.2, bisection showed that this commit:
>>> http://git.haproxy.org/?p=haproxy-2.2.git;a=commitdiff;h=5723f295d85febf5505f8aef6afabb6b23d6fdec;hp=f11be0ea1e8e571234cb41a2fcdde2cf2161df37
>>> fixed the crashes we experienced. I'm not sure how/if it fixed the memory
>>> corruption, it is possible that memory is still corrupted but not causing
>>> the crash.

DNS Load balancing needs feedback and advice.

2020-11-02 Thread Emeric Brun
Hi All,

We are currently studying the development of a DNS message load balancer (in 
the haproxy core).

After a global pass on the RFCs (DNS, DNS over TCP, eDNS, DNSsec ...) we 
noticed that practices on DNS have largely evolved since the stone age.

Since the last brainstorm meeting I had with Baptiste Assmann and Willy 
Tarreau, we were tempted to make some assumptions and choices, and we want to 
submit them to the community to get your thoughts.

Reading the RFCs, I noticed multiple fallback cases (if a server does not 
support eDNS we should retry the request without eDNS, or if the response is 
truncated we should retry over TCP) which could clearly make the project 
really difficult to implement and sub-optimal from a performance point of 
view.

So we decided to make the assumption that nowadays all modern DNS servers 
support both TCP (with pipelined requests as defined in RFC 7766) and eDNS. In 
this case the DNS load balancer will forward messages received from clients 
over UDP or TCP (supporting eDNS or not) to servers via pipelined TCP 
connections.
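
As a reminder, DNS over TCP framing (RFC 1035, reaffirmed by RFC 7766) is just 
a two-byte, big-endian length prefix per message, so pipelining means writing 
framed messages back to back on one connection before reading any response. A 
rough sketch (the helper name is mine):

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* Send one DNS message on an established TCP connection, preceded by
 * the RFC 7766 two-byte length prefix. Pipelining = calling this
 * repeatedly on the same fd without waiting for the answers.
 */
static ssize_t dns_tcp_send(int fd, const uint8_t *msg, uint16_t len)
{
    uint8_t hdr[2];
    uint16_t be_len = htons(len); /* prefix is in network byte order */

    memcpy(hdr, &be_len, 2);
    if (write(fd, hdr, 2) != 2)
        return -1;
    return write(fd, msg, len);
}
```

(A real implementation would use writev() or output buffering so the prefix 
and the message leave in a single segment.)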

We are asking the community and experienced users of DNS servers to share 
their thoughts about this.

In addition, I had a more technical question: eDNS's first purpose is clearly 
to bypass the 512-byte limitation of standard DNS over UDP, but I didn't find 
details about the usage of eDNS over TCP, which seems mandatory if we want to 
perform DNSsec (since DNSsec exploits some eDNS pseudo-header fields). The 
main question is how to handle the payload size field of the eDNS 
pseudo-header if messages are exchanged over TCP.

Finally, any other advice or thoughts about DNS load balancing in HAProxy are 
also welcome.

R,
Emeric 



Re: [2.0.17] crash with coredump

2020-11-02 Thread Kirill A. Korinsky
Maciej,

Looks like the memory corruption is still here, but it corrupts just some 
other place.

Willy, do you agree?

--
wbr, Kirill

> On 2. Nov 2020, at 15:34, Maciej Zdeb  wrote:
> 
> So after Kirill's suggestion to modify the h2s struct in a way that the 
> tasklet "shut_tl" is before recv_wait, I verified whether in 2.2.4 the same 
> crash would occur and it did not!
> 
> After the patch that merges recv_wait and send_wait: 
> http://git.haproxy.org/?p=haproxy-2.2.git;a=commit;h=f96508aae6b49277dcf142caa35042678cf8e2ca
> and with such an h2s (tasklet shut_tl before the wait_event subs) the crashes 
> are gone:
> 
> struct h2s {
> [...]
> struct buffer rxbuf; /* receive buffer, always valid (buf_empty or 
> real buffer) */
> struct tasklet *shut_tl;  /* deferred shutdown tasklet, to retry to 
> send an RST after we failed to,
>* in case there's no other subscription to 
> do it */
> struct wait_event *subs;  /* recv wait_event the conn_stream 
> associated is waiting on (via h2_subscribe) */
> struct list list; /* To be used when adding in h2c->send_list or 
> h2c->fctl_lsit */
> };
> 
> 
> 
> Mon, 2 Nov 2020 at 12:42, Maciej Zdeb wrote:
> Great idea, Kirill,
> 
> With such modification:
> 
> struct h2s {
> [...]
> struct tasklet *shut_tl;
> struct wait_event *recv_wait; /* recv wait_event the conn_stream 
> associated is waiting on (via h2_subscribe) */
> struct wait_event *send_wait; /* send wait_event the conn_stream 
> associated is waiting on (via h2_subscribe) */
> struct list list; /* To be used when adding in h2c->send_list or 
> h2c->fctl_lsit */
> };
> 
> it crashed just like before.
> 
> Mon, 2 Nov 2020 at 11:12, Kirill A. Korinsky wrote:
> Hi,
> 
> Thanks for the update.
> 
> After reading Willy's recommendation and the provided commit that fixed the 
> issue, I'm curious: can you "edit" this commit a bit and move `shut_tl` 
> before `recv_wait` instead of the removed `wait_event`?
> 
> It is quite a dumb way to confirm that the memory corruption has gone, and 
> not just moved somewhere else.
> 
> --
> wbr, Kirill
> 
>> On 2. Nov 2020, at 10:58, Maciej Zdeb wrote:
>> 
>> Hi,
>> 
>> Update for people on the list that might be interested in the issue, because 
>> part of the discussion was private.
>> 
>> I wanted to check Willy's suggestion and modified the h2s struct (added dummy 
>> fields):
>> 
>> struct h2s {
>> [...]
>> uint16_t status; /* HTTP response status */
>> unsigned long long body_len; /* remaining body length according to 
>> content-length if H2_SF_DATA_CLEN */
>> struct buffer rxbuf; /* receive buffer, always valid (buf_empty or 
>> real buffer) */
>> int dummy0;
>> struct wait_event wait_event; /* Wait list, when we're attempting to 
>> send a RST but we can't send */
>> int dummy1;
>> struct wait_event *recv_wait; /* recv wait_event the conn_stream 
>> associated is waiting on (via h2_subscribe) */
>> int dummy2;
>> struct wait_event *send_wait; /* send wait_event the conn_stream 
>> associated is waiting on (via h2_subscribe) */
>> int dummy3;
>> struct list list; /* To be used when adding in h2c->send_list or 
>> h2c->fctl_lsit */
>> struct list sending_list; /* To be used when adding in 
>> h2c->sending_list */
>> };
>> 
>> With such modified h2s struct, the crash did not occur.
>> 
>> I've checked HAProxy 2.1, it crashes like 2.0.
>> 
>> I've also checked 2.2, bisection showed that this commit: 
>> http://git.haproxy.org/?p=haproxy-2.2.git;a=commitdiff;h=5723f295d85febf5505f8aef6afabb6b23d6fdec;hp=f11be0ea1e8e571234cb41a2fcdde2cf2161df37
>> fixed the crashes we experienced. I'm not sure how/if it fixed the memory 
>> corruption, it is possible that memory is still corrupted but not causing 
>> the crash.
>> 
>> 
>> 
>> Fri, 25 Sep 2020 at 16:25, Kirill A. Korinsky wrote:
>> Very interesting.
>> 
>> Anyway, I can see that this piece of code was refactored some time ago: 
>> https://github.com/haproxy/haproxy/commit/f96508aae6b49277dcf142caa35042678cf8e2ca
>> 
>> 
>> Maybe it is worth trying the 2.2 or 2.3 branch?
>> 
>> Yes, it is a blind shot and just a guess.
>> 
>> --
>> wbr, Kirill
>> 
>>> On 25. Sep 2020, at 16:01, Maciej Zdeb wrote:
>>> 
>>> Yes, at the same place with the same value:
>>> 
>>> (gdb) bt full
>>> #0  0x559ce98b964b in h2s_notify_recv (h2s=0x559cef94e7e0) at 
>>> src/mux_h2.c:783
>>> sw = 

Re: [2.0.17] crash with coredump

2020-11-02 Thread Maciej Zdeb
So after Kirill's suggestion to modify the h2s struct in a way that the
tasklet "shut_tl" is before recv_wait, I verified whether in 2.2.4 the same
crash would occur and it did not!

After the patch that merges recv_wait and send_wait:
http://git.haproxy.org/?p=haproxy-2.2.git;a=commit;h=f96508aae6b49277dcf142caa35042678cf8e2ca
and with such an h2s (tasklet shut_tl before the wait_event subs) the crashes
are gone:

struct h2s {
[...]
struct buffer rxbuf; /* receive buffer, always valid (buf_empty or
real buffer) */
struct tasklet *shut_tl;  /* deferred shutdown tasklet, to retry to
send an RST after we failed to,
   * in case there's no other subscription
to do it */
struct wait_event *subs;  /* recv wait_event the conn_stream
associated is waiting on (via h2_subscribe) */
struct list list; /* To be used when adding in h2c->send_list or
h2c->fctl_lsit */
};



Mon, 2 Nov 2020 at 12:42, Maciej Zdeb wrote:

> Great idea, Kirill,
>
> With such modification:
>
> struct h2s {
> [...]
> struct tasklet *shut_tl;
> struct wait_event *recv_wait; /* recv wait_event the conn_stream
> associated is waiting on (via h2_subscribe) */
> struct wait_event *send_wait; /* send wait_event the conn_stream
> associated is waiting on (via h2_subscribe) */
> struct list list; /* To be used when adding in h2c->send_list or
> h2c->fctl_lsit */
> };
>
> it crashed just like before.
>
Mon, 2 Nov 2020 at 11:12, Kirill A. Korinsky wrote:
>
>> Hi,
>>
>> Thanks for the update.
>>
>> After reading Willy's recommendation and the provided commit that fixed the
>> issue, I'm curious: can you "edit" this commit a bit and move `shut_tl`
>> before `recv_wait` instead of the removed `wait_event`?
>>
>> It is quite a dumb way to confirm that the memory corruption has gone, and
>> not just moved somewhere else.
>>
>> --
>> wbr, Kirill
>>
>> On 2. Nov 2020, at 10:58, Maciej Zdeb  wrote:
>>
>> Hi,
>>
>> Update for people on the list that might be interested in the issue,
>> because part of the discussion was private.
>>
>> I wanted to check Willy's suggestion and modified the h2s struct (added dummy
>> fields):
>>
>> struct h2s {
>> [...]
>> uint16_t status; /* HTTP response status */
>> unsigned long long body_len; /* remaining body length according
>> to content-length if H2_SF_DATA_CLEN */
>> struct buffer rxbuf; /* receive buffer, always valid (buf_empty
>> or real buffer) */
>> int dummy0;
>> struct wait_event wait_event; /* Wait list, when we're attempting
>> to send a RST but we can't send */
>> int dummy1;
>> struct wait_event *recv_wait; /* recv wait_event the conn_stream
>> associated is waiting on (via h2_subscribe) */
>> int dummy2;
>> struct wait_event *send_wait; /* send wait_event the conn_stream
>> associated is waiting on (via h2_subscribe) */
>> int dummy3;
>> struct list list; /* To be used when adding in h2c->send_list or
>> h2c->fctl_lsit */
>> struct list sending_list; /* To be used when adding in
>> h2c->sending_list */
>> };
>>
>> With such modified h2s struct, the crash did not occur.
>>
>> I've checked HAProxy 2.1, it crashes like 2.0.
>>
>> I've also checked 2.2, bisection showed that this commit:
>> http://git.haproxy.org/?p=haproxy-2.2.git;a=commitdiff;h=5723f295d85febf5505f8aef6afabb6b23d6fdec;hp=f11be0ea1e8e571234cb41a2fcdde2cf2161df37
>> fixed the crashes we experienced. I'm not sure how/if it fixed the memory
>> corruption, it is possible that memory is still corrupted but not causing
>> the crash.
>>
>>
>>
>> Fri, 25 Sep 2020 at 16:25, Kirill A. Korinsky wrote:
>>
>>> Very interesting.
>>>
>>> Anyway, I can see that this piece of code was refactored some time ago:
>>> https://github.com/haproxy/haproxy/commit/f96508aae6b49277dcf142caa35042678cf8e2ca
>>>
>>> Maybe it is worth trying the 2.2 or 2.3 branch?
>>>
>>> Yes, it is a blind shot and just a guess.
>>>
>>> --
>>> wbr, Kirill
>>>
>>> On 25. Sep 2020, at 16:01, Maciej Zdeb  wrote:
>>>
>>> Yes, at the same place with the same value:
>>>
>>> (gdb) bt full
>>> #0  0x559ce98b964b in h2s_notify_recv (h2s=0x559cef94e7e0) at
>>> src/mux_h2.c:783
>>> sw = 0x
>>>
>>>
>>>
>>> Fri, 25 Sep 2020 at 15:42, Kirill A. Korinsky wrote:
>>>
 > On 25. Sep 2020, at 15:26, Maciej Zdeb  wrote:
 >
 > I was mailing outside the list with Willy and Christopher but it's
 worth sharing that the problem occurs even with nbthread = 1. I've managed
 to confirm it today.


 I'm curious: did it crash at the same place with the same value?

 --
 wbr, Kirill



>>>
>>


Re: [ANNOUNCE] haproxy-2.3-dev9

2020-11-02 Thread Илья Шипицин
Sat, 31 Oct 2020 at 17:53, Willy Tarreau wrote:

> Hi,
>
> HAProxy 2.3-dev9 was released on 2020/10/31. It added 27 new commits
> after version 2.3-dev8.
>
> Things have cooled down quite a bit, I really appreciate it. To be
> honest, I've really been hesitating between releasing 2.3-final now
> or leaving one extra week. Finally, considering that we're not late
> and that the last fixed issues were recently reported, I considered
> that it was worth waiting one more week to confirm this encouraging
> trend.
>

FreeBSD builds are unstable:
https://github.com/haproxy/haproxy/runs/1341524534

also, a couple of reg-tests fail in OpenSSL no-deprecated mode:
https://github.com/haproxy/haproxy/issues/924


should we address those failures before the 2.3 release?



>
> The changes since 2.3-dev8 are fairly small.
>
> The BoringSSL saga continued with some OCSP fixes. And support for early
> data needed to be adjusted because that one now claims to be OpenSSL 1.1.1
> but lacks some of its features... Hopefully now we got it right!
>
> While testing the cache compliance with standards, Rémi found a few issues
> related to an incomplete parsing of the cache-control header and a few
> other
> minor issues that he addressed. This will make the cache more accurate in
> front of certain applications. The cache also knows how to respond 304 to
> conditional requests, which should lower the external bandwidth with
> returning browsers. Some new sample fetches were added to check for cached
> responses.
>
> Amaury added some stats on H2 traffic which are quite welcome, I always
> felt frustrated by not knowing the H1/H2 ratios without looking at the
> logs.
>
> The rest looks like routine fixes.
>
> There are still two things I'd like us to have a look at, just in case
> we get an opportunity to fix old issues before the release. One of them
> is that Maciej, who reported some crashes with SPOE in 2.0, managed to
> bisect it and to find that in 2.2 it stopped crashing after a change at
> the H2 level which seems totally unrelated at first glance, so it's
> possible that we changed some sequencing somewhere or that a new bug hid
> another one. The second one is that we've got a report of a suspicious
> rare crash in 2.2 which happens only when an http-after-response rule is
> present. Again, none of them is a 2.3 regression so they will not defer the
> 2.3 release, but the fewer bugs at release time, the better.
>
> I'm aware that there's quite a bunch of code floating around that people
> will want to put into 2.4. I just don't know if anything's ready yet for
> -next or not, but just in case I've rebased it on top of master.
>
> For those who read me right now, have a nice week-end :-)
>
> Please find the usual URLs below :
>Site index   : http://www.haproxy.org/
>Discourse: http://discourse.haproxy.org/
>Slack channel: https://slack.haproxy.org/
>Issue tracker: https://github.com/haproxy/haproxy/issues
>Wiki : https://github.com/haproxy/wiki/wiki
>Sources  : http://www.haproxy.org/download/2.3/src/
>Git repository   : http://git.haproxy.org/git/haproxy.git/
>Git Web browsing : http://git.haproxy.org/?p=haproxy.git
>Changelog: http://www.haproxy.org/download/2.3/src/CHANGELOG
>Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/
>
> Willy
> ---
> Complete changelog :
> Amaury Denoyelle (8):
>   MINOR: mux-h2: register a stats module
>   MINOR: mux-h2: add counters instance to h2c
>   MINOR: mux-h2: add stats for received frame types
>   MINOR: mux-h2: report detected error on stats
>   MINOR: mux-h2: count open connections/streams on stats
>   BUG/MINOR: server: fix srv downtime calcul on starting
>   BUG/MINOR: server: fix down_time report for stats
>   BUG/MINOR: lua: initialize sample before using it
>
> Emmanuel Hocdet (1):
>   BUG/MEDIUM: ssl: OCSP must work with BoringSSL
>
> Ilya Shipitsin (2):
>   BUILD: ssl: more elegant OpenSSL early data support check
>   CI: github actions: update h2spec to 2.6.0
>
> Remi Tricot Le Breton (1):
>   MINOR: cache: Store the "Last-Modified" date in the cache_entry
>
> Remi Tricot-Le Breton (6):
>   MINOR: cache: Process the If-Modified-Since header in conditional
> requests
>   MINOR: cache: Create res.cache_hit and res.cache_name sample fetches
>   MINOR: cache: Add Expires header value parsing
>   MINOR: ist: Add a case insensitive istmatch function
>   BUG/MINOR: cache: Manage multiple values in cache-control header
> value
>   BUG/MINOR: cache: Inverted variables in http_calc_maxage function
>
> Tim Duesterhus (1):
>   BUG/MINOR: cache: Check the return value of http_replace_res_status
>
> William Dauchy (1):
>   CLEANUP: http_ana: remove unused assignation of `att_beg`
>
> Willy Tarreau (7):
>   BUG/MINOR: log: fix memory leak on logsrv parse error
>   BUG/MINOR: log: fix risk of null deref on error path
>   

Re: [2.0.17] crash with coredump

2020-11-02 Thread Maciej Zdeb
Great idea, Kirill,

With such modification:

struct h2s {
[...]
struct tasklet *shut_tl;
struct wait_event *recv_wait; /* recv wait_event the conn_stream
associated is waiting on (via h2_subscribe) */
struct wait_event *send_wait; /* send wait_event the conn_stream
associated is waiting on (via h2_subscribe) */
struct list list; /* To be used when adding in h2c->send_list or
h2c->fctl_lsit */
};

it crashed just like before.

Mon, 2 Nov 2020 at 11:12, Kirill A. Korinsky wrote:

> Hi,
>
> Thanks for the update.
>
> After reading Willy's recommendation and the provided commit that fixed the
> issue, I'm curious: can you "edit" this commit a bit and move `shut_tl`
> before `recv_wait` instead of the removed `wait_event`?
>
> It is quite a dumb way to confirm that the memory corruption has gone, and
> not just moved somewhere else.
>
> --
> wbr, Kirill
>
> On 2. Nov 2020, at 10:58, Maciej Zdeb  wrote:
>
> Hi,
>
> Update for people on the list that might be interested in the issue,
> because part of the discussion was private.
>
> I wanted to check Willy's suggestion and modified the h2s struct (added dummy
> fields):
>
> struct h2s {
> [...]
> uint16_t status; /* HTTP response status */
> unsigned long long body_len; /* remaining body length according to
> content-length if H2_SF_DATA_CLEN */
> struct buffer rxbuf; /* receive buffer, always valid (buf_empty or
> real buffer) */
> int dummy0;
> struct wait_event wait_event; /* Wait list, when we're attempting
> to send a RST but we can't send */
> int dummy1;
> struct wait_event *recv_wait; /* recv wait_event the conn_stream
> associated is waiting on (via h2_subscribe) */
> int dummy2;
> struct wait_event *send_wait; /* send wait_event the conn_stream
> associated is waiting on (via h2_subscribe) */
> int dummy3;
> struct list list; /* To be used when adding in h2c->send_list or
> h2c->fctl_lsit */
> struct list sending_list; /* To be used when adding in
> h2c->sending_list */
> };
>
> With such modified h2s struct, the crash did not occur.
>
> I've checked HAProxy 2.1, it crashes like 2.0.
>
> I've also checked 2.2, bisection showed that this commit:
> http://git.haproxy.org/?p=haproxy-2.2.git;a=commitdiff;h=5723f295d85febf5505f8aef6afabb6b23d6fdec;hp=f11be0ea1e8e571234cb41a2fcdde2cf2161df37
> fixed the crashes we experienced. I'm not sure how/if it fixed the memory
> corruption, it is possible that memory is still corrupted but not causing
> the crash.
>
>
>
Fri, 25 Sep 2020 at 16:25, Kirill A. Korinsky wrote:
>
>> Very interesting.
>>
>> Anyway, I can see that this piece of code was refactored some time ago:
>> https://github.com/haproxy/haproxy/commit/f96508aae6b49277dcf142caa35042678cf8e2ca
>>
>> Maybe it is worth trying the 2.2 or 2.3 branch?
>>
>> Yes, it is a blind shot and just a guess.
>>
>> --
>> wbr, Kirill
>>
>> On 25. Sep 2020, at 16:01, Maciej Zdeb  wrote:
>>
>> Yes, at the same place with the same value:
>>
>> (gdb) bt full
>> #0  0x559ce98b964b in h2s_notify_recv (h2s=0x559cef94e7e0) at
>> src/mux_h2.c:783
>> sw = 0x
>>
>>
>>
>> Fri, 25 Sep 2020 at 15:42, Kirill A. Korinsky wrote:
>>
>>> > On 25. Sep 2020, at 15:26, Maciej Zdeb  wrote:
>>> >
>>> > I was mailing outside the list with Willy and Christopher but it's
>>> worth sharing that the problem occurs even with nbthread = 1. I've managed
>>> to confirm it today.
>>>
>>>
>>> I'm curious: did it crash at the same place with the same value?
>>>
>>> --
>>> wbr, Kirill
>>>
>>>
>>>
>>
>


Re: [2.0.17] crash with coredump

2020-11-02 Thread Kirill A. Korinsky
Hi,

Thanks for the update.

After reading Willy's recommendation and the provided commit that fixed the 
issue, I'm curious: can you "edit" this commit a bit and move `shut_tl` before 
`recv_wait` instead of the removed `wait_event`?

It is quite a dumb way to confirm that the memory corruption has gone, and not 
just moved somewhere else.

--
wbr, Kirill

> On 2. Nov 2020, at 10:58, Maciej Zdeb  wrote:
> 
> Hi,
> 
> Update for people on the list that might be interested in the issue, because 
> part of the discussion was private.
> 
> I wanted to check Willy's suggestion and modified the h2s struct (added dummy 
> fields):
> 
> struct h2s {
> [...]
> uint16_t status; /* HTTP response status */
> unsigned long long body_len; /* remaining body length according to 
> content-length if H2_SF_DATA_CLEN */
> struct buffer rxbuf; /* receive buffer, always valid (buf_empty or 
> real buffer) */
> int dummy0;
> struct wait_event wait_event; /* Wait list, when we're attempting to 
> send a RST but we can't send */
> int dummy1;
> struct wait_event *recv_wait; /* recv wait_event the conn_stream 
> associated is waiting on (via h2_subscribe) */
> int dummy2;
> struct wait_event *send_wait; /* send wait_event the conn_stream 
> associated is waiting on (via h2_subscribe) */
> int dummy3;
> struct list list; /* To be used when adding in h2c->send_list or 
> h2c->fctl_lsit */
> struct list sending_list; /* To be used when adding in 
> h2c->sending_list */
> };
> 
> With such modified h2s struct, the crash did not occur.
> 
> I've checked HAProxy 2.1, it crashes like 2.0.
> 
> I've also checked 2.2, bisection showed that this commit: 
> http://git.haproxy.org/?p=haproxy-2.2.git;a=commitdiff;h=5723f295d85febf5505f8aef6afabb6b23d6fdec;hp=f11be0ea1e8e571234cb41a2fcdde2cf2161df37
> fixed the crashes we experienced. I'm not sure how/if it fixed the memory 
> corruption, it is possible that memory is still corrupted but not causing the 
> crash.
> 
> 
> 
> Fri, 25 Sep 2020 at 16:25, Kirill A. Korinsky wrote:
> Very interesting.
> 
> Anyway, I can see that this piece of code was refactored some time ago: 
> https://github.com/haproxy/haproxy/commit/f96508aae6b49277dcf142caa35042678cf8e2ca
> 
> 
> Maybe it is worth trying the 2.2 or 2.3 branch?
> 
> Yes, it is a blind shot and just a guess.
> 
> --
> wbr, Kirill
> 
>> On 25. Sep 2020, at 16:01, Maciej Zdeb wrote:
>> 
>> Yes, at the same place with the same value:
>> 
>> (gdb) bt full
>> #0  0x559ce98b964b in h2s_notify_recv (h2s=0x559cef94e7e0) at 
>> src/mux_h2.c:783
>> sw = 0x
>> 
>> 
>> 
>> Fri, 25 Sep 2020 at 15:42, Kirill A. Korinsky wrote:
>> > On 25. Sep 2020, at 15:26, Maciej Zdeb wrote:
>> >
>> > I was mailing outside the list with Willy and Christopher but it's worth 
>> > sharing that the problem occurs even with nbthread = 1. I've managed to 
>> > confirm it today.
>> 
>> 
>> I'm curious: did it crash at the same place with the same value?
>> 
>> --
>> wbr, Kirill
>> 
>> 
> 





Re: [2.0.17] crash with coredump

2020-11-02 Thread Maciej Zdeb
Hi,

Update for people on the list that might be interested in the issue,
because part of the discussion was private.

I wanted to check Willy's suggestion and modified the h2s struct (added dummy
fields):

struct h2s {
[...]
uint16_t status; /* HTTP response status */
unsigned long long body_len; /* remaining body length according to
content-length if H2_SF_DATA_CLEN */
struct buffer rxbuf; /* receive buffer, always valid (buf_empty or
real buffer) */
int dummy0;
struct wait_event wait_event; /* Wait list, when we're attempting
to send a RST but we can't send */
int dummy1;
struct wait_event *recv_wait; /* recv wait_event the conn_stream
associated is waiting on (via h2_subscribe) */
int dummy2;
struct wait_event *send_wait; /* send wait_event the conn_stream
associated is waiting on (via h2_subscribe) */
int dummy3;
struct list list; /* To be used when adding in h2c->send_list or
h2c->fctl_lsit */
struct list sending_list; /* To be used when adding in
h2c->sending_list */
};

With such modified h2s struct, the crash did not occur.
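
The dummy fields can also be filled with a known pattern at allocation time
and checked from the hot paths, to catch the overwrite closer to its source.
A rough sketch (the pattern and the check placement are arbitrary):

```c
#include <assert.h>
#include <stdint.h>

#define H2S_CANARY 0xDEADBEEFu

struct wait_event; /* as declared in the real sources */

/* simplified: only the guarded region of h2s */
struct h2s_guarded {
    uint32_t dummy1;              /* set to H2S_CANARY when the h2s is created */
    struct wait_event *recv_wait;
    uint32_t dummy2;              /* set to H2S_CANARY when the h2s is created */
};

/* call from e.g. h2s_notify_recv() to abort at the corruption site
 * instead of crashing much later on the dangling pointer */
static inline void h2s_check_canaries(const struct h2s_guarded *s)
{
    assert(s->dummy1 == H2S_CANARY);
    assert(s->dummy2 == H2S_CANARY);
}
```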

I've checked HAProxy 2.1, it crashes like 2.0.

I've also checked 2.2, bisection showed that this commit:
http://git.haproxy.org/?p=haproxy-2.2.git;a=commitdiff;h=5723f295d85febf5505f8aef6afabb6b23d6fdec;hp=f11be0ea1e8e571234cb41a2fcdde2cf2161df37
fixed the crashes we experienced. I'm not sure how/if it fixed the memory
corruption, it is possible that memory is still corrupted but not causing
the crash.



Fri, 25 Sep 2020 at 16:25, Kirill A. Korinsky wrote:

> Very interesting.
>
> Anyway, I can see that this piece of code was refactored some time ago:
> https://github.com/haproxy/haproxy/commit/f96508aae6b49277dcf142caa35042678cf8e2ca
>
> Maybe it is worth trying the 2.2 or 2.3 branch?
>
> Yes, it is a blind shot and just a guess.
>
> --
> wbr, Kirill
>
> On 25. Sep 2020, at 16:01, Maciej Zdeb  wrote:
>
> Yes, at the same place with the same value:
>
> (gdb) bt full
> #0  0x559ce98b964b in h2s_notify_recv (h2s=0x559cef94e7e0) at
> src/mux_h2.c:783
> sw = 0x
>
>
>
Fri, 25 Sep 2020 at 15:42, Kirill A. Korinsky wrote:
>
>> > On 25. Sep 2020, at 15:26, Maciej Zdeb  wrote:
>> >
>> > I was mailing outside the list with Willy and Christopher but it's
>> worth sharing that the problem occurs even with nbthread = 1. I've managed
>> to confirm it today.
>>
>>
>> I'm curious: did it crash at the same place with the same value?
>>
>> --
>> wbr, Kirill
>>
>>
>>
>