stable-bot: Bugfixes waiting for a release 2.3 (23), 2.2 (17), 2.1 (26), 2.0 (24)

2020-12-29 Thread stable-bot
Hi,

This is a friendly bot that watches fixes pending for the next haproxy-stable 
release!  One such e-mail is sent periodically once patches are waiting in the 
last maintenance branch, and an ideal release date is computed based on the 
severity of these fixes and their merge date.  Responses to this mail must be 
sent to the mailing list.


Last release 2.3.2 was issued on 2020-11-28.  There are currently 23 
patches in the queue cut down this way:
- 2 MAJOR, first one merged on 2020-12-14
- 6 MEDIUM, first one merged on 2020-12-14
- 15 MINOR, first one merged on 2020-12-14

Thus the computed ideal release date for 2.3.3 would be 2020-12-28, which was 
within the last week.

Last release 2.2.6 was issued on 2020-11-30.  There are currently 17 
patches in the queue cut down this way:
- 2 MAJOR, first one merged on 2020-12-14
- 3 MEDIUM, first one merged on 2020-12-14
- 12 MINOR, first one merged on 2020-12-14

Thus the computed ideal release date for 2.2.7 would be 2020-12-28, which was 
within the last week.

Last release 2.1.10 was issued on 2020-11-05.  There are currently 26 
patches in the queue cut down this way:
- 4 MAJOR, first one merged on 2020-11-13
- 5 MEDIUM, first one merged on 2020-11-13
- 17 MINOR, first one merged on 2020-11-06

Thus the computed ideal release date for 2.1.11 would be 2020-12-11, which was 
two weeks ago.

Last release 2.0.19 was issued on 2020-11-06.  There are currently 24 
patches in the queue cut down this way:
- 4 MAJOR, first one merged on 2020-11-13
- 5 MEDIUM, first one merged on 2020-11-13
- 15 MINOR, first one merged on 2020-11-13

Thus the computed ideal release date for 2.0.20 would be 2020-12-11, which was 
two weeks ago.

The current list of patches in the queue is:
 - 2.0, 2.1, 2.2, 2.3 - MAJOR  : spoa/python: Fixing return None
 - 2.0, 2.1           - MAJOR  : peers: fix partial message decoding
 - 2.0, 2.1           - MAJOR  : filters: Always keep all offsets up to date during data filtering
 - 2.0, 2.1           - MAJOR  : spoe: Be sure to remove all references on a released spoe applet
 - 2.2, 2.3           - MAJOR  : ring: tcp forward on ring can break the reader counter.
 - 2.0, 2.1, 2.2, 2.3 - MEDIUM : spoa/python: Fixing references to None
 - 2.0, 2.1, 2.2, 2.3 - MEDIUM : lb-leastconn: Reposition a server using the right eweight
 - 2.3                - MEDIUM : task: close a possible data race condition on a tasklet's list link
 - 2.0, 2.1, 2.2, 2.3 - MEDIUM : spoa/python: Fixing PyObject_Call positional arguments
 - 2.3                - MEDIUM : lists: Lock the element while we check if it is in a list.
 - 2.3                - MEDIUM : local log format regression.
 - 2.0, 2.1           - MEDIUM : peers: fix decoding of multi-byte length in stick-table messages
 - 2.0, 2.1           - MEDIUM : filters: Forward all filtered data at the end of http filtering
 - 2.0, 2.1, 2.2, 2.3 - MINOR  : spoa/python: Cleanup references for failed Module Addobject operations
 - 2.0, 2.1           - MINOR  : peers: Do not ignore a protocol error for dictionary entries.
 - 2.0, 2.1           - MINOR  : peers: Missing TX cache entries reset.
 - 2.2, 2.3           - MINOR  : http-check: Use right condition to consider HTX message as full
 - 2.0, 2.1, 2.2, 2.3 - MINOR  : lua: Some lua init operation are processed unsafe
 - 2.0, 2.1           - MINOR  : lua: set buffer size during map lookups
 - 2.1                - MINOR  : http-htx: Just warn if payload of an errorfile doesn't match the C-L
 - 2.1, 2.2, 2.3      - MINOR  : lua: missing "\n" in error message
 - 2.3                - MINOR  : listener: use sockaddr_in6 for IPv6
 - 2.0, 2.1           - MINOR  : http-fetch: Fix calls w/o parentheses of the cookie sample fetches
 - 2.0, 2.1, 2.2, 2.3 - MINOR  : lua: lua-load doesn't check its parameters
 - 2.0, 2.1, 2.2, 2.3 - MINOR  : lua: warn when registering action, conv, sf, cli or applet multiple times
 - 2.0, 2.1           - MINOR  : http-ana: Don't wait for the body of CONNECT requests
 - 2.0, 2.1           - MINOR  : http-fetch: Extract cookie value even when no cookie name
 - 2.2, 2.3           - MINOR  : mux-h1: Handle keep-alive timeout for idle frontend connections
 - 2.0, 2.1, 2.2, 2.3 - MINOR  : spoa/python: Cleanup ipaddress objects if initialization fails
 - 2.0, 2.1           - MINOR  : pattern: a sample marked as const could be written
 - 2.2, 2.3           - MINOR  : tcpcheck: Don't rearm the check timeout on each read
 - 2.0, 2.1, 2.2, 2.3 - MINOR  : lua: Post init register function are not executed beyond the first one
 - 2.0, 2.1, 2.2, 2.3 - MINOR  : tools: 

Re: [PATCH] enable coverity daily scan again

2020-12-29 Thread Илья Шипицин
On Tue, 29 Dec 2020 at 17:15, Tim Düsterhus wrote:

> Ilya,
>
> On 29.12.20 at 13:01, Илья Шипицин wrote:
> > It is coverity weirdness. Build was submitted. If you follow to findings,
> > dates are updated
>
> Perfect, thanks. Looking at the overview page:
> https://scan.coverity.com/projects/haproxy
>
> It appears as if the "Components" list is outdated since the refactoring
> of the repository. As an example the ebtree stuff now is in
> include/import/eb*tree.h. It probably should be updated.
>

I do not have access to that area


>
> Best regards
> Tim Düsterhus
>


Re: [PR] Add srvkey option to stick-table

2020-12-29 Thread Thayne McCombs
> I am not sure we will import the second patch for the reg test: it makes
use of curl. We try not to use external programs for reg tests except
when there is no choice.

Fine by me. I actually meant to say that my second attempt at writing a reg
test should replace that patch. Apparently I forgot.

Thayne McCombs
Senior Software Engineer
tha...@lucidchart.com



Lucid.co 


On Tue, Dec 29, 2020 at 2:52 AM Frederic Lecaille 
wrote:

> On 12/19/20 9:07 AM, Thayne McCombs wrote:
> >  From 731d6a00f3cf0c70d7a8a916e1984c225f3a9dd6 Mon Sep 17 00:00:00 2001
> > From: Thayne McCombs 
> > Date: Sat, 19 Dec 2020 00:59:35 -0700
> > Subject: [PATCH] Add test for stickiness using the server address for the
> >   server key
> >
> > ---
> >   reg-tests/stickiness/srvkey-addr.vtc | 259 +++
> >   1 file changed, 259 insertions(+)
> >   create mode 100644 reg-tests/stickiness/srvkey-addr.vtc
> >
> > diff --git a/reg-tests/stickiness/srvkey-addr.vtc
> > b/reg-tests/stickiness/srvkey-addr.vtc
> > new file mode 100644
> > index 0..b5675089f
> > --- /dev/null
> > +++ b/reg-tests/stickiness/srvkey-addr.vtc
> > @@ -0,0 +1,259 @@
> > +vtest "A reg test for stickiness with srvkey addr"
> > +feature ignore_unknown_macro
> > +
> > +
> > +# The aim of this test is to check that "stick on" rules
> > +# do the job they are supposed to do.
> > +# If we remove one of the "stick on" rule, this script fails.
> > +
> > +#REQUIRE_VERSION=2.0
>
> Ok for this test but I think we should use 2.4, which is the current
> development version.
>
> Willy, I think we can import this patch after having merged the previous
> one [1/2] patch about this feature implementation.
>


Bid Writing, Major Donors and Volunteering Workshops

2020-12-29 Thread NFP Workshops



Re: [PATCH] enable coverity daily scan again

2020-12-29 Thread Tim Düsterhus
Ilya,

On 29.12.20 at 13:01, Илья Шипицин wrote:
> It is coverity weirdness. Build was submitted. If you follow to findings,
> dates are updated

Perfect, thanks. Looking at the overview page:
https://scan.coverity.com/projects/haproxy

It appears as if the "Components" list is outdated since the refactoring
of the repository. As an example the ebtree stuff now is in
include/import/eb*tree.h. It probably should be updated.

Best regards
Tim Düsterhus



Re: [PATCH] enable coverity daily scan again

2020-12-29 Thread Илья Шипицин
It is coverity weirdness. Build was submitted. If you follow to findings,
dates are updated

On Tue, Dec 29, 2020, 4:47 PM Tim Düsterhus  wrote:

> Willy,
>
> On 28.12.20 at 12:05, Willy Tarreau wrote:
> > Thanks to the name, I found it in a mail from Ilya on 2019-08-06 and
> > just uploaded it according to your procedure.
> >
> > Just push the patch now. Let's see if it works.
> >
>
> At least the build ran tonight:
> https://github.com/haproxy/haproxy/runs/1619717275?check_suite_focus=true
>
> I'm a bit confused by the last line "Coverity Scan upload failed: Build
> successfully submitted..".
>
> It says both "failed" and "successfully". I'm not sure if that is
> correct but Ilya will hopefully be able to tell us if it worked.
>
> Best regards
> Tim Düsterhus
>


Re: [PATCH] enable coverity daily scan again

2020-12-29 Thread Tim Düsterhus
Willy,

On 28.12.20 at 12:05, Willy Tarreau wrote:
> Thanks to the name, I found it in a mail from Ilya on 2019-08-06 and
> just uploaded it according to your procedure.
> 
> Just push the patch now. Let's see if it works.
> 

At least the build ran tonight:
https://github.com/haproxy/haproxy/runs/1619717275?check_suite_focus=true

I'm a bit confused by the last line "Coverity Scan upload failed: Build
successfully submitted..".

It says both "failed" and "successfully". I'm not sure if that is
correct but Ilya will hopefully be able to tell us if it worked.

Best regards
Tim Düsterhus



[PATCH v2] BUG/MEDIUM: cache: Fix hash collision in `accept-encoding` handling for `Vary`

2020-12-29 Thread Tim Duesterhus
This patch fixes GitHub Issue #988. Commit 
ce9e7b25217c46db1ac636b2c885a05bf91ae57e
was not sufficient, because it fell back to a hash comparison if the bitmap
of known encodings was not acceptable, instead of directly returning that the
cached response is not compatible.

This patch also extends the reg-test to test the hash collision that was
mentioned in #988.

Vary handling is 2.4, no backport needed.
---
 reg-tests/cache/vary.vtc | 56 +---
 reg-tests/cache/vary_accept_encoding.vtc | 32 ++
 src/cache.c  | 29 +++-
 3 files changed, 91 insertions(+), 26 deletions(-)

diff --git a/reg-tests/cache/vary.vtc b/reg-tests/cache/vary.vtc
index a840799f3..be79b6ad2 100644
--- a/reg-tests/cache/vary.vtc
+++ b/reg-tests/cache/vary.vtc
@@ -5,7 +5,8 @@ varnishtest "Vary support"
 feature ignore_unknown_macro
 
 server s1 {
-   # Response varying on "accept-encoding"
+   # Response varying on "accept-encoding" with
+   # an unacceptable content-encoding
rxreq
expect req.url == "/accept-encoding"
txresp -hdr "Content-Encoding: gzip" \
@@ -16,6 +17,15 @@ server s1 {
# Response varying on "accept-encoding"
rxreq
expect req.url == "/accept-encoding"
+   txresp -hdr "Content-Encoding: gzip" \
+   -hdr "Vary: accept-encoding" \
+   -hdr "Cache-Control: max-age=5" \
+   -bodylen 45
+
+   # Response varying on "accept-encoding" with
+   # no content-encoding
+   rxreq
+   expect req.url == "/accept-encoding"
txresp -hdr "Content-Type: text/plain" \
-hdr "Vary: accept-encoding" \
-hdr "Cache-Control: max-age=5" \
@@ -57,7 +67,7 @@ server s1 {
expect req.url == "/referer-accept-encoding"
txresp -hdr "Vary: referer,accept-encoding" \
-hdr "Cache-Control: max-age=5" \
-   -hdr "Content-Encoding: gzip" \
+   -hdr "Content-Encoding: br" \
-bodylen 54
 
rxreq
@@ -72,14 +82,14 @@ server s1 {
expect req.url == "/multiple_headers"
txresp -hdr "Vary: accept-encoding" \
   -hdr "Cache-Control: max-age=5" \
-   -hdr "Content-Encoding: gzip" \
+   -hdr "Content-Encoding: br" \
   -bodylen 155
 
rxreq
expect req.url == "/multiple_headers"
txresp -hdr "Vary: accept-encoding" \
   -hdr "Cache-Control: max-age=5" \
-   -hdr "Content-Encoding: gzip" \
+   -hdr "Content-Encoding: br" \
   -bodylen 166
 
 
@@ -163,6 +173,16 @@ client c1 -connect ${h1_fe_sock} {
expect resp.http.content-encoding == "gzip"
expect resp.bodylen == 45
 
+   # The response for the first request had an unacceptable 
`content-encoding`
+   # which might happen if that's the only thing the server supports, but
+   # we must not cache that and instead defer to the server.
+   txreq -url "/accept-encoding" -hdr "Accept-Encoding: first_value"
+   rxresp
+   expect resp.status == 200
+   expect resp.http.content-encoding == "gzip"
+   expect resp.bodylen == 45
+   expect resp.http.X-Cache-Hit == 0
+
txreq -url "/accept-encoding" -hdr "Accept-Encoding: second_value"
rxresp
expect resp.status == 200
@@ -170,11 +190,15 @@ client c1 -connect ${h1_fe_sock} {
expect resp.http.content-type == "text/plain"
expect resp.http.X-Cache-Hit == 0
 
+   # This request matches the cache entry for the request above, despite
+   # matching the `accept-encoding` of the first request, because the
+   # request above only has the `identity` encoding, which is implicitly
+   # added unless explicitly forbidden.
txreq -url "/accept-encoding" -hdr "Accept-Encoding: first_value"
rxresp
expect resp.status == 200
-   expect resp.bodylen == 45
-   expect resp.http.content-encoding == "gzip"
+   expect resp.bodylen == 48
+   expect resp.http.content-type == "text/plain"
expect resp.http.X-Cache-Hit == 1
 
txreq -url "/accept-encoding" -hdr "Accept-Encoding: second_value"
@@ -227,7 +251,7 @@ client c1 -connect ${h1_fe_sock} {
 
# Mixed Vary (Accept-Encoding + Referer)
txreq -url "/referer-accept-encoding" \
-   -hdr "Accept-Encoding: first_value,second_value" \
+   -hdr "Accept-Encoding: br, gzip" \
-hdr "Referer: referer"
rxresp
expect resp.status == 200
@@ -235,7 +259,7 @@ client c1 -connect ${h1_fe_sock} {
expect resp.http.X-Cache-Hit == 0
 
txreq -url "/referer-accept-encoding" \
-   -hdr "Accept-Encoding: first_value" \
+   -hdr "Accept-Encoding: br" \
-hdr "Referer: other-referer"
rxresp
expect resp.status == 200
@@ -243,7 +267,7 @@ client c1 

Re: [PATCH] BUG/MEDIUM: cache: Fix hash collision in `accept-encoding` handling for `Vary`

2020-12-29 Thread Tim Düsterhus
Remi,

On 29.12.20 at 10:31, Remi Tricot-Le Breton wrote:
> Concerning the vary.vtc, the new behavior can be explained as follows:
> Because of the absence of content-encoding in the second response of the
> vtc, its encoding is set to "identity", which is accepted by default by
> any client not rejecting it explicitly. So when looking into the cache
> for a response that might suit a request having an "Accept-Encoding:
> first_value" header, this specific one matches.

Which I would say is HAProxy working absolutely correctly with the test
testing something incorrect.

>> On a related note, I think we're still having an issue that was revealed
>> by your discovery of the bug and the fix. Indeed, initially there was
>> only the hash for the list of values. The goal was to simply have an
>> easy to compare value for a set of known encodings. Now for the known
>> ones we have a bitmap. Thus we shouldn't let the known values contribute
>> to the hash anymore.
>>
>> I remain convinced that now that we have the bitmap, we're probably
>> deploying too many efforts to allow caching of unknown encodings with
>> all the consequences it implies, and that just like in the past we would
>> simply not cache when Vary was presented, not caching when an unknown
>> encoding is presented would seem perfectly acceptable to me, especially
>> given the already wide list of known encodings. And when I see the test
>> using "Accept-Encoding: br,gzip,xxx,jdcqiab", this comforts me in this
>> idea, because my feeling here is that the client supports two known
>> encodings and other ones, if we find an object in the cache using no
>> more than these encodings, it can be used, otherwise the object must
>> be fetched from the server, and the response should only be cached if
>> it uses known encodings. And if the server uses "jdcqiab" as one of
>> the encodings I don't care if it results in the object not being cached.
> 
> It would not be that hard to discard responses that have an unknown
> encoding but for now the content-encoding of the response is only parsed
> when the response has a vary header (it could actually have been limited
> to a response that varies on accept-encoding specifically). And
> considering that we probably don't want to parse the content-encoding
> all the time, responses with an unknown encoding would only be
> considered uncacheable when they have a vary, and would be cached when
> they don't. I'm totally fine with it if you are too, it would only
> require a few lines of explanation in the doc.

I'll leave that up to you and just fix the test and send an updated
patch to fix *this* specific issue we're currently dealing with.

Best regards
Tim Düsterhus



Re: [PATCH] BUG/MEDIUM: cache: Fix hash collision in `accept-encoding` handling for `Vary`

2020-12-29 Thread Tim Düsterhus
Willy,

On 29.12.20 at 07:52, Willy Tarreau wrote:
> Thanks for this, but while your patch looks valid, the regtest fails
> for me:
> 
> [...]

Oh, indeed. I only tested the reg-test I modified and forgot to run all
the cache tests after finishing up my changes :-(

> I don't know if it's the code or the test, but we're having an issue here
> it seems. Note that this response has no content-encoding so there might
> be something caused by this.

I'd say it's the test. For the first request the server responded with
an unacceptable `content-encoding`, the third request expects that this
unacceptable `content-encoding` is being delivered from the cache.

In the real world it might be that this is the only encoding the server
knows, but HAProxy as the cache cannot know this and must defer to the
server.

> On a related note, I think we're still having an issue that was revealed
> by your discovery of the bug and the fix. Indeed, initially there was
> only the hash for the list of values. The goal was to simply have an
> easy to compare value for a set of known encodings. Now for the known
> ones we have a bitmap. Thus we shouldn't let the known values contribute
> to the hash anymore.

Indeed when suggesting to use a bitmap I had in mind that the hash would
be fully replaced by the bitmap.

> I remain convinced that now that we have the bitmap, we're probably
> deploying too many efforts to allow caching of unknown encodings with
> all the consequences it implies, and that just like in the past we would
> simply not cache when Vary was presented, not caching when an unknown
> encoding is presented would seem perfectly acceptable to me, especially
> given the already wide list of known encodings. And when I see the test
> using "Accept-Encoding: br,gzip,xxx,jdcqiab", this comforts me in this
> idea, because my feeling here is that the client supports two known
> encodings and other ones, if we find an object in the cache using no
> more than these encodings, it can be used, otherwise the object must
> be fetched from the server, and the response should only be cached if
> it uses known encodings. And if the server uses "jdcqiab" as one of
> the encodings I don't care if it results in the object not being cached.

Yes, I agree. Especially since adding new known encodings is easy to do
(add a bit, add logic to add that bit) and can easily be backported to
stable branches.

Best regards
Tim Düsterhus



Re: [PR] Add srvkey option to stick-table

2020-12-29 Thread Frederic Lecaille

On 12/19/20 9:07 AM, Thayne McCombs wrote:

 From 731d6a00f3cf0c70d7a8a916e1984c225f3a9dd6 Mon Sep 17 00:00:00 2001
From: Thayne McCombs 
Date: Sat, 19 Dec 2020 00:59:35 -0700
Subject: [PATCH] Add test for stickiness using the server address for the
  server key

---
  reg-tests/stickiness/srvkey-addr.vtc | 259 +++
  1 file changed, 259 insertions(+)
  create mode 100644 reg-tests/stickiness/srvkey-addr.vtc

diff --git a/reg-tests/stickiness/srvkey-addr.vtc 
b/reg-tests/stickiness/srvkey-addr.vtc

new file mode 100644
index 0..b5675089f
--- /dev/null
+++ b/reg-tests/stickiness/srvkey-addr.vtc
@@ -0,0 +1,259 @@
+vtest "A reg test for stickiness with srvkey addr"
+feature ignore_unknown_macro
+
+
+# The aim of this test is to check that "stick on" rules
+# do the job they are supposed to do.
+# If we remove one of the "stick on" rule, this script fails.
+
+#REQUIRE_VERSION=2.0


Ok for this test but I think we should use 2.4, which is the current 
development version.


Willy, I think we can import this patch after having merged the previous 
one [1/2] patch about this feature implementation.




Re: [PR] Add srvkey option to stick-table

2020-12-29 Thread Frederic Lecaille

On 12/16/20 8:39 AM, Thayne McCombs wrote:

On 12/10/20 1:31 AM, Frederic Lecaille wrote:

It would be preferable to send all your patches, so that others than me
may review your work (no diff between different versions of patches) and
continue to split your work in several patches.


Ok, here is what I have so far as two patches (I combined feedback into the 
original commit):



Hello Thayne,

Your work sounds perfect to me.

I am not sure we will import the second patch for the reg test: it makes 
use of curl. We try not to use external programs for reg tests except 
when there is no choice. This is not the case here.


Willy, could you have a look at the first patch, especially the sa2str() 
implementation? I have doubts about its portability:


   const int max_length = sizeof(struct sockaddr_un) - offsetof(struct 
sockaddr_un, sun_path) - 1;

   return memprintf(, "abns@%.*s", max_length, path+1);


As far as I can see, everywhere unix sockets are used in the haproxy code, 
we consider ->sun_path to be NUL-terminated.




Re: [PATCH] BUG/MEDIUM: cache: Fix hash collision in `accept-encoding` handling for `Vary`

2020-12-29 Thread Remi Tricot-Le Breton

Hello Willy, Tim,

On 29/12/2020 07:52, Willy Tarreau wrote:

Hi Tim,

On Mon, Dec 28, 2020 at 12:47:52PM +0100, Tim Duesterhus wrote:

This patch fixes GitHub Issue #988. Commit 
ce9e7b25217c46db1ac636b2c885a05bf91ae57e
was not sufficient, because it fell back to a hash comparison if the bitmap
of known encodings was not acceptable, instead of directly returning that the
cached response is not compatible.

This patch also extends the reg-test to test the hash collision that was
mentioned in #988.

Vary handling is 2.4, no backport needed.

Thanks for this, but while your patch looks valid, the regtest fails
for me:

 c1http[ 0] |HTTP/1.1
 c1http[ 1] |200
 c1http[ 2] |OK
 c1http[ 3] |content-type: text/plain
 c1http[ 4] |vary: accept-encoding
 c1http[ 5] |cache-control: max-age=5
 c1http[ 6] |content-length: 48
 c1http[ 7] |age: 0
 c1http[ 8] |x-cache-hit: 1
 c1c-l|!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNO
 c1bodylen = 48
**   c1=== expect resp.status == 200
 c1EXPECT resp.status (200) == "200" match
**   c1=== expect resp.bodylen == 45
 c1EXPECT resp.bodylen (48) == "45" failed

I don't know if it's the code or the test, but we're having an issue here
it seems. Note that this response has no content-encoding so there might
be something caused by this.


The specific accept-encoding test works but the regular vary.vtc does 
fail. But it might need a test change rather than a code change because 
it exhibits a use case we might not want to manage:
If a server responds with a known encoding to a request that only has 
unknown encodings, we won't be able to answer directly from the cache, 
even if the accept-encoding header values match. This is caused by the 
"return (ref.encoding_bitmap & new.encoding_bitmap) != 
ref.encoding_bitmap" line in the accept_encoding_hash_cmp function.
This is rather easy to do from a vtc but I don't know if this has a very 
high chance of happening on real networks. And considering that the 
alternative is a possible cache corruption because of hash collisions, 
we could probably add it as a known limitation.


Concerning the vary.vtc, the new behavior can be explained as follows: 
Because of the absence of content-encoding in the second response of the 
vtc, its encoding is set to "identity", which is accepted by default by 
any client not rejecting it explicitly. So when looking into the cache 
for a response that might suit a request having an "Accept-Encoding: 
first_value" header, this specific one matches.



On a related note, I think we're still having an issue that was revealed
by your discovery of the bug and the fix. Indeed, initially there was
only the hash for the list of values. The goal was to simply have an
easy to compare value for a set of known encodings. Now for the known
ones we have a bitmap. Thus we shouldn't let the known values contribute
to the hash anymore.

I remain convinced that now that we have the bitmap, we're probably
deploying too many efforts to allow caching of unknown encodings with
all the consequences it implies, and that just like in the past we would
simply not cache when Vary was presented, not caching when an unknown
encoding is presented would seem perfectly acceptable to me, especially
given the already wide list of known encodings. And when I see the test
using "Accept-Encoding: br,gzip,xxx,jdcqiab", this comforts me in this
idea, because my feeling here is that the client supports two known
encodings and other ones, if we find an object in the cache using no
more than these encodings, it can be used, otherwise the object must
be fetched from the server, and the response should only be cached if
it uses known encodings. And if the server uses "jdcqiab" as one of
the encodings I don't care if it results in the object not being cached.


It would not be that hard to discard responses that have an unknown 
encoding but for now the content-encoding of the response is only parsed 
when the response has a vary header (it could actually have been limited 
to a response that varies on accept-encoding specifically). And 
considering that we probably don't want to parse the content-encoding 
all the time, responses with an unknown encoding would only be 
considered uncacheable when they have a vary, and would be cached when 
they don't. I'm totally fine with it if you are too, it would only 
require a few lines of explanation in the doc.



Just my two cents,
Willy



Rémi