Assignment of Agreement(s) between The Gap, Inc., and its affiliates and HAPROXY and its affiliates

2019-12-11 Thread OldNavyContractsSeparation
Hello,

As you may be aware, The Gap, Inc. (“Gap”) recently announced its plan to 
separate the Old Navy brand (“Old Navy”) into a separate publicly traded 
company (the “Transaction”). The Transaction is expected to take place in 
2020. Further information about the Transaction can be obtained via the 
following link: 
https://corporate.gapinc.com/en-us/articles/2019/02/gap-inc-announces-plan-to-separate-into-two-indepe

In preparing for the Transaction, Gap and its applicable affiliates 
(collectively, “Assignor”) will assign its rights and obligations under all 
agreements, to the extent related to Old Navy (collectively, the “Agreements”), 
to Old Navy, LLC (“Assignee”), effective March 1, 2020 (the “Effective Date”).

Assignee will assume all of the rights and obligations under the Agreements 
related to Old Navy, to the same extent and with the same force and effect as 
if Assignee had been named as a party thereto, in place and instead of Assignor 
(the “Assigned Rights and Obligations”).

For the avoidance of doubt, the Assigned Rights and Obligations relate solely to 
Old Navy, and Assignor will retain all other rights and obligations under the 
Agreements existing as of the effective date of the assignment, including all 
rights and obligations related to other products and services described in the 
Agreements which are not related to Old Navy.

Notwithstanding anything else contained in this letter or in the Agreements, 
the rights and obligations of Assignor and Assignee with respect to the 
performance under the Agreements will be several (and not joint, or joint and 
several). Under no circumstance shall Assignee or Assignor be liable to you for 
the other’s failure to fulfill its respective obligations under the Agreements, 
nor shall termination of the Agreements in relation to one party constitute 
termination of the Agreements for the other.  Except as specified in this 
letter, the provisions of the Agreements will remain in full force and effect, 
unamended, and the Agreements will be read in conjunction with this letter from 
and after the date hereof.

While we are notifying you of this event, there is no action required of you 
(apart from updating your systems with the new name and information) and it 
will have no impact on our business relationship or performance of the contract.

Please see attached for further details.

Thank You

The Gap, Inc.


Notice to Assign_HAPROXY.pdf
Description: Notice to Assign_HAPROXY.pdf


[ANNOUNCE] haproxy-2.0.11

2019-12-11 Thread Willy Tarreau
Hi,

HAProxy 2.0.11 was released on 2019/12/11. It added 38 new commits
after version 2.0.10.

So in order not to paraphrase myself too much, this version contains
the same fixes as in 2.1.1. Nothing alarming at all, essentially bugs
causing high CPU usage with threads that cannot sleep, a rare case of
server timeouts caused by splicing, and a rare risk of request/response
mixup when using H1 on the backend and an H2 client aborts and tries
again following a precise sequence that's very hard to reproduce.

Just like for 2.1.1, I suggest that every user upgrade in order to avoid
falling into one of these traps, which are particularly difficult to identify
and troubleshoot, though there's really no need to hurry. All those who
encountered strange behaviors must definitely update before reporting an
issue though.

We'll progressively slow down on 1.9 releases to motivate users to jump
to 2.1 or 2.0, especially in such situations where there's nothing really
problematic. The goal is that in 3-4 months from now we can definitely
drop it.

As a reminder, 1.5 turns end-of-life at the end of this year. After having
done another round of 1.6 I still failed to spot really compelling fixes
that would warrant another release. If in the future I get an occasional
request for extra backports (especially from distros), I may occasionally
provide some help as time permits, but 1.5 and 1.6 are pretty close and
picking from 1.6 is often reasonably straightforward so I don't even
think that will happen.

Please find the usual URLs below :
   Site index   : http://www.haproxy.org/
   Discourse: http://discourse.haproxy.org/
   Slack channel: https://slack.haproxy.org/
   Issue tracker: https://github.com/haproxy/haproxy/issues
   Sources  : http://www.haproxy.org/download/2.0/src/
   Git repository   : http://git.haproxy.org/git/haproxy-2.0.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-2.0.git
   Changelog: http://www.haproxy.org/download/2.0/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy
---
Complete changelog :
Christopher Faulet (6):
  BUG/MINOR: contrib/prometheus-exporter: Use HTX errors and not legacy ones
  BUG/MINOR: http-htx: Don't make http_find_header() fail if the value is empty
  BUG/MEDIUM: mux-h1: Never reuse H1 connection if a shutw is pending
  BUG/MINOR: mux-h1: Don't rely on CO_FL_SOCK_RD_SH to set H1C_F_CS_SHUTDOWN
  BUG/MINOR: mux-h1: Fix conditions to know whether or not we may receive data
  BUG/MINOR: mux-h1: Be sure to set CS_FL_WANT_ROOM when EOM can't be added

Emmanuel Hocdet (1):
  BUG/MINOR: ssl: certificate choice can be unexpected with openssl >= 1.1.1

Jerome Magnin (1):
  BUG/MINOR: stream: init variables when the list is empty

Julien Pivotto (1):
  DOC: proxies: HAProxy only supports 3 connection modes

Mathias Weiersmueller (1):
  DOC: clarify matching strings on binary fetches

Olivier Houchard (3):
  BUG/MEDIUM: tasks: Make sure we switch wait queues in task_set_affinity().
  BUG/MEDIUM: checks: Make sure we set the task affinity just before connecting.
  BUG/MEDIUM: kqueue: Make sure we report read events even when no data.

Tim Duesterhus (1):
  DOC: Clarify behavior of server maxconn in HTTP mode

William Dauchy (1):
  BUG/MINOR: contrib/prometheus-exporter: decode parameter and value only

Willy Tarreau (23):
  DOC: move the "group" keyword at the right place
  BUG/MEDIUM: stream-int: don't subscribed for recv when we're trying to flush data
  BUG/MINOR: stream-int: avoid calling rcv_buf() when splicing is still possible
  BUG/MEDIUM: listener/thread: fix a race when pausing a listener
  BUG/MINOR: proxy: make soft_stop() also close FDs in LI_PAUSED state
  BUG/MINOR: listener/threads: always use atomic ops to clear the FD events
  BUG/MINOR: listener: also clear the error flag on a paused listener
  BUG/MEDIUM: listener/threads: fix a remaining race in the listener's accept()
  DOC: document the listener state transitions
  BUG/MAJOR: dns: add minimalist error processing on the Rx path
  BUG/MEDIUM: proto_udp/threads: recv() and send() must not be exclusive.
  DOC: listeners: add a few missing transitions
  BUG/MINOR: tasks: only requeue a task if it was already in the queue
  BUILD/MINOR: ssl: shut up a build warning about format truncation
  BUILD/MINOR: tools: shut up the format truncation warning in get_gmt_offset()
  BUILD: do not disable -Wformat-truncation anymore
  DOC: remove references to the outdated architecture.txt
  BUG/MINOR: log: fix minor resource leaks on logformat error path
  BUG/MINOR: mworker: properly pass SIGTTOU/SIGTTIN to workers
  BUG/MINOR: listener: do not immediately resume on transient error
  BUG/MINOR: server: make "agent-addr" work on default-server line
  BUG/MINOR: listener: fix off-by-one in state name check
  

[ANNOUNCE] haproxy-2.1.1

2019-12-11 Thread Willy Tarreau
Hi,

HAProxy 2.1.1 was released on 2019/12/11. It added 43 new commits
after version 2.1.0.

Overall I must say I'm particularly happy about the positive effect of the
long stabilization period prior to the final release. It's been slightly
more than two weeks since 2.1.0 and we didn't have a single regression yet,
all bugs fixed were already present in earlier releases. As a result there
is nothing dramatic, mostly some annoyances, so it's the right moment to
flush the pipe and provide a cleaner version to all those who had not yet
started to upgrade. Only one bug was tagged MAJOR, it's about the DNS. It's
been there for a while, is unlikely to strike but when it strikes it
requires a restart of the process and there is no workaround, which is why
it's tagged as such.

For the rest, thanks to a lot of quite detailed reports, we could address a
long tail of bugs causing threads or processes to spin like crazy. Some of
them were caused by a race condition in the listener code when using threads,
some by the DNS, others by health checks. Another issue can occasionally
cause server timeouts to be logged when using splicing. Overall nothing
alarming. I suggest that every user upgrade to avoid falling into such traps,
though there's really no need to hurry.

Please find the usual URLs below :
   Site index   : http://www.haproxy.org/
   Discourse: http://discourse.haproxy.org/
   Slack channel: https://slack.haproxy.org/
   Issue tracker: https://github.com/haproxy/haproxy/issues
   Sources  : http://www.haproxy.org/download/2.1/src/
   Git repository   : http://git.haproxy.org/git/haproxy-2.1.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-2.1.git
   Changelog: http://www.haproxy.org/download/2.1/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy
---
Complete changelog :
Christopher Faulet (9):
  BUG/MINOR: h1: Don't test the host header during response parsing
  BUG/MINOR: http-htx: Don't make http_find_header() fail if the value is empty
  BUG/MINOR: fcgi-app: Make the directive pass-header case insensitive
  BUG/MINOR: stats: Fix HTML output for the frontends heading
  BUG/MEDIUM: mux-h1: Never reuse H1 connection if a shutw is pending
  BUG/MINOR: mux-h1: Don't rely on CO_FL_SOCK_RD_SH to set H1C_F_CS_SHUTDOWN
  BUG/MINOR: mux-h1: Fix conditions to know whether or not we may receive data
  BUG/MINOR: mux-h1: Be sure to set CS_FL_WANT_ROOM when EOM can't be added
  BUG/MEDIUM: mux-fcgi: Handle cases where the HTX EOM block cannot be inserted

Emmanuel Hocdet (2):
  BUG/MINOR: ssl: fix SSL_CTX_set1_chain compatibility for openssl < 1.0.2
  BUG/MINOR: ssl: certificate choice can be unexpected with openssl >= 1.1.1

Julien Pivotto (2):
  DOC: Fix ordered list in summary
  DOC: proxies: HAProxy only supports 3 connection modes

Mathias Weiersmueller (1):
  DOC: clarify matching strings on binary fetches

Olivier Houchard (3):
  BUG/MEDIUM: tasks: Make sure we switch wait queues in task_set_affinity().
  BUG/MEDIUM: checks: Make sure we set the task affinity just before connecting.
  BUG/MEDIUM: kqueue: Make sure we report read events even when no data.

Tim Duesterhus (1):
  DOC: Clarify behavior of server maxconn in HTTP mode

William Dauchy (1):
  BUG/MINOR: contrib/prometheus-exporter: decode parameter and value only

William Lallemand (3):
  DOC: ssl/cli: set/commit/abort ssl cert
  BUG/MINOR: ssl/cli: 'ssl cert' cmd only usable w/ admin rights
  BUG/MINOR: ssl/cli: don't overwrite the filters variable

Willy Tarreau (21):
  BUILD/MINOR: trace: fix use of long type in a few printf format strings
  DOC: move the "group" keyword at the right place
  BUG/MEDIUM: stream-int: don't subscribed for recv when we're trying to flush data
  BUG/MINOR: stream-int: avoid calling rcv_buf() when splicing is still possible
  BUG/MEDIUM: listener/thread: fix a race when pausing a listener
  BUG/MINOR: proxy: make soft_stop() also close FDs in LI_PAUSED state
  BUG/MINOR: listener/threads: always use atomic ops to clear the FD events
  BUG/MINOR: listener: also clear the error flag on a paused listener
  BUG/MEDIUM: listener/threads: fix a remaining race in the listener's accept()
  DOC: document the listener state transitions
  BUG/MAJOR: dns: add minimalist error processing on the Rx path
  BUG/MEDIUM: proto_udp/threads: recv() and send() must not be exclusive.
  DOC: listeners: add a few missing transitions
  BUG/MINOR: tasks: only requeue a task if it was already in the queue
  DOC: remove references to the outdated architecture.txt
  BUG/MINOR: log: fix minor resource leaks on logformat error path
  BUG/MINOR: mworker: properly pass SIGTTOU/SIGTTIN to workers
  BUG/MINOR: listener: do not immediately resume on transient error
  BUG/MINOR: server: make "agent-addr" work on default-server line

no log on http-keep-alive race

2019-12-11 Thread Michał Pasierb
Hello,

I have a situation where HAProxy receives a request but doesn't log it. I
would like to know whether this is desired behaviour or an actual bug.

To replicate this issue:
1) create HAProxy config which doesn't disable http-keep-alive
2) on backend server set http-keep-alive timeout to a low value of 2 seconds
3) run a script which sends requests in a loop and waits say 1.996 seconds
between each iteration

From time to time, there will be no response from HAProxy. This is
understandable. Connecting directly to the backend server, without using
HAProxy, gives the same result. This is all desired.

However, there is no log indicating that HAProxy was processing a request
and that it failed on the server side.

This issue has already been discussed at
https://discourse.haproxy.org/t/haproxy-1-7-11-intermittently-closes-connection-sends-empty-response-on-post-requests/3052/5

I understand this configuration is clearly wrong, because the timeout on
the server should be higher than on HAProxy. In this scenario, however, I'm
responsible for the HAProxy configuration and somebody else is responsible
for the configuration of the servers in the backends. I cannot know whether
every server is correctly configured, so I would like to enable logging of
such requests.

I have tried versions 1.7, 1.8 and 2.0 with different options such as
logasap. All behave the same.

Regards,
Michael


BadREq

2019-12-11 Thread Marco Nietz

Hi,

I'm running HAProxy 1.8.21 on a Debian 9 box.

We use stick-tables to track HTTP connections and the HTTP error rate, and 
block clients (bots) that cause a high error rate. But today I noticed a lot 
of 400 / Bad Request errors (~15/s) in the logs. The rate definitely exceeds 
the defined limit, but the IP address wasn't blocked. Hence my question: does 
HTTP error 400 not increase the http_err_rate counter? The state is CR, so 
the requests do not reach our backend servers, but my expectation is that 
they get blocked at the load balancer. The configuration below works fine 
with 404 or 401 errors.


Here's a sample logfile entry:

Dec 11 11:56:40 lb01 haproxy[41488]: [REDACTED]:3641 
[11/Dec/2019:11:56:40.103] production~ production/ -1/-1/-1/-1/98 
400 0 - - CR-- 461/461/0/0/0 0/0 ""


Configuration:

stick-table type ip size 5M expire 2h store http_req_rate(60s),http_err_rate(60s),gpt0

http-request track-sc0 src unless { src -f /etc/haproxy/whitelist.acl }
http-request sc-set-gpt0(0) 1 if { sc_http_err_rate(0) ge LIMIT }
tcp-request connection reject if { src,table_gpt0(production) eq 1 }
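[Editor's note: one hedged reading of the behaviour is that a request logged in the CR state fails before the http-request rule set runs, so track-sc0 never executes and the source is never tracked. A sketch that moves tracking to the connection level instead (same table, file and names as above; whether a parse-level 400 then increments http_err_rate may still depend on the version):]

```
# Track as soon as the connection arrives, before HTTP parsing can fail:
tcp-request connection track-sc0 src unless { src -f /etc/haproxy/whitelist.acl }
http-request sc-set-gpt0(0) 1 if { sc_http_err_rate(0) ge LIMIT }
tcp-request connection reject if { src,table_gpt0(production) eq 1 }
```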


Best Regards

Marco







Re: [PATCH] DOC: proxies: HAProxy only supports 3 connection modes

2019-12-11 Thread Willy Tarreau
On Wed, Dec 11, 2019 at 11:02:10AM +0100, Julien Pivotto wrote:
> I meant the opposite :)
> 
> Client <- close -> HAProxy <- keepalive -> Backend

On the frontend side it's particularly rare to want to close
considering the cost of establishing a new connection in terms
of timing. We've continued to implement close mainly to remain
compatible with our previous versions that were too limited in
fact. Do you have a use case where it would really make a
difference? If that's rare enough, I suspect that simply
adding "http-response add-header connection close" should do
the trick, as it will flow to the mux which will close.

Willy
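
[Editor's note: Willy's suggestion maps to a fragment along these lines. The frontend and backend names are placeholders, and this is a sketch of the idea, not a tested configuration:]

```
frontend fe_main
    bind :80
    mode http
    # Close the client-side connection after each response while the
    # server side keeps its keep-alive connections:
    http-response add-header Connection close
    default_backend be_app
```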



Re: Weird encryption errors with IIS

2019-12-11 Thread Aleksandar Lazic

On 10.12.2019 at 00:47, NublaII Lists wrote:




What's in the haproxy log at this time?


I was only logging errors at the time, and nothing showed up.


Okay, can you add 'option httplog' to the defaults block? This should create 
more valuable logs.



Could this message hide some overload state?


I doubt it... the load on these systems is fairly low.


What's your settings for the following parameters?

https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#maxconn
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#maxconnrate
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#maxsessrate
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#maxsslconn
https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#maxsslrate


See the conf I pasted underneath. The only one set is maxconn. What are the 
default values for the other ones?



Timeout*

It's called Circuit Breaker pattern.
https://martinfowler.com/bliki/CircuitBreaker.html 


I'll send this to the devs to take a look at, thank you.


In general can you share your minimal config?



Here you have a streamlined version:

global
maxconn 5
ulimit-n 175000
nbproc 1
log /var/lib/haproxy/dev/log    local0 err
stats socket /var/run/haproxy.sock mode 0666 level admin
tune.maxrewrite 4096
tune.bufsize 65536
tune.ssl.default-dh-param 2048
         ssl-default-bind-ciphers 
ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS

         ssl-default-bind-options no-sslv3 no-tls-tickets
         ssl-default-server-ciphers 
ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS

         ssl-default-server-options no-sslv3 no-tls-tickets
spread-checks 4
daemon

defaults
         mode                    http
         balance                 roundrobin
         option                  tcpka
         option                  forwardfor
         option                  redispatch
         option                  contstats
         cookie                  SERVERID insert indirect
         retries                 10
         maxconn                 5
         timeout http-request    300s
         timeout client          1200s
         timeout server          1200s
         timeout connect         100s
         timeout tarpit          200s
         timeout http-keep-alive 300s
         timeout check           5s
         errorfile               408 /dev/null
         errorfile               403 /etc/haproxy/errors/403error.http
         errorfile               503 /etc/haproxy/errors/maintenance.http

frontend https
         bind 10.10.10.254:443 ssl crt /etc/ssl/private/star.domain.chain+dh.pem crt /etc/ssl/private/other.pem crt /etc/ssl/private/otherother.pem

         log                     global
         option                  forwardfor

use_backend             backend_api_https if { hdr(host) api.domain }
use_backend             backend_app_https if { hdr(host) app.domain }

backend backend_api_https
         mode                    http
         redirect                scheme https if !{ ssl_fc }
         option                  httpchk HEAD /status.html HTTP/1.0\r\nHost:\ api.domain\r\nUser-Agent:\ haproxy
         server                  api01 10.10.10.1:443 cookie api01-https ssl verify none check inter 7000 fall 5 weight 20
         server                  api02 10.10.10.2:443 cookie api02-https ssl verify none check inter 7000 fall 5 weight 20
         server                  api03 10.10.10.3:443 cookie api03-https ssl verify none check inter 7000 fall 5 weight 20
         server                  api04 10.10.10.4:443 cookie api04-https ssl verify none check inter 7000 fall 5 weight 20



Re: [PATCH] DOC: proxies: HAProxy only supports 3 connection modes

2019-12-11 Thread Julien Pivotto
On 11 Dec 10:51, Willy Tarreau wrote:
> On Wed, Dec 11, 2019 at 10:49:00AM +0100, Julien Pivotto wrote:
> > On 11 Dec 10:19, Willy Tarreau wrote:
> > > On Tue, Dec 10, 2019 at 01:11:17PM +0100, Julien Pivotto wrote:
> > > > The 4th one (forceclose) has been deprecated and deleted from the
> > > > documentation in 10c6c16cde0b0b473a1ab16e958a7d6b61ed36fc
> > > > 
> > > > Signed-off-by: Julien Pivotto 
> > > (...)
> > > 
> > > Applied, thank you Julien.
> > > Willy
> > 
> > I am wondering whether there would be a value in adding a 4th one where
> > we would keep alive the client connection but not the backend
> > connection.
> 
> That's exactly http-server-close. And yes it's very useful!


I meant the opposite :)

Client <- close -> HAProxy <- keepalive -> Backend

> 
> Willy

-- 
 (o-Julien Pivotto
 //\Open-Source Consultant
 V_/_   Inuits - https://www.inuits.eu


signature.asc
Description: PGP signature


Re: [PATCH] DOC: proxies: HAProxy only supports 3 connection modes

2019-12-11 Thread Willy Tarreau
On Wed, Dec 11, 2019 at 10:49:00AM +0100, Julien Pivotto wrote:
> On 11 Dec 10:19, Willy Tarreau wrote:
> > On Tue, Dec 10, 2019 at 01:11:17PM +0100, Julien Pivotto wrote:
> > > The 4th one (forceclose) has been deprecated and deleted from the
> > > documentation in 10c6c16cde0b0b473a1ab16e958a7d6b61ed36fc
> > > 
> > > Signed-off-by: Julien Pivotto 
> > (...)
> > 
> > Applied, thank you Julien.
> > Willy
> 
> I am wondering whether there would be a value in adding a 4th one where
> we would keep alive the client connection but not the backend
> connection.

That's exactly http-server-close. And yes it's very useful!

Willy



Re: [PATCH] DOC: proxies: HAProxy only supports 3 connection modes

2019-12-11 Thread Julien Pivotto
On 11 Dec 10:19, Willy Tarreau wrote:
> On Tue, Dec 10, 2019 at 01:11:17PM +0100, Julien Pivotto wrote:
> > The 4th one (forceclose) has been deprecated and deleted from the
> > documentation in 10c6c16cde0b0b473a1ab16e958a7d6b61ed36fc
> > 
> > Signed-off-by: Julien Pivotto 
> (...)
> 
> Applied, thank you Julien.
> Willy

I am wondering whether there would be a value in adding a 4th one where
we would keep alive the client connection but not the backend
connection.

-- 
 (o-Julien Pivotto
 //\Open-Source Consultant
 V_/_   Inuits - https://www.inuits.eu


signature.asc
Description: PGP signature


Re: kernel panics after updating to 2.0

2019-12-11 Thread Willy Tarreau
On Fri, Dec 06, 2019 at 11:45:47AM +0100, Pavlos Parissis wrote:
> On Fri, 6 Dec 2019 10:36:18 AM CET Sander Hoentjen wrote:
> > 
> > On 12/6/19 10:20 AM, Pavlos Parissis wrote:
> > > On Fri, 6 Dec 2019 9:23:24 AM CET Sander Hoentjen wrote:
> > >> Hi list,
> > >>
> > >> After updating from 1.8.13 to 2.0.5 (also with 2.0.10) we are seeing
> > >> kernel panics on our production servers. I haven't been able to trigger
> > >> them on a test server, and we rollbacked haproxy to 1.8 for now.
> > >>
> > >> I am attaching a panic log, hope something useful is in there.
> > >>
> > >> Anybody an idea what might be going on here?
> > >>
> > > Have you noticed any high CPU utilization prior the panic?
> > >
> > Nothing out of the ordinary, but I only have per-minute data, so I don't 
> > know for sure what happened in the seconds before the crash.
> > 
> 
> Then I suggest configuring the sar tool to collect/store metrics every 1 
> second for some period, in order to see if the panic is the result of 
> CPU(s) spinning at 100%, either at user or system level. That will provide 
> some hints to the haproxy developers.
> 
> Another idea is to try haproxy version 2.1.x.

With this said, a kernel panic must never happen; it is either the result
of a hardware issue or a kernel issue. I'm seeing that something
seems to be blocked with a signal in epoll, which seems to freeze
the whole system.

Sander, are you certain your kernel is up to date? I'm seeing an RHEL
3.10 though I don't know their numbers. Hard lockups can be caused by
many different bugs unfortunately, so it's not even reasonable to look for
them in a changelog. I've checked a few hard lockups there but none
seem related. It could however also be caused by some backports
specific to that kernel. In any case if your kernel is up to date, you
should file a bug at the distro to figure why it's happening. We can
possibly help if their kernel team has some questions about what
differs between 1.8 and 2.0 (a lot...).

Cheers,
Willy



Re: [PATCH] DOC: proxies: HAProxy only supports 3 connection modes

2019-12-11 Thread Willy Tarreau
On Tue, Dec 10, 2019 at 01:11:17PM +0100, Julien Pivotto wrote:
> The 4th one (forceclose) has been deprecated and deleted from the
> documentation in 10c6c16cde0b0b473a1ab16e958a7d6b61ed36fc
> 
> Signed-off-by: Julien Pivotto 
(...)

Applied, thank you Julien.
Willy



Re: Haproxy nbthreads + multi-threading lua?

2019-12-11 Thread Baptiste
On Mon, Dec 2, 2019 at 5:15 PM Dave Chiluk  wrote:

> Since 2.0 nbproc and nbthreads are now mutually exclusive, are there
> any ways to make lua multi-threaded?
>
> One of our proxies makes heavy use of lua scripting.  I'm not sure if
> this is still the case, but in earlier versions of HAProxy lua was
> single threaded per process.  Because of this we were running that
> proxy with nbproc=4, and nbthread=4. This allowed us to scale without
> being limited by lua.
>
> Has lua single-threaded-ness now been solved?  Are there other options
> I should be aware of related to that?  What's the preferred way around
> this?
>
> Thanks,
> Dave.
>
>
Hi Dave,
(I think we met at kubecon)

What's your use case for Lua exactly?
Can't it be replaced by SPOE at some point (which is compatible with
nbthread and can run heavy processing outside of the HAProxy process)?

You can answer me privately if you don't want such info to be public.

Baptiste