Re: [PATCH] BUG/MEDIUM: mworker: Fix re-exec when haproxy is started from PATH

2017-11-16 Thread William Lallemand
On Thu, Nov 16, 2017 at 05:30:05PM +0100, Tim Düsterhus wrote:
> William,
> 
> Am 15.11.2017 um 21:17 schrieb William Lallemand:
> > These problems have been fixed in the master with the following commits:
> > 
> > 75ea0a06b BUG/MEDIUM: mworker: does not close inherited FD
> > fade49d8f BUG/MEDIUM: mworker: does not deinit anymore
> > 2f8b31c2c BUG/MEDIUM: mworker: wait again for signals when execvp fail
> > 722d4ca0d MINOR: mworker: display an accurate error when the reexec fail
> > 
> > 
> > The master worker should be able to behave correctly on an execvp failure
> > now :-) 
> > 
> 
> I took a look at your commits. While I don't know much about haproxy's
> internals they look good to me.
> 
> Just one thing: at the top of `static void mworker_reload(void)` the
> environment is modified using:
> 
> > setenv("HAPROXY_MWORKER_REEXEC", "1", 1);
> 
> Is it necessary to reset that value in case of `execvp` failure? You
> don't seem to do so.
> 

It's not. This variable is only used at the start of the executable to know
whether it's a restart or not; once haproxy is started it should always be 1.
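For illustration, the startup-side check amounts to something like the sketch
below (the helper name and surrounding structure are made up for this example;
only the HAPROXY_MWORKER_REEXEC variable and the setenv() call quoted above
come from the actual code):

    #include <stdlib.h>

    /* Illustrative sketch only: a fresh start has no HAPROXY_MWORKER_REEXEC
     * in its environment, while a master that re-executed itself exported
     * it via setenv("HAPROXY_MWORKER_REEXEC", "1", 1) before execvp(). */
    static int started_from_reexec(void)
    {
        return getenv("HAPROXY_MWORKER_REEXEC") != NULL;
    }

Since the value is only read once at startup, leaving it set to 1 after a
failed execvp has no further effect.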

Cheers,

-- 
William Lallemand



Re: [PATCHES] Fix TLS 1.3 session resumption, and 0RTT with threads.

2017-11-16 Thread Willy Tarreau
On Thu, Nov 16, 2017 at 06:33:35PM +0100, Olivier Houchard wrote:
> Hi,
> 
> The first patch attempts to fix session resumption with TLS 1.3, when
> haproxy acts as a client, by storing the ASN1-encoded session in the struct
> server, instead of storing the SSL_SESSION * directly. Directly keeping an
> SSL_SESSION doesn't seem to work well when concurrent connections are made
> using the same session.
> The second patch tries to make sure the SSL handshake is done before calling
> the shutw method. Not doing so may result in errors, which ultimately lead
> to the client connection being closed when it shouldn't be. This mostly
> happens when more than 1 thread is used.

Merged, thanks!

Willy



[PATCHES] Fix TLS 1.3 session resumption, and 0RTT with threads.

2017-11-16 Thread Olivier Houchard
Hi,

The first patch attempts to fix session resumption with TLS 1.3, when
haproxy acts as a client, by storing the ASN1-encoded session in the struct
server, instead of storing the SSL_SESSION * directly. Directly keeping an
SSL_SESSION doesn't seem to work well when concurrent connections are made
using the same session.
The second patch tries to make sure the SSL handshake is done before calling
the shutw method. Not doing so may result in errors, which ultimately lead
to the client connection being closed when it shouldn't be. This mostly
happens when more than 1 thread is used.
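For context, the ASN1 round-trip this relies on is the usual OpenSSL i2d/d2i
pattern; a standalone sketch (not the patched haproxy code itself) would look
roughly like this:

    #include <openssl/ssl.h>
    #include <stdlib.h>

    /* Serialize a session into a freshly allocated ASN1 buffer.
     * The first i2d call computes the size, the second writes the bytes. */
    static unsigned char *sess_to_asn1(SSL_SESSION *sess, int *len)
    {
        unsigned char *buf, *p;

        *len = i2d_SSL_SESSION(sess, NULL);
        if (*len <= 0 || (buf = malloc(*len)) == NULL)
            return NULL;
        p = buf;
        i2d_SSL_SESSION(sess, &p);
        return buf;
    }

    /* Rebuild a fresh SSL_SESSION from the stored bytes; every outgoing
     * connection gets its own copy instead of sharing one SSL_SESSION *,
     * which is what makes resumption behave with TLS 1.3. */
    static SSL_SESSION *asn1_to_sess(const unsigned char *buf, int len)
    {
        const unsigned char *p = buf;

        return d2i_SSL_SESSION(NULL, &p, len);
    }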

Regards,

Olivier
From e32a831c1cbff1fcfb66565273ec98052f3a7f79 Mon Sep 17 00:00:00 2001
From: Olivier Houchard 
Date: Thu, 16 Nov 2017 17:42:52 +0100
Subject: [PATCH 1/2] MINOR: SSL: Store the ASN1 representation of client
 sessions.

Instead of storing the SSL_SESSION pointer directly in the struct server,
store the ASN1 representation, otherwise, session resumption is broken with
TLS 1.3, when multiple outgoing connections want to use the same session.
---
 include/types/server.h |  6 +++++-
 src/ssl_sock.c         | 60 ++++++++++++++++++++++++++++++++++++++++++++++----------------
 2 files changed, 49 insertions(+), 17 deletions(-)

diff --git a/include/types/server.h b/include/types/server.h
index 76225f7d3..92dcdbb5c 100644
--- a/include/types/server.h
+++ b/include/types/server.h
@@ -274,7 +274,11 @@ struct server {
     char *sni_expr;                     /* Temporary variable to store a sample expression for SNI */
     struct {
         SSL_CTX *ctx;
-        SSL_SESSION **reused_sess;
+        struct {
+            unsigned char *ptr;
+            int size;
+            int allocated_size;
+        } * reused_sess;
         char *ciphers;                  /* cipher suite to use if non-null */
         int options;                    /* ssl options */
         struct tls_version_filter methods;  /* ssl methods */
diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 72d9b8aee..1162aa318 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -3856,19 +3856,35 @@ static int sh_ssl_sess_store(unsigned char *s_id, unsigned char *data, int data_
 static int ssl_sess_new_srv_cb(SSL *ssl, SSL_SESSION *sess)
 {
     struct connection *conn = SSL_get_app_data(ssl);
+    struct server *s;
 
-    /* check if session was reused, if not store current session on server for reuse */
-    if (objt_server(conn->target)->ssl_ctx.reused_sess[tid]) {
-        SSL_SESSION_free(objt_server(conn->target)->ssl_ctx.reused_sess[tid]);
-        objt_server(conn->target)->ssl_ctx.reused_sess[tid] = NULL;
-    }
+    s = objt_server(conn->target);
 
-    if (!(objt_server(conn->target)->ssl_ctx.options & SRV_SSL_O_NO_REUSE))
-        objt_server(conn->target)->ssl_ctx.reused_sess[tid] = SSL_get1_session(conn->xprt_ctx);
+    if (!(s->ssl_ctx.options & SRV_SSL_O_NO_REUSE)) {
+        int len;
+        unsigned char *ptr;
 
-    return 1;
+        len = i2d_SSL_SESSION(sess, NULL);
+        if (s->ssl_ctx.reused_sess[tid].ptr && s->ssl_ctx.reused_sess[tid].allocated_size >= len) {
+            ptr = s->ssl_ctx.reused_sess[tid].ptr;
+        } else {
+            free(s->ssl_ctx.reused_sess[tid].ptr);
+            ptr = s->ssl_ctx.reused_sess[tid].ptr = malloc(len);
+            s->ssl_ctx.reused_sess[tid].allocated_size = len;
+        }
+        if (s->ssl_ctx.reused_sess[tid].ptr) {
+            s->ssl_ctx.reused_sess[tid].size = i2d_SSL_SESSION(sess,
+                &ptr);
+        }
+    } else {
+        free(s->ssl_ctx.reused_sess[tid].ptr);
+        s->ssl_ctx.reused_sess[tid].ptr = NULL;
+    }
+
+    return 0;
 }
 
+
 /* SSL callback used on new session creation */
 int sh_ssl_sess_new_cb(SSL *ssl, SSL_SESSION *sess)
 {
@@ -4434,7 +4450,7 @@ int ssl_sock_prepare_srv_ctx(struct server *srv)
 
     /* Initiate SSL context for current server */
     if (!srv->ssl_ctx.reused_sess) {
-        if ((srv->ssl_ctx.reused_sess = calloc(1, global.nbthread*sizeof(SSL_SESSION*))) == NULL) {
+        if ((srv->ssl_ctx.reused_sess = calloc(1, global.nbthread*sizeof(*srv->ssl_ctx.reused_sess))) == NULL) {
             Alert("Proxy '%s', server '%s' [%s:%d] out of memory.\n",
                   curproxy->id, srv->id,
                   srv->conf.file, srv->conf.line);
@@ -4923,10 +4939,15 @@ static int ssl_sock_init(struct connection *conn)
     }
 
     SSL_set_connect_state(conn->xprt_ctx);
-    if (objt_server(conn->target)->ssl_ctx.reused_sess) {
-        if(!SSL_set_session(conn->xprt_ctx, objt_server(conn->target)->ssl_ctx.reused_sess[tid])) {
-

Re: [PATCH] BUG/MEDIUM: mworker: Fix re-exec when haproxy is started from PATH

2017-11-16 Thread Tim Düsterhus
William,

Am 15.11.2017 um 21:17 schrieb William Lallemand:
> These problems have been fixed in the master with the following commits:
> 
> 75ea0a06b BUG/MEDIUM: mworker: does not close inherited FD
> fade49d8f BUG/MEDIUM: mworker: does not deinit anymore
> 2f8b31c2c BUG/MEDIUM: mworker: wait again for signals when execvp fail
> 722d4ca0d MINOR: mworker: display an accurate error when the reexec fail
> 
> 
> The master worker should be able to behave correctly on an execvp failure now 
> :-) 
> 

I took a look at your commits. While I don't know much about haproxy's
internals they look good to me.

Just one thing: at the top of `static void mworker_reload(void)` the
environment is modified using:

> setenv("HAPROXY_MWORKER_REEXEC", "1", 1);

Is it necessary to reset that value in case of `execvp` failure? You
don't seem to do so.

Best regards
Tim Düsterhus



Re: HAProxy LB causes server to poll for request data for a long time

2017-11-16 Thread Илья Шипицин
Try

proxy_buffering off;
proxy_request_buffering off;

in nginx
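For example (the location and upstream names here are placeholders, not taken
from your setup), the two directives would sit in the proxied location:

    location / {
        proxy_pass http://go_backend;     # placeholder upstream
        proxy_buffering off;              # do not buffer the response
        proxy_request_buffering off;      # stream the request body to the backend
    }

With proxy_request_buffering off, nginx forwards the request body as it
arrives instead of spooling it first, which is closer to haproxy's behaviour.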

On Nov 15, 2017 8:01 PM, "omer kirkagaclioglu"  wrote:

Hi,

I just put a service that has around 400-4K HTTP/HTTPS mixed requests per
second behind haproxy. The endpoint with the highest request rate is a POST
request with a 1-8K JSON body. I use haproxy mostly for SSL offloading, since
there is only one server. Immediately, I started seeing log lines similar to
the one below for that service in the haproxy logs:

Nov 15 15:07:59 localhost.localdomain haproxy[22773]: 41.75.220.204:24716
[15/Nov/2017:15:06:59.772] http med-api/med1722 0/0/0/-1/6 502 204 3047
- - SH-- 1736/1736/13/13/0 0/0 "POST count HTTP/1.1"

The requests were timing out after 60 seconds on the application side.
After this I started logging requests with latency higher than a certain
threshold and saw that lots of requests were stuck waiting to read the
request POST body from haproxy. I did some research, found the option
http-buffer-request, set it on my backend, and the timeouts disappeared.
However, my application server still spends a lot of time polling for
request data, whereas with NGINX this is almost non-existent. My
application is written in Go and I can monitor the number of goroutines
(lightweight threads) grouped by their tasks. If I proxy the traffic
through NGINX the number of goroutines waiting for request data is barely
observable, but when I pass the traffic through haproxy this number rises
to 1200.

I am just trying to understand the difference in behaviour between NGINX
and haproxy, and whether there is any setting I can tweak to fix this issue.
I hope everything is clear; this is my first question on the mailing list,
so please let me know if I can make it clearer.

HAProxy 1.7.8
nbproc 30
2 processes for http
28 processes for https

Default section options from my config:
    log global
    mode http
    option dontlognull
    option log-separate-errors
    option http-buffer-request
    timeout connect 5s
    timeout client 30s
    timeout server 60s
    timeout http-keep-alive 4s

Omer


Re: HAProxy LB causes server to poll for request data for a long time

2017-11-16 Thread Lukas Tribus
2017-11-16 16:24 GMT+01:00 omer kirkagaclioglu :
> Hi Lukas,
>
> Thanks for the quick answer. I am using haproxy on another service which
> consists of GET requests with very small query parameters. It load balances
> to a backend of 4 servers with 3K-20K requests per second. This time I see
> 3400K goroutines waiting to read the request, although there are no POST
> bodies to stream this time. That is why I think it is something other than
> slow uploading clients.
>
> The concurrency is not really the problem, but I want to save the
> application's resources for application purposes only and handle everything
> low-level on the load-balancing side.

You can limit and tune the number of concurrent in-flight requests via the
maxconn configuration (on the frontend, and on each backend server
individually). Tune it to the number of goroutines you would like to see,
as in the sketch below. But don't set it to 1 just because nginx
artificially buffers everything.
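Something along these lines (names, addresses and numbers are only
placeholders to show where maxconn goes):

    frontend fe_http
        bind :80
        maxconn 2000                            # overall cap on concurrent connections
        default_backend be_app

    backend be_app
        server app1 10.0.0.1:8080 maxconn 300   # per-server cap; excess requests queue in haproxy

Requests above the per-server maxconn wait in haproxy's queue (subject to
timeout queue) instead of adding goroutines on the application side.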


cheers,
lukas



Re: HAProxy LB causes server to poll for request data for a long time

2017-11-16 Thread omer kirkagaclioglu
Hi Lukas,

Thanks for the quick answer. I am using haproxy on another service which
consists of GET requests with very small query parameters. It load balances
to a backend of 4 servers with 3K-20K requests per second. This time I see
3400K goroutines waiting to read the request, although there are no POST
bodies to stream this time. That is why I think it is something other than
slow uploading clients.

The concurrency is not really the problem, but I want to save the
application's resources for application purposes only and handle everything
low-level on the load-balancing side.



On Thu, Nov 16, 2017 at 2:03 PM Lukas Tribus  wrote:

> Hello,
>
>
> 2017-11-15 15:58 GMT+01:00 omer kirkagaclioglu :
> > I am just trying to understand the difference in behaviour between NGINX
> > and haproxy, and whether there is any setting I can tweak to fix this
> > issue.
>
> Nginx buffers the entire client request body and even writes it
> temporarily to disk:
>
> http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size
>
> Haproxy never writes to the disk and forwards the traffic as soon as
> possible.
>
>
> I'm not sure you are approaching this from the correct angle. What is
> the problem with 1200 active goroutines? They are supposed to scale
> just fine, while the blocking IO calls of hundreds of slow uploads in
> nginx - probably without threading - stall the entire (event-loop
> based) nginx process.
>
> Really, it is a good thing to see 1200 active goroutines; it means
> your design will scale.
>
>
> If you prefer to hide concurrency from the backend and move "the
> problem" to the load balancer, then at least use nginx with threading,
> so that it doesn't block the event loop. But haproxy is not a
> web server and it is not the right tool for buffering entire POST bodies
> for the sole purpose of hiding slow uploads from the backend.
>
>
>
> cheers,
> lukas
>