Re: HAProxy Timeout Oddity WebKit XHR Replay

2017-07-25 Thread Liam Middlebrook
Hi Aleksandar,

Responses inline.

On 07/24/2017 11:57 PM, Aleksandar Lazic wrote:
> Hi Liam,
> 
> Liam Middlebrook wrote on 24.07.2017:
> 
>> Hi,
> 
>> I'm currently running HAProxy within an Openshift Origin cluster. Until
>> a recent update of Openshift I did not experience issues with connection
>> timeouts, the connections would last up until the specified timeout as
>> defined by the application.
>>
>> After an update to Openshift I changed HAProxy settings around to give a
>> global 600s timeout for client and server. However when I make a form
>> upload request the connection is killed after 30 seconds. When I signal
>> an XHR Replay in my network inspector the connection lasts longer than
>> the 30 seconds and is able to successfully upload the file.
> 
> This smells like this timeout.
> 
> ###
> ROUTER_DEFAULT_SERVER_TIMEOUT 30s
> Length of time within which a server has to acknowledge or send data. 
> (TimeUnits)
> ###
> 
> https://docs.openshift.org/latest/architecture/core_concepts/routes.html#env-variables
> 
> You can change it via.
> 
> I assume here that you have the openshift router in the default 
> namespace and the router is deployed as "router".
> 
> Too much routers here ;-)
> 
> oc env -n default dc/router ROUTER_DEFAULT_SERVER_TIMEOUT=1h

I wish this were the case. I've changed the router's deployment config
quite a bit: at first I extended the timeout to 5m, then, growing
suspicious that the HAProxy settings were changing at all, I set it to a
lower 15s.

I found that when adjusting this environment variable the timeout limit
does not change. This is especially odd since the generated
haproxy.config file appears to have the proper changed values.

sh-4.2$ grep -i timeout haproxy.config  -n |head -n 10
11:  stats timeout 2m
42:  timeout connect 5s
45:  timeout client 15s
48:  timeout server 15s
51:  timeout http-request 10s
54:  # Long timeout for WebSocket connections.
56:  timeout tunnel 1h
218:  timeout check 5000ms
259:  timeout check 5000ms
300:  timeout check 5000ms


All the timeout values after line 300 are the same as lines 218, 259,
and 300.

With this config I am still receiving connection timeouts at 30s.

I'm not sure if this is important to note, but the requests that are
timing out are HTTP file uploads.
> 
>> I asked in irc with no luck. Any ideas why this may be happening>
> Do you mean the #openshift-dev channel on Freenode?
> 
I had asked in the #haproxy channel a few weeks ago and have just
recently found time to re-explore this issue.

>> Thanks,
>>
>> Liam Middlebrook (loothelion)
> 


Thanks,

Liam Middlebrook (loothelion)



Re: HAProxy Timeout Oddity WebKit XHR Replay

2017-07-25 Thread Aleksandar Lazic
Hi Liam,

Liam Middlebrook wrote on 25.07.2017:

> Hi Aleksandar,

> Responses inline.

> On 07/24/2017 11:57 PM, Aleksandar Lazic wrote:
>> Hi Liam,
>> 
>> Liam Middlebrook wrote on 24.07.2017:
>> 
>>> Hi,
>> 
>>> I'm currently running HAProxy within an Openshift Origin cluster. Until
>>> a recent update of Openshift I did not experience issues with connection
>>> timeouts, the connections would last up until the specified timeout as
>>> defined by the application.
>>>
>>> After an update to Openshift I changed HAProxy settings around to give a
>>> global 600s timeout for client and server. However when I make a form
>>> upload request the connection is killed after 30 seconds. When I signal
>>> an XHR Replay in my network inspector the connection lasts longer than
>>> the 30 seconds and is able to successfully upload the file.
>> 
>> This smells like this timeout.
>> 
>> ###
>> ROUTER_DEFAULT_SERVER_TIMEOUT 30s
>> Length of time within which a server has to acknowledge or send data. 
>> (TimeUnits)
>> ###
>> 
>> https://docs.openshift.org/latest/architecture/core_concepts/routes.html#env-variables
>> 
>> You can change it via.
>> 
>> I assume here that you have the openshift router in the default 
>> namespace and the router is deployed as "router".
>> 
>> Too much routers here ;-)
>> 
>> oc env -n default dc/router ROUTER_DEFAULT_SERVER_TIMEOUT=1h

> I wish this were the case. I've changed the router's deployment config
> quite a bit. At first extending it to 5m, then I grew suspicious that
> HAProxy settings were even changing and set it to a lower 15s.

uff that's low, imho.

> I found that when adjusting this environment variable the timeout limit
> does not change. This is especially odd since the generated
> haproxy.config file appears to have the proper changed values.
>
> sh-4.2$ grep -i timeout haproxy.config  -n |head -n 10
> 11:  stats timeout 2m
> 42:  timeout connect 5s
> 45:  timeout client 15s
> 48:  timeout server 15s
> 51:  timeout http-request 10s
> 54:  # Long timeout for WebSocket connections.
> 56:  timeout tunnel 1h
> 218:  timeout check 5000ms
> 259:  timeout check 5000ms
> 300:  timeout check 5000ms
>
> All the timeout values after line 300 are the same as lines 218, 259,
> and 300.

That means the 'oc env ...' works, good.

> With this config I am still receiving connection timeouts at 30s.

> I'm not sure if this is important to note, but the requests that are
> timing out are HTTP file uploads.

Yep, I have seen this line in one of your previous mails.

>Jul 24 23:51:08 proton.csh.rit.edu haproxy[127]: 67.188.94.238:43996
>[24/Jul/2017:23:50:38.543] fe_sni~
>be_edge_http_gallery_gallery/5792c687271726c3c4b5d54ae219aaa2
>85/0/1/-1/29913 -1 0 - - CHVN 1/0/0/0/0 0/0 "POST /upload HTTP/1.1"

###
CH   The client aborted while waiting for the server to start responding.
  It might be the server taking too long to respond or the client
  clicking the 'Stop' button too fast.

VN   A cookie was provided by the client, none was inserted in the
  response. This happens for most responses for which the client has
  already got a cookie.
###

The timing also looks odd.
http://cbonte.github.io/haproxy-dconv/1.5/configuration.html#8.4
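
In case it helps, a rough reading of those timers (Tq/Tw/Tc/Tr/Tt, as
documented in section 8.4 above):

  85     Tq: time to receive the complete client request
  0      Tw: time spent waiting in queues
  1      Tc: time to establish the connection to the server
  -1     Tr: complete response headers were never received from the server
  29913  Tt: total session duration (~30s), ending with the client abort (CH)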

What's in your gallery log?
Does ANY upload work?

>>> I asked in irc with no luck. Any ideas why this may be happening>
>> Do you mean the #openshift-dev channel on Freenode?
>> 
> I had asked in the #haproxy channel a few weeks ago and have just
> recently found time to re-explore this issue.

Okay, I didn't know that this channel exists 8-O, every day is a learning 
day ;-)

>>> Thanks,
>>>
>>> Liam Middlebrook (loothelion)
>> 
> Thanks,
> Liam Middlebrook (loothelion)

-- 
Best Regards
Aleks




hot reloading configuration

2017-07-25 Thread Stéphane Cottin

Hi,

A blog article about hot reloading configurations.

https://www.clever-cloud.com/blog/engineering/2017/07/24/hot-reloading-configuration-why-and-how/

This company was using haproxy, and because of configuration reloading 
problems, among other reasons, they switched to their home-made, 
open-sourced HTTP reverse proxy.


Please do not debate C vs Rust vs whatever, or their choice to no 
longer use haproxy; to keep this thread readable, my concern here is 
_only_ about hot reloading configuration and haproxy.


Recent changes (reloading improvements, basic DNS resolver) seem to 
go this way, but AFAIK we still cannot manage frontends, backends, and 
many other configuration aspects using the admin socket.
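
For context, a rough sketch of the current split, with placeholder names
and socket path; per-server tweaks already work live over the stats
socket (admin level assumed):

  echo "set weight be_app/srv1 50"             | socat stdio /var/run/haproxy.sock
  echo "set server be_app/srv1 state maint"    | socat stdio /var/run/haproxy.sock
  echo "set server be_app/srv1 addr 10.0.0.42" | socat stdio /var/run/haproxy.sock

while adding or removing a frontend/backend still means editing the file
and reloading, e.g.:

  haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf $(cat /run/haproxy.pid)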


So, what about haproxy and high-frequency full/partial hot 
reconfiguration?


Stéphane

Re: Passing SNI value ( ssl_fc_sni ) to backend's verifyhost.

2017-07-25 Thread Willy Tarreau
Hi again Kevin,

On Tue, Jul 25, 2017 at 07:26:07AM +0200, Willy Tarreau wrote:
> > frontend www-https
> > bind :::443 v4v6 ssl crt /etc/haproxy/certs/default.example.ca.pem crt
> > /etc/haproxy/certs/
> > use_backend www-backend-https
> > 
> > backend www-backend-https
> > server app default.example.ca:443 ssl verify required sni ssl_fc_sni
> > ca-file /etc/ssl/certs/ca-certificates.crt check check-ssl
> > 
> > If you visit https://should-be-broken.example.ca you will get the page for
> > default.example.ca, but the browser/visitor will show the
> > should-be-broken.example.ca cert from the haproxy and the page will appear
> > secure, despite the backend apache instance having no access to
> > should-be-broken's virtual host or certificate and serving a certificate for
> > default.example.ca to the haproxy.
> 
> Thanks, I'll retry it. I'm surprised because what you describe here is
> *exactly* what I did and it worked fine for me, I remember getting a 503
> when connecting with the wrong name. But obviously there must be a
> difference so I'll try to find it.

So I tried again to replicate it and cannot confirm your issues. Here's
what I've done :
  - I'm having haproxy serve as the origin because I don't have an apache
instance running and don't know how to set it up so I'm not going to
waste my time on it ;

  - this origin server responds on 3 different domain names and thus
serves 3 different certificates (dom{1,2,3}.example.com).

  - a front gateway responds on a dummy cert, and connects to the server
passing the front connection's SNI to the server.

  - the client connects to this front gateway with 4 different names, the
3 supported ones and an unsupported one

What I'm seeing is that the first 3 domains work well and the 4th fails.

Here's the config :

  listen gateway
    mode http
    bind :4430 ssl crt rsa2048.pem
    server app 127.0.0.1:4431 ssl sni ssl_fc_sni verify required ca-file ca.pem check check-ssl

  frontend origin
    mode http
    bind :4431 ssl crt dom1.example.com.pem crt dom2.example.com.pem crt dom3.example.com.pem
    http-request redirect location /called-with-%[ssl_fc_sni]

Command to start this and output :
  $ ./haproxy -d -f sni-srv-bug.cfg

Test with dom1..dom3 :
  $ printf "GET / HTTP/1.0\r\n\r\n" | openssl s_client -connect 127.0.0.1:4430 
-quiet -servername dom1.example.com

Haproxy's output :
  0004:origin.accept(0005)=0007 from [127.0.0.1:36664] ALPN=
  0004:origin.clicls[0007:]
  0004:origin.closed[0007:]
  0005:gateway.accept(0004)=0006 from [127.0.0.1:56942] ALPN=
  0005:gateway.clireq[0006:]: GET / HTTP/1.0
  0006:origin.accept(0005)=0008 from [127.0.0.1:36668] ALPN=
  0006:origin.clireq[0008:]: GET / HTTP/1.0
  0006:origin.clicls[0008:]
  0006:origin.closed[0008:]
  0005:gateway.srvrep[0006:0007]: HTTP/1.1 302 Found
  0005:gateway.srvhdr[0006:0007]: Cache-Control: no-cache
  0005:gateway.srvhdr[0006:0007]: Content-length: 0
  0005:gateway.srvhdr[0006:0007]: Location: /called-with-dom1.example.com
  0005:gateway.srvhdr[0006:0007]: Connection: close
  0005:gateway.srvcls[0006:0007]
  0005:gateway.clicls[0006:0007]
  0005:gateway.closed[0006:0007]
  0007:origin.accept(0005)=0007 from [127.0.0.1:36670] ALPN=
  0007:origin.clicls[0007:]
  0007:origin.closed[0007:]

OpenSSL output :
  depth=0 C = FR, ST = Some-State, O = test, CN = localhost
  verify error:num=18:self signed certificate
  verify return:1
  depth=0 C = FR, ST = Some-State, O = test, CN = localhost
  verify return:1
  HTTP/1.1 302 Found
  Cache-Control: no-cache
  Content-length: 0
  Location: /called-with-dom1.example.com
  Connection: close
  
Test with dom4:
  $ printf "GET / HTTP/1.0\r\n\r\n" | openssl s_client -connect 127.0.0.1:4430 
-quiet -servername dom4.example.com

Haproxy's output :
  :origin.accept(0005)=0007 from [127.0.0.1:36640] ALPN=
  :origin.clicls[0007:]
  :origin.closed[0007:]
  0001:gateway.accept(0004)=0006 from [127.0.0.1:56918] ALPN=
  0001:gateway.clireq[0006:]: GET / HTTP/1.0
  fd[0007] OpenSSL error[0x14090086] ssl3_get_server_certificate: certificate 
verify failed
  fd[0008] OpenSSL error[0x14094438] ssl3_read_bytes: tlsv1 alert internal error
  0002:origin.accept(0005)=0008 from [127.0.0.1:36646] ALPN=
  0002:origin.clicls[0008:]
  0002:origin.closed[0008:]
  fd[0007] OpenSSL error[0x14090086] ssl3_get_server_certificate: certificate 
verify failed
  fd[0008] OpenSSL error[0x14094438] ssl3_read_bytes: tlsv1 alert internal error
  fd[0007] OpenSSL error[0x14090086] ssl3_get_server_certificate: certificate 
verify failed
  fd[0008] OpenSSL error[0x14094438] ssl3_read_bytes: tlsv1 alert internal error
  0003:origin.accept(0005)=0008 from [127.0.0.1:36652] ALPN=
  0003:origin.clicls[

X-Forwarded-For Balancing

2017-07-25 Thread Trenton Dyck
Hi,

Is it possible to balance via the X-Forwarded-For header?  We have come across an 
issue with sticky sessions and server weight that I can't seem to find the 
answer to online (unbalanced traffic).  I think stick-tables with this ACL 
option would be nice to have in a future version.

Please keep me CCed for responses since I'm not subscribed.

Thanks,
Trent


Re: Fix building haproxy with recent LibreSSL

2017-07-25 Thread Bernard Spil

On 2017-07-04 10:18, Willy Tarreau wrote:

On Tue, Jul 04, 2017 at 11:12:20AM +0300, Dmitry Sivachenko wrote:

>> https://www.mail-archive.com/haproxy@formilux.org/msg25819.html
>
>
> Do you know if the patch applies to 1.8 (it was mangled so I didn't try).


Sorry, hit reply too fast: no, one chunk fails against 1.8-dev2 (the one
dealing with #ifdef SSL_CTX_get_tlsext_status_arg; it requires analysis
because it is not a simple surrounding context change).


OK thanks. Bernard, care to have a look and ensure it works for you ?

Thanks,
Willy


I've just committed a patch to FreeBSD's ports tree for haproxy-devel 
(1.8-dev2). This would be a good candidate to include.


Not sure if attachments work for the mailing-list...
https://svnweb.freebsd.org/ports/head/net/haproxy-devel/files/patch-src_ssl__sock.c

Cheers,

Bernard.

--- src/ssl_sock.c.orig	2017-06-02 13:59:51 UTC
+++ src/ssl_sock.c
@@ -56,7 +56,7 @@
 #include 
 #endif
 
-#if OPENSSL_VERSION_NUMBER >= 0x101fL
+#if (OPENSSL_VERSION_NUMBER >= 0x101fL) && !defined(LIBRESSL_VERSION_NUMBER)
 #include 
 #endif
 
@@ -362,7 +362,7 @@ fail_get:
 }
 #endif
 
-#if OPENSSL_VERSION_NUMBER >= 0x101fL
+#if (OPENSSL_VERSION_NUMBER >= 0x101fL) && !defined(LIBRESSL_VERSION_NUMBER)
 /*
  * openssl async fd handler
  */
@@ -1034,10 +1034,13 @@ static int ssl_sock_load_ocsp(SSL_CTX *c
 		ocsp = NULL;
 
 #ifndef SSL_CTX_get_tlsext_status_cb
-# define SSL_CTX_get_tlsext_status_cb(ctx, cb) \
-	*cb = (void (*) (void))ctx->tlsext_status_cb;
+#ifndef SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB
+#define SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB 128
 #endif
+	callback = SSL_CTX_ctrl(ctx, SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB, 0, callback);
+#else
 	SSL_CTX_get_tlsext_status_cb(ctx, &callback);
+#endif
 
 	if (!callback) {
 		struct ocsp_cbk_arg *cb_arg = calloc(1, sizeof(*cb_arg));
@@ -1063,7 +1066,10 @@ static int ssl_sock_load_ocsp(SSL_CTX *c
 		int key_type;
 		EVP_PKEY *pkey;
 
-#ifdef SSL_CTX_get_tlsext_status_arg
+#if defined(SSL_CTX_get_tlsext_status_arg) || (LIBRESSL_VERSION_NUMBER >= 0x2050100fL)
+#ifndef SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB_ARG
+#define SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB_ARG 129
+#endif
 		SSL_CTX_ctrl(ctx, SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB_ARG, 0, &cb_arg);
 #else
 		cb_arg = ctx->tlsext_status_arg;
@@ -3403,7 +3409,7 @@ int ssl_sock_load_cert_list_file(char *f
 #define SSL_MODE_SMALL_BUFFERS 0
 #endif
 
-#if (OPENSSL_VERSION_NUMBER < 0x101fL) && !defined(OPENSSL_IS_BORINGSSL)
+#if (OPENSSL_VERSION_NUMBER < 0x101fL) && !defined(OPENSSL_IS_BORINGSSL) || defined(LIBRESSL_VERSION_NUMBER)
 static void ssl_set_SSLv3_func(SSL_CTX *ctx, int is_server)
 {
 #if SSL_OP_NO_SSLv3
@@ -3560,7 +3566,7 @@ ssl_sock_initial_ctx(struct bind_conf *b
 		options &= ~SSL_OP_CIPHER_SERVER_PREFERENCE;
 	SSL_CTX_set_options(ctx, options);
 
-#if OPENSSL_VERSION_NUMBER >= 0x101fL
+#if (OPENSSL_VERSION_NUMBER >= 0x101fL) && !defined(LIBRESSL_VERSION_NUMBER)
 	if (global_ssl.async)
 		mode |= SSL_MODE_ASYNC;
 #endif
@@ -4010,7 +4016,7 @@ int ssl_sock_prepare_srv_ctx(struct serv
 		options |= SSL_OP_NO_TICKET;
 	SSL_CTX_set_options(ctx, options);
 
-#if OPENSSL_VERSION_NUMBER >= 0x101fL
+#if (OPENSSL_VERSION_NUMBER >= 0x101fL) && !defined(LIBRESSL_VERSION_NUMBER)
 	if (global_ssl.async)
 		mode |= SSL_MODE_ASYNC;
 #endif
@@ -4526,7 +4532,7 @@ int ssl_sock_handshake(struct connection
 fd_cant_recv(conn->t.sock.fd);
 return 0;
 			}
-#if OPENSSL_VERSION_NUMBER >= 0x101fL
+#if (OPENSSL_VERSION_NUMBER >= 0x101fL) && !defined(LIBRESSL_VERSION_NUMBER)
 			else if (ret == SSL_ERROR_WANT_ASYNC) {
 ssl_async_process_fds(conn, conn->xprt_ctx);
 return 0;
@@ -4610,7 +4616,7 @@ int ssl_sock_handshake(struct connection
 			fd_cant_recv(conn->t.sock.fd);
 			return 0;
 		}
-#if OPENSSL_VERSION_NUMBER >= 0x101fL
+#if (OPENSSL_VERSION_NUMBER >= 0x101fL) && !defined(LIBRESSL_VERSION_NUMBER)
 		else if (ret == SSL_ERROR_WANT_ASYNC) {
 			ssl_async_process_fds(conn, conn->xprt_ctx);
 			return 0;
@@ -4802,7 +4808,7 @@ static int ssl_sock_to_buf(struct connec
 fd_cant_recv(conn->t.sock.fd);
 break;
 			}
-#if OPENSSL_VERSION_NUMBER >= 0x101fL
+#if (OPENSSL_VERSION_NUMBER >= 0x101fL) && !defined(LIBRESSL_VERSION_NUMBER)
 			else if (ret == SSL_ERROR_WANT_ASYNC) {
 ssl_async_process_fds(conn, conn->xprt_ctx);
 break;
@@ -4910,7 +4916,7 @@ static int ssl_sock_from_buf(struct conn
 __conn_sock_want_recv(conn);
 break;
 			}
-#if OPENSSL_VERSION_NUMBER >= 0x101fL
+#if (OPENSSL_VERSION_NUMBER >= 0x101fL) && !defined(LIBRESSL_VERSION_NUMBER)
 			else if (ret == SSL_ERROR_WANT_ASYNC) {
 ssl_async_process_fds(conn, conn->xprt_ctx);
 break;
@@ -4933,7 +4939,7 @@ static int ssl_sock_from_buf(struct conn
 static void ssl_sock_close(struct connection *conn) {
 
 	if (conn->xprt_ctx) {
-#if OPENSSL_VERSION_NUMBER >= 0x101fL
+#if (OPENSSL_VERSION_NUMBER >= 0x101fL) && !defined(LIBRESSL_VERSION_NUMBER)
 		if (globa

Re: X-Forwarded-For Balancing

2017-07-25 Thread Aleksandar Lazic
Hi Trenton,

Trenton Dyck wrote on 25.07.2017:

> Hi,
>  
> Is it possible to balance, via X-Forwarded-For header?  We have come
> across an issue with sticky-sessions and server weight that I can’t
> seem to find the answer to online (Unbalanced traffic).  I think
> stick-tables with this acl option  would be nice to have for a future version.

http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-balance

Something like this

balance hdr(X-Forwarded-For)
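
For instance, a minimal backend sketch (server names and addresses are
just placeholders):

  backend be_app
      mode http
      balance hdr(X-Forwarded-For)
      hash-type consistent
      server app1 10.0.0.11:80 check
      server app2 10.0.0.12:80 check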

Does it make sense to balance based on this header?!
What's the issue you want to solve?

What do you mean with "stick-tables with this acl option"?

> Please keep met CCed for responses since I’m not subscribed.
>  
> Thanks,
>
> Trent

-- 
Best Regards
Aleks




Re: [PATCH] Support proxies with identical names in Lua core.proxies

2017-07-25 Thread Willy Tarreau
On Mon, Jul 24, 2017 at 02:38:41PM +0200, Willy Tarreau wrote:
> On Mon, Jul 24, 2017 at 02:04:16PM +0200, Thierry FOURNIER wrote:
> > On Thu, 20 Jul 2017 15:26:52 +0200
> > You will find in attachment a patch which adds the proxy name as a member
> > of the proxy object.
> > 
> > Willy, can you apply it ?
> 
> I'd like to but there's no attachment, so even trying hard I'm failing to :-)

Now merged, thanks!
Willy



RE: X-Forwarded-For Balancing

2017-07-25 Thread Trenton Dyck
Hi Alek,

I want to balance via round-robin, but I want stick-tables to use the 
X-Forwarded-For header instead of the source IP.  It makes sense in our use case 
because the vast majority of our clients are behind a NAT and share the same 
source IP, but the X-Forwarded-For header is unique to them.

Thanks,
Trent

-Original Message-
From: Aleksandar Lazic [mailto:al-hapr...@none.at] 
Sent: Tuesday, July 25, 2017 11:20 AM
To: Trenton Dyck
Cc: haproxy@formilux.org
Subject: Re: X-Forwarded-For Balancing

Hi Trenton,

Trenton Dyck wrote on 25.07.2017:

> Hi,
>  
> Is it possible to balance, via X-Forwarded-For header?  We have come 
> across an issue with sticky-sessions and server weight that I can't 
> seem to find the answer to online (Unbalanced traffic).  I think 
> stick-tables with this acl option  would be nice to have for a future version.

http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-balance

Something like this

balance hdr(X-Forwarded-For)

Does it make sense to balance based on this header?!
What's the issue you want to solve?

What do you mean with "stick-tables with this acl option"?

> Please keep met CCed for responses since I'm not subscribed.
>  
> Thanks,
>
> Trent

--
Best Regards
Aleks




Re: X-Forwarded-For Balancing

2017-07-25 Thread Andrew Smalley
Hi Trenton

I hope the example below will help you with X-Forwarded-For + stick table +
replication.

listen VIP_Name
    bind 192.168.100.50:65435 transparent
    mode http
    balance roundrobin
    option forwardfor if-none
    stick on hdr(X-Forwarded-For,-1)  # Note: ,-1 selects the last occurrence of the XFF header.
    stick on src
    stick-table type string len 64 size 10240k expire 30m peers loadbalancer_replication
    server backup 127.0.0.1:9081 backup non-stick
    option http-keep-alive
    timeout http-request 5s
    option redispatch
    option abortonclose
    maxconn 4
    server RIP_Name 192.168.100.200:80 weight 100 check inter 500 rise 1 fall 1 minconn 0 maxconn 0 on-marked-down shutdown-sessions
    server RIP_Name-1 192.168.100.255:80 weight 100 check inter 500 rise 1 fall 1 minconn 0 maxconn 0 on-marked-down shutdown-sessions
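
If it helps while testing, the resulting stick-table can be inspected
live over the stats socket (the socket path below is a placeholder):

    echo "show table VIP_Name" | socat stdio /var/run/haproxy.sock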


Andruw Smalley

Loadbalancer.org Ltd.
www.loadbalancer.org

+1 888 867 9504 / +44 (0)330 380 1064
asmal...@loadbalancer.org

On 25 July 2017 at 17:36, Trenton Dyck  wrote:

> Hi Alek,
>
> I want to balance via round-robin, but I want stick-tables to use the
> X-Forwarded-For header instead of src ip.  It makes sense in our use case
> because a vast majority of our clients are behind a NAT and have the same
> source IP, but the X-Forwarded-For header is unique to them.
>
> Thanks,
> Trent
>
> -Original Message-
> From: Aleksandar Lazic [mailto:al-hapr...@none.at]
> Sent: Tuesday, July 25, 2017 11:20 AM
> To: Trenton Dyck
> Cc: haproxy@formilux.org
> Subject: Re: X-Forwarded-For Balancing
>
> Hi Trenton,
>
> Trenton Dyck wrote on 25.07.2017:
>
> > Hi,
> >
> > Is it possible to balance, via X-Forwarded-For header?  We have come
> > across an issue with sticky-sessions and server weight that I can't
> > seem to find the answer to online (Unbalanced traffic).  I think
> > stick-tables with this acl option  would be nice to have for a future
> version.
>
> http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-balance
>
> Something like this
>
> balance hdr(X-Forwarded-For)
>
> Does it make sense to balance based on this header?!
> What's the issue you want to solve?
>
> What do you mean with "stick-tables with this acl option"?
>
> > Please keep met CCed for responses since I'm not subscribed.
> >
> > Thanks,
> >
> > Trent
>
> --
> Best Regards
> Aleks
>
>
>


Re: X-Forwarded-For Balancing

2017-07-25 Thread Andrew Smalley
I just wanted to add a quick apology for the HTML footer.

Andruw Smalley

Loadbalancer.org Ltd.
www.loadbalancer.org

+1 888 867 9504 / +44 (0)330 380 1064
asmal...@loadbalancer.org

On 25 July 2017 at 17:54, Andrew Smalley  wrote:

> Hi Trenton
>
> I hope the below example will help you with X-Forward-For + Stick table +
> replication
>
> listen VIP_Name
> bind 192.168.100.50:65435 transparent
> mode http
> balance roundrobin
> option forwardfor if-none
> stick on hdr(X-Forwarded-For,-1)  # Note: ,-1 selects the last 
> occurrence of the XFF header.
> stick on src
> stick-table type string len 64 size 10240k expire 30m peers 
> loadbalancer_replication
> server backup 127.0.0.1:9081 backup  non-stick
> option http-keep-alive
> timeout http-request 5s
> option redispatch
> option abortonclose
> maxconn 4
> server RIP_Name 192.168.100.200:80  weight 100  check  inter 500  rise 1  
> fall 1  minconn 0  maxconn 0  on-marked-down shutdown-sessions
> server RIP_Name-1 192.168.100.255:80  weight 100  check  inter 500  rise 
> 1  fall 1  minconn 0  maxconn 0  on-marked-down shutdown-sessions
>
>
> Andruw Smalley
>
> Loadbalancer.org Ltd.
> www.loadbalancer.org
>
> +1 888 867 9504 / +44 (0)330 380 1064
> asmal...@loadbalancer.org
>
> On 25 July 2017 at 17:36, Trenton Dyck 
> wrote:
>
>> Hi Alek,
>>
>> I want to balance via round-robin, but I want stick-tables to use the
>> X-Forwarded-For header instead of src ip.  It makes sense in our use case
>> because a vast majority of our clients are behind a NAT and have the same
>> source IP, but the X-Forwarded-For header is unique to them.
>>
>> Thanks,
>> Trent
>>
>> -Original Message-
>> From: Aleksandar Lazic [mailto:al-hapr...@none.at]
>> Sent: Tuesday, July 25, 2017 11:20 AM
>> To: Trenton Dyck
>> Cc: haproxy@formilux.org
>> Subject: Re: X-Forwarded-For Balancing
>>
>> Hi Trenton,
>>
>> Trenton Dyck wrote on 25.07.2017:
>>
>> > Hi,
>> >
>> > Is it possible to balance, via X-Forwarded-For header?  We have come
>> > across an issue with sticky-sessions and server weight that I can't
>> > seem to find the answer to online (Unbalanced traffic).  I think
>> > stick-tables with this acl option  would be nice to have for a future
>> version.
>>
>> http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-balance
>>
>> Something like this
>>
>> balance hdr(X-Forwarded-For)
>>
>> Does it make sense to balance based on this header?!
>> What's the issue you want to solve?
>>
>> What do you mean with "stick-tables with this acl option"?
>>
>> > Please keep met CCed for responses since I'm not subscribed.
>> >
>> > Thanks,
>> >
>> > Trent
>>
>> --
>> Best Regards
>> Aleks
>>
>>
>>
>


Re: Passing SNI value ( ssl_fc_sni ) to backend's verifyhost.

2017-07-25 Thread Kevin McArthur

Hi Willy,

I can't replicate your results here.

I cloned from git and built the package with the debian/ubuntu build 
scripts from https://launchpad.net/~vbernat/+archive/ubuntu/haproxy-1.7 
... updating the changelog to add a 1.8-dev2 version and calling 
./debian/rules binary to build the package.


The git log shows:

   commit 2ab88675ecbf960a6f33ffe9c6a27f264150b201
   Author: Willy Tarreau 
   Date:   Wed Jul 5 18:23:03 2017 +0200

MINOR: ssl: compare server certificate names to the SNI on
   outgoing connections


So I'm sure it's in there unless a ./debian/rules binary build is 
breaking something.


This is my config:

haproxy-min-sni.cfg

global
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS

    ssl-default-bind-options no-sslv3

defaults
    mode http
    option httplog
    option dontlognull
    option forwardfor
    option http-server-close
    option log-health-checks
    timeout connect 5000
    timeout client  5
    timeout server  5

frontend www-https
    bind :::443 v4v6 ssl crt /etc/haproxy/certs/www.example.ca.pem crt /etc/haproxy/certs/
    reqadd X-Forwarded-Proto:\ https
    use_backend www-backend-https

backend www-backend-https
    http-response set-header X-Server %s
    balance roundrobin
    server app2 10.10.0.5:443 ssl verify required sni ssl_fc_sni ca-file /etc/ssl/certs/ca-certificates.crt check check-ssl


--

/usr/sbin/haproxy -d -f haproxy-min-sni.cfg

--

Loading ssltest-broken.example.ca (which the backend server has no cert 
for, and so serves from the default TLS vhost, app2.example.ca in this 
case)... This shows a secure page in the browser; however, the 
connection to the backend cannot be secure.


[WARNING] 205/165327 (16816) : Health check for server 
www-backend-https/app2 succeeded, reason: Layer6 check passed, check 
duration: 5ms, status: 3/3 UP.
:www-https.accept(0004)=0007 from [::::36565] 
ALPN=
0001:www-https.accept(0004)=0006 from [::::45955] 
ALPN=
0002:www-https.accept(0004)=0005 from [::::44474] 
ALPN=

:www-https.clireq[0007:]: GET / HTTP/1.1
:www-https.clihdr[0007:]: Host: ssltest-broken.example.ca
:www-https.clihdr[0007:]: Connection: keep-alive
:www-https.clihdr[0007:]: Cache-Control: max-age=0
:www-https.clihdr[0007:]: Upgrade-Insecure-Requests: 1
:www-https.clihdr[0007:]: User-Agent: Mozilla/5.0 
(Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like 
Gecko) Chrome/59.0.3071.115 Safari/537.36
:www-https.clihdr[0007:]: Accept: 
text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8

:www-https.clihdr[0007:]: Accept-Encoding: gzip, deflate, br
:www-https.clihdr[0007:]: Accept-Language: en-US,en;q=0.8
:www-backend-https.srvrep[0007:0008]: HTTP/1.1 200 OK
:www-backend-https.srvhdr[0007:0008]: Date: Tue, 25 Jul 2017 
16:49:19 GMT

:www-backend-https.srvhdr[0007:0008]: Server: Apache
:www-backend-https.srvhdr[0007:0008]: Vary: Accept-Encoding
:www-backend-https.srvhdr[0007:0008]: Content-Encoding: gzip
:www-backend-https.srvhdr[0007:0008]: Content-Length: 515
:www-backend-https.srvhdr[0007:0008]: Connection: close
:www-backend-https.srvhdr[0007:0008]: Content-Type: text/html; 
charset=UTF-8

:www-backend-https.srvcls[0007:0008]


Loading ssltest.example.ca
[WARNING] 205/165327 (16816) : Health check for server 
www-backend-https/app2 succeeded, reason: Layer6 check passed, check 
duration: 5ms, status: 3/3 UP.
:www-https.accept(0004)=0005 from [::::45095] 
ALPN=
0001:www-https.accept(0004)=0006 from [::::41897] 
ALPN=
0002:www-https.accept(0004)=0007 from [::::37526] 
ALPN=

:www-https.clireq[0005:]: GET / HTTP/1.1
:www-https.clihdr[0005:]: Host: ssltest.example.ca
:www-https.clihdr[0005:]: Connection: keep-alive
:www-https.clihdr[0005:]: Upgrade-Insecure-Requests: 1
:www-https.clihdr[0005:]: User-Agent: Mozilla/5.0 
(Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like 
Gecko) Chrome/59.0.3071.115 Safari/537.36
:www-https.clihdr[0005:]: Accept: 
text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8

:www-https.clihdr[0005:]: Accept-Encoding: gzip, deflate, br
:www-https.clihdr[0005:]: Accept-Language: en-US,en;q=0.8
:www-backend-https.srvrep[0005:0008]: HTTP/1.1 200 OK
:www-backend-https.srvhdr[0005:0008]: Date: Tue, 25 Jul 2017 
16:53:33 GMT

:www-backend-https.srvhdr[0005:0008]: Server: Apache
:www-backend-https.srvhdr[0005:0008]: V

Re: HAProxy Timeout Oddity WebKit XHR Replay

2017-07-25 Thread Liam Middlebrook
Responses inline.

On 07/25/2017 02:23 AM, Aleksandar Lazic wrote:
> Hi Liam,
> 
> Liam Middlebrook wrote on 25.07.2017:
> 
>> Hi Aleksandar,
> 
>> Responses inline.
> 
>> On 07/24/2017 11:57 PM, Aleksandar Lazic wrote:
>>> Hi Liam,
>>>
>>> Liam Middlebrook wrote on 24.07.2017:
>>>
 Hi,
>>>
 I'm currently running HAProxy within an Openshift Origin cluster. Until
 a recent update of Openshift I did not experience issues with connection
 timeouts, the connections would last up until the specified timeout as
 defined by the application.

 After an update to Openshift I changed HAProxy settings around to give a
 global 600s timeout for client and server. However when I make a form
 upload request the connection is killed after 30 seconds. When I signal
 an XHR Replay in my network inspector the connection lasts longer than
 the 30 seconds and is able to successfully upload the file.
>>>
>>> This smells like this timeout.
>>>
>>> ###
>>> ROUTER_DEFAULT_SERVER_TIMEOUT 30s
>>> Length of time within which a server has to acknowledge or send data. 
>>> (TimeUnits)
>>> ###
>>>
>>> https://docs.openshift.org/latest/architecture/core_concepts/routes.html#env-variables
>>>
>>> You can change it via.
>>>
>>> I assume here that you have the openshift router in the default 
>>> namespace and the router is deployed as "router".
>>>
>>> Too much routers here ;-)
>>>
>>> oc env -n default dc/router ROUTER_DEFAULT_SERVER_TIMEOUT=1h
> 
>> I wish this were the case. I've changed the router's deployment config
>> quite a bit. At first extending it to 5m, then I grew suspicious that
>> HAProxy settings were even changing and set it to a lower 15s.
> 
> uff that's low, imho.

Yeah, I just set it that low as a proof of concept for my suspicion
that the timeout values weren't changing the upload behavior.
> 
>> I found that when adjusting this environment variable the timeout limit
>> does not change. This is especially odd since the generated
>> haproxy.config file appears to have the proper changed values.
>>
>> sh-4.2$ grep -i timeout haproxy.config  -n |head -n 10
>> 11:  stats timeout 2m
>> 42:  timeout connect 5s
>> 45:  timeout client 15s
>> 48:  timeout server 15s
>> 51:  timeout http-request 10s
>> 54:  # Long timeout for WebSocket connections.
>> 56:  timeout tunnel 1h
>> 218:  timeout check 5000ms
>> 259:  timeout check 5000ms
>> 300:  timeout check 5000ms
>>
>> All the timeout values after line 300 are the same as lines 218, 259,
>> and 300.
> 
> That means the 'oc env ...' works, good.
> 
>> With this config I am still receiving connection timeouts at 30s.
> 
>> I'm not sure if this is important to note, but the requests that are
>> timing out are HTTP file uploads.
> 
> Yep I have seen this line in one of your previous mail.
> 
>> Jul 24 23:51:08 proton.csh.rit.edu haproxy[127]: 67.188.94.238:43996
>> [24/Jul/2017:23:50:38.543] fe_sni~
>> be_edge_http_gallery_gallery/5792c687271726c3c4b5d54ae219aaa2
>> 85/0/1/-1/29913 -1 0 - - CHVN 1/0/0/0/0 0/0 "POST /upload HTTP/1.1"
> 
> ###
> CH   The client aborted while waiting for the server to start responding.
>   It might be the server taking too long to respond or the client
>   clicking the 'Stop' button too fast.
> 
> VN   A cookie was provided by the client, none was inserted in the
>   response. This happens for most responses for which the client has
>   already got a cookie.
> ###
> 
> The timing looks also odd.
> http://cbonte.github.io/haproxy-dconv/1.5/configuration.html#8.4
> 
> What's in your gallery log?
> Does ANY Upload works?
> 

Uploads that take less than 30 seconds appear to work fine. Uploads that
are canceled due to the timeout do not appear in the application logs at
all. (Normally the logs are written at the completion of the request.)

For the record this is a Flask application running via gunicorn. I have
the gunicorn timeout set to 600s.
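
To rule out a client-side abort (the CH flag above), the slow upload can
also be replayed outside WebKit with something like the following; the
host, path and file are placeholders:

  curl -v --max-time 900 -F "file=@big-upload.bin" https://gallery.example.org/upload
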
 I asked in irc with no luck. Any ideas why this may be happening>
>>> Do you mean the #openshift-dev channel on Freenode?
>>>
>> I had asked in the #haproxy channel a few weeks ago and have just
>> recently found time to re-explore this issue.
> 
> Okay I don't know that this channel exists 8-O, every day is a learning 
> day ;-)
> 
 Thanks,

 Liam Middlebrook (loothelion)
>>>
>> Thanks,
>> Liam Middlebrook (loothelion)
> 



Re: Passing SNI value ( ssl_fc_sni ) to backend's verifyhost.

2017-07-25 Thread Willy Tarreau
On Tue, Jul 25, 2017 at 10:37:10AM -0700, Kevin McArthur wrote:
> Hi Willy,
> 
> I cant replicate your results here
> 
> I cloned from git and built the package with the debian/ubuntu build scripts
> from https://launchpad.net/~vbernat/+archive/ubuntu/haproxy-1.7 ... updating
> the changelog to add a 1.8-dev2 version and calling ./debian/rules binary to
> build the package.
> 
> The git log shows:
> 
>commit 2ab88675ecbf960a6f33ffe9c6a27f264150b201
>Author: Willy Tarreau 
>Date:   Wed Jul 5 18:23:03 2017 +0200
> 
> MINOR: ssl: compare server certificate names to the SNI on
>outgoing connections
> 
> 
> So I'm sure its in there unless a  ./debian/rules binary build is breaking
> something.

OK that's already a good thing.

> this is my config.
(...)
> So as you can see, ssltest-broken is hitting the app2 default vhost/cert.
> The backend server has no knowledge of the ssltest-broken certificate. The
> verifyhost is /not/ checked between the backend and the haproxy. Further, I
> think the health check should probably fail too because its trying to load
> via the ip-as-hostname and the cert im using doesnt have the IP in it. So
> that should fail hostname check too.

No, the health check doesn't present any SNI since there's no ssl_fc_sni
for it. But anyway I don't understand the difference in the setup.

> I'm confident that the verifyhost is not being done...  I suspect your test
> case is failing because the dom4 is totally unknown to the haproxy, whereas
> in my case, the haproxy has a cert for ssltest-broken but the backend does
> not.

No, it's irrelevant, we're only relying on SNI here, the backend has no
info on the knowledge of what lies on the front or not. I'll try with the
same certs on the front for completeness. But in my case I clearly see
verifyhost fail on a mismatched name between the server and the client,
so I'll have to investigate further.

Willy



Re: Passing SNI value ( ssl_fc_sni ) to backend's verifyhost.

2017-07-25 Thread Kevin McArthur



On 2017-07-25 10:51 AM, Willy Tarreau wrote:

On Tue, Jul 25, 2017 at 10:37:10AM -0700, Kevin McArthur wrote:

Hi Willy,

I cant replicate your results here

I cloned from git and built the package with the debian/ubuntu build scripts
from https://launchpad.net/~vbernat/+archive/ubuntu/haproxy-1.7 ... updating
the changelog to add a 1.8-dev2 version and calling ./debian/rules binary to
build the package.

The git log shows:

commit 2ab88675ecbf960a6f33ffe9c6a27f264150b201
Author: Willy Tarreau 
Date:   Wed Jul 5 18:23:03 2017 +0200

 MINOR: ssl: compare server certificate names to the SNI on
outgoing connections


So I'm sure its in there unless a  ./debian/rules binary build is breaking
something.

OK that's already a good thing.


this is my config.

(...)

So as you can see, ssltest-broken is hitting the app2 default vhost/cert.
The backend server has no knowledge of the ssltest-broken certificate. The
verifyhost is /not/ checked between the backend and the haproxy. Further, I
think the health check should probably fail too because its trying to load
via the ip-as-hostname and the cert im using doesnt have the IP in it. So
that should fail hostname check too.

> No, the health check doesn't present any SNI since there's no ssl_fc_sni
> for it. But anyway I don't understand the difference in the setup.

If the health check is connecting but not presenting any SNI, it would,
in my setup, be getting back a cert for app2.example.ca. That obviously
won't hostname-match the 10.10.0.5 IP in the server line, and thus the
health check connection should fail. If I add an explicit verifyhost,
the health checks do indeed fail on anything but a static
app2.example.ca string.
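
As an aside, if this build supports the check-sni server keyword, the
health check could be given an explicit SNI; a hedged sketch reusing the
names from my earlier config:

    server app2 10.10.0.5:443 ssl verify required sni ssl_fc_sni check check-ssl check-sni app2.example.ca ca-file /etc/ssl/certs/ca-certificates.crt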



I'm confident that the verifyhost is not being done...  I suspect your test
case is failing because the dom4 is totally unknown to the haproxy, whereas
in my case, the haproxy has a cert for ssltest-broken but the backend does
not.

> No, it's irrelevant, we're only relying on SNI here, the backend has no
> info on the knowledge of what lies on the front or not. I'll try with the
> same certs on the front for completeness. But in my case I clearly see
> verifyhost fail on a mismatched name between the server and the client,
> so I'll have to investigate further.

In my test case the backend is presenting a valid cert with
SAN=app2.example.ca to the frontend when it is asked for an SNI name it
does not know. The frontend haproxy, having the cert, serves the correct
and valid cert (ssltest-broken.example.ca) to the browser. The content
of the served page is the app2 default page. Perhaps where your test
might be going sideways is the lack of an otherwise valid but
incorrectly-hostnamed cert coming from the backend? I.e., the cert the
backend serves for dom4 needs to pass the verify-peer check but fail the
verify-host check to replicate the condition.




> Willy

--
Kevin