Re: haproxy doesn't reuse server connections

2018-07-27 Thread Baptiste
In other words, you may want to enable "option prefer-last-server". But in
that case, you won't load-balance anymore (all requests will go to the
same server).
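
A minimal sketch of what that could look like, based on Alessandro's
backend (untested):

backend servers
        option prefer-last-server
        http-reuse always
        server server1 10.220.178.194:80 maxconn 32
        server server2 10.220.232.132:80 maxconn 32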

Baptiste

On Fri, Jul 27, 2018 at 7:09 PM, Cyril Bonté  wrote:

> Hi Alessandro,
>
>
> Le 27/07/2018 à 17:50, Alessandro Gherardi a écrit :
>
>> Hi,
>> I'm running haproxy 1.8.12 on Ubuntu 14.04. For some reason, haproxy does
>> not reuse connections to backend servers. For testing purposes, I'm sending
>> the same HTTP request multiple times over the same TCP connection.
>>
>> The servers do not respond with Connection: close and do not close the
>> connections. The Wireshark capture shows haproxy RST-ing the connections a
>> few hundred milliseconds after the servers reply. The servers send no FIN
>> nor RST to haproxy.
>>
>> I tried various settings (http-reuse always, option http-keep-alive, both
>> at global and backend level), no luck.
>>
>> The problem goes away if I have a single backend server, but obviously
>> that's not a viable option in real life.
>>
>> Here's my haproxy.cfg:
>>
>> global
>>  #daemon
>>  maxconn 256
>>
>> defaults
>>  mode http
>>  timeout connect 5000ms
>>  timeout client 5ms
>>  timeout server 5ms
>>
>>  option http-keep-alive
>>  timeout http-keep-alive 30s
>>  http-reuse always
>>
>> frontend http-in
>>  bind 10.220.178.236:80
>>  default_backend servers
>>
>> backend servers
>>  server server1 10.220.178.194:80 maxconn 32
>>  server server2 10.220.232.132:80 maxconn 32
>>
>> Any suggestions?
>>
>
> Well, you've not configured any persistence option or load-balancing
> algorithm, so the default is to round-robin between the 2 backend
> servers. If there's no traffic, it's very likely that there's no connection
> to reuse when switching to the second server for the second request.
>
>
>> Thanks in advance,
>> Alessandro
>>
>
>
> --
> Cyril Bonté
>
>


Re: SNI matching issue when hostname ends with trailing dot

2018-07-27 Thread Sander Klein
Hi Warren,

As far as I know this is by design. If you do not want this behavior, you need
to use strict-sni in your bind statement.
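
For example, based on Warren's simplified bind line (a sketch, untested):

bind 127.0.0.1:443 ssl crt secure.example.com.pem crt ./ strict-sni

With strict-sni set, a ClientHello whose SNI matches no loaded certificate
is rejected during the TLS handshake instead of being answered with the
default certificate.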

Regards

Sander


> On 27 Jul 2018, at 12:47, Warren Rohner  wrote:
> 
> Hi HAProxy list
> 
> Just thought I'd resend this report from May in case it was missed. If it's a 
> non-issue, I apologise.
> 
> Regards
> Warren
> 
> At 15:47 2018/05/22, Warren Rohner wrote:
>> Hi HAProxy list
>> 
>> We use an HAProxy 1.7.11 instance to terminate SSL and load balance 100+ 
>> websites.
>> 
>> The simplified bind line below specifies a default cert (i.e. 
>> secure.example.com.pem) as required in this HAProxy version, and a directory 
>> path to all other certs (i.e. ./):
>> 
>> bind 127.0.0.1:443 ssl crt secure.example.com.pem crt ./
>> 
>> This configuration works as expected. HAProxy finds all certs and the 
>> correct one is used when TLS SNI extension is provided. For example, 
>> visiting https://secure.example.com/ and https://www.example.com/ (with SNI 
>> capable web browser) both work perfectly.
>> 
>> The other day I inadvertently appended a trailing dot to the hostname for 
>> one of our sites (e.g. https://www.example.com.), and when I did this 
>> HAProxy returned the default cert to the browser rather than the expected 
>> cert for that particular site. I'm not certain, but could this be a possible 
>> bug in the HAProxy code that matches servername provided by browser's TLS 
>> SNI extension against all loaded certificates?
>> 
>> As a further example of the problem, I note that the issue can be reproduced on
>> the haproxy.org website as follows using the OpenSSL client:
>> 
>> Works as expected, HAProxy returns correct cert for haproxy.org:
>> openssl s_client -connect www.haproxy.org:443 -servername www.haproxy.org
>> 
>> With trailing dot on servername, HAProxy returns what I think is the default
>> cert (an invalid StartCom-issued cert for formilux.org):
>> openssl s_client -connect www.haproxy.org:443 -servername www.haproxy.org.
>> 
>> Please let me know if I should provide any further information.
>> 
>> Regards
>> Warren


Re: haproxy doesn't reuse server connections

2018-07-27 Thread Cyril Bonté

Hi Alessandro,

Le 27/07/2018 à 17:50, Alessandro Gherardi a écrit :

> Hi,
> I'm running haproxy 1.8.12 on Ubuntu 14.04. For some reason, haproxy
> does not reuse connections to backend servers. For testing purposes, I'm
> sending the same HTTP request multiple times over the same TCP connection.
>
> The servers do not respond with Connection: close and do not close the
> connections. The Wireshark capture shows haproxy RST-ing the
> connections a few hundred milliseconds after the servers reply. The
> servers send no FIN nor RST to haproxy.
>
> I tried various settings (http-reuse always, option http-keep-alive,
> both at global and backend level), no luck.
>
> The problem goes away if I have a single backend server, but obviously
> that's not a viable option in real life.
>
> Here's my haproxy.cfg:
>
> global
>          #daemon
>          maxconn 256
>
> defaults
>          mode http
>          timeout connect 5000ms
>          timeout client 5ms
>          timeout server 5ms
>
>          option http-keep-alive
>          timeout http-keep-alive 30s
>          http-reuse always
>
> frontend http-in
>          bind 10.220.178.236:80
>          default_backend servers
>
> backend servers
>          server server1 10.220.178.194:80 maxconn 32
>          server server2 10.220.232.132:80 maxconn 32
>
> Any suggestions?


Well, you've not configured any persistence option or load-balancing
algorithm, so the default is to round-robin between the 2 backend
servers. If there's no traffic, it's very likely that there's no
connection to reuse when switching to the second server for the second
request.
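
For example (a sketch only, to demonstrate reuse rather than a production
recommendation): a deterministic algorithm such as "balance source" keeps
consecutive requests from the same client on the same server, so an idle
connection is available to be reused:

backend servers
        balance source
        http-reuse always
        server server1 10.220.178.194:80 maxconn 32
        server server2 10.220.232.132:80 maxconn 32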




> Thanks in advance,
> Alessandro



--
Cyril Bonté



[PATCH] MEDIUM: proxy_protocol: Convert IPs to v6 when protocols are mixed

2018-07-27 Thread Tim Duesterhus
Willy,

attached is an updated patch that:

1. Only converts the addresses to IPv6 if at least one of them is IPv6.
   But it does not convert them back to IPv4 if both of them can be
   represented as IPv4.
2. Does not copy the whole `struct connection`, but performs the
   conversion inside `make_proxy_line_v?`.

I'm not sure whether I like this better than my first attempt at it. Proxy
protocol v2 was rather easy to modify, but proxy protocol v1 required a
complete restructuring to avoid creating a new case for each of the 4
address combinations (44, 46, 64, 66).

I performed a manual test using both send-proxy and send-proxy-v2 inside
valgrind. It sent the expected values. Valgrind did not report any memory
corruption or memory leaks.

So I believe this patch is good, but you may want to double-check my logic,
especially inside `make_proxy_line_v1`.
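
For illustration (not part of the patch): with an IPv4 source and an IPv6
destination, the patched v1 encoder should emit a line like the following,
with made-up addresses and ports:

PROXY TCP6 ::ffff:192.0.2.1 2001:db8::1 42000 443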

Apply with `git am --scissors` to automatically cut the commit message.
-- >8 --

http-request set-src possibly creates a situation where src and dst
are from different address families. Convert both addresses to IPv6
to avoid a PROXY UNKNOWN.

This patch should be backported to haproxy 1.8.
---
 src/connection.c | 173 +++
 1 file changed, 98 insertions(+), 75 deletions(-)

diff --git a/src/connection.c b/src/connection.c
index 4b1e066e..8826706f 100644
--- a/src/connection.c
+++ b/src/connection.c
@@ -964,73 +964,71 @@ int make_proxy_line(char *buf, int buf_len, struct server *srv, struct connectio
 int make_proxy_line_v1(char *buf, int buf_len, struct sockaddr_storage *src, struct sockaddr_storage *dst)
 {
 	int ret = 0;
-
-	if (src && dst && src->ss_family == dst->ss_family && src->ss_family == AF_INET) {
-		ret = snprintf(buf + ret, buf_len - ret, "PROXY TCP4 ");
-		if (ret >= buf_len)
-			return 0;
-
-		/* IPv4 src */
-		if (!inet_ntop(src->ss_family, &((struct sockaddr_in *)src)->sin_addr, buf + ret, buf_len - ret))
-			return 0;
-
-		ret += strlen(buf + ret);
+	char * protocol;
+	char src_str[MAX(INET_ADDRSTRLEN, INET6_ADDRSTRLEN)];
+	char dst_str[MAX(INET_ADDRSTRLEN, INET6_ADDRSTRLEN)];
+	in_port_t src_port;
+	in_port_t dst_port;
+
+	if (   !src
+	    || !dst
+	    || (src->ss_family != AF_INET && src->ss_family != AF_INET6)
+	    || (dst->ss_family != AF_INET && dst->ss_family != AF_INET6)) {
+		/* unknown family combination */
+		ret = snprintf(buf, buf_len, "PROXY UNKNOWN\r\n");
 		if (ret >= buf_len)
 			return 0;
 
-		buf[ret++] = ' ';
-
-		/* IPv4 dst */
-		if (!inet_ntop(dst->ss_family, &((struct sockaddr_in *)dst)->sin_addr, buf + ret, buf_len - ret))
-			return 0;
+		return ret;
+	}
 
-		ret += strlen(buf + ret);
-		if (ret >= buf_len)
+	/* IPv4 for both src and dst */
+	if (src->ss_family == AF_INET && dst->ss_family == AF_INET) {
+		protocol = "TCP4";
+		if (!inet_ntop(AF_INET, &((struct sockaddr_in *)src)->sin_addr, src_str, sizeof(src_str)))
 			return 0;
-
-		/* source and destination ports */
-		ret += snprintf(buf + ret, buf_len - ret, " %u %u\r\n",
-				ntohs(((struct sockaddr_in *)src)->sin_port),
-				ntohs(((struct sockaddr_in *)dst)->sin_port));
-		if (ret >= buf_len)
+		src_port = ((struct sockaddr_in *)src)->sin_port;
+		if (!inet_ntop(AF_INET, &((struct sockaddr_in *)dst)->sin_addr, dst_str, sizeof(dst_str)))
 			return 0;
+		dst_port = ((struct sockaddr_in *)dst)->sin_port;
 	}
-	else if (src && dst && src->ss_family == dst->ss_family && src->ss_family == AF_INET6) {
-		ret = snprintf(buf + ret, buf_len - ret, "PROXY TCP6 ");
-		if (ret >= buf_len)
-			return 0;
-
-		/* IPv6 src */
-		if (!inet_ntop(src->ss_family, &((struct sockaddr_in6 *)src)->sin6_addr, buf + ret, buf_len - ret))
-			return 0;
+	/* IPv6 for at least one of src and dst */
+	else {
+		struct in6_addr tmp;
 
-		ret += strlen(buf + ret);
-		if (ret >= buf_len)
-			return 0;
+		protocol = "TCP6";
 
-		buf[ret++] = ' ';
+		if (src->ss_family == AF_INET) {
+			/* Convert src to IPv6 */
+			v4tov6(&tmp, &((struct sockaddr_in *)src)->sin_addr);
+			src_port = ((struct sockaddr_in *)src)->sin_port;
+		}
+		else {
+			tmp = ((struct sockaddr_in6

haproxy doesn't reuse server connections

2018-07-27 Thread Alessandro Gherardi
Hi,
I'm running haproxy 1.8.12 on Ubuntu 14.04. For some reason, haproxy does
not reuse connections to backend servers. For testing purposes, I'm sending
the same HTTP request multiple times over the same TCP connection.

The servers do not respond with Connection: close and do not close the
connections. The Wireshark capture shows haproxy RST-ing the connections a
few hundred milliseconds after the servers reply. The servers send no FIN
nor RST to haproxy.

I tried various settings (http-reuse always, option http-keep-alive, both at
global and backend level), no luck.

The problem goes away if I have a single backend server, but obviously
that's not a viable option in real life.

Here's my haproxy.cfg:

global
        #daemon
        maxconn 256

defaults
        mode http
        timeout connect 5000ms
        timeout client 5ms
        timeout server 5ms

        option http-keep-alive
        timeout http-keep-alive 30s
        http-reuse always

frontend http-in
        bind 10.220.178.236:80
        default_backend servers

backend servers
        server server1 10.220.178.194:80 maxconn 32
        server server2 10.220.232.132:80 maxconn 32

Any suggestions?

Thanks in advance,
Alessandro

Possibility to modify PROXY protocol header

2018-07-27 Thread bjun...@gmail.com
Hi,

is there any possibility to modify the client IP in the PROXY protocol
header before it is sent to a backend server?

My use case is a local integration/functional testing suite (multiple local
docker containers for testing the whole stack - haproxy, cache layer,
webserver, etc.).

I would like to test functionality that depends on / needs specific
IP ranges or IP addresses.
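
One hedged possibility (not verified here) is to rewrite the source address
before the PROXY header is built, using "http-request set-src"; the sketch
below assumes the test suite injects the fake client IP in a request header
(the name X-Test-Client-IP is made up):

frontend test-in
        bind :8080
        # the rewritten source address is what the send-proxy
        # directive later encodes into the PROXY protocol header
        http-request set-src req.hdr(X-Test-Client-IP)
        default_backend servers

backend servers
        server app1 127.0.0.1:8081 send-proxy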


Best Regards / Mit freundlichen Grüßen

Bjoern


Performance of using lua calls for map manipulation on every request

2018-07-27 Thread Sachin Shetty
Hi,

We are doing about 10K requests/minute on a single haproxy server, and we
have enough CPU and memory. Right now each request looks up a map for
backend info. It works well.

Now we need to build some expiry logic around the map, e.g. ignore some
map entries after a certain time. I could do this in Lua, but it would mean
that every request makes a Lua call to look up a map value and make a
decision.

My lua method looks like this:

function get_proxy_from_map(txn)
    local host = txn.http:req_get_headers()["host"][0]
    local value = proxy_map_v2:lookup(host)
    if value then
        local values = split(value, ",")
        local proxy = values[1]
        local time = values[2]
        if os.time() > tonumber(time) then
            core.Alert("Expired: returning nil: " .. host)
            return
        else
            return proxy
        end
    end
    return
end
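
The split helper isn't part of the HAProxy Lua API and isn't shown above; a
minimal version along these lines is assumed:

function split(s, sep)
    -- naive split; assumes sep is not a Lua pattern magic character
    local fields = {}
    for field in string.gmatch(s, "([^" .. sep .. "]+)") do
        table.insert(fields, field)
    end
    return fields
end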


Any suggestions on how this would impact performance? Our tests look OK.

Thanks
Sachin


Re: SNI matching issue when hostname ends with trailing dot

2018-07-27 Thread Warren Rohner

Hi HAProxy list

Just thought I'd resend this report from May in case it was missed. 
If it's a non-issue, I apologise.


Regards
Warren

At 15:47 2018/05/22, Warren Rohner wrote:

> Hi HAProxy list
>
> We use an HAProxy 1.7.11 instance to terminate SSL and load balance
> 100+ websites.
>
> The simplified bind line below specifies a default cert (i.e.
> secure.example.com.pem) as required in this HAProxy version, and a
> directory path to all other certs (i.e. ./):
>
> bind 127.0.0.1:443 ssl crt secure.example.com.pem crt ./
>
> This configuration works as expected. HAProxy finds all certs and
> the correct one is used when TLS SNI extension is provided. For
> example, visiting https://secure.example.com/ and
> https://www.example.com/ (with SNI capable web browser) both work perfectly.
>
> The other day I inadvertently appended a trailing dot to the
> hostname for one of our sites (e.g. https://www.example.com.), and
> when I did this HAProxy returned the default cert to the browser
> rather than the expected cert for that particular site. I'm not
> certain, but could this be a possible bug in the HAProxy code that
> matches servername provided by browser's TLS SNI extension against
> all loaded certificates?
>
> As a further example of the problem, I note that the issue can be
> reproduced on the haproxy.org website as follows using the OpenSSL client:
>
> Works as expected, HAProxy returns correct cert for haproxy.org:
> openssl s_client -connect www.haproxy.org:443 -servername www.haproxy.org
>
> With trailing dot on servername, HAProxy returns what I think is the
> default cert (an invalid StartCom-issued cert for formilux.org):
> openssl s_client -connect www.haproxy.org:443 -servername www.haproxy.org.
>
> Please let me know if I should provide any further information.
>
> Regards
> Warren


Link Addition Request

2018-07-27 Thread Lisa James
Hey! I have a quick request for you.



I'm just reaching out because I came across your domain, where you have
mentioned a list of tools and domains that work on internet security and
privacy.


I must say you have done amazing work.


I was super impressed by it and wanted to reach out because the website I
work for vpnranks.com published a list of Best VPN for use. The website has
been working on providing solutions for internet security and safety
online.



If it was any good, might you consider including a link to it in your piece?



Our team has put a lot of time and effort into doing a complete test of the
VPN services listed in our guide that work to provide internet security and
safety online to the users, and I believe it will add value to users on
your website as well.



I'll let you be the judge though... Here's the link to the guide:
https://www.vpnranks.com/best-vpn/


Regards,
Lisa


Re: Connections stuck in CLOSE_WAIT state with h2

2018-07-27 Thread Willy Tarreau
On Fri, Jul 27, 2018 at 10:28:36AM +0200, Milan Petruzelka wrote:
> after 2 days I also have no blocked connections. There's no need to wait
> until Monday as I suggested yesterday.

Perfect, many thanks Milan.

Willy



Re: Connections stuck in CLOSE_WAIT state with h2

2018-07-27 Thread Milan Petruželka
On Fri, 27 Jul 2018 at 10:08, Willy Tarreau  wrote:

> Hi Olivier,
>
> On Fri, Jul 27, 2018 at 09:04:04AM +0200, Olivier Doucet wrote:
> > 24 hours later, still no issue to be reported. All sessions are expiring
> > just fine. I think you can merge :)
>
> Yes I think you're right, I'll do this, it will at least help all the users
> who don't want to patch their versions. We'll probably emit another 1.8
> soon.


Hi,

after 2 days I also have no blocked connections. There's no need to wait
until Monday as I suggested yesterday.

Milan


Re: Cannot unsubscribe

2018-07-27 Thread Willy Tarreau
Hi John,

On Fri, Jul 27, 2018 at 07:54:19AM +, John Lanigan wrote:
> Hi,
> 
> I would like to unsubscribe from this list but cannot; we have changed
> email domains, and while I can receive on the old one I cannot send from it.
> 
> 
> I tried mailing haproxy+h...@formilux.org
> but just got back an automated response that said "Hello,"
> 
> 
> Can one of the list owners assist please?

I guess you were subscribed with your @coresoftware address. At least
I hope it was you because I deleted this line ;-)

Cheers,
Willy



Re: Connections stuck in CLOSE_WAIT state with h2

2018-07-27 Thread Willy Tarreau
Hi Olivier,

On Fri, Jul 27, 2018 at 09:04:04AM +0200, Olivier Doucet wrote:
> 24 hours later, still no issue to be reported. All sessions are expiring
> just fine. I think you can merge :)

Yes I think you're right, I'll do this, it will at least help all the users
who don't want to patch their versions. We'll probably emit another 1.8 soon.

Thanks!
Willy



Cannot unsubscribe

2018-07-27 Thread John Lanigan
Hi,

I would like to unsubscribe from this list but cannot; we have changed
email domains, and while I can receive on the old one I cannot send from it.


I tried mailing haproxy+h...@formilux.org but 
just got back an automated response that said "Hello,"


Can one of the list owners assist please?

Kind regards,

John Lanigan.



Re: lua socket settimeout has no effect

2018-07-27 Thread Sachin Shetty
Thank you Cyril, your patch fixed the connect issue.

The read timeout still seems a bit weird though: at settimeout(1), the read
timeout kicks in at about 4 seconds, and at settimeout(2), at about 8
seconds.

Is that expected? I couldn't find a read timeout explicitly set anywhere in
the same source file.

Thanks
Sachin

On Fri, Jul 27, 2018 at 5:18 AM, Cyril Bonté  wrote:

> Hi,
>
> Le 26/07/2018 à 19:54, Sachin Shetty a écrit :
>
>> Hi,
>>
>> We are using an http-req Lua action to dynamically set some app-specific
>> metadata headers. The Lua handler connects to an upstream memcache-like
>> service over TCP to fetch additional metadata.
>>
>> Functionally everything works OK, but I am seeing that socket.settimeout
>> has no effect. Irrespective of what I set in settimeout, if the upstream
>> service is unreachable, connect always times out at 5 seconds, and read
>> timeout around 10 seconds. It seems like settimeout has no effect and it
>> always picks defaults of 5 seconds for connect timeout and 10 seconds for
>> read timeout.
>>
>
> For the connect timeout, it seems this is a hardcoded default value in
> src/hlua.c:
>   socket_proxy.timeout.connect = 5000; /* By default the timeout connection is 5s. */
>
> If it's possible, can you try the patch attached (for the 1.7.x branch)?
> But please don't use it in production yet ;-)
>
>
>> Haproxy conf call:
>>
>> http-request lua.get_proxy
>>
>> Lua code sample:
>>
>> function get_proxy(txn)
>>  local sock = core.tcp()
>>  sock:settimeout(2)
>>  status, error = sock:connect(gds_host, gds_port)
>>  if not status then
>>  core.Alert("1 Error in connecting:" .. key .. ":" .. error)
>>  return result, "Error: " .. error
>>  end
>>  sock:send(key .. "\r\n")
>>  
>>  
>>
>>
>> core.register_action("get_proxy", { "http-req" }, get_proxy)
>>
>> Haproxy version:
>>
>> HA-Proxy version 1.7.8 2017/07/07
>> Copyright 2000-2017 Willy Tarreau <wi...@haproxy.org>
>>
>>
>> Build options :
>>TARGET  = linux2628
>>CPU = generic
>>CC  = gcc
>>CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
>> -fwrapv -DTCP_USER_TIMEOUT=18
>>OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1
>> USE_LUA=1 USE_PCRE=1
>>
>> Default settings :
>>maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
>>
>> Encrypted password support via crypt(3): yes
>> Built with zlib version : 1.2.7
>> Running on zlib version : 1.2.7
>> Compression algorithms supported : identity("identity"),
>> deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
>> Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
>> Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
>> OpenSSL library supports TLS extensions : yes
>> OpenSSL library supports SNI : yes
>> OpenSSL library supports prefer-server-ciphers : yes
>> Built with PCRE version : 8.32 2012-11-30
>> Running on PCRE version : 8.32 2012-11-30
>> PCRE library supports JIT : no (USE_PCRE_JIT not set)
>> Built with Lua version : Lua 5.3.2
>> Built with transparent proxy support using: IP_TRANSPARENT
>> IPV6_TRANSPARENT IP_FREEBIND
>>
>> Available polling systems :
>>epoll : pref=300,  test result OK
>> poll : pref=200,  test result OK
>>   select : pref=150,  test result OK
>> Total: 3 (3 usable), will use epoll.
>>
>> Available filters :
>>  [COMP] compression
>>  [TRACE] trace
>>  [SPOE] spoe
>>
>>
>>
>> Thanks
>> Sachin
>>
>
>
> --
> Cyril Bonté
>


Re: [PATCH] MINOR: ssl: BoringSSL matches OpenSSL 1.1.0

2018-07-27 Thread Willy Tarreau
Hi Manu,

On Wed, Jul 25, 2018 at 10:34:46AM +0200, Emmanuel Hocdet wrote:
> It's OK because this function is inserted higher up in the patch.
> 
> As said, it's only a revert of the 019f9b10 patches for openssl-compat.h.
> From:
> # Functions introduced in OpenSSL 1.1.0 and not yet present in LibreSSL / 
> BoringSSL
> # Functions introduced in OpenSSL 1.1.0 and not yet present in LibreSSL
> To:
> # Functions introduced in OpenSSL 1.1.0 and not yet present in LibreSSL

OK thanks for the explanation, I've just merged your latest version.

Willy



Re: Connections stuck in CLOSE_WAIT state with h2

2018-07-27 Thread Olivier Doucet
Hello,


2018-07-26 11:09 GMT+02:00 Willy Tarreau :

> Hi Olivier,
>
> On Thu, Jul 26, 2018 at 10:53:33AM +0200, Olivier Doucet wrote:
> > Previous build:
> > https://tof.cx/images/2018/07/26/f31243bfede22e20a7a991ae6c39506d.png
> > (we can clearly see when reload happens :p)
> >
> > New build:
> > https://tof.cx/images/2018/07/26/e402d7fe15604d50418891071628019b.png
>
> Impressive!
>
> > Seems you found what was wrong :) Great work! Thank you to Milan too for
> > first raising the issue.
>
> Yeah, and to you as well for confirming, that's always pleasant!
>
> > I will keep this new binary for a few days, just to check that every
> > case is handled correctly.
>

24 hours later, still no issue to be reported. All sessions are expiring
just fine. I think you can merge :)

Olivier