Re: Odd behaviour with option forwardfor.

2017-07-24 Thread Aleksandar Lazic
Hi Willy Tarreau,

Willy Tarreau wrote on 24.07.2017:

> Hi Aleks,

> On Sun, Jul 23, 2017 at 09:50:41AM +0200, Aleksandar Lazic wrote:
>> >  Personally I use 2 rules similar to the following to append to   
>> > X-Forwarded-For:
>> >   
>> >    http-request set-header X-Forwarded-For %[req.fhdr(X-Forwarded-For)],\ %[src] if { req.fhdr(X-Forwarded-For) -m found }
>> >    http-request set-header X-Forwarded-For %[src] if !{ req.fhdr(X-Forwarded-For) -m found }
>> >   
>> >  -Patrick
>> 
>> But doesn't haproxy do this already?
>> 
>> http://git.haproxy.org/?p=haproxy-1.7.git;a=blob;f=src/proto_http.c;h=94c8d639f6f777241109f605e1e1742f9a39bf33;hb=HEAD#l4639

> It always adds a new header field. But this is strictly equivalent to
> adding a new value to an existing entry, it's just much less expensive
> (no need to scan the list to find one, nor to move bytes around to insert
> a new value).
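Concretely (an illustrative example with placeholder addresses, not from
the original exchange): a request arriving with

    X-Forwarded-For: 203.0.113.7

leaves haproxy carrying two fields:

    X-Forwarded-For: 203.0.113.7
    X-Forwarded-For: 198.51.100.2

which HTTP (RFC 7230) defines as equivalent to the single combined field:

    X-Forwarded-For: 203.0.113.7, 198.51.100.2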

Thanks Willy for the confirmation.

> Willy

-- 
Best Regards
Aleks




Re: [PATCH] Support proxies with identical names in Lua core.proxies

2017-07-24 Thread Thierry FOURNIER
On Thu, 20 Jul 2017 15:26:52 +0200
Adis Nezirovic  wrote:

> On 07/20/2017 02:55 PM, Willy Tarreau wrote:
> > So you can have :
> >   0 or 1 "listen"
> >   0 or 1 "frontend" + 0 or 1 "backend"
> > 
> > Just a few ideas come to my mind :
> >   - is it possible to store arrays into arrays ? I mean, could we have
> > for example core.proxies["foo"].side[FRONT|BACK] where each side is
> > set only when the two differ ?
> > 
> >   - or is it to return a list (1 or 2 elements) ? That would be even more
> > convenient to use. If you have "p = core.proxies[foo]" and can use p
> > then next(p), you're sure to always scan everything.
> > 
> >   - otherwise I like the principle of being able to force the type using
> > a prefix "@f:" or "@b:" in front of the name. But we must be careful
> > not to insert duplicates. So probably by default only "listen" instances
> > would be created without the "@" prefix. My initial idea was to be able
> > to always return the plain form and point to whichever exists, but for
> > enumeration it would require to keep only the "@" form, which might be
> > more complicated.
> 
> We currently use a variant of the second proposal: when working
> internally with backends/frontends with the same name, the type/side
> should still be indicated. A simple list/array might be the easiest
> thing to implement.


Ok. After brainstorming, I think it will be better to keep the current
behaviour to avoid breaking existing Lua implementations.

Adding other entries with the "@f:" and "@b:" prefixes to the same list
as the existing ones would disturb existing Lua code which browses the
lists. Returning a list (points 1 & 2) would also break it.

I think the most reliable way is to add other trees. We keep the
existing "proxies" tree, and we add two trees, "frontends" and
"backends", which contain respectively the list of frontends and the
list of backends.

Or another tree called "all_proxies" which contains names, each with a
list of the frontend/backend sharing that name.


> >> Also, apply the same logic to the 'Listener' and 'Server' classes.
> > 
> > These ones do not support duplicate names so we should not need to impact
> > them.
> 
> This is just for convenience and uniformity, Proxy.servers/listeners
> returns a table/hash of objects with names as keys, but for example when
> I want to pass such object to some other Lua function I have to manually
> copy the name (or wrap the object), since the object itself doesn't
> expose name info.


You're right, it will be better to add the name to the object. I will
do this.
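For instance, once the member exists (a sketch against the proposed
change; "mybackend" is a placeholder and the "name" field is what would
be added):

    -- pass a server object around without copying its key separately
    local function log_server(srv)
        core.Info("server " .. srv.name .. " at " .. srv:get_addr())
    end

    for _, srv in pairs(core.proxies["mybackend"].servers) do
        log_server(srv)  -- the name now travels with the object
    end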

Thierry



Re: [PATCH] Support proxies with identical names in Lua core.proxies

2017-07-24 Thread Willy Tarreau
Hi Thierry,

On Mon, Jul 24, 2017 at 01:30:23PM +0200, Thierry FOURNIER wrote:
> Ok. After brainstorming, I think it will be better to keep the current
> behaviour to avoid breaking existing Lua implementations.
> 
> Adding other entries with the "@f:" and "@b:" prefixes to the same list
> as the existing ones would disturb existing Lua code which browses the
> lists. Returning a list (points 1 & 2) would also break it.
> 
> I think the most reliable way is to add other trees. We keep the
> existing "proxies" tree, and we add two trees, "frontends" and
> "backends", which contain respectively the list of frontends and the
> list of backends.

Initially I wasn't much fond of this one, but after some thinking I
convinced myself that it's probably the best solution. In fact you
never need to dereference a proxy by its name, unless:
  - you know its type (eg: for use_backend or to list servers)
  - you want to reference its table, which is already forbidden for
duplicate names.

So in the end, being able to spot a proxy by its type+name, or being
able to look up its name regardless of the type in the case of a table,
seems the best solution. Hence frontends+backends+proxies.
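A sketch of the resulting lookups ("foo" is a placeholder; the tree
names are the proposal discussed above, not a released API):

    local fe = core.frontends["foo"]  -- the frontend "foo", if any
    local be = core.backends["foo"]   -- the backend "foo", if any
    local px = core.proxies["foo"]    -- legacy tree; with duplicated
                                      -- names one entry masks the other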

> Or another tree called "all_proxies" which contains names, each with a
> list of the frontend/backend sharing that name.

It would add more confusion.

> > >> Also, apply the same logic to the 'Listener' and 'Server' classes.
> > > 
> > > These ones do not support duplicate names so we should not need to impact
> > > them.
> > 
> > This is just for convenience and uniformity, Proxy.servers/listeners
> > returns a table/hash of objects with names as keys, but for example when
> > I want to pass such object to some other Lua function I have to manually
> > copy the name (or wrap the object), since the object itself doesn't
> > expose name info.
> 
> 
> You're right, it will be better to add the name to the object. I will
> do this.
 
Thanks,
Willy



Re: [PATCH] Support proxies with identical names in Lua core.proxies

2017-07-24 Thread Thierry FOURNIER
On Thu, 20 Jul 2017 15:26:52 +0200
Adis Nezirovic  wrote:

> On 07/20/2017 02:55 PM, Willy Tarreau wrote:
> > So you can have :
> >   0 or 1 "listen"
> >   0 or 1 "frontend" + 0 or 1 "backend"
> > 
> > Just a few ideas come to my mind :
> >   - is it possible to store arrays into arrays ? I mean, could we have
> > for example core.proxies["foo"].side[FRONT|BACK] where each side is
> > set only when the two differ ?
> > 
> >   - or is it to return a list (1 or 2 elements) ? That would be even more
> > convenient to use. If you have "p = core.proxies[foo]" and can use p
> > then next(p), you're sure to always scan everything.
> > 
> >   - otherwise I like the principle of being able to force the type using
> > a prefix "@f:" or "@b:" in front of the name. But we must be careful
> > not to insert duplicates. So probably by default only "listen" instances
> > would be created without the "@" prefix. My initial idea was to be able
> > to always return the plain form and point to whichever exists, but for
> > enumeration it would require to keep only the "@" form, which might be
> > more complicated.
> 
> We currently use a variant of the second proposal: when working
> internally with backends/frontends with the same name, the type/side
> should still be indicated. A simple list/array might be the easiest
> thing to implement.
> 
> >> Also, apply the same logic to the 'Listener' and 'Server' classes.
> > 
> > These ones do not support duplicate names so we should not need to impact
> > them.
> 
> This is just for convenience and uniformity, Proxy.servers/listeners
> returns a table/hash of objects with names as keys, but for example when
> I want to pass such object to some other Lua function I have to manually
> copy the name (or wrap the object), since the object itself doesn't
> expose name info.


You will find attached a patch which adds the proxy name as a member
of the proxy object.

Willy, can you apply it?


> Best regards,
> Adis



Re: [PATCH] Support proxies with identical names in Lua core.proxies

2017-07-24 Thread Adis Nezirovic
On 07/24/2017 01:30 PM, Thierry FOURNIER wrote:
> I think the most reliable way is to add other trees. We keep the
> existing "proxies" tree, and we add two trees, "frontends" and
> "backends", which contain respectively the list of frontends and the
> list of backends.

This would work for me too, maybe just add a note in docs regarding that
bug in core.proxies (masking frontends/backends with the same name).

>> This is just for convenience and uniformity, Proxy.servers/listeners
>> returns a table/hash of objects with names as keys, but for example when
>> I want to pass such object to some other Lua function I have to manually
>> copy the name (or wrap the object), since the object itself doesn't
>> expose name info.
> 
> 
> You're right, it will be better to add the name to the object. I will
> do this.

Great, that would be convenient, thanks.


Best regards,
Adis



Re: [PATCH] Support proxies with identical names in Lua core.proxies

2017-07-24 Thread Thierry FOURNIER
On Mon, 24 Jul 2017 14:03:30 +0200
Willy Tarreau  wrote:

> Hi Thierry,
> 
> On Mon, Jul 24, 2017 at 01:30:23PM +0200, Thierry FOURNIER wrote:
> > Ok. After brainstorming, I think it will be better to keep the current
> > behaviour to avoid breaking existing Lua implementations.
> > 
> > Adding other entries with the "@f:" and "@b:" prefixes to the same list
> > as the existing ones would disturb existing Lua code which browses the
> > lists. Returning a list (points 1 & 2) would also break it.
> > 
> > I think the most reliable way is to add other trees. We keep the
> > existing "proxies" tree, and we add two trees, "frontends" and
> > "backends", which contain respectively the list of frontends and the
> > list of backends.
> 
> Initially I wasn't much fond of this one, but after some thinking I
> convinced myself that it's probably the best solution. In fact you
> never need to dereference a proxy by its name, unless:
>   - you know its type (eg: for use_backend or to list servers)
>   - you want to reference its table, which is already forbidden for
> duplicate names.
> 
> So in the end, being able to spot a proxy by its type+name, or being
> able to look up its name regardless of the type in the case of a table,
> seems the best solution. Hence frontends+backends+proxies.


Another case popped into my mind: with this solution, the "listen" proxies
will be declared in both lists. I think that is the expected behaviour,
but I have some doubts about the usage.

I can add a "listens" list.

Thierry


> 
> > Or another tree called "all_proxies" which contains names, each with a
> > list of the frontend/backend sharing that name.
> 
> It would add more confusion.
> 
> > > >> Also, apply the same logic to the 'Listener' and 'Server' classes.
> > > > 
> > > > These ones do not support duplicate names so we should not need to
> > > > impact them.
> > > 
> > > This is just for convenience and uniformity, Proxy.servers/listeners
> > > returns a table/hash of objects with names as keys, but for example when
> > > I want to pass such object to some other Lua function I have to manually
> > > copy the name (or wrap the object), since the object itself doesn't
> > > expose name info.
> > 
> > 
> > You're right, it will be better to add the name to the object. I will
> > do this.
>  
> Thanks,
> Willy



Re: [PATCH] Support proxies with identical names in Lua core.proxies

2017-07-24 Thread Willy Tarreau
On Mon, Jul 24, 2017 at 02:27:03PM +0200, Thierry FOURNIER wrote:
> Another case popped into my mind: with this solution, the "listen" proxies
> will be declared in both lists. I think that is the expected behaviour,
> but I have some doubts about the usage.

Yes I think it's desirable.

> I can add a "listens" list.

I'd rather avoid this, which would add further confusion. The initial
purpose of allowing the same name in different proxies was to make it
seamless to break a full proxy ("listen") into a frontend and backend.
By having a separate list, it would become a problem again.

Willy



Re: [PATCH] Support proxies with identical names in Lua core.proxies

2017-07-24 Thread Willy Tarreau
On Mon, Jul 24, 2017 at 02:04:16PM +0200, Thierry FOURNIER wrote:
> You will find attached a patch which adds the proxy name as a member
> of the proxy object.
> 
> Willy, can you apply it?

I'd like to but there's no attachment, so even trying hard I'm failing to :-)

Willy



[PATCH] Handle SMP_T_METH samples in smp_dup/smp_is_safe/smp_is_rw

2017-07-24 Thread Christopher Faulet

Willy,

Here are small patches with minor changes about samples.

--
Christopher Faulet
From 364139ba3764294acbad413a4cdde94a6ea1289b70efe403f08 Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Mon, 24 Jul 2017 16:24:39 +0200
Subject: [PATCH 3/3] MINOR: samples: Don't allocate memory for SMP_T_METH
 sample when method is known

For known methods (GET, POST...), samples use an enum instead of a chunk
to reference the method. So there is no need to allocate memory when a
variable is stored with this kind of sample.
---
 src/vars.c | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/src/vars.c b/src/vars.c
index e5448e52..8cc08399 100644
--- a/src/vars.c
+++ b/src/vars.c
@@ -95,7 +95,7 @@ unsigned int var_clear(struct var *var)
 		free(var->data.u.str.str);
 		size += var->data.u.str.len;
 	}
-	else if (var->data.type == SMP_T_METH) {
+	else if (var->data.type == SMP_T_METH && var->data.u.meth.meth == HTTP_METH_OTHER) {
 		free(var->data.u.meth.str.str);
 		size += var->data.u.meth.str.len;
 	}
@@ -309,7 +309,7 @@ static int sample_store(struct vars *vars, const char *name, struct sample *smp)
 			free(var->data.u.str.str);
 			var_accounting_diff(vars, smp->sess, smp->strm, -var->data.u.str.len);
 		}
-		else if (var->data.type == SMP_T_METH) {
+		else if (var->data.type == SMP_T_METH && var->data.u.meth.meth == HTTP_METH_OTHER) {
 			free(var->data.u.meth.str.str);
 			var_accounting_diff(vars, smp->sess, smp->strm, -var->data.u.meth.str.len);
 		}
@@ -358,6 +358,10 @@ static int sample_store(struct vars *vars, const char *name, struct sample *smp)
 		memcpy(var->data.u.str.str, smp->data.u.str.str, var->data.u.str.len);
 		break;
 	case SMP_T_METH:
+		var->data.u.meth.meth = smp->data.u.meth.meth;
+		if (smp->data.u.meth.meth != HTTP_METH_OTHER)
+			break;
+
 		if (!var_accounting_add(vars, smp->sess, smp->strm, smp->data.u.meth.str.len)) {
 			var->data.type = SMP_T_BOOL; /* This type doesn't use additional memory. */
 			return 0;
@@ -368,7 +372,6 @@ static int sample_store(struct vars *vars, const char *name, struct sample *smp)
 			var->data.type = SMP_T_BOOL; /* This type doesn't use additional memory. */
 			return 0;
 		}
-		var->data.u.meth.meth = smp->data.u.meth.meth;
 		var->data.u.meth.str.len = smp->data.u.meth.str.len;
 		var->data.u.meth.str.size = smp->data.u.meth.str.len;
 		memcpy(var->data.u.meth.str.str, smp->data.u.meth.str.str, var->data.u.meth.str.len);
-- 
2.13.3

From 8d1d40f9d3a86fdc52f88b10320419f7e7decb45 Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Mon, 24 Jul 2017 16:07:12 +0200
Subject: [PATCH 2/3] MINOR: samples: Handle the type SMP_T_METH in smp_is_safe
 and smp_is_rw

For all known methods, samples are considered safe and rewritable. For
unknown ones, we handle them like strings (SMP_T_STR).
---
 include/proto/sample.h | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/include/proto/sample.h b/include/proto/sample.h
index 4319278a..94226d2d 100644
--- a/include/proto/sample.h
+++ b/include/proto/sample.h
@@ -86,6 +86,11 @@ static inline
 int smp_is_safe(struct sample *smp)
 {
 	switch (smp->data.type) {
+	case SMP_T_METH:
+		if (smp->data.u.meth.meth != HTTP_METH_OTHER)
+			return 1;
+		/* Fall through */
+
 	case SMP_T_STR:
 		if ((smp->data.u.str.len < 0) ||
 		(smp->data.u.str.size && smp->data.u.str.len >= smp->data.u.str.size))
@@ -133,6 +138,11 @@ int smp_is_rw(struct sample *smp)
 		return 0;
 
 	switch (smp->data.type) {
+	case SMP_T_METH:
+		if (smp->data.u.meth.meth != HTTP_METH_OTHER)
+			return 1;
+		/* Fall through */
+
 	case SMP_T_STR:
 		if (!smp->data.u.str.size ||
 		smp->data.u.str.len < 0 ||
-- 
2.13.3

From b3a215635168a6f97d461bb8365b8a2daed531c6 Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Mon, 24 Jul 2017 15:38:41 +0200
Subject: [PATCH 1/3] MINOR: samples: Handle the type SMP_T_METH when we
 duplicate a sample in smp_dup

First, the type SMP_T_METH was not handled by the smp_dup function. It
was never called with this kind of sample, so it's not really a problem.
But this could be useful in the future.

For all known HTTP methods (GET, POST...), there is no extra space
allocated for a sample of type SMP_T_METH. But for unknown methods, a
chunk is used. So, like for strings, we duplicate the data using a
trash chunk.
---
 src/sample.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/src/sample.c b/src/sample.c
index 20a59bea..28a5fcb2 100644
--- a/src/sample.c
+++ b/src/sample.c
@@ -658,6 +658,11 @@ int smp_dup(struct sample *smp)
 		/* These type are not const. */
 		break;
 
+	case SMP_T_METH:
+		if (smp->data.u.meth.meth != HTTP_METH_OTHER)
+			break;
+		/* Fall through */
+
 	case SMP_T_STR:
 		trash = get_trash_chunk();
 		trash->len = smp->data.u.str.len;
@@ -678,6 +683,7 @@ int smp_dup(struct sample *smp)
 		memcpy(trash->str, smp->data.u.str.str, trash->len);
 		smp->data.u.str = *trash;
 		break;
+
 	default:
 		/* Other cases are unexpected. */
 		return 0;
-- 
2.13.3

Re: [PATCH 2/2] BUG/MINOR: lua: Correctly use INET6_ADDRSTRLEN in Server.get_addr()

2017-07-24 Thread Aleksandar Lazic
Hi Nenad Merdanovic,

Nenad Merdanovic wrote on 24.07.2017:

> The get_addr() method of the Lua Server class incorrectly used
> INET_ADDRSTRLEN for IPv6 addresses resulting in failing to convert
> longer IPv6 addresses to strings.

> This fix should be backported to 1.7.
> ---
>  src/hlua_fcn.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

> diff --git a/src/hlua_fcn.c b/src/hlua_fcn.c
> index 0df9025c..420c7664 100644
> --- a/src/hlua_fcn.c
> +++ b/src/hlua_fcn.c
> @@ -550,7 +550,7 @@ int hlua_server_get_addr(lua_State *L)
> break;
> case AF_INET6:
> inet_ntop(AF_INET6, &((struct sockaddr_in6 
> *)&srv->addr)->sin6_addr,
> - addr, INET_ADDRSTRLEN);
> + addr, INET6_ADDRSTRLEN);
> luaL_addstring(&b, addr);
> luaL_addstring(&b, ":");
> snprintf(addr, INET_ADDRSTRLEN, "%d", srv->svc_port);^

Shouldn't there also be an INET6_ADDRSTRLEN?
I think the port should be added into the len.

But I'm not that fit in networking and Lua, just a wild guess.

Regards
Aleks
-- 
Best Regards
Aleks




Re: [PATCH 2/2] BUG/MINOR: lua: Correctly use INET6_ADDRSTRLEN in Server.get_addr()

2017-07-24 Thread Nenad Merdanovic

Aleksandar,

On 7/24/2017 5:07 PM, Aleksandar Lazic wrote:
> Hi Nenad Merdanovic,
> 
> Nenad Merdanovic wrote on 24.07.2017:
> 
>> The get_addr() method of the Lua Server class incorrectly used
>> INET_ADDRSTRLEN for IPv6 addresses resulting in failing to convert
>> longer IPv6 addresses to strings.
>> 
>> This fix should be backported to 1.7.
>> ---
>>  src/hlua_fcn.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>> 
>> diff --git a/src/hlua_fcn.c b/src/hlua_fcn.c
>> index 0df9025c..420c7664 100644
>> --- a/src/hlua_fcn.c
>> +++ b/src/hlua_fcn.c
>> @@ -550,7 +550,7 @@ int hlua_server_get_addr(lua_State *L)
>> break;
>> case AF_INET6:
>> inet_ntop(AF_INET6, &((struct sockaddr_in6
>> *)&srv->addr)->sin6_addr,
>> - addr, INET_ADDRSTRLEN);
>> + addr, INET6_ADDRSTRLEN);
>> luaL_addstring(&b, addr);
>> luaL_addstring(&b, ":");
>> snprintf(addr, INET_ADDRSTRLEN, "%d", srv->svc_port);^
> 
> Shouldn't there also be an INET6_ADDRSTRLEN?
> I think the port should be added into the len.
> 
> But I'm not that fit in networking and Lua, just a wild guess.


This is just the maximum size that's gonna be copied into the buffer and 
we know svc_port is much shorter than this.
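A minimal sketch of that sizing logic (an illustration, not the haproxy
source itself): INET6_ADDRSTRLEN covers the longest textual IPv6
address, while the decimal port needs at most 5 digits plus the
terminating NUL, so a smaller bound for the port's snprintf() is safe.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>

    /* Format "address:port" from an IPv6 socket address. */
    static void format_addr6(const struct sockaddr_in6 *sa,
                             char *out, size_t outlen)
    {
        char addr[INET6_ADDRSTRLEN]; /* 46 bytes, worst-case IPv6 text */

        inet_ntop(AF_INET6, &sa->sin6_addr, addr, sizeof(addr));
        snprintf(out, outlen, "%s:%u", addr, ntohs(sa->sin6_port));
    }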


Regards,
Nenad



Re: [PATCH] Handle SMP_T_METH samples in smp_dup/smp_is_safe/smp_is_rw

2017-07-24 Thread Willy Tarreau
On Mon, Jul 24, 2017 at 05:01:31PM +0200, Christopher Faulet wrote:
> Willy,
> 
> Here are small patches with minor changes about samples.

Applied, thanks!
Willy



Re: [PATCH 2/2] BUG/MINOR: lua: Correctly use INET6_ADDRSTRLEN in Server.get_addr()

2017-07-24 Thread Aleksandar Lazic
Hi Nenad.

Nenad wrote on 24.07.2017:

> Aleksandar,

> On 7/24/2017 5:07 PM, Aleksandar Lazic wrote:
>> Hi Nenad Merdanovic,
>>
>> Nenad Merdanovic wrote on 24.07.2017:
>>
>>> The get_addr() method of the Lua Server class incorrectly used
>>> INET_ADDRSTRLEN for IPv6 addresses resulting in failing to convert
>>> longer IPv6 addresses to strings.
>>
>>> This fix should be backported to 1.7.
>>> ---
>>>  src/hlua_fcn.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>>> diff --git a/src/hlua_fcn.c b/src/hlua_fcn.c
>>> index 0df9025c..420c7664 100644
>>> --- a/src/hlua_fcn.c
>>> +++ b/src/hlua_fcn.c
>>> @@ -550,7 +550,7 @@ int hlua_server_get_addr(lua_State *L)
>>> break;
>>> case AF_INET6:
>>> inet_ntop(AF_INET6, &((struct sockaddr_in6 
>>> *)&srv->addr)->sin6_addr,
>>> - addr, INET_ADDRSTRLEN);
>>> + addr, INET6_ADDRSTRLEN);
>>> luaL_addstring(&b, addr);
>>> luaL_addstring(&b, ":");
>>> snprintf(addr, INET_ADDRSTRLEN, "%d", srv->svc_port);^
>> 
>> Shouldn't there also be an INET6_ADDRSTRLEN?
>> I think the port should be added into the len.
>>
>> But I'm not that fit in networking and Lua, just a wild guess.

> This is just the maximum size that's gonna be copied into the buffer and
> we know svc_port is much shorter than this.

Ah okay thanks.

> Regards,
> Nenad

-- 
Best Regards
Aleks




BUG: Lua service timeouts while sending data (after 0194897e540cec67d7d1e9281648b70efe403f08)

2017-07-24 Thread Adis Nezirovic
Hello guys,

I've noticed that a Lua service times out in the DATA phase, for
outputs equal to or bigger than 8k (approx.).

After the timeout (timeout client), it returns the full response.
(Termination state is cD--)

I've attached the minimal configuration and a Lua script to trigger the
problem. You might need to tweak the length of the test string; I even
see different behavior depending on the Lua code:

-- variant a) loop with hardcoded bounds:
for i = 1, 7492 do

-- variant b) using local variable
local body_len = 7491
for i = 1, body_len do

Makeflags:
make TARGET=linux2628 USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1
USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1


git bisect tells me the bug appears after the following commit:

commit 0194897e540cec67d7d1e9281648b70efe403f08
Author: Emeric Brun 
Date:   Thu Mar 30 15:37:25 2017 +0200

MAJOR: task: task scheduler rework.


Best regards,
Adis
local function http_response(applet, code, data, content_type)
    applet:set_status(code)
    applet:add_header("Content-Length", string.len(data))
    applet:add_header("Content-Type", content_type)
    applet:start_response()
    applet:send(data)
end

local function main(applet)
    local c = core.concat()

    -- variant a)
    for i = 1, 7492 do
    -- variant b)
    -- local body_len = 7491
    -- for i = 1, body_len do
        c:add("#")
    end

    http_response(applet, 200, c:dump(), "text/plain")
end

core.register_service("block", "http", main)
global
    log /dev/log local0 debug
    lua-load /etc/haproxy/lua/block.lua

defaults
    log global
    mode http
    option httplog
    timeout connect 5s
    timeout client 7s
    timeout server 10s

listen block
    bind 127.0.0.1:9001
    http-request use-service lua.block


Re: BUG: Lua service timeouts while sending data (after 0194897e540cec67d7d1e9281648b70efe403f08)

2017-07-24 Thread Willy Tarreau
Hi Adis,

On Mon, Jul 24, 2017 at 06:30:18PM +0200, Adis Nezirovic wrote:
> Hello guys,
> 
> I've noticed that a Lua service times out in the DATA phase, for
> outputs equal to or bigger than 8k (approx.).
> 
> After the timeout (timeout client), it returns the full response.
> (Termination state is cD--)
> 
> I've attached the minimal configuration and a Lua script to trigger the
> problem. You might need to tweak the length of the test string; I even
> see different behavior depending on the Lua code:
> 
> -- variant a) loop with hardcoded bounds:
> for i = 1, 7492 do
> 
> -- variant b) using local variable
> local body_len = 7491
> for i = 1, body_len do
> 
> Makeflags:
> make TARGET=linux2628 USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1
> USE_LUA=1 USE_PCRE=1 USE_PCRE_JIT=1
> 
> 
> git bisect tells me the bug appears after the following commit:
> 
> commit 0194897e540cec67d7d1e9281648b70efe403f08
> Author: Emeric Brun 
> Date:   Thu Mar 30 15:37:25 2017 +0200
> 
> MAJOR: task: task scheduler rework.

Hehe, I've just committed the fixes a few minutes ago :-) We had quite a
long head-scratching session with Thierry, Christopher and Emeric on
this one. It's sometimes impressive how some sleeping bugs can patiently
wait for a subtle change to join efforts to annoy us!

Just pull the latest master, you shouldn't face the problem anymore.

Cheers,
Willy



Re: BUG: Lua service timeouts while sending data (after 0194897e540cec67d7d1e9281648b70efe403f08)

2017-07-24 Thread Adis Nezirovic
On 07/24/2017 06:36 PM, Willy Tarreau wrote:
> Hehe, I've just committed the fixes a few minutes ago :-) We had quite a
> long head-scratching session with Thierry, Christopher and Emeric on
> this one. It's sometimes impressive how some sleeping bugs can patiently
> wait for a subtle change to join efforts to annoy us!
> 
> Just pull the latest master, you shouldn't face the problem anymore.

Yep, it totally works now. What are the odds of you fixing it just
right now :-)

Thanks, great work!

Best regards,
Adis



Re: AWS ELB as a backend

2017-07-24 Thread DHAVAL JAISWAL
With the following change it's working on HAProxy.

backend mybackend
server server1
internal-testinelbtomcat-193184.ap-southeast-1.elb.amazonaws.com:8080


However, when I tried the following config it throws the following error on
HAProxy 1.7:

could not resolve address 'check'

resolvers myresolver
  nameserver dns1 8.8.8.8:53
  resolve_retries   30
  timeout retry 1s
  hold valid   10s

backend mybackend
server
internal-testinelbtomcat-193184.ap-southeast-1.elb.amazonaws.com:8080
check resolvers myresolver


haproxy -vv
HA-Proxy version 1.7.8 2017/07/07
Copyright 2000-2017 Willy Tarreau 

Build options :
  TARGET  = linux26
  CPU = generic
  CC  = gcc
  CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing
-Wdeclaration-after-statement -fwrapv
  OPTIONS = USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built without compression support (neither USE_ZLIB nor USE_SLZ are set)
Compression algorithms supported : identity("identity")
Built with OpenSSL version : OpenSSL 1.0.1k-fips 8 Jan 2015
Running on OpenSSL version : OpenSSL 1.0.1k-fips 8 Jan 2015
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.21 2011-12-12
Running on PCRE version : 8.21 2011-12-12
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built without Lua support
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[COMP] compression
[TRACE] trace
[SPOE] spoe

On Sun, Jul 23, 2017 at 1:01 PM, Aleksandar Lazic 
wrote:

> Hi DHAVAL JAISWAL,
>
> DHAVAL JAISWAL wrote on 21.07.2017:
>
> > I have used ELB (public) as a front of Haproxy and ELB (internal) as a
> > backend for the apps server.
> >
> > so structure is like as follows. Currently using Haproxy 1.7.
> > However, request is not going to the backend server.
> >
> > ELB ->> HAPROXY -> ELB -> APPS server.
> >
> > Following config in my haproxy.  Let me know what i am doing wrong.
> >
> > backend mybackend
> >
> > server server1
> > internal-testinelbtomcat-193184.ap-southeast-1.elb.amazonaws.com
>
> Do you have set a resolver?
>
> http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#5.3
>
> Please post also the following data, thanks.
>
> haproxy -vv
> anonymized haproxy conf
> some error and access logs
>
> --
> Best Regards
> Aleks
>
>


-- 
Thanks & Regards
Dhaval Jaiswal



RE: AWS ELB as a backend

2017-07-24 Thread Norman Branitsky
You dropped “server1” from the server line.
So it’s reading the server address as the server-name and “check” as the 
server-address:
server server-name server-address [check] [resolvers resolver-name]
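With the name restored, the line parses as intended (a sketch reusing
the names from the mail below):

    backend mybackend
        server server1 internal-testinelbtomcat-193184.ap-southeast-1.elb.amazonaws.com:8080 check resolvers myresolver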

From: DHAVAL JAISWAL [mailto:dhava...@gmail.com]
Sent: July-24-17 12:56 PM
To: Aleksandar Lazic 
Cc: haproxy@formilux.org
Subject: Re: AWS ELB as a backend

With the following change it's working on HAProxy.

backend mybackend
server server1 
internal-testinelbtomcat-193184.ap-southeast-1.elb.amazonaws.com:8080


However, when I tried the following config it throws the following error on
HAProxy 1.7:

could not resolve address 'check'

resolvers myresolver
  nameserver dns1 8.8.8.8:53
  resolve_retries   30
  timeout retry 1s
  hold valid   10s

backend mybackend
server 
internal-testinelbtomcat-193184.ap-southeast-1.elb.amazonaws.com:8080
 check resolvers myresolver


haproxy -vv
HA-Proxy version 1.7.8 2017/07/07
Copyright 2000-2017 Willy Tarreau <wi...@haproxy.org>

Build options :
  TARGET  = linux26
  CPU = generic
  CC  = gcc
  CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing 
-Wdeclaration-after-statement -fwrapv
  OPTIONS = USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built without compression support (neither USE_ZLIB nor USE_SLZ are set)
Compression algorithms supported : identity("identity")
Built with OpenSSL version : OpenSSL 1.0.1k-fips 8 Jan 2015
Running on OpenSSL version : OpenSSL 1.0.1k-fips 8 Jan 2015
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.21 2011-12-12
Running on PCRE version : 8.21 2011-12-12
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built without Lua support
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[COMP] compression
[TRACE] trace
[SPOE] spoe

On Sun, Jul 23, 2017 at 1:01 PM, Aleksandar Lazic <al-hapr...@none.at> wrote:
Hi DHAVAL JAISWAL,

DHAVAL JAISWAL wrote on 21.07.2017:
> I have used ELB (public) as a front of Haproxy and ELB (internal) as a
> backend for the apps server.
>
> so structure is like as follows. Currently using Haproxy 1.7.
> However, request is not going to the backend server.
>
> ELB ->> HAPROXY -> ELB -> APPS server.
>
> Following config in my haproxy.  Let me know what i am doing wrong.
>
> backend mybackend
>
> server server1
> internal-testinelbtomcat-193184.ap-southeast-1.elb.amazonaws.com
Do you have set a resolver?

http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#5.3

Please post also the following data, thanks.

haproxy -vv
anonymized haproxy conf
some error and access logs

--
Best Regards
Aleks



--
Thanks & Regards
Dhaval Jaiswal


HAProxy Timeout Oddity WebKit XHR Replay

2017-07-24 Thread Liam Middlebrook
Hi,

I'm currently running HAProxy within an Openshift Origin cluster. Until
a recent update of Openshift I did not experience issues with connection
timeouts; the connections would last up until the specified timeout as
defined by the application.

After an update to Openshift I changed the HAProxy settings around to
give a global 600s timeout for client and server. However, when I make a
form upload request the connection is killed after 30 seconds. When I
signal an XHR Replay in my network inspector the connection lasts longer
than the 30 seconds and is able to successfully upload the file.

I asked in IRC with no luck. Any ideas why this may be happening?


Thanks,


Liam Middlebrook (loothelion)



Re: HAProxy Timeout Oddity WebKit XHR Replay

2017-07-24 Thread Aaron West
Hi Liam,

Can we get the config and version number that you are running?

Nothing springs to mind although someone cleverer than me on the list
may have an instant suggestion.

Aaron West

Loadbalancer.org

www.loadbalancer.org
+1 888 867 9504 / +44 (0)330 380 1064
aa...@loadbalancer.org

LEAVE A REVIEW | DEPLOYMENT GUIDES | BLOG


On 24 July 2017 at 19:59, Liam Middlebrook  wrote:
> Hi,
>
> I'm currently running HAProxy within an Openshift Origin cluster. Until
> a recent update of Openshift I did not experience issues with connection
> timeouts, the connections would last up until the specified timeout as
> defined by the application.
>
> After an update to Openshift I changed HAProxy settings around to give a
> global 600s timeout for client and server. However when I make a form
> upload request the connection is killed after 30 seconds. When I signal
> an XHR Replay in my network inspector the connection lasts longer than
> the 30 seconds and is able to successfully upload the file.
>
> I asked in irc with no luck. Any ideas why this may be happening?
>
>
> Thanks,
>
>
> Liam Middlebrook (loothelion)
>



Re: HAProxy Timeout Oddity WebKit XHR Replay

2017-07-24 Thread Liam Middlebrook
HA-Proxy version 1.5.18 2016/05/10

And I'll try to get the config cleaned up to what should be relevant,
but it's pretty large, so some specifics to get would be nice. I can say
for sure the timeout settings are as follows:


  timeout connect 5s
  timeout client 5m
  timeout server 5m
  timeout http-request 10s

  # Long timeout for WebSocket connections.
  timeout tunnel 1h

  # defined for each app
  timeout check 5000ms


Thanks,

Liam Middlebrook (loothelion)
On 07/24/2017 12:02 PM, Aaron West wrote:
> Hi Liam,
> 
> Can we get the config and version number that you are running?
> 
> Nothing springs to mind although someone cleverer than me on the list
> may have an instant suggestion.
> 
> Aaron West
> 
> Loadbalancer.org
> 
> www.loadbalancer.org
> +1 888 867 9504 / +44 (0)330 380 1064
> aa...@loadbalancer.org
> 
> LEAVE A REVIEW | DEPLOYMENT GUIDES | BLOG
> 
> 
> On 24 July 2017 at 19:59, Liam Middlebrook  wrote:
>> Hi,
>>
>> I'm currently running HAProxy within an Openshift Origin cluster. Until
>> a recent update of Openshift I did not experience issues with connection
>> timeouts, the connections would last up until the specified timeout as
>> defined by the application.
>>
>> After an update to Openshift I changed HAProxy settings around to give a
>> global 600s timeout for client and server. However when I make a form
>> upload request the connection is killed after 30 seconds. When I signal
>> an XHR Replay in my network inspector the connection lasts longer than
>> the 30 seconds and is able to successfully upload the file.
>>
>> I asked in irc with no luck. Any ideas why this may be happening?
>>
>>
>> Thanks,
>>
>>
>> Liam Middlebrook (loothelion)
>>



Re: HAProxy Timeout Oddity WebKit XHR Replay

2017-07-24 Thread Aaron West
Liam,

Still not seeing anything jump out, your timeout settings look fine to
me at least.

Do you use the stats page and if so do you see errors incrementing there?

Also, do you have the log lines for these connections?

Aaron West

Loadbalancer.org

www.loadbalancer.org
+1 888 867 9504 / +44 (0)330 380 1064
aa...@loadbalancer.org

LEAVE A REVIEW | DEPLOYMENT GUIDES | BLOG


On 24 July 2017 at 20:17, Liam Middlebrook  wrote:
> HA-Proxy version 1.5.18 2016/05/10
>
> And I'll try and get the config cleaned up to what should be relevant
> but it's pretty large so some specifics to get would be nice, I can say
> for sure the timeout settings are as follows:
>
>
>   timeout connect 5s
>
>
>   timeout client 5m
>
>
>   timeout server 5m
>
>
>   timeout http-request 10s
>
>
>   # Long timeout for WebSocket connections.
>
>   timeout tunnel 1h
>
>   # defined for each app
>   timeout check 5000ms
>
>
> Thanks,
>
> Liam Middlebrook (loothelion)
> On 07/24/2017 12:02 PM, Aaron West wrote:
>> Hi Liam,
>>
>> Can we get the config and version number that you are running?
>>
>> Nothing springs to mind although someone cleverer than me on the list
>> may have an instant suggestion.
>>
>> Aaron West
>>
>> Loadbalancer.org
>>
>> www.loadbalancer.org
>> +1 888 867 9504 / +44 (0)330 380 1064
>> aa...@loadbalancer.org
>>
>> LEAVE A REVIEW | DEPLOYMENT GUIDES | BLOG
>>
>>
>> On 24 July 2017 at 19:59, Liam Middlebrook  wrote:
>>> Hi,
>>>
>>> I'm currently running HAProxy within an Openshift Origin cluster. Until
>>> a recent update of Openshift I did not experience issues with connection
>>> timeouts, the connections would last up until the specified timeout as
>>> defined by the application.
>>>
>>> After an update to Openshift I changed HAProxy settings around to give a
>>> global 600s timeout for client and server. However when I make a form
>>> upload request the connection is killed after 30 seconds. When I signal
>>> an XHR Replay in my network inspector the connection lasts longer than
>>> the 30 seconds and is able to successfully upload the file.
>>>
>>> I asked in irc with no luck. Any ideas why this may be happening?
>>>
>>>
>>> Thanks,
>>>
>>>
>>> Liam Middlebrook (loothelion)
>>>



Re: Passing SNI value ( ssl_fc_sni ) to backend's verifyhost.

2017-07-24 Thread Kevin McArthur

Hi Willy,

I can confirm the following line does _not_ verify the hostname on the 
backend.


server app2 ssltest.example.ca:443 ssl verify required sni ssl_fc_sni ca-file /etc/ssl/certs/ca-certificates.crt check check-ssl


I set up a default https vhost on the backend server that responds to
ssltest.example.ca ... in the above configuration the health checks pass,
and if you visit an SNI-configured site (otherhost.example.ca) it will
also work. However, if you load another SNI virtualhost that only the
haproxy has a cert for, but for which the backend responds with the
mismatching ssltest certificate, it will happily load that site, present
it as TLS protected to the web browser and *NOT* check the verifyhost
against the backend SNI.


That is to say, without a verifyhost directive no hostname verification
is done between the haproxy and the cert presented by the backend, and
any valid certificate (even the default-configured ssltest one) will
work on the backend.


--

Kevin McArthur


On 2017-07-23 9:40 PM, Willy Tarreau wrote:

Hi Kevin,

On Fri, Jul 21, 2017 at 02:06:52PM -0700, Kevin McArthur wrote:

Further... the odd/broken behavior might be being caused related to no sni
indication on the health checks...

This config sort of works:


*server app2 ssltest.example.ca:443 ssl verify required /verifyhost
ssltest.example.ca/ sni ssl_fc_sni ca-file
/etc/ssl/certs/ca-certificates.crt check check-ssl*

This lets me load ssltest.example.ca via the proxy.


*server app2 anotherdomain.example.ca:443 ssl verify required /verifyhost
anotherdomain.example.ca/ sni ssl_fc_sni ca-file
/etc/ssl/certs/ca-certificates.crt check check-ssl*

Jul 21 20:57:55 haproxy1 haproxy[8371]: Health check for server
www-backend-https/app2 failed, reason: Layer6 invalid response, info: "SSL
handshake failure", check duration: 3ms, status: 0/2 DOWN.

Fails health check (no sni) verifyhost match (anotherdomain.example.ca isnt
the default on the backend server). So ends up in "No server is available to
handle this request."


*server app2 ssltest.example.ca:443 ssl verify required /verifyhost
ssl_fc_sni/ sni ssl_fc_sni ca-file /etc/ssl/certs/ca-certificates.crt check
check-ssl*

Jul 21 20:57:55 haproxy1 haproxy[8371]: Health check for server
www-backend-https/app2 failed, reason: Layer6 invalid response, info: "SSL
handshake failure", check duration: 3ms, status: 0/2 DOWN.

This fails health check.


*server app2 ssltest.example.ca:443 ssl verify required sni ssl_fc_sni
ca-file /etc/ssl/certs/ca-certificates.crt check check-ssl*

This works, but without verifying the host properly. Can load
anotherdomain.example.ca and the sni is passed along properly.


Perhaps its the host checks sni support and not this patch that are not
working correctly?

The "verifyhost" directive *forces* the host name to be checked and ignores
the SNI. By just removing it from your "server" lines, it must be OK. Your
last example above suggests it works. Why do you say that the host is not
properly verified ? Have you checked that you can connect to a server
presenting the wrong cert ? For me it refuses it and only accepts the
correct cert (the one having the same name as asked in the SNI).

Willy











Re: Passing SNI value ( ssl_fc_sni ) to backend's verifyhost.

2017-07-24 Thread Kevin McArthur

To replicate my results:

Generate 3 ssl certificates (letsenc? I used a dns-01 challenge...)..

default.example.ca
working.example.ca
should-be-broken.example.ca

Configure an apache instance to serve only the first two via https,
default.example.ca and working.example.ca; don't configure any
virtualhost for should-be-broken.example.ca.


Configure the haproxy instance with all 3 certificates in the haproxy 
format with the intermediates and private keys included in a single 
file. Name the files like default.example.ca.pem, 
working.example.ca.pem, should-be-broken.example.ca.pem and toss em in 
/etc/haproxy/certs...


Install the ca-certificates package if you're on debian/ubuntu 
(otherwise adjust the ca-certificates location to whatever distro you're 
running)...


Then:

haproxy.cfg:

frontend www-https
bind :::443 v4v6 ssl crt /etc/haproxy/certs/default.example.ca.pem crt /etc/haproxy/certs/

use_backend www-backend-https

backend www-backend-https
server app default.example.ca:443 ssl verify required sni ssl_fc_sni ca-file /etc/ssl/certs/ca-certificates.crt check check-ssl


If you visit https://should-be-broken.example.ca you will get the page 
for default.example.ca, but the browser/visitor will show the 
should-be-broken.example.ca cert from the haproxy and the page will 
appear secure, despite the backend apache instance having no access to 
should-be-broken's virtual host or certificate and serving a certificate 
for default.example.ca to the haproxy.


--
Kevin




On 2017-07-24 3:25 PM, Kevin McArthur wrote:


Hi Willy,

I can confirm the following line does _not_ verify the hostname on the 
backend.


server app2 ssltest.example.ca:443 ssl verify required sni 
ssl_fc_sni ca-file /etc/ssl/certs/ca-certificates.crt check check-ssl


I setup a default https vhost on the backend server, that responds to 
ssltest.example.ca ... in the above configuration the health checks 
pass and if you visit a sni-configured site (otherhost.example.ca) it 
will also work. However, you load another SNI virtualhost that only 
the haproxy has a cert for but for which the backend responds with a 
mismatching ssltest certificate, it will happily load that site, 
present it as TLS protected to the web browser and *NOT check the 
verifyhost against the backend SNI. *


That is to say, without verifyhost  no hostname 
verification is done between the haproxy and the cert presented by the 
backend and any valid certificate (even the default-configured ssltest 
one) will work on the backend.


--

Kevin McArthur


On 2017-07-23 9:40 PM, Willy Tarreau wrote:

Hi Kevin,

On Fri, Jul 21, 2017 at 02:06:52PM -0700, Kevin McArthur wrote:

Further... the odd/broken behavior might be being caused related to no sni
indication on the health checks...

This config sort of works:


*server app2 ssltest.example.ca:443 ssl verify required /verifyhost
ssltest.example.ca/ sni ssl_fc_sni ca-file
/etc/ssl/certs/ca-certificates.crt check check-ssl*

This lets me load ssltest.example.ca via the proxy.


*server app2 anotherdomain.example.ca:443 ssl verify required /verifyhost
anotherdomain.example.ca/ sni ssl_fc_sni ca-file
/etc/ssl/certs/ca-certificates.crt check check-ssl*

Jul 21 20:57:55 haproxy1 haproxy[8371]: Health check for server
www-backend-https/app2 failed, reason: Layer6 invalid response, info: "SSL
handshake failure", check duration: 3ms, status: 0/2 DOWN.

Fails health check (no sni) verifyhost match (anotherdomain.example.ca isnt
the default on the backend server). So ends up in "No server is available to
handle this request."


*server app2 ssltest.example.ca:443 ssl verify required /verifyhost
ssl_fc_sni/ sni ssl_fc_sni ca-file /etc/ssl/certs/ca-certificates.crt check
check-ssl*

Jul 21 20:57:55 haproxy1 haproxy[8371]: Health check for server
www-backend-https/app2 failed, reason: Layer6 invalid response, info: "SSL
handshake failure", check duration: 3ms, status: 0/2 DOWN.

This fails health check.


*server app2 ssltest.example.ca:443 ssl verify required sni ssl_fc_sni
ca-file /etc/ssl/certs/ca-certificates.crt check check-ssl*

This works, but without verifying the host properly. Can load
anotherdomain.example.ca and the sni is passed along properly.


Perhaps its the host checks sni support and not this patch that are not
working correctly?

The "verifyhost" directive *forces* the host name to be checked and ignores
the SNI. By just removing it from your "server" lines, it must be OK. Your
last example above suggests it works. Why do you say that the host is not
properly verified ? Have you checked that you can connect to a server
presenting the wrong cert ? For me it refuses it and only accepts the
correct cert (the one having the same name as asked in the SNI).

Willy






Re: HAProxy Timeout Oddity WebKit XHR Replay

2017-07-24 Thread Liam Middlebrook
I don't see any errors incrementing (HAProxy's config gets reloaded
every couple of minutes by Openshift).

Here's the log line related to my timeout error.

Jul 24 23:51:08 proton.csh.rit.edu haproxy[127]: 67.188.94.238:43996
[24/Jul/2017:23:50:38.543] fe_sni~
be_edge_http_gallery_gallery/5792c687271726c3c4b5d54ae219aaa2
85/0/1/-1/29913 -1 0 - - CHVN 1/0/0/0/0 0/0 "POST /upload HTTP/1.1"


On 07/24/2017 12:36 PM, Aaron West wrote:
> Liam,
> 
> Still not seeing anything jump out, your timeout settings look fine to
> me at least.
> 
> Do you use the stats page and if so do you see errors incrementing there?
> 
> Also, do you have the log lines for these connections?
> 
> Aaron West
> 
> Loadbalancer.org
> 
> www.loadbalancer.org
> +1 888 867 9504 / +44 (0)330 380 1064
> aa...@loadbalancer.org
> 
> LEAVE A REVIEW | DEPLOYMENT GUIDES | BLOG
> 
> 
> On 24 July 2017 at 20:17, Liam Middlebrook  wrote:
>> HA-Proxy version 1.5.18 2016/05/10
>>
>> And I'll try and get the config cleaned up to what should be relevant
>> but it's pretty large so some specifics to get would be nice, I can say
>> for sure the timeout settings are as follows:
>>
>>
>>   timeout connect 5s
>>
>>
>>   timeout client 5m
>>
>>
>>   timeout server 5m
>>
>>
>>   timeout http-request 10s
>>
>>
>>   # Long timeout for WebSocket connections.
>>
>>   timeout tunnel 1h
>>
>>   # defined for each app
>>   timeout check 5000ms
>>
>>
>> Thanks,
>>
>> Liam Middlebrook (loothelion)
>> On 07/24/2017 12:02 PM, Aaron West wrote:
>>> Hi Liam,
>>>
>>> Can we get the config and version number that you are running?
>>>
>>> Nothing springs to mind although someone cleverer than me on the list
>>> may have an instant suggestion.
>>>
>>> Aaron West
>>>
>>> Loadbalancer.org
>>>
>>> www.loadbalancer.org
>>> +1 888 867 9504 / +44 (0)330 380 1064
>>> aa...@loadbalancer.org
>>>
>>> LEAVE A REVIEW | DEPLOYMENT GUIDES | BLOG
>>>
>>>
>>> On 24 July 2017 at 19:59, Liam Middlebrook  wrote:
 Hi,

 I'm currently running HAProxy within an Openshift Origin cluster. Until
 a recent update of Openshift I did not experience issues with connection
 timeouts, the connections would last up until the specified timeout as
 defined by the application.

 After an update to Openshift I changed HAProxy settings around to give a
 global 600s timeout for client and server. However when I make a form
 upload request the connection is killed after 30 seconds. When I signal
 an XHR Replay in my network inspector the connection lasts longer than
 the 30 seconds and is able to successfully upload the file.

 I asked in irc with no luck. Any ideas why this may be happening?


 Thanks,


 Liam Middlebrook (loothelion)




Re: Passing SNI value ( ssl_fc_sni ) to backend's verifyhost.

2017-07-24 Thread Willy Tarreau
Hi Kevin,

On Mon, Jul 24, 2017 at 04:00:04PM -0700, Kevin McArthur wrote:
> To replicate my results:
> 
> Generate 3 ssl certificates (letsenc? I used a dns-01 challenge...)..
> 
> default.example.ca
> working.example.ca
> should-be-broken.example.ca
> 
> Configure an apache instance to serve only the first two via https.
> default.example.ca and working.example.ca; don't configure any virtualhost
> for should-be-broken.example.ca.
> 
> Configure the haproxy instance with all 3 certificates in the haproxy format
> with the intermediates and private keys included in a single file. Name the
> files like default.example.ca.pem, working.example.ca.pem,
> should-be-broken.example.ca.pem and toss em in /etc/haproxy/certs...
> 
> Install the ca-certificates package if you're on debian/ubuntu (otherwise
> adjust the ca-certificates location to whatever distro you're running)...
> 
> Then:
> 
> haproxy.cfg:
> 
> frontend www-https
> bind :::443 v4v6 ssl crt /etc/haproxy/certs/default.example.ca.pem crt
> /etc/haproxy/certs/
> use_backend www-backend-https
> 
> backend www-backend-https
> server app default.example.ca:443 ssl verify required sni ssl_fc_sni
> ca-file /etc/ssl/certs/ca-certificates.crt check check-ssl
> 
> If you visit https://should-be-broken.example.ca you will get the page for
> default.example.ca, but the browser/visitor will show the
> should-be-broken.example.ca cert from the haproxy and the page will appear
> secure, despite the backend apache instance having no access to
> should-be-broken's virtual host or certificate and serving a certificate for
> default.example.ca to the haproxy.

Thanks, I'll retry it. I'm surprised because what you describe here is
*exactly* what I did and it worked fine for me; I remember getting a 503
when connecting with the wrong name. But obviously there must be a
difference, so I'll try to find it.

Willy



RE: X-Real-IP = X-Forwarded-For

2017-07-24 Thread Andrey Zakabluk
Hi!
Yes, the http request already has an X-Forwarded-For header and I want
haproxy to set X-Client-IP to the same value that the incoming
X-Forwarded-For has.
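A minimal sketch of that (the fetch goes inside a %[...] sample
expression and takes the header name without brackets):

    http-request set-header X-Client-IP %[req.hdr_ip(X-Forwarded-For)]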

-Original Message-
From: Jarno Huuskonen [mailto:jarno.huusko...@uef.fi] 
Sent: Thursday, July 20, 2017 4:18 PM
To: Andrey Zakabluk 
Cc: haproxy@formilux.org
Subject: Re: X-Real-IP = X-Forwarded-For

Hi,

On Thu, Jul 20, Andrey Zakabluk wrote:
> frontend http-in
> bind *:4016
> default_backend servers
> mode http
> option httplog
> log global
>option forwardfor
>capture cookie SERVERID len 32
>capture request header Host len 15
>capture request header X-Forwarded-For len 15
>capture request header Referrer len 15
>capture response header Content-length len 9
>capture response header Location len 15
>http-request set-header X-Client-IP 
> req.hdr_ip([X-Forwarded-For])]

Can you explain what you're trying to do?

Do you want haproxy to set the X-Forwarded-For and X-Client-IP headers
to the client's src ip address? (BTW you're mixing X-Client-IP and
X-Real-IP.) Try: http-request set-header X-Client-IP %[src]

Or does the http request already have an X-Forwarded-For header and you
want haproxy to set X-Client-IP to the same value that the incoming
X-Forwarded-For has?

> -- but not help.

How does it not work? Is the X-Client-IP value empty?

-Jarno

> Andrey Zakabluk wrote on 12.07.2017:
> 
> > Hi! I Use
> > HA-Proxy version 1.5.12 2015/05/02
> > .
> > Need add in http package option X-Real-IP.  X-Real-IP should be 
> > equal X-Forwarded-For. X-Forwarded-For be in package.
> > I tried
> 
> > frontend http-in
> > bind *:4016
> > default_backend servers
> > mode http
> > option httplog
> > log global
> >capture cookie SERVERID len 32
> >capture request header Host len 15
> >capture request header X-Forwarded-For len 15
> >capture request header Referrer len 15
> >capture response header Content-length len 9
> >capture response header Location len 15
> >http-request set-header X-Client-IP 
> > req.hdr_ip([X-Forwarded-For])]
> 
> My naive solution would be
> 
>http-request set-header X-Real-IP req.hdr_ip([X-Forwarded-For])]

--
Jarno Huuskonen



Re: HAProxy Timeout Oddity WebKit XHR Replay

2017-07-24 Thread Aleksandar Lazic
Hi Liam,

Liam Middlebrook wrote on 24.07.2017:

> Hi,

> I'm currently running HAProxy within an Openshift Origin cluster. Until
> a recent update of Openshift I did not experience issues with connection
> timeouts, the connections would last up until the specified timeout as
> defined by the application.
>
> After an update to Openshift I changed HAProxy settings around to give a
> global 600s timeout for client and server. However when I make a form
> upload request the connection is killed after 30 seconds. When I signal
> an XHR Replay in my network inspector the connection lasts longer than
> the 30 seconds and is able to successfully upload the file.

This smells like this timeout.

###
ROUTER_DEFAULT_SERVER_TIMEOUT 30s
Length of time within which a server has to acknowledge or send data. 
(TimeUnits)
###

https://docs.openshift.org/latest/architecture/core_concepts/routes.html#env-variables

You can change it via the command below.

I assume here that you have the openshift router in the "default"
namespace and that the router is deployed as "router".

Too many routers here ;-)

oc env -n default dc/router ROUTER_DEFAULT_SERVER_TIMEOUT=1h
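
If only a single route needs the longer timeout, a per-route annotation
should also work (assuming your Origin version supports it; "gallery" is
a guess at the route name from your log line):

oc annotate route gallery --overwrite haproxy.router.openshift.io/timeout=600s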

> I asked in irc with no luck. Any ideas why this may be happening?

Do you mean the #openshift-dev channel on Freenode?

> Thanks,
>
> Liam Middlebrook (loothelion)

-- 
Best Regards
Aleks