tune.h2.initial-window-size not applied to connection windows

2018-01-22 Thread klzgrad
Hi,

I was testing Chromium uploading to HAProxy and found the upload quickly
stalled. I tried setting tune.h2.initial-window-size to a larger value,
but it had no effect. Wireshark confirmed that SETTINGS_INITIAL_WINDOW_SIZE
frames were being sent as expected, but Chromium's net log showed the
connection window size remained at the default (the stream window size did
get the larger value).

Upon reading Chromium's source I found that SETTINGS_INITIAL_WINDOW_SIZE
frames only adjust the per-stream send window size, not the connection
window size (called session_send_window_size_ in Chromium).

Actually, RFC 7540 section 6.9.2 specifies this:

> The connection flow-control window can only be changed using WINDOW_UPDATE 
> frames.

> Similarly, the connection flow-control window is set to the default initial 
> window size until a WINDOW_UPDATE frame is received.

> A SETTINGS frame cannot alter the connection flow-control window.

It appears WINDOW_UPDATE frames were not being sent to update the
connection windows.

This tuning knob is probably only useful if it's also applied to the
overall connection windows in addition to stream windows.
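
For reference, growing the connection window only takes a regular
WINDOW_UPDATE frame sent on stream 0. A minimal sketch of the wire encoding
per RFC 7540 (a standalone illustration, not HAProxy's mux code):

    #include <stddef.h>
    #include <stdint.h>

    /* Encode a WINDOW_UPDATE frame (type 0x08) on stream 0, i.e. for the
     * connection-level window. "increment" must fit in 31 bits.
     * Returns the number of bytes written (always 13).
     */
    static size_t h2_encode_conn_window_update(uint8_t *out, uint32_t increment)
    {
        /* 9-byte frame header: 24-bit length, type, flags, 32-bit stream id */
        out[0] = 0; out[1] = 0; out[2] = 4;    /* payload length = 4        */
        out[3] = 0x08;                         /* type = WINDOW_UPDATE      */
        out[4] = 0;                            /* no flags defined          */
        out[5] = out[6] = out[7] = out[8] = 0; /* stream id 0 = connection  */

        /* 4-byte payload: reserved bit + 31-bit window size increment */
        increment &= 0x7fffffff;
        out[9]  = (increment >> 24) & 0xff;
        out[10] = (increment >> 16) & 0xff;
        out[11] = (increment >> 8)  & 0xff;
        out[12] = increment & 0xff;
        return 13;
    }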

-klzgrad



Re: Warnings when using dynamic cookies and server-template

2018-01-22 Thread William Dauchy
Hello Olivier,

On Wed, Jan 17, 2018 at 05:43:02PM +0100, Olivier Houchard wrote:
> Ok you got me convinced, the attached patch doesn't check for duplicate
> cookies for disabled servers until we enable them.

I took the time to test it on top of 1.8.x and it works as expected,
removing the warnings.
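
For the record, the warnings showed up with this kind of setup (a rough
sketch with placeholder names and key, not my exact configuration):

    backend be_app
        balance roundrobin
        # one cookie per server, derived from this secret key
        dynamic-cookie-key changeme-secret
        cookie SRVID insert indirect nocache dynamic
        # servers start without a real address until DNS fills them in
        server-template srv 1-20 _http._tcp.app.example.local resolvers mydns init-addr none check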

Thanks,

> From cfc333d2b04686a3c488fdcb495cba64dbfec14b Mon Sep 17 00:00:00 2001
> From: Olivier Houchard 
> Date: Wed, 17 Jan 2018 17:39:34 +0100
> Subject: [PATCH] MINOR: servers: Don't report duplicate dyncookies for
>  disabled servers.
>
> Especially with server-templates, it can happen that servers start with a
> placeholder IP, in the disabled state. In this case, we don't want to report
> that the same cookie was generated for multiple servers, so defer the test
> until the server is enabled.
>
> This should be backported to 1.8.

Reported-by: Pierre Cheynier 
Tested-by: William Dauchy 

> ---
>  src/server.c | 50 +++---
>  1 file changed, 35 insertions(+), 15 deletions(-)
>
> diff --git a/src/server.c b/src/server.c
> index a37e91968..3901e7d8b 100644
> --- a/src/server.c
> +++ b/src/server.c
> @@ -86,10 +86,34 @@ int srv_getinter(const struct check *check)
>   return (check->fastinter)?(check->fastinter):(check->inter);
>  }
>
> -void srv_set_dyncookie(struct server *s)
> +/*
> + * Check that we did not get a hash collision.
> + * Unlikely, but it can happen.
> + */
> +static inline void srv_check_for_dup_dyncookie(struct server *s)
>  {
>   struct proxy *p = s->proxy;
>   struct server *tmpserv;
> +
> + for (tmpserv = p->srv; tmpserv != NULL;
> +tmpserv = tmpserv->next) {
> + if (tmpserv == s)
> + continue;
> + if (tmpserv->next_admin & SRV_ADMF_FMAINT)
> + continue;
> + if (tmpserv->cookie &&
> +strcmp(tmpserv->cookie, s->cookie) == 0) {
> + ha_warning("We generated two equal cookies for two different servers.\n"
> +   "Please change the secret key for '%s'.\n",
> +   s->proxy->id);
> + }
> + }
> +
> +}
> +
> +void srv_set_dyncookie(struct server *s)
> +{
> + struct proxy *p = s->proxy;
>   char *tmpbuf;
>   unsigned long long hash_value;
>   size_t key_len;
> @@ -136,21 +160,13 @@ void srv_set_dyncookie(struct server *s)
>   if (!s->cookie)
>   return;
>   s->cklen = 16;
> - /*
> - * Check that we did not get a hash collision.
> - * Unlikely, but it can happen.
> +
> + /* Don't bother checking if the dyncookie is duplicated if
> + * the server is marked as "disabled", maybe it doesn't have
> + * its real IP yet, but just a place holder.
>   */
> - for (tmpserv = p->srv; tmpserv != NULL;
> -tmpserv = tmpserv->next) {
> - if (tmpserv == s)
> - continue;
> - if (tmpserv->cookie &&
> -strcmp(tmpserv->cookie, s->cookie) == 0) {
> - ha_warning("We generated two equal cookies for two different servers.\n"
> -   "Please change the secret key for '%s'.\n",
> -   s->proxy->id);
> - }
> - }
> + if (!(s->next_admin & SRV_ADMF_FMAINT))
> + srv_check_for_dup_dyncookie(s);
>  }
>
>  /*
> @@ -4398,6 +4414,10 @@ static int cli_parse_enable_server(char **args, struct appctx *appctx, void *pri
>   return 1;
>
>   srv_adm_set_ready(sv);
> + if (!(sv->flags & SRV_F_COOKIESET)
> +&& (sv->proxy->ck_opts & PR_CK_DYNAMIC) &&
> +sv->cookie)
> + srv_check_for_dup_dyncookie(sv);
>   return 1;
>  }
>
> --
> 2.14.3



Difference between variables and sample fetches?

2018-01-22 Thread Tim Düsterhus
Hi

What are the differences between variables and sample fetches? Some
values can be retrieved using both. For example, the source IP address can
be retrieved using both `%ci` and `%[src]`.

One difference I noticed is that I don't think I am able to use
converters (e.g. ipmask) with the variables (e.g. %ci).
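
For example (if I am reading the docs right), converters chain inside the
sample expression form, something like:

    log-format "plain=%ci masked=%[src,ipmask(24)]"

but there does not seem to be an equivalent syntax for `%ci` itself.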

Are there any other differences?

Best regards
Tim Düsterhus



Re: [BUG] 100% cpu on each threads

2018-01-22 Thread Willy Tarreau
On Mon, Jan 22, 2018 at 05:47:55PM +0100, Willy Tarreau wrote:
> > strace: Process 12166 attached
> > [pid 12166] set_robust_list(0x7ff9bc9aa9e0, 24 
> > [pid 12166] <... set_robust_list resumed> ) = 0
> > [pid 12166] gettimeofday({1516289044, 684014}, NULL) = 0
> > [pid 12166] mmap(NULL, 134217728, PROT_NONE, 
> > MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0 
> > [pid 12166] <... mmap resumed> )= 0x7ff9ac00
> > [pid 12166] munmap(0x7ff9b000, 67108864) = 0
> > [pid 12166] mprotect(0x7ff9ac00, 135168, PROT_READ|PROT_WRITE 
> > 
> > [pid 12166] <... mprotect resumed> )= 0
> > [pid 12166] mmap(NULL, 8003584, PROT_READ|PROT_WRITE, 
> > MAP_PRIVATE|MAP_ANONYMOUS, -1, 0 
> > [pid 12166] <... mmap resumed> )= 0x7ff9baa65000
> > [pid 12166] close(16 
> > [pid 12166] <... close resumed> )   = 0
> > [pid 12166] fcntl(15, F_SETFL, O_RDONLY|O_NONBLOCK 
> > [pid 12166] <... fcntl resumed> )   = 0
> 
> Here it's getting obvious that it was a shared file descriptor :-(

So I have a suspect here :

   - run_thread_poll_loop() runs after the threads are created
   - first thing it does is to close the master-worker pipe FD :

(...)
if (global.mode & MODE_MWORKER)
mworker_pipe_register(mworker_pipe);
(...)

 void mworker_pipe_register(int pipefd[2])
 {
close(mworker_pipe[1]); /* close the write end of the master pipe in the children */
fcntl(mworker_pipe[0], F_SETFL, O_NONBLOCK);
(...)
 }

 Looks familiar, given the trace above?

So I guess your config works in master-worker mode, am I right ?
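
To illustrate the hazard outside of haproxy (just a toy sketch, not our
code): once one thread has closed the shared fd, the kernel may hand the
same number to the next socket() from another thread, and a second close()
of the old number then kills that unrelated descriptor:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int pipefd[2];

        pipe(pipefd);             /* stands in for the master-worker pipe */
        close(pipefd[1]);         /* "thread A" closes the write end      */

        /* meanwhile "thread B" opens a socket: the kernel reuses the lowest
         * free fd number, which is very likely pipefd[1]
         */
        int dns_sock = socket(AF_INET, SOCK_DGRAM, 0);
        printf("new socket got fd %d (old pipe write end was %d)\n",
               dns_sock, pipefd[1]);

        close(pipefd[1]);         /* a second close of "the pipe"...      */
                                  /* ...actually closes dns_sock          */

        /* any later sendto(dns_sock, ...) now fails with EBADF */
        return 0;
    }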

Note that I'm also bothered by the call to protocol_enable_all() in this
function, since it will start the proxies multiple times in a possibly
unsafe way. That may suddenly explain a lot of things!

I think the attached patch works around it, but I'd like your
confirmation before cleaning it up.

Thanks,
Willy

diff --git a/src/haproxy.c b/src/haproxy.c
index 20b18f8..66639fc 100644
--- a/src/haproxy.c
+++ b/src/haproxy.c
@@ -2339,7 +2339,11 @@ void mworker_pipe_handler(int fd)
 
 void mworker_pipe_register(int pipefd[2])
 {
+   if (mworker_pipe[1] < 0)
+   return;
+
close(mworker_pipe[1]); /* close the write end of the master pipe in the children */
+   mworker_pipe[1] = -1;
 
fcntl(mworker_pipe[0], F_SETFL, O_NONBLOCK);
fdtab[mworker_pipe[0]].owner = mworker_pipe;
@@ -2408,6 +2412,7 @@ static void *run_thread_poll_loop(void *data)
 {
struct per_thread_init_fct   *ptif;
struct per_thread_deinit_fct *ptdf;
+   static __maybe_unused HA_SPINLOCK_T start_lock;
 
tid = *((unsigned int *)data);
tid_bit = (1UL << tid);
@@ -2420,10 +2425,12 @@ static void *run_thread_poll_loop(void *data)
}
}
 
+   HA_SPIN_LOCK(LISTENER_LOCK, &start_lock);
if (global.mode & MODE_MWORKER)
mworker_pipe_register(mworker_pipe);
 
protocol_enable_all();
+   HA_SPIN_UNLOCK(LISTENER_LOCK, &start_lock);
THREAD_SYNC_ENABLE();
run_poll_loop();
 


Re: [BUG] 100% cpu on each threads

2018-01-22 Thread Willy Tarreau
Hi Marc,

On Mon, Jan 22, 2018 at 03:18:20PM +0100, Marc Fournier wrote:
> Cyril Bonté  writes:
> 
> Hello,
> 
> > I'm not sure you saw Samuel Reed's mail.
> > He reported a similar issue some hours ago (High load average under
> > 1.8 with multiple draining processes). It would be interesting to find
> > a common configuration to reproduce the issue, so I add him to the thread.
> 
> I've been observing the same error messages Emmanuel reports, using
> 1.8.3 and nbthread. I tried to simplify & anonymize my configuration so
> I could share a version which reproduces the problem, but didn't
> succeed: the problem disappears at some point in the process, and I'm
> unable to figure out exactly which change makes the difference :-(

We've done some work over the week-end to address an issue related to
FDs and threads : in short, it's impossible to let a thread sleep when
there's some activity on another one because they share the same epoll_fd.

We've sent Samuel a copy of patches to test (I'm attaching them here in
case you're interested to try as well, to add on top of latest 1.8, though
1.8.3 will be OK). Since you seem to be able to reproduce the bug on a
full config, you may be tempted to try them.

> - when exposed to client requests, I only observed high CPU load on one
>   instance out of the three I have, which receded after a restart of
>   haproxy. When working in isolation (no client requests), I never
>   noticed high CPU load.

So this could indicate an uncaught error on a specific fd. A "show fd"
on the CLI may give some useful information about this. And the patches
above also add "show activity", to run twice one second apart, and which
will indicate the various causes of wakeups.

> - the more complex the config gets, the easier it is to reproduce the
>   issue. By "complex" I mean: more frontends, backends and servers
>   defined, conditionally routing traffic to each other based on ACLs,
>   SSL enabled, dns resolver enabled and used in server statements,
>   various healthchecks on servers.

This could match the same root cause we've been working on, but it may
also more easily trigger a bug in one such area and cause the problem
to reveal itself.

> - at some point when simplifying the config, the problem becomes
>   transient, then eventually stops happening. But there doesn't seem to
>   be exactly one configuration keyword which triggers the issue.

Which may possibly rule out the single isolated bug theory and fuel the
FD one a bit more.

> - I also noticed a few log messages go missing from time to time. Not
>   sure about this though, it could also be a problem further downstream
>   in my logging pipeline.

OK.

> - I've seen the problem happen on systems both with and without the
>   spectre/meltdown kernel patches.

Good point, you're right to indicate it since we've had a doubt at some
point about the coincidence of the update (though it can come from various
causes).

> Last but not least, by continuously reloading haproxy (SIGUSR2) and
> running strace against it until the problem occurred, I was able to get
> this sequence of events (slightly redacted, with a couple of comments
> in-line), which seems to show some incorrect actions on file descriptors
> between concurrent threads:
> 
> # thread 12167 opens an UDP socket to the DNS server defined in the resolvers
> # section of my config, and starts sending queries:
> 
> [pid 12167] socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 16
> [pid 12167] connect(16, {sa_family=AF_INET, sin_port=htons(53), 
> sin_addr=inet_addr("10.10.0.2")}, 16) = 0
> [pid 12167] fcntl(16, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
> [...]
> [pid 12167] sendto(16, 
> "\37\350\1\0\0\1\0\0\0\0\0\1\tprivate-0\10backends\4"..., 78, 0, NULL, 0) = 78
> [pid 12167] sendto(16, 
> "\265\327\1\0\0\1\0\0\0\0\0\1\tprivate-1\10backends\4"..., 78, 0, NULL, 0) = 
> 78
> [...]
> [pid 12167] sendto(16, 
> "\341\21\1\0\0\1\0\0\0\0\0\1\nprivate-23\10backends"..., 79, 0, NULL, 0) = 79
> [pid 12167] sendto(16, 
> "\\\223\1\0\0\1\0\0\0\0\0\1\nprivate-24\10backends"..., 79, 0, NULL, 0) = 79
>
> # thread 12166 gets created, and closes an fd it didn't create, which
> # happens to be the socket opened to the DNS server:
> 
> strace: Process 12166 attached
> [pid 12167] sendto(16, 
> "\316\352\1\0\0\1\0\0\0\0\0\1\nprivate-25\10backends"..., 79, 0, NULL, 0 
> 
> [pid 12166] set_robust_list(0x7ff9bc9aa9e0, 24 
> [pid 12167] <... sendto resumed> )  = 79
> [pid 12166] <... set_robust_list resumed> ) = 0
> [pid 12166] gettimeofday({1516289044, 684014}, NULL) = 0
> [pid 12167] sendto(16, 
> "\37\367\1\0\0\1\0\0\0\0\0\1\nprivate-26\10backends"..., 79, 0, NULL, 0 
> 
> [pid 12166] mmap(NULL, 134217728, PROT_NONE, 
> MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0 
> [pid 12167] <... sendto resumed> )  = 79
> [pid 12167] sendto(16, 
> "\224\10\1\0\0\1\0\0\0\0\0\1\nprivate-27\10backends"..., 79, 0, NULL, 0) = 79
> [pid 12166] <... mmap resumed> )= 0x7ff9ac00
> [pid 

Re: [PATCH] BUILD/SMALL Fixed build on macOS with lua

2018-01-22 Thread Kirill A. Korinsky
Hey,

Sorry for the late response.

Your way is much better.

I will prepare a patch ASAP.
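
Roughly what I have in mind (only a sketch, assuming we can key off the
existing TARGET=osx build target; the actual patch may end up different):

    ifeq ($(TARGET),osx)
    # Apple's linker spells the option -export_dynamic
    LUA_LD_FLAGS := -Wl,-export_dynamic $(if $(LUA_LIB),-L$(LUA_LIB))
    else
    LUA_LD_FLAGS := -Wl,--export-dynamic $(if $(LUA_LIB),-L$(LUA_LIB))
    endif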

-- 
wbr, Kirill


> On 4 Jan 2018, at 19:24, Thierry Fournier  wrote:
> 
> 
>> On 4 Jan 2018, at 15:16, Kirill A. Korinsky  wrote:
>> 
>> Honestly, I didn't.
>> 
>> If I understand correctly how export-dynamic works and how haproxy uses
>> the integrated Lua, it shouldn't have any impact.
>> 
>> Honestly, I see only one case where export-dynamic is required: when some
>> application loads haproxy via dlopen and uses some functions from the
>> haproxy binary object.
>> 
>> I expect that isn't the case here, is it?
> 
> 
> 
> Hi Kirill,
> 
> This option is useful for loading Lua extensions as .so files. Something
> like the OpenSSL Lua bindings is provided as an .so Lua module. These kinds
> of modules require some Lua symbols which are not used by HAProxy, so
> without the option “--export-dynamic” loading these libraries fails
> with a message explaining that some symbols are missing.
> 
> Note: I’m not a specialist in compilation options, and maybe something
> in the following is wrong.
> 
> The flag --export-dynamic forces all the symbols (of the Lua library) to
> be exported in the ELF binary. I guess that this option forces the linker
> to also embed the unused symbols from the Lua library.
> 
> The man page for my mac compiler shows the option “-export_dynamic” (with
> an underscore in place of the dash). Maybe a solution is to detect the platform
> and set the right option.
> 
> br,
> Thierry
> 
>> 
>> -- 
>> wbr, Kirill
>> 
>> 
>>> On 4 Jan 2018, at 01:10, Willy Tarreau  wrote:
>>> 
>>> Hi Kirill,
>>> 
>>> On Thu, Dec 28, 2017 at 04:13:38AM +0400, Kirill A. Korinsky wrote:
 The latest macOS doesn't support export-dynamic, so just remove it
 ---
 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/Makefile b/Makefile
 index 2acf5028..19234897 100644
 --- a/Makefile
 +++ b/Makefile
 @@ -630,7 +630,7 @@ check_lua_inc = $(shell if [ -d $(2)$(1) ]; then echo $(2)$(1); fi;)
 
 BUILD_OPTIONS   += $(call ignore_implicit,USE_LUA)
 OPTIONS_CFLAGS  += -DUSE_LUA $(if $(LUA_INC),-I$(LUA_INC))
 -LUA_LD_FLAGS := -Wl,--export-dynamic $(if $(LUA_LIB),-L$(LUA_LIB))
 +LUA_LD_FLAGS := $(if $(LUA_LIB),-L$(LUA_LIB))
>>> 
>>> Hmmm how can you be sure you didn't break anything else ? I'm pretty
>>> sure that there was a reason for adding this --export-dynamic, maybe
>>> certain things will still not work on your platform, or others won't
>>> work at all. We need to run some checks before taking this one.
>>> 
>>> I'm CCing Thierry in case he remembers why we need this.
>>> 
>>> Regards,
>>> Willy
>> 
> 



Re: [BUG] 100% cpu on each threads

2018-01-22 Thread Marc Fournier
Cyril Bonté  writes:

Hello,

> I'm not sure you saw Samuel Reed's mail.
> He reported a similar issue some hours ago (High load average under
> 1.8 with multiple draining processes). It would be interesting to find
> a common configuration to reproduce the issue, so I add him to the thread.

I've been observing the same error messages Emmanuel reports, using
1.8.3 and nbthread. I tried to simplify & anonymize my configuration so
I could share a version which reproduces the problem, but didn't
succeed: the problem disappears at some point in the process, and I'm
unable to figure out exactly which change makes the difference :-(

So here are all the observations I gathered, hoping this will help move
a step further:

- disabling "nbthread", as well as setting "nbthread 1", makes the
  problem go away.

- when exposed to client requests, I only observed high CPU load on one
  instance out of the three I have, which receded after a restart of
  haproxy. When working in isolation (no client requests), I never
  noticed high CPU load.

- the more complex the config gets, the easier it is to reproduce the
  issue. By "complex" I mean: more frontends, backends and servers
  defined, conditionally routing traffic to each other based on ACLs,
  SSL enabled, dns resolver enabled and used in server statements,
  various healthchecks on servers.

- at some point when simplifying the config, the problem becomes
  transient, then eventually stops happening. But there doesn't seem to
  be exactly one configuration keyword which triggers the issue.

- I also noticed a few log messages go missing from time to time. Not
  sure about this though, it could also be a problem further downstream
  in my logging pipeline.

- I've seen the problem happen on systems both with and without the
  spectre/meltdown kernel patches.

Last but not least, by continuously reloading haproxy (SIGUSR2) and
running strace against it until the problem occurred, I was able to get
this sequence of events (slightly redacted, with a couple of comments
in-line), which seems to show some incorrect actions on file descriptors
between concurrent threads:

# thread 12167 opens an UDP socket to the DNS server defined in the resolvers
# section of my config, and starts sending queries:

[pid 12167] socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 16
[pid 12167] connect(16, {sa_family=AF_INET, sin_port=htons(53), 
sin_addr=inet_addr("10.10.0.2")}, 16) = 0
[pid 12167] fcntl(16, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
[...]
[pid 12167] sendto(16, 
"\37\350\1\0\0\1\0\0\0\0\0\1\tprivate-0\10backends\4"..., 78, 0, NULL, 0) = 78
[pid 12167] sendto(16, 
"\265\327\1\0\0\1\0\0\0\0\0\1\tprivate-1\10backends\4"..., 78, 0, NULL, 0) = 78
[...]
[pid 12167] sendto(16, "\341\21\1\0\0\1\0\0\0\0\0\1\nprivate-23\10backends"..., 
79, 0, NULL, 0) = 79
[pid 12167] sendto(16, "\\\223\1\0\0\1\0\0\0\0\0\1\nprivate-24\10backends"..., 
79, 0, NULL, 0) = 79

# thread 12166 gets created, and closes an fd it didn't create, which
# happens to be the socket opened to the DNS server:

strace: Process 12166 attached
[pid 12167] sendto(16, 
"\316\352\1\0\0\1\0\0\0\0\0\1\nprivate-25\10backends"..., 79, 0, NULL, 0 

[pid 12166] set_robust_list(0x7ff9bc9aa9e0, 24 
[pid 12167] <... sendto resumed> )  = 79
[pid 12166] <... set_robust_list resumed> ) = 0
[pid 12166] gettimeofday({1516289044, 684014}, NULL) = 0
[pid 12167] sendto(16, "\37\367\1\0\0\1\0\0\0\0\0\1\nprivate-26\10backends"..., 
79, 0, NULL, 0 
[pid 12166] mmap(NULL, 134217728, PROT_NONE, 
MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0 
[pid 12167] <... sendto resumed> )  = 79
[pid 12167] sendto(16, "\224\10\1\0\0\1\0\0\0\0\0\1\nprivate-27\10backends"..., 
79, 0, NULL, 0) = 79
[pid 12166] <... mmap resumed> )= 0x7ff9ac00
[pid 12167] sendto(16, "\25 \1\0\0\1\0\0\0\0\0\1\nprivate-28\10backends"..., 
79, 0, NULL, 0) = 79
[pid 12166] munmap(0x7ff9b000, 67108864) = 0
[pid 12167] sendto(16, "\275\n\1\0\0\1\0\0\0\0\0\1\nprivate-29\10backends"..., 
79, 0, NULL, 0 
[pid 12166] mprotect(0x7ff9ac00, 135168, PROT_READ|PROT_WRITE 
[pid 12167] <... sendto resumed> )  = 79
[pid 12166] <... mprotect resumed> )= 0
[pid 12167] sendto(16, 
"\312\360\1\0\0\1\0\0\0\0\0\1\nprivate-30\10backends"..., 79, 0, NULL, 0 

[pid 12166] mmap(NULL, 8003584, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_ANONYMOUS, -1, 0 
[pid 12167] <... sendto resumed> )  = 79
[pid 12167] sendto(16, "\247e\1\0\0\1\0\0\0\0\0\1\nprivate-31\10backends"..., 
79, 0, NULL, 0) = 79
[pid 12166] <... mmap resumed> )= 0x7ff9baa65000
[pid 12167] sendto(16, "_k\1\0\0\1\0\0\0\0\0\1\tprivate-0\nbackoffic"..., 80, 
0, NULL, 0 
[pid 12166] close(16 

# from now on, thread 12167 gets "Bad file descriptor" back when sending
# DNS queries:

[pid 12167] <... sendto resumed> )  = 80
[pid 12167] sendto(16, "\355\25\1\0\0\1\0\0\0\0\0\1\tprivate-1\nbackoffic"..., 
80, 0, NULL, 0 
[pid 12166] <... close resumed> )   = 0
[pid 12167] <... sendto resumed> )   

Re: Re[4]: How to parse custom PROXY protocol v2 header for custom routing in HAProxy configuration?

2018-01-22 Thread Cyril Bonté
Hi,

- Mail original -
> De: "Aleksandar Lazic" 
> À: haproxy@formilux.org
> Envoyé: Lundi 22 Janvier 2018 13:34:33
> Objet: Re[4]: How to parse custom PROXY protocol v2 header for custom routing 
> in HAProxy configuration?
> 
> Hi.
> 
> Does anyone have an idea how haproxy can handle the custom TLVs in the proxy
> protocol v2?

Currently, it can't. Only PP2_TYPE_NETNS is supported.
But some work can be done to, at least, support some other predefined fields, 
or even better, to provide a generic way to capture any type of field.

You can have a look at the function conn_recv_proxy() in src/connection.c :
http://git.haproxy.org/?p=haproxy.git;a=blob;f=src/connection.c;h=0f8acb02dbdbc0a70cdd99830f8a0c9256f731e8;hb=HEAD#l604
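
For whoever wants to give it a try, the TLVs simply follow the address block
of the PROXY v2 header, so the walk itself is short. A rough standalone
sketch (not haproxy code; the 0xEA type and 0x01 subtype for the AWS VPC
endpoint ID come from the AWS documentation referenced in the question):

    #include <stddef.h>
    #include <stdint.h>

    /* Walk the TLVs that follow the PROXY v2 address block. buf/len cover
     * only the TLV area; each TLV is 1 byte of type, 2 bytes of length in
     * network order, then <length> bytes of value.
     */
    static const uint8_t *pp2_find_tlv(const uint8_t *buf, size_t len,
                                       uint8_t type, uint16_t *vlen)
    {
        size_t pos = 0;

        while (pos + 3 <= len) {
            uint8_t  t = buf[pos];
            uint16_t l = (buf[pos + 1] << 8) | buf[pos + 2];

            if (pos + 3 + l > len)
                break;               /* truncated TLV */
            if (t == type) {
                *vlen = l;
                return buf + pos + 3;
            }
            pos += 3 + l;
        }
        return NULL;
    }

    /* For AWS (per their docs): type 0xEA, first value byte 0x01 means VPC
     * endpoint ID, and the remaining bytes are the "vpce-..." string.
     */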

Cyril



Re[4]: How to parse custom PROXY protocol v2 header for custom routing in HAProxy configuration?

2018-01-22 Thread Aleksandar Lazic

Hi.

Does anyone have an idea how haproxy can handle the custom TLVs in the proxy
protocol v2?


Best regards
Aleks
-- Originalnachricht --
Von: "Aleksandar Lazic" 
An: haproxy@formilux.org
Gesendet: 17.01.2018 20:49:58
Betreff: Re[3]: How to parse custom PROXY protocol v2 header for custom 
routing in HAProxy configuration?



Hi.

Anyone have any hints?

Regards
aleks

-- Originalnachricht --
Von: "Aleksandar Lazic" 
An: "Adam Sherwood" ; haproxy@formilux.org
Gesendet: 15.01.2018 16:52:15
Betreff: Re[2]: How to parse custom PROXY protocol v2 header for custom 
routing in HAProxy configuration?



Hi.

Follow up question to proxy protocol

Is it possible to handle the Type-Length-Value (TLV) fields from
pp2 in the haproxy config or in Lua?


I refer to
2.2.7. Reserved type ranges
https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt

from the question on StackOverflow:
https://stackoverflow.com/questions/48195311/how-to-parse-custom-proxy-protocol-v2-header-for-custom-routing-in-haproxy-confi


Regards
aleks

-- Originalnachricht --
Von: "Aleksandar Lazic" 
An: "Adam Sherwood" ; haproxy@formilux.org
Gesendet: 11.01.2018 12:24:46
Betreff: Re: How to parse custom PROXY protocol v2 header for custom 
routing in HAProxy configuration?



Hi.

-- Originalnachricht --
Von: "Adam Sherwood" 
An: haproxy@formilux.org
Gesendet: 10.01.2018 23:40:25
Betreff: How to parse custom PROXY protocol v2 header for custom 
routing in HAProxy configuration?


I have written this up as a StackOverflow question here: 
https://stackoverflow.com/q/48195311/2081835.


When adding PROXY v2 with AWS VPC PrivateLink connected to a Network 
Load Balancer, the endpoint ID of the connecting account is added as 
a TLV. I need to use this for routing frontend to backend, but I 
cannot sort out how.


Is there a way to call a custom matcher that could do the parsing 
logic, or is this already built-in and I'm just not finding the 
documentation?


Any ideas on the topic would be super helpful. Thank you.
Looks like AWS uses the "2.2.7. Reserved type ranges" as described in
https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt, therefore
you will need to parse this part on your own.


This could be possible in Lua, maybe. I'm not an expert in Lua yet
;-)


There are Java examples in the doc link (
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-target-groups.html#proxy-protocol
) which you have added in the StackOverflow question.


Regards
Aleks