Re: HTTP DELETE command failing

2017-11-02 Thread Igor Cicimov
On Fri, Nov 3, 2017 at 11:29 AM, Norman Branitsky <
norman.branit...@micropact.com> wrote:

> I have this included in the configuration:
>
> # Filter nasty input
>
> acl missing_cl hdr_cnt(Content-length) eq 0
>
> acl METH_PUT method PUT
>
> acl METH_GET method GET HEAD
>
> acl METH_PATCH method PATCH
>
> ##acl METH_DELETE method DELETE
>
> http-request deny if HTTP_URL_STAR !METH_OPTIONS || METH_POST
> missing_cl || METH_PUT missing_cl || METH_PATCH missing_cl
> || METH_DELETE missing_cl
>
> http-request deny if METH_GET HTTP_CONTENT
>
> http-request deny unless METH_GET or METH_POST or METH_OPTIONS or
> METH_PATCH or METH_DELETE or METH_PUT
>
>
>
> My colleague commented out the METH_DELETE acl.
> It appears that in HAProxy 1.7 a number of ACLs are predefined,
> and we could delete the METH_PUT, METH_GET, and METH_PATCH ACLs also.
> So is one of the http-request deny statements causing the problem?
>
>
Maybe check the DELETE RFC
(https://tools.ietf.org/html/rfc7231#section-4.3.5) and think about what to do
with your conditions. Start by removing "|| METH_DELETE missing_cl" from the
first one.
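
Per that RFC, a DELETE request usually carries no body, so it typically arrives
without a Content-Length header; the "missing_cl" clause of the first rule then
matches and the request is denied with a 403, which is what the log shows. A
minimal sketch of the adjusted rules, relying on HAProxy 1.7's predefined ACLs
(METH_GET, METH_PUT, METH_DELETE, METH_OPTIONS, METH_POST, HTTP_URL_STAR and
HTTP_CONTENT are built in; PATCH has no predefined ACL, so it stays explicit):

    # Filter nasty input -- DELETE is allowed without a Content-Length
    acl missing_cl hdr_cnt(Content-length) eq 0
    acl METH_PATCH method PATCH

    http-request deny if HTTP_URL_STAR !METH_OPTIONS || METH_POST missing_cl || METH_PUT missing_cl || METH_PATCH missing_cl
    http-request deny if METH_GET HTTP_CONTENT
    http-request deny unless METH_GET or METH_POST or METH_OPTIONS or METH_PATCH or METH_DELETE or METH_PUT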


RE: HTTP DELETE command failing

2017-11-02 Thread Norman Branitsky
I have this included in the configuration:

# Filter nasty input

acl missing_cl hdr_cnt(Content-length) eq 0

acl METH_PUT method PUT

acl METH_GET method GET HEAD

acl METH_PATCH method PATCH

##acl METH_DELETE method DELETE

http-request deny if HTTP_URL_STAR !METH_OPTIONS || METH_POST missing_cl || 
METH_PUT missing_cl || METH_PATCH missing_cl || METH_DELETE missing_cl

http-request deny if METH_GET HTTP_CONTENT

http-request deny unless METH_GET or METH_POST or METH_OPTIONS or 
METH_PATCH or METH_DELETE or METH_PUT

My colleague commented out the METH_DELETE acl.
It appears that in HAProxy 1.7 a number of ACLs are predefined,
and we could delete the METH_PUT, METH_GET, and METH_PATCH ACLs also.
So is one of the http-request deny statements causing the problem?

From: Moemen MHEDHBI [mailto:mmhed...@haproxy.com]
Sent: November-02-17 7:50 PM
To: haproxy@formilux.org
Subject: Re: HTTP DELETE command failing

HAProxy is replying 403, which means that the DELETE request was explicitly 
denied by your conf.
In order for us to help you, we need to have a look at your conf.
++
On 02/11/2017 17:17, Norman Branitsky wrote:
In HAProxy version 1.7.5,
I see GET and POST commands working correctly but DELETE fails:
[01/Nov/2017:11:02:34.423] main_ssl~ ssl_training-01/training-01. 0/0/0/20/69 
200 402587 - -  6/6/0/0/0 0/0 "GET 
/etk-training-ora1/etk-apps/rt/admin/manage-users.js HTTP/1.1"
Nov  1 11:02:34 localhost haproxy[40877]: 10.20.120.220:64971 
[01/Nov/2017:11:02:34.690] main_ssl~ ssl_training-01/training-01. 0/0/0/150/151 
200 1490 - -  6/6/0/1/0 0/0 "POST /etk-training-ora1/auth/oauth/token 
HTTP/1.1"
Nov  1 11:02:34 localhost haproxy[40877]: 10.20.120.220:64971 
[01/Nov/2017:11:02:34.889] main_ssl~ ssl_training-01/training-01. 0/0/1/54/56 
200 388 - -  6/6/1/1/0 0/0 "GET 
/etk-training-ora1/private/api/systemPreferences/maxPageSize HTTP/1.1"
Nov  1 11:02:35 localhost haproxy[40877]: 10.20.120.220:64970 
[01/Nov/2017:11:02:34.890] main_ssl~ ssl_training-01/training-01. 0/0/1/329/331 
200 19968 - -  6/6/0/0/0 0/0 "GET 
/etk-training-ora1/private/api/users?page=0=50=accountName,ASC 
HTTP/1.1"
Nov  1 11:02:42 localhost haproxy[40877]: 10.20.120.220:64971 
[01/Nov/2017:11:02:42.571] main_ssl~ main_ssl/ 0/-1/-1/-1/0 403 188 - - 
PR-- 4/4/0/0/0 0/0 "DELETE /etk-training-ora1/private/api/users/62469 HTTP/1.1"

In the GET and POST commands, path_beg matches /etk-training-ora1.
It appears that in the DELETE command path_beg returns nothing or something 
else.
Suggestions, please?

Norman

Norman Branitsky
Cloud Architect
MicroPact
(o) 416.916.1752
(c) 416.843.0670
(t) 1-888-232-0224 x61752
www.micropact.com
Think it > Track it > Done




--
Moemen MHEDHBI
Support Engineer
http://haproxy.com
Tel: +33 1 30 67 60 71


Re: HTTP DELETE command failing

2017-11-02 Thread Moemen MHEDHBI
HAProxy is replying 403, which means that the DELETE request was
explicitly denied by your conf.

In order for us to help you, we need to have a look at your conf.

++

On 02/11/2017 17:17, Norman Branitsky wrote:
>
> In HAProxy version 1.7.5,
>
> I see GET and POST commands working correctly but DELETE fails:
>
> [01/Nov/2017:11:02:34.423] main_ssl~ ssl_training-01/training-01.
> 0/0/0/20/69 200 402587 - -  6/6/0/0/0 0/0 "GET
> /etk-training-ora1/etk-apps/rt/admin/manage-users.js HTTP/1.1"
>
> Nov  1 11:02:34 localhost haproxy[40877]: 10.20.120.220:64971
> [01/Nov/2017:11:02:34.690] main_ssl~ ssl_training-01/training-01.
> 0/0/0/150/151 200 1490 - -  6/6/0/1/0 0/0 "POST
> /etk-training-ora1/auth/oauth/token HTTP/1.1"
>
> Nov  1 11:02:34 localhost haproxy[40877]: 10.20.120.220:64971
> [01/Nov/2017:11:02:34.889] main_ssl~ ssl_training-01/training-01.
> 0/0/1/54/56 200 388 - -  6/6/1/1/0 0/0 "GET
> /etk-training-ora1/private/api/systemPreferences/maxPageSize HTTP/1.1"
>
> Nov  1 11:02:35 localhost haproxy[40877]: 10.20.120.220:64970
> [01/Nov/2017:11:02:34.890] main_ssl~ ssl_training-01/training-01.
> 0/0/1/329/331 200 19968 - -  6/6/0/0/0 0/0 "GET
> /etk-training-ora1/private/api/users?page=0=50=accountName,ASC
> HTTP/1.1"
>
> Nov  1 11:02:42 localhost haproxy[40877]: 10.20.120.220:64971
> [01/Nov/2017:11:02:42.571] main_ssl~ main_ssl/ 0/-1/-1/-1/0 403
> 188 - - PR-- 4/4/0/0/0 0/0 "DELETE
> /etk-training-ora1/private/api/users/62469 HTTP/1.1"
>
>  
>
> In the GET and POST commands, path_beg matches /etk-training-ora1.
>
> It appears that in the DELETE command path_beg returns nothing or
> something else.
> Suggestions, please?
>
>  
>
> Norman
>
> Norman Branitsky
> Cloud Architect
>
> MicroPact
>
> (o) 416.916.1752
>
> (c) 416.843.0670
>
> (t) 1-888-232-0224 x61752
>
> www.micropact.com 
>
> Think it > Track it > Done
>
>  
>

-- 
Moemen MHEDHBI

Support Engineer
http://haproxy.com
Tel: +33 1 30 67 60 71



Re: log-format in defaults section in 1.7

2017-11-02 Thread Cyril Bonté

Hi Thayne,

On 02/11/2017 at 23:08, Thayne McCombs wrote:
So, I looked into using `no log` in non http frontends. But that isn't 
sufficient.


For example, if I have:

global
   log-tag "test"
   log localhost:514 len 65535 local2 info info

defaults
   mode http
   timeout connect 100
   timeout server 3
   timeout client 3
   log-format "%Tq"

listen mine
   log global
   bind :80
   server localhost localhost:8080

listen health_url
   bind :27000
   mode health
   option httpchk
   no log


I still get [ALERT] 305/160229 (21975) : Parsing [test.cfg:10]: failed 
to parse log-format : format variable 'Tq' is reserved for HTTP mode.


You can specify several "defaults" sections in your configuration: one
for your HTTP frontends and one for your TCP frontends.


global
  log-tag "test"
  log localhost:514 len 65535 local2 info info

defaults
  mode http
  timeout connect 100
  timeout server 3
  timeout client 3
  log-format "%Tq"

listen mine
  log global
  bind :8080
  server localhost localhost:80

# ...
# Other HTTP frontends
# ...

defaults
  mode tcp
  timeout connect 100
  timeout server 3
  timeout client 3

listen health_url
  bind :27000
  mode health
  option httpchk

# ...
# Other TCP frontends
# ...


However, if I add `log-format "GARBAGE"` to the health_url listener, 
then the error goes away.


Or you can specify "option tcplog" in your "health_url" section (or any 
other tcp sections).
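
For instance, a minimal sketch keeping the single "defaults" section unchanged:

listen health_url
  bind :27000
  mode health
  option httpchk
  option tcplog  # TCP log format here instead of the inherited HTTP-only log-format
  no log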



--
Cyril Bonté



Re: log-format in defaults section in 1.7

2017-11-02 Thread Baptiste
Hi,

This is due to the way the configuration parser currently works.
It parses those lines "atomically". We might want to move this
configuration check into the sanity checks that are executed once
the configuration has been loaded.

Baptiste


On Thu, Nov 2, 2017 at 11:08 PM, Thayne McCombs 
wrote:

> So, I looked into using `no log` in non http frontends. But that isn't
> sufficient.
>
> For example, if I have:
>
> global
>   log-tag "test"
>   log localhost:514 len 65535 local2 info info
>
> defaults
>   mode http
>   timeout connect 100
>   timeout server 3
>   timeout client 3
>   log-format "%Tq"
>
> listen mine
>   log global
>   bind :80
>   server localhost localhost:8080
>
> listen health_url
>   bind :27000
>   mode health
>   option httpchk
>   no log
>
>
> I still get [ALERT] 305/160229 (21975) : Parsing [test.cfg:10]: failed to
> parse log-format : format variable 'Tq' is reserved for HTTP mode.
>
> However, if I add `log-format "GARBAGE"` to the health_url listener, then
> the error goes away.
>
> It seems like if I specify `no log` then the log-format should be ignored,
> even if it comes from the default.
>
> On Mon, Oct 9, 2017 at 10:35 AM Thayne McCombs 
> wrote:
>
>> Actually, I just remembered that we do have a few tcp mode frontends.
>> Maybe that is the reason for the error? Still, is there a way to use a
>> default log-format for the http frontends? I'm going to try turning logs
>> off for tcp mode frontends and see if that fixes the error.
>>
>> On Mon, Oct 9, 2017 at 10:22 AM Thayne McCombs 
>> wrote:
>>
>>> I am working on upgrading haproxy from 1.6 to 1.7 on our load balancers.
>>>
>>> However, on 1.7 with our current config I get the following error:
>>>
>>> [ALERT] 278/170234 (8363) : Parsing [/etc/haproxy/haproxy-staged.cfg:31]:
>>> failed to parse log-format : format variable 'Tq' is reserved for HTTP mode.
>>>
>>> The log-format directive is in the *defaults* section, which also has a 
>>> *mode
>>> http* directive. Was there a change in 1.7 that made the use of Tq (and
>>> other http specific variables) illegal in the log-format of a defaults
>>> section?
>>>
>>> All of my frontends are http frontends. Is there any way I can use a
>>> common default log-format for all of them that uses http variables (for
>>> example, something like an http-log-format directive) in 1.7? Or do I have
>>> to duplicate the log-format for all of my frontends?
>>>
>>> Thanks,
>>>
>>> Thayne McCombs
>>> Lucid Software, Inc.
>>> --
>>> *Thayne McCombs*
>>> *Senior Software Engineer*
>>> Lucid Software, Inc.
>>>
>>> --
>> *Thayne McCombs*
>> *Senior Software Engineer*
>> Lucid Software, Inc.
>>
>> --
> *Thayne McCombs*
> *Senior Software Engineer*
> Lucid Software, Inc.
>
>


Re: [PATCH] LDAP authentication

2017-11-02 Thread Willy Tarreau
Hi Igor,

On Fri, Nov 03, 2017 at 09:47:59AM +1100, Igor Cicimov wrote:
> How about cases that have light load :-).

It's not a matter of load, it's a matter of being totally unreliable. You
just need a single request to a dead server and your haproxy will be frozen
until you kill and restart it. And worse, by claiming that it somewhat works,
some people will blindly deploy it then complain that haproxy freezes all
the time. No, it's really not the right way to do it at all.

> I've been asking/waiting for
> this feature for a long time and think it is (going to be ) a very
> valuable addition to haproxy. Anyway, if you had experienced some issues
> with the lib I wonder what is the way Apache and Nginx are doing it without
> any performance impact? (or so we think?)
> 
> Maybe I would argue that as a feature it should be included in haproxy
> anyway and be left to the users to opt for using it or not, with heavy
> warning about possible performance impact.

There's little *performance* impact, but a huge *reliability* impact. And
that's out of the question here. We created SPOE *exactly* for this type of
thing. And here the code in the patch looks simple, I'm pretty sure it
will be easy to port into one of the example SPOE modules. It will also
bring other benefits such as having high availability and load balancing
over multiple LDAP servers, not having to cut/reopen the connections on
restart, not having to restart haproxy to modify the ldap config etc.

For such problematic libs, SPOE is *the* right solution. You can even
use threads or anything you want to work around the broken lib and
haproxy's traffic will never be affected.
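
For illustration, a rough sketch of the wiring on the haproxy side (all names,
paths and timeouts here are made up; the agent itself, i.e. the process that
would perform the blocking ldap_simple_bind_s() calls outside haproxy's event
loop, could be started from the contrib/spoa_example code):

  # haproxy.cfg
  frontend fe_main
      bind :80
      mode http
      filter spoe engine ldap-auth config /etc/haproxy/spoe-ldap.conf
      default_backend be_app   # be_app: your normal application backend (not shown)

  backend ldap-agents
      mode tcp
      balance roundrobin
      server agent1 127.0.0.1:12345 check
      server agent2 127.0.0.2:12345 check

  # /etc/haproxy/spoe-ldap.conf
  [ldap-auth]
  spoe-agent ldap-auth-agent
      messages check-credentials
      timeout hello      2s
      timeout idle       30s
      timeout processing 100ms
      use-backend ldap-agents

  spoe-message check-credentials
      args authorization=req.hdr(Authorization)
      event on-frontend-http-request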

Willy



Re: [PATCH] LDAP authentication

2017-11-02 Thread Igor Cicimov
Hi Thierry,

On Fri, Nov 3, 2017 at 8:16 AM, Thierry Fournier  wrote:

>
> > On 2 Nov 2017, at 21:56, my.card@web.de wrote:
> >
> > Hi all,
> >
> > the attached patch implements authentication against an LDAP Directory
> Server. It has been tested on Ubuntu 16.04 (x86_64) using libldap-2.4-2 on
> the client side and 389-ds-base 1.3.4.9-1 on the server side. Add
> USE_LDAP=1 to your make command line to compile it in.
> >
> > What do I have to do to get this functionality integrated within the
> > next official haproxy release?
> >
> > I'm currently trying to figure out how to pass commas ',' and brackets
> > '(' and ')' as arguments to http_auth_ldap. Do you have any hints for me on
> > this topic?
> >
> > Feedback is very welcome!
>
>
> Hi, thanks for your patch.
>
> I already tried to add LDAP authentication in haproxy, but unfortunately the
> OpenLDAP library is only available in blocking mode. Unfortunately (again)
> OpenLDAP seems to be the only LDAP library available. So during the
> processing of the sample fetch “http_auth_ldap”, the following functions
> perform network requests and block HAProxy.
>
>  * ldap_initialize (maybe)
>  * ldap_simple_bind_s
>  * ldap_search_ext_s
>
> HAProxy is blocked waiting for the LDAP response, so during this time HAProxy
> no longer processes HTTP requests. This behavior is not acceptable under
> heavy load.
>

How about cases that have light load :-). I've been asking/waiting for
this feature for a long time and think it is (going to be) a very
valuable addition to haproxy. Anyway, if you have experienced some issues
with the lib, I wonder how Apache and Nginx are doing it without
any performance impact (or so we think?).

Maybe I would argue that as a feature it should be included in haproxy
anyway and be left to the users to opt in to using it or not, with a heavy
warning about the possible performance impact.


> Two ways of performing LDAP authentication:
>
>  * easy: look at the SPOE protocol. You just write a multithreaded server which
> listens to HAProxy over SPOE, performs the LDAP request and returns the response.
> You will find an example of an SPOE server in the contrib directory. I guess
> that an SPOE contrib for LDAP authentication would be welcome.
>
>  * difficult: build your own LDAP payload (very hard with v3 and crypto) and
> write code that uses a socket, like SPOE or a Lua cosocket
>
> Best regards,
> Thierry
>
>
> >
> > Kind regards,
> >
> >   Danny
> > <0001-Simple-LDAP-authentication.patch>
>
>
Cheers,
Igor


Re: log-format in defaults section in 1.7

2017-11-02 Thread Thayne McCombs
So, I looked into using `no log` in non http frontends. But that isn't
sufficient.

For example, if I have:

global
  log-tag "test"
  log localhost:514 len 65535 local2 info info

defaults
  mode http
  timeout connect 100
  timeout server 3
  timeout client 3
  log-format "%Tq"

listen mine
  log global
  bind :80
  server localhost localhost:8080

listen health_url
  bind :27000
  mode health
  option httpchk
  no log


I still get [ALERT] 305/160229 (21975) : Parsing [test.cfg:10]: failed to
parse log-format : format variable 'Tq' is reserved for HTTP mode.

However, if I add `log-format "GARBAGE"` to the health_url listener, then
the error goes away.

It seems like if I specify `no log` then the log-format should be ignored,
even if it comes from the default.

On Mon, Oct 9, 2017 at 10:35 AM Thayne McCombs 
wrote:

> Actually, I just remembered that we do have a few tcp mode frontends.
> Maybe that is the reason for the error? Still, is there a way to use a
> default log-format for the http frontends? I'm going to try turning logs
> off for tcp mode frontends and see if that fixes the error.
>
> On Mon, Oct 9, 2017 at 10:22 AM Thayne McCombs 
> wrote:
>
>> I am working on upgrading haproxy from 1.6 to 1.7 on our load balancers.
>>
>> However, on 1.7 with our current config I get the following error:
>>
>> [ALERT] 278/170234 (8363) : Parsing [/etc/haproxy/haproxy-staged.cfg:31]:
>> failed to parse log-format : format variable 'Tq' is reserved for HTTP mode.
>>
>> The log-format directive is in the *defaults* section, which also has a *mode
>> http* directive. Was there a change in 1.7 that made the use of Tq (and
>> other http specific variables) illegal in the log-format of a defaults
>> section?
>>
>> All of my frontends are http frontends. Is there any way I can use a
>> common default log-format for all of them that uses http variables (for
>> example, something like an http-log-format directive) in 1.7? Or do I have
>> to duplicate the log-format for all of my frontends?
>>
>> Thanks,
>>
>> Thayne McCombs
>> Lucid Software, Inc.
>> --
>> *Thayne McCombs*
>> *Senior Software Engineer*
>> Lucid Software, Inc.
>>
>> --
> *Thayne McCombs*
> *Senior Software Engineer*
> Lucid Software, Inc.
>
> --
*Thayne McCombs*
*Senior Software Engineer*
Lucid Software, Inc.


Re: [PATCH] LDAP authentication

2017-11-02 Thread Thierry Fournier

> On 2 Nov 2017, at 21:56, my.card@web.de wrote:
> 
> Hi all,
>  
> the attached patch implements authentication against an LDAP Directory 
> Server. It has been tested on Ubuntu 16.04 (x86_64) using libldap-2.4-2 on 
> the client side and 389-ds-base 1.3.4.9-1 on the server side. Add USE_LDAP=1 
> to your make command line to compile it in.
>  
> > What do I have to do to get this functionality integrated within the next
> official haproxy release?
>  
> I'm currently trying to figure out how to pass commas ',' and brackets '('
> and ')' as arguments to http_auth_ldap. Do you have any hints for me on this
> topic?
>  
> Feedback is very welcome!


Hi, thanks for your patch.

I already tried to add LDAP authentication in haproxy, but unfortunately the OpenLDAP
library is only available in blocking mode. Unfortunately (again) OpenLDAP
seems to be the only LDAP library available. So during the processing of the
sample fetch “http_auth_ldap”, the following functions perform network
requests and block HAProxy.

 * ldap_initialize (maybe)
 * ldap_simple_bind_s
 * ldap_search_ext_s

HAProxy is blocked waiting for the LDAP response, so during this time HAProxy no
longer processes HTTP requests. This behavior is not acceptable under heavy
load.

Two ways of performing LDAP authentication:

 * easy: look at the SPOE protocol. You just write a multithreaded server which
listens to HAProxy over SPOE, performs the LDAP request and returns the response.
You will find an example of an SPOE server in the contrib directory. I guess that an
SPOE contrib for LDAP authentication would be welcome.

 * difficult: build your own LDAP payload (very hard with v3 and crypto) and
write code that uses a socket, like SPOE or a Lua cosocket

Best regards,
Thierry


>  
> Kind regards,
>  
>   Danny
> <0001-Simple-LDAP-authentication.patch>




[PATCH] LDAP authentication

2017-11-02 Thread My . Card . God
Hi all,

 

the attached patch implements authentication against an LDAP Directory Server. It has been tested on Ubuntu 16.04 (x86_64) using libldap-2.4-2 on the client side and 389-ds-base 1.3.4.9-1 on the server side. Add USE_LDAP=1 to your make command line to compile it in.

 

What do I have to do to get this functionality integrated within the next official haproxy release?

 

I'm currently trying to figure out how to pass commas ',' and brackets '(' and ')' as arguments to http_auth_ldap. Do you have any hints for me on this topic?

 

Feedback is very welcome!

 

Kind regards,

 

  Danny

From 5e50e7bc3b619a45e0ad862eaf501538d4828c97 Mon Sep 17 00:00:00 2001
From: Daniela Sonnenschein 
Date: Thu, 2 Nov 2017 20:41:02 +0100
Subject: [PATCH] Simple LDAP authentication

This patch adds LDAP authentication via an http_auth_ldap
configuration fetch function.

http_auth_ldap([<TEMPLATE>,]<LDAP-URL>): boolean

  Returns a boolean indicating whether the authentication data
  received from the client match a username & password stored
  in an LDAP directory server. The credentials are verified
  using ldap_simple_bind_s(3). This fetch function is not really
  useful outside of ACLs. Currently only http basic auth is
  supported.

  The optional TEMPLATE is used to create a Distinguished Name
  (DN), that is used to bind to the LDAP directory. The string
  USERCN is replaced with the username supplied by the client.
  For example specify TEMPLATE to be something like:

"CN=USERCN,OU=People,DC=my,DC=corp"

  If no TEMPLATE is specified, it is expected that the user
  part supplied via HTTP will be able to bind to the directory
  as-is.

  The LDAP-URL is used as follows: the string USERDN is replaced
  with the DN of the user (after applying the TEMPLATE). It
  specifies the LDAP directory server, the search scope, the
  filter, and (unused) attributes (see RFC 4516 for more detailed
  information). The following example might be used to match the
  sample data below:

"ldap://dirsrv.my.corp/cn=haproxy,ou=groups,dc=my,dc=corp?uniqueMember?sub?(&(objectClass=groupOfUniqueNames)(uniqueMember=USERDN))"

  The authentication algorithm used:

- Connect to ldap://dirsrv.my.corp:389 (no TLS here, but
  specifying ldaps:// is possible, see ldap_initialize(3))
- Bind with the credentials supplied by the client using
  ldap_simple_bind_s(3) (see TEMPLATE description above)
- Search the directory server with ldap_search_ext_s(3):
  - "cn=haproxy,ou=groups,dc=my,dc=corp" is the base DN
for the search within the directory
  - Retrieve the attribute(s): "uniqueMember"
  - Use the scope "sub"
  - Use the filter "(&(objectClass=groupOfUniqueNames)(uniqueMember=CN=...))"
- If at least one result is returned (the group!) access is
  granted to the user.

Sample test data for dirsrv(8):

  User allowed to use the protected resource:

# Haproxy Technical User, people, my.corp
dn: cn=Haproxy Technical User,ou=people,dc=my,dc=corp
cn: Haproxy Technical User
givenName: Haproxy
gidNumber: 321
homeDirectory: /var/run/haproxy
sn: Technical User
loginShell: /sbin/nologin
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: top
objectClass: organizationalPerson
objectClass: person
uidNumber: 321
uid: haproxy
userPassword:: e0NSWVBUfSQxJHBaSDNJQmpuJGpZQ3AxYmt3TFkzakhuUkJCUG1VUS4=

  Static group of all resource users:

# haproxy, groups, my.corp
dn: cn=haproxy,ou=groups,dc=my,dc=corp
objectClass: groupOfUniqueNames
objectClass: top
owner: cn=Haproxy Technical User,ou=people,dc=my,dc=corp
uniqueMember: cn=Haproxy Technical User,ou=people,dc=my,dc=corp
uniqueMember: cn=Another User,ou=people,dc=my,dc=corp
cn: haproxy

  ACI to allow the users to search and read their own group:

# haproxy, groups, my.corp
dn: cn=haproxy,ou=groups,dc=my,dc=corp
aci: (target="ldap:///cn=haproxy,ou=groups,dc=my,dc=corp;) (targetattr="uniqueMember || objectClass") (version 3.0; acl "HA-Proxy Administrators"; allow (search, read) groupdn = "ldap:///cn=haproxy,ou=groups,dc=my,dc=corp;;)
---
 Makefile |   9 ++
 include/proto/ldap.h |  28 ++
 src/ldap.c   | 237 +++
 src/proto_http.c |  28 ++
 tests/test-ldap.cfg  |  67 +++
 5 files changed, 369 insertions(+)
 create mode 100644 include/proto/ldap.h
 create mode 100644 src/ldap.c
 create mode 100644 tests/test-ldap.cfg

diff --git a/Makefile b/Makefile
index f066f31..7045b95 100644
--- a/Makefile
+++ b/Makefile
@@ -498,6 +498,15 @@ BUILD_OPTIONS   += $(call ignore_implicit,USE_ZLIB)
 OPTIONS_LDFLAGS += $(if $(ZLIB_LIB),-L$(ZLIB_LIB)) -lz
 endif
 
+ifneq ($(USE_LDAP),)
+LDAP_INC =
+LDAP_LIB =
+OPTIONS_OBJS+= src/ldap.o
+OPTIONS_CFLAGS  += -DUSE_LDAP $(if $(LDAP_INC),-I$(LDAP_INC))
+BUILD_OPTIONS   += $(call ignore_implicit,USE_LDAP)

Re: 1.8-RC1 100% cpu usage

2017-11-02 Thread Mihail Samoylov
Thank you. Now everything works as expected.

On Fri, Nov 3, 2017 at 12:09 AM, Lukas Tribus  wrote:

> Hello Mihail,
>
>
> 2017-11-02 15:20 GMT+01:00 Mihail Samoylov :
> > I recompiled with explicit disabling threads:
> >
> > root@ubuntu-xenial:~/4/haproxy-1.8-rc1# ./haproxy -vv
> > HA-Proxy version 1.8-rc1-901f75c 2017/10/31
> > Copyright 2000-2017 Willy Tarreau 
> >
> > Build options :
> >   TARGET  = linux2628
> >   CPU = native
> >   CC  = gcc
> >   CFLAGS  = -O2 -march=native -g -fno-strict-aliasing
> > -Wdeclaration-after-statement -fwrapv -Wno-unused-label
> >   OPTIONS = USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 USE_LIBCRYPT=1
> > USE_THREAD=0 USE_OPENSSL=1
> >
> > Problem persists.
>
> Can you upgrade to latest git, there are a number of issues fixed now,
> including the healthcheck/threading issue and an issue with
> errorfiles (which also causes a spinning process).
>
>
>
> Lukas
>



-- 
Best regards, Mikhail Samoylov
phone: +7-903-936-2664
email: mihail.samoy...@gmail.com


Re: How to debug haproxy + lua?

2017-11-02 Thread Aleksandar Lazic

Hi.

-- Original message --
From: "aogooc xu" 
To: haproxy@formilux.org
Sent: 02.11.2017 08:57:17
Subject: How to debug haproxy + lua?

In a highly concurrent environment there is a blocking function; how can I
locate it quickly?

I am very confused, but it's too much trouble to add logging.


Please can you send us the following output.

haproxy -vv

your lua scripts
your config

Regards
Aleks




Re: 1.8-RC1 100% cpu usage

2017-11-02 Thread Lukas Tribus
Hello Mihail,


2017-11-02 15:20 GMT+01:00 Mihail Samoylov :
> I recompiled with explicit disabling threads:
>
> root@ubuntu-xenial:~/4/haproxy-1.8-rc1# ./haproxy -vv
> HA-Proxy version 1.8-rc1-901f75c 2017/10/31
> Copyright 2000-2017 Willy Tarreau 
>
> Build options :
>   TARGET  = linux2628
>   CPU = native
>   CC  = gcc
>   CFLAGS  = -O2 -march=native -g -fno-strict-aliasing
> -Wdeclaration-after-statement -fwrapv -Wno-unused-label
>   OPTIONS = USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 USE_LIBCRYPT=1
> USE_THREAD=0 USE_OPENSSL=1
>
> Problem persists.

Can you upgrade to latest git, there are a number of issues fixed now,
including the healthcheck/threading issue and an issue with
errorfiles (which also causes a spinning process).



Lukas



HTTP DELETE command failing

2017-11-02 Thread Norman Branitsky
In HAProxy version 1.7.5,
I see GET and POST commands working correctly but DELETE fails:
[01/Nov/2017:11:02:34.423] main_ssl~ ssl_training-01/training-01. 0/0/0/20/69 
200 402587 - -  6/6/0/0/0 0/0 "GET 
/etk-training-ora1/etk-apps/rt/admin/manage-users.js HTTP/1.1"
Nov  1 11:02:34 localhost haproxy[40877]: 10.20.120.220:64971 
[01/Nov/2017:11:02:34.690] main_ssl~ ssl_training-01/training-01. 0/0/0/150/151 
200 1490 - -  6/6/0/1/0 0/0 "POST /etk-training-ora1/auth/oauth/token 
HTTP/1.1"
Nov  1 11:02:34 localhost haproxy[40877]: 10.20.120.220:64971 
[01/Nov/2017:11:02:34.889] main_ssl~ ssl_training-01/training-01. 0/0/1/54/56 
200 388 - -  6/6/1/1/0 0/0 "GET 
/etk-training-ora1/private/api/systemPreferences/maxPageSize HTTP/1.1"
Nov  1 11:02:35 localhost haproxy[40877]: 10.20.120.220:64970 
[01/Nov/2017:11:02:34.890] main_ssl~ ssl_training-01/training-01. 0/0/1/329/331 
200 19968 - -  6/6/0/0/0 0/0 "GET 
/etk-training-ora1/private/api/users?page=0=50=accountName,ASC 
HTTP/1.1"
Nov  1 11:02:42 localhost haproxy[40877]: 10.20.120.220:64971 
[01/Nov/2017:11:02:42.571] main_ssl~ main_ssl/ 0/-1/-1/-1/0 403 188 - - 
PR-- 4/4/0/0/0 0/0 "DELETE /etk-training-ora1/private/api/users/62469 HTTP/1.1"

In the GET and POST commands, path_beg matches /etk-training-ora1.
It appears that in the DELETE command path_beg returns nothing or something 
else.
Suggestions, please?

Norman

Norman Branitsky
Cloud Architect
MicroPact
(o) 416.916.1752
(c) 416.843.0670
(t) 1-888-232-0224 x61752
www.micropact.com
Think it > Track it > Done



Re: Diagnose a PD-- status

2017-11-02 Thread Mildis
I ran in debug mode and found the issue:

156e:ft-public.clireq[000c:000d]: PUT 
/api/products/5/versions/5/documentations HTTP/1.1
156e:ft-public.clihdr[000c:000d]: X-CSRF-TOKEN: 
de035ec0-58a3-4668-9e43-e4b36911d2ff
156e:ft-public.clihdr[000c:000d]: Content-Type: application/json
156e:ft-public.clihdr[000c:000d]: accept: application/json
156e:ft-public.clihdr[000c:000d]: Content-Length: 12605
156e:ft-public.clihdr[000c:000d]: Host: edc-ci.geomath.fr
156e:ft-public.clihdr[000c:000d]: Connection: Keep-Alive
156e:ft-public.clihdr[000c:000d]: User-Agent: Apache-HttpClient/4.5.3 
(Java/1.8.0_131)
156e:ft-public.clihdr[000c:000d]: Cookie: 
CSRF-TOKEN=de035ec0-58a3-4668-9e43-e4b36911d2ff; 
JSESSIONID=8Xn2-NKJJuaMo-eI6c5PvSTwxNf5BLugv7e7rmes; 
remember-me=Y1luT2VGVXZnMHZvRWN6ZkluY3F6Zz09Olh4UmZmM3BpQVJxaVlZUmhWbkI1MlE9PQ
156e:ft-public.clihdr[000c:000d]: Accept-Encoding: gzip,deflate
156e:bck-traefik.srvrep[000c:000d]: HTTP/1.1 200 OK
156e:bck-traefik.srvhdr[000c:000d]: Cache-Control: no-cache, no-store, 
max-age=0, must-revalidate
156e:bck-traefik.srvhdr[000c:000d]: Content-Type: 
application/json;charset=UTF-8
156e:bck-traefik.srvhdr[000c:000d]: Date: Thu, 02 Nov 2017 13:47:18 GMT
156e:bck-traefik.srvhdr[000c:000d]: Expires: 0
156e:bck-traefik.srvhdr[000c:000d]: Pragma: no-cache
156e:bck-traefik.srvhdr[000c:000d]: Strict-Transport-Security: 
max-age=31536000 ; includeSubDomains
156e:bck-traefik.srvhdr[000c:000d]: X-Application-Context: 
application:prod,mysql:8081
156e:bck-traefik.srvhdr[000c:000d]: X-Content-Type-Options: nosniff
156e:bck-traefik.srvhdr[000c:000d]: X-Xss-Protection: 1; mode=block
156e:bck-traefik.srvhdr[000c:000d]: Transfer-Encoding: chunked
[WARNING] 305/144718 (21260) : HTTP compression failed: unexpected behavior of 
previous filters


Compression was enabled in the defaults section with:
   compression algo gzip
   compression type text/css text/html text/javascript 
application/javascript text/plain text/xml application/json

By commenting these two lines, it went OK.

However, I still can’t figure out what is causing this behavior: there are
many other calls to the same URL.

Any clues ?

Regards,
mildis

> On 1 Nov 2017, at 12:37, Mildis wrote:
> 
> Hi,
> 
> I got a request ending in a PD status.
> However, ‘show errors’ does not tell anything about that.
> 
> backend server returned 200, haproxy returned 200 to the client.
> The entire request took 202ms and returned 15k of data : 3/0/0/199/202 200 
> 15262
> 
> Is there a way to further diagnose the PD status?
> Maybe make haproxy log the reason why it ended in PD?
> 
> Thanks,
> mildis




Re: [ANNOUNCE] haproxy-1.8-rc1 : the last mile

2017-11-02 Thread William Lallemand
Hi Lukas,

On Wed, Nov 01, 2017 at 09:02:53PM +0100, Willy Tarreau wrote:
> Hi Lukas,
> 
> On Wed, Nov 01, 2017 at 08:43:19PM +0100, Lukas Tribus wrote:
> > Just upgrading the binary from -dev3 to -rc1 however broke my setup:
> > Turns out that the new object caching code breaks when another filter
> > (compression) is already enabled (at config parsing stage) - even when
> > object caching is not enabled itself:
> > 
> (...)
> > 
> > lukas@dev:~/haproxy$ ./haproxy -f ../haproxy.cfg
> > [ALERT] 304/203750 (6995) : Proxy 'http_in': unable to find the cache
> > '(null)' referenced by the filter 'cache'.
> > [ALERT] 304/203750 (6995) : Proxy 'bk_testbk': unable to find the
> > cache '(null)' referenced by the filter 'cache'.
> > [ALERT] 304/203750 (6995) : Fatal errors found in configuration.
> > lukas@dev:~/haproxy$
> > 
> > Now I'm going to disable compression and try the fun stuff :)
> 

That's a bug in the post-parsing callback: it tries to use the cache with a
filter which is not a cache. I just fixed it in master.


> Thanks for reporting, such type of early breakage is indeed expected
> after I stressed everyone to merge. We know that the individual parts work
> pretty well overall but some integration work is now needed.
> 
> You may have to explicitly use the compression filter by the way,
> though I have no idea how to do that but I think it's mentionned
> somewhere in the config manual. William was about to write some doc
> when I interrupted him to get his code, but he'll certainly get back
> to this soon.

It will need a configuration filter keyword for the cache, to define the
explicit order of the filters. 

The cache might not work after the compression in the current state of the
filter API.

-- 
William Lallemand



re: [ANNOUNCE] haproxy-1.8-rc1 : the last mile

2017-11-02 Thread Robert Samuel Newson
Hi,

I think the "cert bundle" feature from 1.7 is broken in 1.8-rc1. My exact
config works with 1.7 but says this for 1.8-rc1:

unable to stat SSL certificate from file '/path/to/foo.pem': No such file or 
directory.

That is, it's attempting to load foo.pem, not foo.pem.rsa or foo.pem.ecdsa like 
1.7 does.
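
For context, a sketch of the 1.7-style layout in question (paths are
placeholders): the base file does not exist, only the per-algorithm files do,
and 1.7 picks them up as a bundle while 1.8-rc1 apparently stats the base name:

    # on disk:
    #   /path/to/foo.pem.rsa
    #   /path/to/foo.pem.ecdsa
    frontend https_in
        bind :443 ssl crt /path/to/foo.pem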

I also tried asking the mailing list daemon for help by emailing 
haproxy+h...@formilux.org as the signup confirmation specifies; the full body
of that help reply is just "Hello,". I was hoping to ask the daemon to send me the
initial message in this thread so I could reply into the thread properly. Sadly 
the mailing list archive doesn't show any of the headers I might have injected 
to get threading working that way, so sorry for breaking the thread but I 
really tried not to.

I am very excited about many of the new features in 1.8 and am itching to try 
them.

B.




Re: 1.8-RC1 100% cpu usage

2017-11-02 Thread Lukas Tribus
2017-11-02 15:20 GMT+01:00 Mihail Samoylov :
> I recompiled with explicit disabling threads:

The variable would have to be empty, not 0, to make any difference (USE_THREAD=).

Still, threads are not supposed to be used without explicit
configuration (nbthreads),
so this is probably a different problem indeed.
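
In other words, something like this (sketch; the other build flags are taken
from the -vv output above):

    make TARGET=linux2628 CPU=native USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 \
         USE_LIBCRYPT=1 USE_OPENSSL=1 USE_THREAD=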


Lukas



Re: 1.8-RC1 100% cpu usage

2017-11-02 Thread Mihail Samoylov
I recompiled with explicit disabling threads:

root@ubuntu-xenial:~/4/haproxy-1.8-rc1# ./haproxy -vv
HA-Proxy version 1.8-rc1-901f75c 2017/10/31
Copyright 2000-2017 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = native
  CC  = gcc
  CFLAGS  = -O2 -march=native -g -fno-strict-aliasing
-Wdeclaration-after-statement -fwrapv -Wno-unused-label
  OPTIONS = USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 USE_LIBCRYPT=1
USE_THREAD=0 USE_OPENSSL=1

Problem persists.

On Thu, Nov 2, 2017 at 8:52 PM, Lukas Tribus  wrote:

> Hi,
>
>
> 2017-11-02 14:33 GMT+01:00 Pavlos Parissis :
> > On 02/11/2017 02:24 μμ, Mihail Samoylov wrote:
> >> Hi.
> >>
> >> I've tried 1.8-RC1 and in my case it ate 100% CPU and didn't work. I
> found out that this is caused
> >> by option httpchk. When I commented this line everything became fine.
> Some details:
> >>
> >
> > Willy mentioned in the announcement that checks are broken, so the
> behavior you observed is expected.
>
> Only when threads are enabled. Without threads, health checking works
> fine in -rc1.
>
> Mihail are you sure you did not enable threads?
>
>
> Lukas
>



-- 
Best regards, Mikhail Samoylov
phone: +7-903-936-2664
email: mihail.samoy...@gmail.com


[PATCH] send-proxy-v2-ssl-crypto parameter

2017-11-02 Thread Emmanuel Hocdet

Hi Willy,

These patches implement send-proxy-v2-ssl-crypto, which adds CIPHER,
SIG_ALG and KEY_ALG to send-proxy-v2-ssl as described in proxy-protocol.txt.
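
A minimal usage sketch (the address is a placeholder, and the keyword only
exists with these patches applied):

  backend be_app
      server app1 192.0.2.10:8080 send-proxy-v2-ssl-crypto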

++
Manu




0001-MINOR-ssl-extract-full-pkey-info-in-load-certificate.patch
Description: Binary data


0002-MINOR-ssl-add-ssl_sock_get_pkey_algo-function.patch
Description: Binary data


0003-MINOR-ssl-add-ssl_sock_get_cert_sign-function.patch
Description: Binary data


0004-MINOR-connection-add-send-proxy-v2-ssl-crypto-parame.patch
Description: Binary data




Re: 1.8-RC1 100% cpu usage

2017-11-02 Thread Lukas Tribus
Hi,


2017-11-02 14:33 GMT+01:00 Pavlos Parissis :
> On 02/11/2017 02:24 μμ, Mihail Samoylov wrote:
>> Hi.
>>
>> I've tried 1.8-RC1 and in my case it ate 100% CPU and didn't work. I found 
>> out that this is caused
>> by option httpchk. When I commented this line everything became fine. Some 
>> details:
>>
>
> Willy mentioned in the announcement that checks are broken, so the behavior
> you observed is expected.

Only when threads are enabled. Without threads, health checking works
fine in -rc1.

Mihail are you sure you did not enable threads?


Lukas



Re: 1.8-RC1 100% cpu usage

2017-11-02 Thread Pavlos Parissis
On 02/11/2017 02:24 μμ, Mihail Samoylov wrote:
> Hi.
> 
> I've tried 1.8-RC1 and in my case it ate 100% CPU and didn't work. I found 
> out that this is caused
> by option httpchk. When I commented this line everything became fine. Some 
> details:
> 

Willy mentioned in the announcement that checks are broken, so the behavior
you observed is expected.

Cheers,
Pavlos



signature.asc
Description: OpenPGP digital signature


1.8-RC1 100% cpu usage

2017-11-02 Thread Mihail Samoylov
Hi.

I've tried 1.8-RC1 and in my case it ate 100% CPU and didn't work. I found
out that this is caused by option httpchk. When I commented this line
everything became fine. Some details:

root# haproxy -vv

HA-Proxy version 1.8-rc1-901f75c 2017/10/31
Copyright 2000-2017 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = native
  CC  = gcc
  CFLAGS  = -O2 -march=native -g -fno-strict-aliasing
-Wdeclaration-after-statement -fwrapv -Wno-unused-label
  OPTIONS = USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 USE_LIBCRYPT=1
USE_OPENSSL=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Built with network namespace support.
Built without compression support (neither USE_ZLIB nor USE_SLZ are set).
Compression algorithms supported : identity("identity")
Encrypted password support via crypt(3): yes
Built without PCRE or PCRE2 support (using libc's regex instead)

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace

root# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial


root# uname -a
Linux f2 4.4.0-96-generic #119-Ubuntu SMP Tue Sep 12 14:59:54 UTC 2017
x86_64 x86_64 x86_64 GNU/Linux

root# cat ./haproxy.cfg
global
chroot /var/lib/haproxy
user haproxy
group haproxy
daemon
defaults
mode http
frontend in
bind *:80
backend www
mode http
option httpchk GET /api/check HTTP/1.0\r\nHost:\ www.ru
server www 10.3.1.14:80 check inter 1s

strace:

chroot("/var/lib/haproxy")  = 0
chdir("/")  = 0
getgroups(0, NULL)  = 1
setgroups(0, [])= 0
setgid(122) = 0
setuid(115) = 0
getrlimit(RLIMIT_NOFILE, {rlim_cur=4012, rlim_max=4012}) = 0
pipe([5, 6])= 0
fcntl(5, F_SETFL, O_RDONLY|O_NONBLOCK)  = 0
epoll_ctl(3, EPOLL_CTL_ADD, 4, {EPOLLIN|EPOLLRDHUP, {u32=4, u64=4}}) = 0
epoll_wait(3, [], 200, 0)   = 0
socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 7
fcntl(7, F_SETFL, O_RDONLY|O_NONBLOCK)  = 0
setsockopt(7, SOL_TCP, TCP_NODELAY, [1], 4) = 0
connect(7, {sa_family=AF_INET, sin_port=htons(80),
sin_addr=inet_addr("10.3.1.14")}, 16) = -1 EINPROGRESS (Operation now in
progress)
epoll_wait(3, [], 200, 0)   = 0
recvfrom(7, 0x1fc8f34, 16384, 0, NULL, NULL) = -1 EAGAIN (Resource
temporarily unavailable)
connect(7, {sa_family=AF_INET, sin_port=htons(80),
sin_addr=inet_addr("10.3.1.14")}, 16) = 0
epoll_ctl(3, EPOLL_CTL_ADD, 7, {EPOLLIN|EPOLLRDHUP, {u32=7, u64=7}}) = 0
epoll_wait(3, [], 200, 1000)= 0
epoll_wait(3, [], 200, 0)   = 0
getsockopt(7, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
^C--- SIGINT {si_signo=SIGINT, si_code=SI_KERNEL} ---
strace: Process 20581 detached

gdb:
(gdb) bt
#0  0x0046ada1 in pl_cpu_relax () at include/import/atomic-ops.h:25
#1  chk_report_conn_err (check=check@entry=0x13bb7d8,
errno_bck=errno_bck@entry=0, expired=1) at src/checks.c:656
#2  0x0046ec87 in process_chk_conn (t=0x13b3870) at
src/checks.c:2196
#3  process_chk (t=0x13b3870) at src/checks.c:2259
#4  0x004d5336 in process_runnable_tasks () at src/task.c:261
#5  0x004a332f in run_poll_loop () at src/haproxy.c:2261
#6  run_thread_poll_loop (data=data@entry=0x13b3ab0) at src/haproxy.c:2310
#7  0x0040ab19 in main (argc=, argv=0x7ffe3282f1d8)
at src/haproxy.c:2835


How to debug haproxy + lua?

2017-11-02 Thread aogooc xu
In a highly concurrent environment there is a blocking function; how can I
locate it quickly?

I am very confused, but it's too much trouble to add logging.


Re: [ANNOUNCE] haproxy-1.8-rc1 : the last mile

2017-11-02 Thread Willy Tarreau
Hi Lukas,

On Wed, Nov 01, 2017 at 11:35:30PM +0100, Lukas Tribus wrote:
> In ALPN, the client announces the supported protocols, currently for example
> http/1.1 and h2, and then the server picks a protocol out of that
> selection, based on its own preference.

Yep.

> However, with clients supporting both http/1.1 and h2 the configuration:
> alpn http/1.1,h2
> 
> always leads to HTTP/1.1 (on both curl and Chrome) here.
> 
> I had to turn it around, for HTTP/2 to be actually negotiated:
> alpn h2,http/1.1

Hmmm I think you're right, I've been doing most of my tests with "h2"
only and figured it would be good to propose h1 as well in the mail
announce so that testers don't see connections rejected!

> With the latter configuration, HTTP/2 is actually used.

OK.
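
So a working sketch looks like this (certificate path is a placeholder):

  frontend https_in
      mode http
      bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
      default_backend bk_testbk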

> > "alpn http/1.1,h2" on a "bind" line present in an HTTP mode frontend,
> 
> And "alpn h2,http/1.1" does work via HTTP/2 in an HTTP mode frontend.
> 
> 
> But when the frontend is in TCP mode, and the backend is in HTTP mode,

I didn't think about this use case.

> it looks like the H2 parser is not started even when ALPN negotiates h2, and
> the HTTP/1.1 only backend server receives the "PRI" connection request 
> verbatim:

That's totally true. H2 is only used if negotiated by ALPN *and* the
frontend is in HTTP mode, as I couldn't find any case where we would
not want to use H2 there, so I preferred not to introduce another
option in the frontend.

> 000a:https_in.accept(0005)=0008 from [10.0.0.4:54737] ALPN=h2
> 000a:bk_testbk.clireq[0008:]: PRI * HTTP/2.0
> 000a:bk_testbk.srvrep[0008:adfd]: HTTP/1.1 400 Bad Request
> 000a:bk_testbk.srvhdr[0008:adfd]: Server: nginx/1.10.3 (Ubuntu)
> 000a:bk_testbk.srvhdr[0008:adfd]: Date: Wed, 01 Nov 2017 21:15:41 GMT
> 000a:bk_testbk.srvhdr[0008:adfd]: Content-Type: text/html
> 000a:bk_testbk.srvhdr[0008:adfd]: Content-Length: 182
> 000a:bk_testbk.srvhdr[0008:adfd]: Connection: close
> 000a:bk_testbk.srvcls[0008:adfd]
> 000b:bk_testbk.clireq[0008:]: SM
> 000b:bk_testbk.clicls[adfd:]
> 000b:bk_testbk.closed[adfd:]
> 
> 
> In the HTTP/1.1 world I used to think that even if the frontend is in TCP 
> mode,
> when haproxy selects a backend in HTTP mode, it understands that this is gonna
> be an HTTP session and it turns on HTTP handling for all intents and purposes.

Yes but here we have to make the choice while installing the mux, it's
important to understand that we *cannot* apply the frontend's switching
rules to the preface which purposely looks like a broken request, then
decide to route it to an HTTP backend. So we have to decide on the protocol
in the frontend.

> Of course, when both front and backend are in TCP mode and we negotiate h2
> via NPN or ALPN, we have to leave it alone (just terminate TLS).

Exactly.

> But in the "frontend->tcp mode; backend->http mode" case, should we be able to
> start the H2 parsing and actually handle this case? Or is this something we 
> are
> unable to cover?

I don't see a way to cover it unfortunately. However I think the HTTP parser
needs to detect the H2 preface and immediately reject it so that we don't
end up with hung requests like this.

> I'm certainly able to fix this issue here via configuration, I'm just
> not sure if that is the case for all the use-cases out there?

We at least need to address it in the configuration. We *may* be able to
address it in the config validity checking: since we're able to detect
http frontends switching to tcp backends and report an error, we might
be able to check for the opposite when one of the frontend's "bind" lines
is configured to support H2, and then emit a warning (since some people
might very well switch to a TCP backend only when ALPN says H2 to offload
the processing somewhere else). However that would leave them with an
unfixable warning and I don't want this either, as TCP->HTTP setups are
used when you want to support multiple protocols on the same port and
ALPN is a perfect example of this. So I think the best would be to just
reject the preface in the backend and report the reason in the logs.

> Now, comment number 3, pretty sure this is actually a bug :)
> 
> In my configuration, files transferred via HTTP2 are corrupted with
> random content
> from memory. I could spot my certificate, some HTTP headers and the content of
> /etc/resolv.conf in the HTTP payload. Using HTTP/1.1 (even by a client
> side change)
> fixes this. Just H2 is affected.

Ouch, not good.

> How to reproduce? In my case I'm transferring a 150KB .svg file
> through haproxy and
> it gets corrupted every time.
> 
> 
> The configuration is *VERY* basic (also, no filters are enabled :) ):
> 
> lukas@dev:~/haproxy$ cat ../haproxy.cfg
> defaults
>  timeout connect 5000
>  timeout client  5
>  timeout server  5
>  #compression algo gzip
> frontend http_in
>  bind :80
>  default_backend bk_testbk
> frontend https_in
>  mode http
>  bind :443 ssl crt