regarding tune.http.maxhdr

2017-02-07 Thread Sukbum Hong
Hi, All.

One customer is experiencing a "502 Bad Gateway" error when the Apache web
server responds with many (over 100) "Set-Cookie" headers, and we found the
following limitation in haproxy's default configuration.

The environment is L7 haproxy ==> nginx reverse proxy ==> Apache web server as
the HTTP origin.

tune.http.maxhdr 

  Sets the maximum number of headers in a request. When a request comes with a
  number of headers greater than this value (including the first line), it is
  rejected with a "400 Bad Request" status code. Similarly, too large responses
  are blocked with "502 Bad Gateway". The default value is 101, which is enough
  for all usages, considering that the widely deployed Apache server uses the
  same limit.

As it's hard to fix the customer's application immediately, we would like
to raise this value to something like 500 as a temporary workaround.
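For reference, the change being discussed is a one-liner in the global section; a minimal sketch (500 is the value suggested above, not a general recommendation):

```
global
    # raise the per-message header-count limit from the default 101
    tune.http.maxhdr 500
```

The per-header bookkeeping is small compared to bufsize, so the memory impact should be modest, but verify this against the documentation for the haproxy version in use.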

Questions.

1. What are the side effects of raising this value? Does it consume more
memory? The haproxy machine currently has 16GB.

2. As far as I know, the default Apache configuration has no limit on the
   number of headers; in my testing there is only an 8KB limit on header size.
   Is it correct that Apache has the same limit of 101 on the maximum number
   of headers?

Please advise.

Thanks


RE: Haproxy load balance with cookie

2017-02-07 Thread Hoang Le Trung
Hi Aaron,

Here is my haproxy configuration:

frontend kylin-web
bind 192.168.1.120:7077
acl url_static   path_beg   -i /kylin
stats enable
stats uri /haproxy?stats
stats realm Strictly\ Private
default_backend app
#-
backend app
balance roundrobin
server  hdp01.example.local 192.168.1.100:7070   check
server  hdp02.example.local 192.168.1.101:7070   check

HAproxy:

When I log in to the LB at 192.168.1.120:7077/kylin and run a query, I can see 
the requests load-balanced across all backend servers (no re-authentication 
needed): the first request goes to HDP01, the second to HDP02, the third to 
HDP01, and so on.

But when I use the RESTful API following these docs 
http://kylin.apache.org/docs16/howto/howto_use_restapi.html
I use a cookie to save the authentication, which means the client does not need 
to re-authenticate for each request sent. The problem happens here:
when the client logs in to the LB, it creates a JSESSIONID and saves it in a 
cookie. The JSESSIONID points only to the HDP01 server, so all subsequent 
requests go to that server. When the client authenticates again, the JSESSIONID 
points only to the HDP02 server and all subsequent requests go to that server.

What I want is this:
the client logs in to the LB once, the requests sent are balanced across the 
HDP01 and HDP02 servers, and the client does not need to re-authenticate.
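The haproxy side of what is being asked for is plain round-robin with no persistence; a sketch, assuming the Kylin nodes can validate each other's JSESSIONIDs (shared or replicated session storage is an application-side prerequisite that haproxy cannot supply):

```
backend app
    balance roundrobin
    # no "cookie" or stick directives: every request is re-balanced,
    # so both servers must accept the same session token
    server hdp01.example.local 192.168.1.100:7070 check
    server hdp02.example.local 192.168.1.101:7070 check
```

Without shared session state, round-robin and no re-authentication cannot be combined, which is the trade-off described in this thread.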


Thanks!

From: Aaron West [mailto:aa...@loadbalancer.org]
Sent: Tuesday, February 07, 2017 5:14 PM
To: Hoang Le Trung
Cc: haproxy@formilux.org
Subject: Re: Haproxy load balance with cookie

Hi Hoang,

Could we get your HAproxy config please? An example of both scenarios would be 
best.

It may help us to better understand your situation.

Aaron West

Loadbalancer.org Limited
+44 (0)330 380 1064
www.loadbalancer.org

On 7 February 2017 at 01:55, Hoang Le Trung wrote:
Hi

I use HAproxy to load balance my backend servers.
But I have a problem when using a cookie.
When a cookie is present, the same backend server is used until it dies, so 
that server gets overloaded while the other servers sit free.
If I don't use a cookie, each client needs to pass authentication to request 
data from the backend servers. It works, but it takes a long time to finish 
many requests from a client.
So is there any solution for my case? I want HAproxy to load balance the 
session between the client and the backend servers: when the client sends 
requests, they are balanced across the backend servers (not pinned to a single 
server as with a cookie), and the client does not need to re-authenticate on 
subsequent requests.


Thanks!
Best  Regards,



This e-mail may contain confidential or privileged information. If you received 
this e-mail by mistake, please don't forward it to anyone else, please erase it 
from your device and let me know so I don't do it again.





Re: Lua sample fetch logging ends up in response when doing http-request redirect

2017-02-07 Thread Willy Tarreau
On Tue, Feb 07, 2017 at 06:37:09PM +, Jesse Schulman wrote:
> Thank you for the update, we are running the patch Thierry provided with
> success, but we only do a lua call within the %[] almost identically to the
> simple reproducer I provided.  I *think* we are safe considering we don't
> do any redirect in the way that your (Willy's) reproducer is doing it.

OK that's fine but be careful, any implicit type cast or any converter
involving a string can simply break with this patch. It may be fine in
your specific use case but I'm saying this so that others don't blindly
apply it.

> We will definitely look to upgrade to the next available stable version
> that includes the proper fix.

I now see how to address it in a future-proof way that will also help us
close this thing for other existing areas and possibly future designs.
It should be done by tomorrow (I hope so).

Thanks,
Willy



Re: Debug Log: Response headers logged before rewriting

2017-02-07 Thread Daniel Schneller
Hello everyone!

While I have since figured out what my original problem was, the original 
question remains.

Is this intentional, am I missing something, or both? :)

Cheers,
Daniel


> On 3. Feb. 2017, at 13:40, Daniel Schneller 
>  wrote:
> 
> Hi there!
> 
> I am currently trying to figure out a problem with request and response header 
> rewriting.
> To make things easier I run haproxy in debug mode, so I get the client/server 
> conversation all dumped to my terminal.
> I am wondering, however, if I am missing something, because apparently the 
> output of the response shows only what the backend server sent in response to 
> a request, but any changes I make to the response headers are not to be seen 
> in haproxy’s output. 
> 
> In my case I have a 
> 
> http-response replace-header Location '(http|https):\/\/my.domain\/(.*)' '/\2'
> 
> which appears to work, because the client gets the rewritten response, but 
> the debug output looks like this (somewhat redacted)
> 
> 002:front.accept(000b)=0012 from [1.2.3.4:62699]
> 002:front.clireq[0012:]: GET 
> /authorize?client_id=xxx_uri=yyy=zzz_type=code 
> HTTP/1.1
> 002:front.clihdr[0012:]: Host: my.domain
> 
> 
> 002:back.srvrep[0012:0013]: HTTP/1.1 302 Found
> 002:back.srvhdr[0012:0013]: Server: Apache-Coyote/1.1
> 002:back.srvhdr[0012:0013]: Location: 
> https://my.domain/login?client_id=xxx_uri=yyy_type=code
>   ^
>   | to be removed |
> 
> 
> 003:front.clireq[0012:0013]: GET 
> /login?client_id=xxx_uri=yyy_type=code HTTP/1.1
>^^^
> | obviously removed
> 
> 003:front.clihdr[0012:0013]: Host: my.domain
> …
> 
> 
> This is just one of the rewrites that happen, and it makes things more 
> cumbersome to debug, because I need to capture both the server’s and the 
> client’s logs and merge them together.
> 
> Is there a switch or config setting I am missing that would show what the 
> server actually puts on the wire towards the client?
> 
> Thanks
> Daniel
> 
> 
> 
> -- 
> Daniel Schneller
> Principal Cloud Engineer
> 
> CenterDevice GmbH  | Hochstraße 11
>   | 42697 Solingen
> tel: +49 1754155711| Deutschland
> daniel.schnel...@centerdevice.de   | www.centerdevice.de
> 
> Geschäftsführung: Dr. Patrick Peschlow, Dr. Lukas Pustina,
> Michael Rosbach, Handelsregister-Nr.: HRB 18655,
> HR-Gericht: Bonn, USt-IdNr.: DE-815299431
> 
> 




Re: Debug Log: Response headers logged before rewriting

2017-02-07 Thread Skarbek, John
I’ve run into this issue in the past. It’d be great if someone could provide 
some insight. I ended up blogging about this in the past: 
http://jtslear.github.io/haproxy-url-rewrite-logging-double-take/


--
John Skarbek


On February 7, 2017 at 14:00:25, Daniel Schneller 
(daniel.schnel...@centerdevice.com) 
wrote:

Hello everyone!

While I have since figured out what my original problem was, the original 
question remains.

Is this intentional, am I missing something, or both? :)

Cheers,
Daniel






Re: Lua sample fetch logging ends up in response when doing http-request redirect

2017-02-07 Thread Jesse Schulman
Thank you for the update, we are running the patch Thierry provided with
success, but we only do a lua call within the %[] almost identically to the
simple reproducer I provided.  I *think* we are safe considering we don't
do any redirect in the way that your (Willy's) reproducer is doing it.

We will definitely look to upgrade to the next available stable version
that includes the proper fix.

Thanks again!
Jesse

On Tue, Feb 7, 2017 at 3:09 AM Willy Tarreau  wrote:

> On Tue, Feb 07, 2017 at 11:21:20AM +0100, thierry.fourn...@arpalert.org
> wrote:
> > Hi,
> >
> > This bug should be backported from 1.5 to 1.7, and obviously in 1.8.
> > Unfortunately, the problem is not cleanly fixed (it is just moved), so we
> > work on another - and definitive - fix.
>
> Indeed, just to give an idea, it breaks this :
>
>http-request redirect prefix "%[src,lower,base64]"
>
> $ curl -I http://127.0.0.1:8000/log
> MTI3LjAuMC4xFound
> Cache-Control: no-cache
> Content-length: 0
> Location: MTI3LjAuMC4x/log
> Connection: close
>
> I have an idea about a way to make the trash allocations safer, I may
> come up with a patch. At least we have two distinct reproducers now.
>
> Willy
>
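As a side note for readers of this thread, the pattern under discussion is a Lua sample fetch used inside a %[] expression; a minimal illustrative sketch (the fetch name `redir_target` and the message are made up, not taken from the reproducers):

```lua
-- redirect.lua: registers a sample fetch usable as %[lua.redir_target]
core.register_fetches("redir_target", function(txn)
    -- on the affected versions, a log emitted here could corrupt the
    -- redirect response built from the surrounding %[] expression
    txn:Info("computing redirect target")
    return "https://www.google.com/"
end)
```

In haproxy.cfg this would be loaded with `lua-load` and used as, e.g., `http-request redirect location %[lua.redir_target]`.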


Dynamically manage server SSL certificates?

2017-02-07 Thread Cedric Maion
Hi,

I'm thinking about using HAProxy to terminate SSL connections for
thousands of domains on a single frontend (using SNI).

Certificates will obviously need to be added/removed/renewed quite
regularly.

Right now it seems that the usual strategy to manage this is to maintain
the list of all certificates in a directory and reload haproxy
whenever needed.
However, from what I understand, this has the following drawbacks:
- whenever haproxy soft-restarts, new connections might be dropped
- very slow startup time for many SSL certificates (which also drops
  traffic during that time?)
- loss of state (e.g., SSL session cache, stick tables, non persisted
  ACLs...)

A great feature would be to be able to dynamically add/remove SSL
certificates (or reload them all) from a running haproxy instance,
through the stat socket - in a way that doesn't drop traffic nor block
haproxy.
Is there some work planned/in progress on this subject?
Is there a way to help here?

Or did I miss another way to solve this?

Thanks!
Kind regards,

  Cedric
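For comparison, the reload-based workflow described above boils down to something like this (all paths are illustrative):

```shell
# add or renew a certificate in the directory the frontend binds to,
# e.g. bind :443 ssl crt /etc/haproxy/certs/
cp example.com.pem /etc/haproxy/certs/

# soft reload: the new process takes over the listeners and the old
# one (-sf) serves out its existing connections before exiting
haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid \
        -sf $(cat /run/haproxy.pid)
```

This is exactly the approach whose drawbacks (possible connection drops during the takeover, slow startup with many certificates, loss of state) motivate the question.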




Re: Strange behavior of sample fetches in http-response replace-header option

2017-02-07 Thread Holger Just
Hi all,

I just checked and the issue is still present in current master. Could
you maybe have a look at this issue?

It smells a bit like this could potentially be connected to the issue
discussed in the thread "Lua sample fetch logging ends up in response
when doing http-request redirect". However, I couldn't reproduce my
issue with `http-request redirect`, neither with the patch nor without,
so it might also be a red herring.

Regards,
Holger

Holger Just wrote:
> Hi there,
> 
> I observed some strange behavior when trying to use a `http-response
> replace-header` rule. As soon as I start using fetched samples in the
> replace-fmt string, the resulting header value is garbled or empty
> (depending on the HAProxy version).
> 
> Please consider the config in the attachment of this mail (in order to
> preserve newlines properly). As you can see, we add a Set-Cookie header
> to the response in the backend which is altered again in the frontend.
> Specifically, the configuration intends to replace the expires tag of
> the cookie as set by the backend and set a new value.
> 
> With this configuration, I observe the following headers when running a
> `curl http://127.0.0.1:8000`:
> 
> HAProxy 1.5.14 and haproxy-1.5 master:
> 
> Set-Cookie: WeWeWeWeWeWeWeWeWeWeWeWeWeWeWeW
> X-Expires: Wed, 05 Oct 2016 11:51:01 GMT
> 
> haproxy-1.6 master and current haproxy master:
> 
> Set-Cookie:
> X-Expires: Wed, 05 Oct 2016 11:51:01 GMT
> 
> The `http-response replace-header` rule works fine if we replace the
> sample fetch with a variable like %T. In that case, the value is
> properly replaced. Any use of a sample fetch results in the above
> garbled output.
> 
> The exact same behavior can be observed if a "real" backend is setting
> the original Set-Cookie header instead of using the listen / backend
> hack in the self-contained config I posted.
> 
> Am I doing something wrong here or is it possible that there is an issue
> with applying sample fetches here?
> 
> 
> I tested with both freshly compiled HAProxies on MacOS with `make
> TARGET=generic` as well as on a HAProxy 1.5.14 with the following stats:
> 
> HA-Proxy version 1.5.14 2015/07/02
> Copyright 2000-2015 Willy Tarreau 
> 
> Build options :
>   TARGET  = linux2628
>   CPU = generic
>   CC  = gcc
>   CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing
>   OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1
> 
> Default settings :
>   maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
> 
> Encrypted password support via crypt(3): yes
> Built with zlib version : 1.2.8
> Compression algorithms supported : identity, deflate, gzip
> Built with OpenSSL version : OpenSSL 1.0.2j  26 Sep 2016
> Running on OpenSSL version : OpenSSL 1.0.2j  26 Sep 2016
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports prefer-server-ciphers : yes
> Built with PCRE version : 8.35 2014-04-04
> PCRE library supports JIT : no (USE_PCRE_JIT not set)
> Built with transparent proxy support using: IP_TRANSPARENT
> IPV6_TRANSPARENT IP_FREEBIND
> 
> Available polling systems :
>   epoll : pref=300,  test result OK
>poll : pref=200,  test result OK
>  select : pref=150,  test result OK
> Total: 3 (3 usable), will use epoll.
> 
> Thanks for your help,
> Holger



Re: [PATCH] BUILD: ssl: fix to build (again) with boringssl

2017-02-07 Thread Emmanuel Hocdet
you need:
ADDLIB="-lpthread -ldecrepit"
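Put together, a build invocation might look like this (the TARGET and the boringssl paths are assumptions to adapt locally):

```shell
make TARGET=linux2628 USE_OPENSSL=1 \
     SSL_INC=/opt/boringssl/include \
     SSL_LIB=/opt/boringssl/build \
     ADDLIB="-lpthread -ldecrepit"
```

SSL_INC/SSL_LIB point haproxy's Makefile at the static boringssl headers and libraries; ADDLIB appends the extra libraries mentioned above to the link line.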

> On 7 Feb 2017 at 16:09, Igor Pav wrote:
> 
> Hi Emmanuel, I built with the static lib, but no luck. Can you provide some
> build details? Thanks.
> 
> 
> On Tue, Feb 7, 2017 at 9:12 PM, Emmanuel Hocdet  wrote:
>> Hi Igor,
>> I build haproxy with the boringssl static library to avoid any conflict with 
>> the openssl shared lib.
>> It also needs to be linked with libdecrepit (boringssl).
>> 
>>> On 30 Jan 2017 at 14:28, Igor Pav wrote:
>>> 
>>> sorry for the unclear question; it's quite simple: build haproxy from git
>>> with boringssl (DBUILD_SHARED_LIBS=1) and just configure a simple SSL
>>> frontend.
>>> 
>>> On Mon, Jan 30, 2017 at 5:42 PM, Willy Tarreau  wrote:
 On Mon, Jan 30, 2017 at 04:07:33PM +0800, Igor Pav wrote:
> any idea with error?
> 
> undefined symbol: BIO_read_filename
 
 I doubt you'll get any useful response if you don't provide at least a
 bit of information, such as what ssl lib you're using, whether or not
 this is with the patch applied, build options maybe, etc...
 
 Willy
>>> 
>> 
> 



Re: [PATCH] BUILD: ssl: fix to build (again) with boringssl

2017-02-07 Thread Igor Pav
Hi Emmanuel, I built with the static lib, but no luck. Can you provide some
build details? Thanks.

/build/slib/libcrypto.a(thread_pthread.c.o): In function `CRYPTO_MUTEX_init':
/root/boringssl/crypto/thread_pthread.c:31: undefined reference to
`pthread_rwlock_init'
/build/slib/libcrypto.a(thread_pthread.c.o): In function
`CRYPTO_MUTEX_lock_read':
/root/boringssl/crypto/thread_pthread.c:37: undefined reference to
`pthread_rwlock_rdlock'
/build/slib/libcrypto.a(thread_pthread.c.o): In function
`CRYPTO_MUTEX_lock_write':
/root/boringssl/crypto/thread_pthread.c:43: undefined reference to
`pthread_rwlock_wrlock'
/build/slib/libcrypto.a(thread_pthread.c.o): In function
`CRYPTO_MUTEX_unlock_read':
/root/boringssl/crypto/thread_pthread.c:49: undefined reference to
`pthread_rwlock_unlock'
/build/slib/libcrypto.a(thread_pthread.c.o): In function
`CRYPTO_MUTEX_unlock_write':
/root/boringssl/crypto/thread_pthread.c:55: undefined reference to
`pthread_rwlock_unlock'
/build/slib/libcrypto.a(thread_pthread.c.o): In function `CRYPTO_MUTEX_cleanup':
/root/boringssl/crypto/thread_pthread.c:61: undefined reference to
`pthread_rwlock_destroy'
/build/slib/libcrypto.a(thread_pthread.c.o): In function
`CRYPTO_STATIC_MUTEX_lock_read':
/root/boringssl/crypto/thread_pthread.c:65: undefined reference to
`pthread_rwlock_rdlock'
/build/slib/libcrypto.a(thread_pthread.c.o): In function
`CRYPTO_STATIC_MUTEX_lock_write':
/root/boringssl/crypto/thread_pthread.c:71: undefined reference to
`pthread_rwlock_wrlock'
/build/slib/libcrypto.a(thread_pthread.c.o): In function
`CRYPTO_STATIC_MUTEX_unlock_read':
/root/boringssl/crypto/thread_pthread.c:77: undefined reference to
`pthread_rwlock_unlock'
/build/slib/libcrypto.a(thread_pthread.c.o): In function
`CRYPTO_STATIC_MUTEX_unlock_write':
/root/boringssl/crypto/thread_pthread.c:83: undefined reference to
`pthread_rwlock_unlock'
/build/slib/libcrypto.a(thread_pthread.c.o): In function `CRYPTO_once':
/root/boringssl/crypto/thread_pthread.c:89: undefined reference to
`pthread_once'
/build/slib/libcrypto.a(thread_pthread.c.o): In function `thread_local_init':
/root/boringssl/crypto/thread_pthread.c:126: undefined reference to
`pthread_key_create'
/build/slib/libcrypto.a(thread_pthread.c.o): In function
`CRYPTO_get_thread_local':
/root/boringssl/crypto/thread_pthread.c:135: undefined reference to
`pthread_getspecific'
/build/slib/libcrypto.a(thread_pthread.c.o): In function
`CRYPTO_set_thread_local':
/root/boringssl/crypto/thread_pthread.c:150: undefined reference to
`pthread_getspecific'
/root/boringssl/crypto/thread_pthread.c:158: undefined reference to
`pthread_setspecific'
collect2: error: ld returned 1 exit status
make: *** [haproxy] Error 1

On Tue, Feb 7, 2017 at 9:12 PM, Emmanuel Hocdet  wrote:
> Hi Igor,
> I build haproxy with the boringssl static library to avoid any conflict with 
> the openssl shared lib.
> It also needs to be linked with libdecrepit (boringssl).
>
>> On 30 Jan 2017 at 14:28, Igor Pav wrote:
>>
>> sorry for the unclear question; it's quite simple: build haproxy from git
>> with boringssl (DBUILD_SHARED_LIBS=1) and just configure a simple SSL
>> frontend.
>>
>> On Mon, Jan 30, 2017 at 5:42 PM, Willy Tarreau  wrote:
>>> On Mon, Jan 30, 2017 at 04:07:33PM +0800, Igor Pav wrote:
 any idea with error?

 undefined symbol: BIO_read_filename
>>>
>>> I doubt you'll get any useful response if you don't provide at least a
>>> bit of information, such as what ssl lib you're using, whether or not
>>> this is with the patch applied, build options maybe, etc...
>>>
>>> Willy
>>
>



Re: ROI Driven Campaign For haproxy.org

2017-02-07 Thread Caroll Acosta



Hello *haproxy.org*  Team,



I was fascinated visiting your website – *haproxy.org*
. Clearly, your company has a rich and interactive
website and hopefully you make adequate online traffic, sales or lead
generation. No?



Allow me to put together some issues that deter your success:

1. Fewer back links vis-à-vis competition.

2. The website should be enhanced to meet the latest Google algorithm update.

3. Social media efforts addressing company marketing/branding needs.

4. Website landing pages must be tweaked to generate more sales.



Needless to say, this is a partial list. You can request a “*Free
Website Audit*” that reveals your current situation with a self explanatory
roadmap describing how to remain competitive and at the same time achieve
optimal return on investment.



Please write back if you have any questions / provide me your best number
to discuss this further. I assure, you won’t hesitate speaking with me.



Sincerely,


*Caroll Acosta Online Marketing Consultant*

--


*PS1*: I am not spamming. I have studied your website, prepared an audit
report and believe I can help with your business promotion.

*PS2: *#1 Ranking & More Organic Traffic Improvements.

*PS3:* 100% Money Back Guarantee, if no results.

*PS4:* *3 Months FREE SEO, SMO & ORM services included.*


Re: 1.8dev 405ff31e31eb1cbdc76ba0d93c6db4c7a3fd497a regression ?

2017-02-07 Thread Emmanuel Hocdet
Hi Jarno,

I'm not able to reproduce this crash with current 1.8dev and openssl 1.0.2j.

Manu

> On 5 Feb 2017 at 20:04, Jarno Huuskonen wrote:
> 
> Hi,
> 
> Commit 405ff31e31eb1cbdc76ba0d93c6db4c7a3fd497a
> (BUG/MINOR: ssl: assert on SSL_set_shutdown with BoringSSL) is causing
> trouble (with centos7 + openssl-1.0.1e-60.el7.x86_64).
> 
> If I have a backend server with ssl and httpchk enabled I get a crash:
> (gdb) bt
> #0  0x77218419 in sk_free () from /lib64/libcrypto.so.10
> #1  0x7719f199 in int_free_ex_data () from /lib64/libcrypto.so.10
> #2  0x775641fd in SSL_free () from /lib64/libssl.so.10
> #3  0x0040e332 in ssl_sock_close (conn=0x723ac0) at 
> src/ssl_sock.c:4012
> #4  0x0045d1b6 in conn_force_close (conn=0x723ac0)
>at include/proto/connection.h:151
> #5  wake_srv_chk (conn=0x723ac0) at src/checks.c:1406
> #6  0x0049b6e6 in conn_fd_handler (fd=)
>at src/connection.c:141
> #7  0x004a7304 in fd_process_cached_events () at src/fd.c:223
> #8  0x00409d7d in run_poll_loop () at src/haproxy.c:1598
> #9  main (argc=4, argv=0x7fffdc78) at src/haproxy.c:1957
> 
> This is fairly minimal config that fails for me:
> global
>   log /dev/log local2 info
>   stats socket /tmp/stats level admin
> 
> defaults
>   mode http
> 
> frontend test4
>   bind ipv4@127.0.0.1:8083
>   default_backend test_be2
> 
> backend test_be2
>   option httpchk GET /crashme\ HTTP/1.1\r\nHost:\ 
> some.example.org\r\nConnection:\ close
>   server srv1 some.ip.with.ssl:443 id 1 check ssl verify none
> 
> -Jarno
> 
> -- 
> Jarno Huuskonen
> 




Re: [PATCH] BUILD: ssl: fix to build (again) with boringssl

2017-02-07 Thread Emmanuel Hocdet
Hi Igor,
I build haproxy with the boringssl static library to avoid any conflict with 
the openssl shared lib.
It also needs to be linked with libdecrepit (boringssl).

> On 30 Jan 2017 at 14:28, Igor Pav wrote:
> 
>> sorry for the unclear question; it's quite simple: build haproxy from git
>> with boringssl (DBUILD_SHARED_LIBS=1) and just configure a simple SSL
>> frontend.
> 
> On Mon, Jan 30, 2017 at 5:42 PM, Willy Tarreau  wrote:
>> On Mon, Jan 30, 2017 at 04:07:33PM +0800, Igor Pav wrote:
>>> any idea with error?
>>> 
>>> undefined symbol: BIO_read_filename
>> 
>> I doubt you'll get any useful response if you don't provide at least a
>> bit of information, such as what ssl lib you're using, whether or not
>> this is with the patch applied, build options maybe, etc...
>> 
>> Willy
> 




RE: frequently reload haproxy without sleep time result in old haproxy process never dying

2017-02-07 Thread Pierre Cheynier
Hi,

I guess you're using a systemd-based distro.  You should have a look at this 
thread https://www.mail-archive.com/haproxy@formilux.org/msg23867.html.

The patches were applied to 1.7, but apparently backported to 1.6.11 and 1.5.19 
since.

Now I have a clean termination of old processes, no more orphans, even when 
performing a ton of reloads.

Pierre
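For reference, with those patches applied the systemd integration of that era reloads by signalling the wrapper process, roughly like this (unit fragment shown for illustration; check the contrib/systemd unit shipped with your version):

```
[Service]
ExecStart=/usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
```

`systemctl reload haproxy` then triggers a clean re-exec instead of leaving orphaned old processes behind.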


Re: Lua sample fetch logging ends up in response when doing http-request redirect

2017-02-07 Thread Willy Tarreau
On Tue, Feb 07, 2017 at 11:21:20AM +0100, thierry.fourn...@arpalert.org wrote:
> Hi, 
> 
> This bug should be backported from 1.5 to 1.7, and obviously in 1.8.
> Unfortunately, the problem is not cleanly fixed (it is just moved), so we
> work on another - and definitive - fix.

Indeed, just to give an idea, it breaks this :

   http-request redirect prefix "%[src,lower,base64]"

$ curl -I http://127.0.0.1:8000/log
MTI3LjAuMC4xFound
Cache-Control: no-cache
Content-length: 0
Location: MTI3LjAuMC4x/log
Connection: close

I have an idea about a way to make the trash allocations safer, I may
come up with a patch. At least we have two distinct reproducers now.

Willy



Re: Lua sample fetch logging ends up in response when doing http-request redirect

2017-02-07 Thread thierry . fournier
Hi, 

This bug should be backported from 1.5 to 1.7, and obviously in 1.8.
Unfortunately, the problem is not cleanly fixed (it is just moved), so we
work on another - and definitive - fix.

Thierry

On Mon, 06 Feb 2017 17:41:15 +
Jesse Schulman  wrote:

> Any idea on if this will be going into 1.7.3 or only into 1.8?
> 
> Thanks!
> 
> On Sun, Jan 29, 2017 at 9:07 PM Jesse Schulman  wrote:
> 
> > That fixes the issue for me, thank you for the fast response!  Will this
> > be in 1.7.3, and is there any idea of when 1.7.3 will be released?
> >
> > Thanks!
> > Jesse
> >
> > On Fri, Jan 27, 2017 at 11:02 PM  wrote:
> >
> > Hi,
> >
> > thanks for the bug report. I already encountered this with another function
> > than redirect. Can you try the attached patch?
> >
> > Thierry
> >
> >
> > On Fri, 27 Jan 2017 22:50:00 +
> > Jesse Schulman  wrote:
> >
> > > I've found what seems to be a bug when I log from within a Lua sample
> > fetch
> > > that I am using to determine a redirect URL.  It seems that whatever is
> > > logged from the lua script is written to the log file as expected, but it
> > > also is replacing the response, making the response invalid and breaking
> > > the redirection.
> > >
> > > Thanks,
> > > Jesse
> > >
> > > Here's what I'm seeing:
> > >
> > > *no logging: curl -v http://lab.mysite.com *
> > > > GET / HTTP/1.1
> > > > Host: lab.mysite.com
> > > > User-Agent: curl/7.51.0
> > > > Accept: */*
> > > >
> > > < HTTP/1.1 302 Found
> > > < Cache-Control: no-cache
> > > < Content-length: 0
> > > < Location: https://www.google.com/
> > > < Connection: close
> > > <
> > >
> > > *issue seen here with logging the string "LOG MSG" from lua script: curl
> > -v
> > > http://lab.mysite.com/log *
> > > > GET /log HTTP/1.1
> > > > Host: lab.mysite.com
> > > > User-Agent: curl/7.51.0
> > > > Accept: */*
> > > >
> > > LOG MSG 302 Found
> > > Cache-Control: no-cache
> > > Content-length: 0
> > > Location: https://www.google.com/log
> > > Connection: close
> > >
> > >
> > > Here are steps to reproduce and my current setup:
> > >
> > > */etc/redhat-release:*
> > > CentOS Linux release 7.2.1511 (Core)
> > >
> > > *uname -rv*
> > > 3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18 19:05:49 UTC 2016
> > >
> > > *haproxy -vv:*
> > > HA-Proxy version 1.7.2 2017/01/13
> > > Copyright 2000-2017 Willy Tarreau 
> > >
> > > Build options :
> > >   TARGET  = linux2628
> > >   CPU = generic
> > >   CC  = gcc
> > >   CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
> > >   OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1
> > > USE_LUA=1 USE_PCRE=1
> > >
> > > Default settings :
> > >   maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
> > >
> > > Encrypted password support via crypt(3): yes
> > > Built with zlib version : 1.2.7
> > > Running on zlib version : 1.2.7
> > > Compression algorithms supported : identity("identity"),
> > > deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
> > > Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
> > > Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
> > > OpenSSL library supports TLS extensions : yes
> > > OpenSSL library supports SNI : yes
> > > OpenSSL library supports prefer-server-ciphers : yes
> > > Built with PCRE version : 8.32 2012-11-30
> > > Running on PCRE version : 8.32 2012-11-30
> > > PCRE library supports JIT : no (USE_PCRE_JIT not set)
> > > Built with Lua version : Lua 5.3.3
> > > Built with transparent proxy support using: IP_TRANSPARENT
> > IPV6_TRANSPARENT
> > > IP_FREEBIND
> > >
> > > Available polling systems :
> > >   epoll : pref=300,  test result OK
> > >poll : pref=200,  test result OK
> > >  select : pref=150,  test result OK
> > > Total: 3 (3 usable), will use epoll.
> > >
> > > Available filters :
> > > [COMP] compression
> > > [TRACE] trace
> > > [SPOE] spoe
> > >
> > > *haproxy.cfg:*
> > > global
> > >log 127.0.0.1 local2 debug
> > >lua-load /etc/haproxy/lua/redirect.lua
> > >chroot /var/lib/haproxy
> > >pidfile /var/run/haproxy.pid
> > >maxconn 256
> > >tune.ssl.default-dh-param 1024
> > >stats socket /var/run/haproxy.sock mode 600 level admin
> > >stats timeout 2m #Wait up to 2 minutes for input
> > >user haproxy
> > >group haproxy
> > >daemon
> > >
> > > defaults
> > >log global
> > >mode tcp
> > >option tcplog
> > >option dontlognull
> > >timeout connect 10s
> > >timeout client 60s
> > >timeout server 60s
> > >timeout tunnel 600s
> > >
> > > frontend http
> > >bind "${BIND_IP}:80"
> > >mode http
> > >option httplog
> > >option forwardfor
> > >capture request header Host len 32
> > >log-format %hr\ %r\ %ST\ %b/%s\ %ci:%cp\ %B\ %Tr
> > >
> > >http-request redirect prefix 

Re: Haproxy load balance with cookie

2017-02-07 Thread Aaron West
Hi Hoang,

Could we get your HAproxy config please? An example of both scenarios would
be best.

It may help us to better understand your situation.

Aaron West

Loadbalancer.org Limited
+44 (0)330 380 1064
www.loadbalancer.org

On 7 February 2017 at 01:55, Hoang Le Trung  wrote:

> Hi
>
>
>
> I use HAproxy to load balance my backend servers.
>
> But I have a problem when using a cookie.
>
> When a cookie is present, the same backend server is used until it dies,
> so that server gets overloaded while the other servers sit free.
>
> If I don't use a cookie, each client needs to pass authentication to request
> data from the backend servers. It works, but it takes a long time to finish
> many requests from a client.
>
> So is there any solution for my case? I want HAproxy to load balance the
> session between the client and the backend servers: when the client sends
> requests, they are balanced across the backend servers (not pinned to one
> server as with a cookie), and the client does not need to re-authenticate
> on subsequent requests.
>
>
>
>
>
> Thanks!
>
> Best  Regards,
>
> 
>
> --
> This e-mail may contain confidential or privileged information. If you
> received this e-mail by mistake, please don't forward it to anyone else,
> please erase it from your device and let me know so I don't do it again.
>