Re: Maybe I've found an haproxy bug?

2016-04-26 Thread CJ Ess
That sounds exactly like the issue; the solution seems to be to upgrade.
Thanks for the pointer!

On Tue, Apr 26, 2016 at 6:12 PM, Cyril Bonté  wrote:

> Hi,
>
>
> Le 26/04/2016 23:41, CJ Ess a écrit :
>
>> Maybe I've found an haproxy bug? I am wondering if anyone else can
>> reproduce this -
>>
>> You'll need to send two requests w/ keep-alive:
>>
>> curl -v -v -v http://127.0.0.1/something http://127.0.0.1/
>>
>> On my system the first request returns a 404 error (but I've also seen
>> this with 200 responses - the 404 was highly customized with a chunked
>> response body, and the 200 also had a chunked response body, but I don't
>> know whether the chunked encoding is relevant), and the second request
>> returns a 504 error (gateway timeout) - in this case haproxy times the
>> connection out after 15 seconds.
>>
>> When you run curl, the first request completes just fine (you'll get
>> the 404 response) and the second request times out, at which point
>> the connection closes with no response of any sort.
>>
>> (curl tries to be smart and resends the request after the connection
>> closes, but it does note that the connection died)
>>
>> I'm using HAProxy 1.5.12 and can reproduce this at will.
>>
>
> I'm not sure I follow your explanation (it lacks details), but it looks
> like behaviour that has already been changed in 1.5.16 by this commit:
>
> http://www.haproxy.org/git?p=haproxy-1.5.git;a=commit;h=ef8a113d59e89b2214adf7ab9f9b0b75905a7050
>
> Please upgrade and retry.
>
> --
> Cyril Bonté
>


Re: Maybe I've found an haproxy bug?

2016-04-26 Thread Cyril Bonté

Hi,

Le 26/04/2016 23:41, CJ Ess a écrit :

Maybe I've found an haproxy bug? I am wondering if anyone else can
reproduce this -

You'll need to send two requests w/ keep-alive:

curl -v -v -v http://127.0.0.1/something http://127.0.0.1/

On my system the first request returns a 404 error (but I've also seen
this with 200 responses - the 404 was highly customized with a chunked
response body, and the 200 also had a chunked response body, but I don't
know whether the chunked encoding is relevant), and the second request
returns a 504 error (gateway timeout) - in this case haproxy times the
connection out after 15 seconds.

When you run curl, the first request completes just fine (you'll get
the 404 response) and the second request times out, at which point
the connection closes with no response of any sort.

(curl tries to be smart and resends the request after the connection
closes, but it does note that the connection died)

I'm using HAProxy 1.5.12 and can reproduce this at will.


I'm not sure I follow your explanation (it lacks details), but it looks
like behaviour that has already been changed in 1.5.16 by this commit:

http://www.haproxy.org/git?p=haproxy-1.5.git;a=commit;h=ef8a113d59e89b2214adf7ab9f9b0b75905a7050

Please upgrade and retry.

--
Cyril Bonté



Maybe I've found an haproxy bug?

2016-04-26 Thread CJ Ess
Maybe I've found an haproxy bug? I am wondering if anyone else can
reproduce this -

You'll need to send two requests w/ keep-alive:

curl -v -v -v  http://127.0.0.1/something http://127.0.0.1/

On my system the first request returns a 404 error (but I've also seen this
with 200 responses - the 404 was highly customized with a chunked response
body, and the 200 also had a chunked response body, but I don't know whether
the chunked encoding is relevant), and the second request returns a
504 error (gateway timeout) - in this case haproxy times the connection
out after 15 seconds.

When you run curl, the first request completes just fine (you'll get the
404 response) and the second request times out, at which point the
connection closes with no response of any sort.

(curl tries to be smart and resends the request after the connection
closes, but it does note that the connection died)

I'm using HAProxy 1.5.12 and can reproduce this at will.


sni lookup precedence order - delay default cert insertion

2016-04-26 Thread Roberto Guimaraes
Hi,

I'm not sure whether this is a valid issue, and even if it is, it may be
minor and tied to specific use cases.
I've noticed that when the same bind line has a fallback cert followed by
more specific certs, or a directory containing those certs, haproxy still
inserts the hostnames of the fallback cert first, and that cert also
happens to be the default one.

I understand that the longest match will be honored thanks to the
separate ebtrees, but whenever a hostname in the fallback cert collides
exactly with one in a more specific cert, the fallback cert will be
served, which is probably not desirable.

ex:
bind . crt foo-san.com foo.com

With both certs containing *.foo.com, foo-san.com would be the cert
served. The same applies to FQDN collisions.
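
For illustration, a hypothetical version of that bind line (the .pem
names are made up):

    # the first cert listed is the default/fallback; its hostnames
    # are inserted into the lookup trees first
    bind :443 ssl crt foo-san.pem crt foo.pem

If both PEM files carry *.foo.com, a client presenting SNI www.foo.com
is served foo-san.pem, which is the collision described above.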

I have a small patch that merely saves the hostnames of the default
(fallback) cert in a wordlist inside the bind_conf structure, and then
inserts them last, after all other certs of the config line have been
processed. With the change, haproxy consistently searches the
non-default certs first, and only serves the fallback if a more
specific match isn't found.

Any thoughts on whether this would be a positive change and/or 
whether it's needed?

thanks,
roberto





RE: Regarding client side keep-alive

2016-04-26 Thread Stefan Johansson
Many thanks!

Cheers.

-Original Message-
From: Baptiste [mailto:bed...@gmail.com] 
Sent: den 22 april 2016 14:49
To: Stefan Johansson 
Cc: haproxy@formilux.org
Subject: Re: Regarding client side keep-alive

> Basically, I want the client<>haProxy side to use keep alive, but 
> haProxy<>server side to close every connection and open a new one for 
> each request.
>
> Is it then correct to use http-server-close?

You are correct.  Set it up in your defaults section or in both the frontend 
and the backend.
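
A minimal sketch of that setup (the section contents are just an
example):

    defaults
        mode http
        # keep-alive towards the client, but open a fresh
        # server-side connection for every request
        option http-server-close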

> Also, has anybody had any issues with http-server-close in high 
> traffic environments? Like lingering connections, connections not 
> closed properly etc.

This feature has been available for many years and has been very stable all 
along :) You can use it without any issue.

Baptiste


Re: map_dom vs map_str

2016-04-26 Thread Thierry FOURNIER
On Thu, 31 Mar 2016 08:36:45 +
Gerd Mueller  wrote:

> Hi all,
> 
> just read in the documentation that map_str uses a balanced tree vs.
> map_dom is using a list. So why should I use map_dom?


Hi,

from the documentation:

   domain match : check that a dot-delimited portion of the contents
  exactly match one of the provided string patterns.

The string match does not check the dot delimiters.

If you're matching the Host header, in most cases the full domain is
known. If you write:

   use_backend 
%[req.hdr(host),lower,map(/etc/haproxy/hostname2backend.map,default_backend)]

the domain contained in the Host header must be an exact match of one of
the domain names in your file. For example, if your file contains

   google.com

a Host header of "www.google.com" does not match, while "google.com"
matches.
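
To make the difference concrete, assume a hypothetical map file
/etc/haproxy/hostname2backend.map with these example entries:

   # domain -> backend name
   google.com    bk_google
   example.org   bk_example

With map (string match), only a Host header of exactly "google.com"
selects bk_google. With map_dom (domain match), "www.google.com" also
selects bk_google, because its dot-delimited portion "google.com"
matches the pattern.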


> Right now I am
> using following:
> 
> use_backend
> %[req.hdr(host),lower,map_dom(/etc/haproxy/hostname2backend.map,default
> _backend)] 
> 
> Would map_str improve the performance?


It depends on the number of entries in your "hostname2backend.map"
file. Approximately, between 0 and 50 entries, a list is quicker than a
tree; above 50 lines, a tree is faster. (50 is not an exact limit, just
a rough estimate.)

Thierry



Re: LUA: Skip HTTP headers and forward TCP traffic

2016-04-26 Thread Thierry FOURNIER
Hi,

It's strange.

Yielding is normally allowed, except while the function is processing
the last set of data.

I suppose your getline() calls finish reading the header lines and then
start reading the SSH payload. The SSH payload is not terminated by a
'\n', so getline() waits for more data. No more data is available and
the function yields; since we are processing the last data of the
stream, the yield is not authorized.

So, first: this behaviour is not user friendly :(.

Second: after reading the empty HTTP line '\r\n', you must not call
getline() again. Your loop expects an empty line (line == ""), but your
debug trace shows ".." for that line. The log system replaces control
characters, and those ".." are probably "\r\n".

If you wait for the pattern "\r\n", your script will probably run.
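
A minimal sketch of the corrected loop (assuming getline() keeps the
line terminator, as the debug trace suggests):

    -- skip headers until the blank line, which arrives as "\r\n"
    repeat
      txn:Debug(line)
      line = txn.req:getline()
    until line == nil or line == "\r\n"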

Thierry



On Fri, 8 Apr 2016 12:46:26 +0200
Florian Aßmann  wrote:

> Hi everybody,
> 
> I'm trying to connect to an SSH process via proxytunnel. The incoming request 
> carries normal HTTP headers that I have to skip in order to forward the 
> encrypted SSH traffic to an SSH process. I thought I could tackle this task 
> using Lua and register_action, but since it's my first time working with Lua 
> and haproxy, I got stuck. I hope someone can help me with this 
> topic.
> 
> ### Output:
> Apr 08 10:15:48 HOST docker[4059]: [info] 098/101548 (12) : connect-ssh
> Apr 08 10:15:48 HOST docker[4059]: [debug] 098/101548 (12) : CONNECT 
> 127.0.0.1:22 HTTP/1.1..
> Apr 08 10:15:48 HOST docker[4059]: [debug] 098/101548 (12) : Host: FQDN..
> Apr 08 10:15:48 HOST docker[4059]: [debug] 098/101548 (12) : 
> Proxy-Connection: Keep-Alive..
> Apr 08 10:15:48 HOST docker[4059]: [debug] 098/101548 (12) : 
> X-Forwarded-Proto: https..
> Apr 08 10:15:48 HOST docker[4059]: [debug] 098/101548 (12) : X-Forwarded-For: 
> IP..
> Apr 08 10:15:48 HOST docker[4059]: [debug] 098/101548 (12) : ..
> Apr 08 10:15:53 HOST docker[4059]: [ALERT] 098/101553 (12) : Lua function 
> 'connect-ssh': yield not allowed.
> 
> ### haproxy.cfg:
> global
> lua-load /etc/haproxy/proxytunnel.lua
> 
> …
> 
> frontend multiplex-ssh-http
> bind :80
> mode tcp
> option tcplog
> tcp-request inspect-delay 5s
> tcp-request content lua.connect-ssh if METH_CONNECT
> 
> # Detect SSH connection attempts
> acl client_attempts_ssh payload(0,7) -m bin 5353482d322e30
> 
> use_backend tcp-ssh if client_attempts_ssh
> default_backend http-nginx
> 
> backend tcp-ssh
> mode tcp
> option  tcplog
> server ssh dockerhost:22
> timeout server 2h
> 
> …
> 
> ### proxytunnel.lua:
> function string.starts(haystack,  needle)
>   return haystack:sub(1, needle:len()) == needle
> end
> 
> core.register_action('connect-ssh', { "tcp-req" }, function(txn)
>   local line = txn.req:getline();
> 
>   txn:Info("connect-ssh");
> 
>   if line == nil then
> txn:Debug("Got nil, skipping...");
> return
>   elseif not line:starts("CONNECT 127.0.0.1:22 HTTP/1.1") then
> txn:Debug("No match, got " .. line .. ", skipping...");
> return
>   end
> 
>   repeat -- skip headers
> txn:Debug(line);
> line = txn.req:getline();
>   until line == nil or line == "";
> 
>   return
> 
> end);
> 
> Kind regards
> Florian Aßmann


-- 




Re: Issue setting limits from Systemd to Haproxy service

2016-04-26 Thread Lukas Tribus

Hi Ricardo,


haproxy sets those values itself. If you want custom settings, adjust 
the haproxy configuration (although it's not recommended, because the 
value is automatically computed from the maxconn configuration):


http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#3.1-ulimit-n


Also see:
http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#3.2-maxconn


I would suggest you only configure maxconn in the haproxy configuration, 
and not enforce artificial ulimits, otherwise things will break when the 
kernel starts enforcing those limits.
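
A minimal sketch (the maxconn value is only an example):

    global
        # haproxy derives the file-descriptor limit it needs from
        # maxconn; no explicit ulimit-n or LimitNOFILE is required
        maxconn 20000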




Regards,

Lukas




Issue setting limits from Systemd to Haproxy service

2016-04-26 Thread Ricardo Fraile
Hello,



I'm trying to limit the number of file descriptors using the
"LimitNOFILE" directive in the following systemd unit:

[Unit]
Description=HAProxy Load Balancer
After=network.target

[Service]
ExecStartPre=/usr/local/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
ExecStart=/usr/local/sbin/haproxy-systemd-wrapper
-f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always
LimitNOFILE=5 # For testing only...

[Install]
WantedBy=multi-user.target



But it only works for the first process spawned, which is
haproxy-systemd-wrapper:

root    4421  0.0  0.1  17084  1508  ?  Ss  10:11  0:00  /usr/local/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
nobody  4423  0.0  0.4  30104  4436  ?  S   10:11  0:00   \_ /usr/local/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
nobody  4424  0.0  0.2  30104  2508  ?  Ss  10:11  0:00   \_ /usr/local/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

# cat /proc/4421/limits
Limit            Soft Limit  Hard Limit  Units
Max open files   5           5           files

# cat /proc/4423/limits
Limit            Soft Limit  Hard Limit  Units
Max open files   64013       64013       files

# cat /proc/4424/limits
Limit            Soft Limit  Hard Limit  Units
Max open files   64013       64013       files



The process listening on the socket is the last one, 4424, with the
wrong settings:

# netstat -ntap | grep haproxy
tcp   0   0 0.0.0.0:80     0.0.0.0:*   LISTEN   4424/haproxy
tcp   0   0 0.0.0.0:8088   0.0.0.0:*   LISTEN   4424/haproxy



Shouldn't haproxy-systemd-wrapper pass these values on to its children?

Is it possible to pass the limits from systemd to the listening haproxy
process?


Thanks,




Re: Haproxy running on 100% CPU and slow downloads

2016-04-26 Thread Lukas Tribus

Hi Sachin,


there is another fix Willy recently committed, it's ff9c7e24fb [1],
and it's in the snapshots [2] since 1.6.4-20160426.

This is supposed to fix the issue altogether.

Please let us know if this works for you.



Thanks,

Lukas


[1] 
http://www.haproxy.org/git?p=haproxy-1.6.git;a=commitdiff_plain;h=ff9c7e24fbbc33074e5257297e38473a3411f407

[2] http://www.haproxy.org/download/1.6/src/snapshot/