Re: [PATCH] BUG/MEDIUM: systemd: set KillMode to 'mixed'

2014-10-09 Thread Willy Tarreau
Hi Apollon,

On Wed, Oct 08, 2014 at 03:14:41PM +0300, Apollon Oikonomopoulos wrote:
> By default systemd will send SIGTERM to all processes in the service's
> control group. In our case, this includes the wrapper, the master
> process and all worker processes.
> 
> Since commit c54bdd2a the wrapper actually catches SIGTERM and survives
> to see the master process getting killed by systemd and regard this as
> an error, placing the unit in a failed state during "systemctl stop".

Then shouldn't we fix this by letting the wrapper die after receiving the
SIGTERM ? Otherwise I'm happy to merge your patch, but I'd rather ensure
that we don't encounter yet another issue.

I'm really amazed by the amount of breakage these new service managers are
causing to a simple process management that has been working well for over
40 years of UNIX existence now, and the difficulty we have to work around
this whole mess!

Thanks!
Willy




Re: Dynamic Backend Selection

2014-10-09 Thread Willy Tarreau
Hi,

On Tue, Oct 07, 2014 at 08:19:59AM -0500, B. Heath Robinson wrote:
> I am trying to use the dynamic backend selection feature of 1.5, but I am
> missing something.  Here is a snippet of my configuration:
> 
> frontend sledgehammer
> bind *:1
> option http-pretend-keepalive
> default_backend other
> capture request header X-Backend len 15
> use_backend %hr
> 
> This was my interpretation of using the log-format as the backend name.
> Can someone give me a little more info on this feature?

I think that your %hr will contain braces, which is clearly not what you
want. You'd rather simply do this to extract the header, and you can get
rid of the capture :

   use_backend %{hdr(x-backend)}

Willy




TLS handshake errors using mobile applications

2014-10-09 Thread Attila Heidrich
Dear All!

I have been using haproxy for more than a year with total satisfaction.

This is the first problem we have been unable to solve; maybe someone has
already met something similar!

There are two JBOSS servers (http, port 8080), behind a haproxy (HA config
with keepalived, but this is irrelevant at the moment).
Haproxy listens on 80 and 443, port 80 connections are usually redirected
to 443.
Certs are configured in haproxy.

There are browsers using the service without problems.
There are also mobile clients (Android and iOS), which can also use the
services in most cases.

The client occasionally asks for a URL providing a live video stream.
The stream is provided over a single long connection, in which the mobile
client asks for the live video stream (whose URL was just provided) and is
served a sequence of JPEG images. There's a player...
blahblahblah

The problem is that currently the stream must be provided on an http link,
because the https link doesn't work - haproxy responds with error 503.

We have modified the server to provide http:// links, and added a rule NOT
to force the https redirection for the given specific URL path prefix - and
it works this way, but we need the video to be passed over a secure channel.

The switch:
- The same URL which fails from the mobile clients works with any browser:
tested firefox, wget, curl, chrome. Also when running on the very same
phone where the client fails.
- The mobile client also works with the original https: URLs if
apache/mod-proxy is involved instead of haproxy. No other tests performed anyway.
- Behaviour is the same on both mobile platforms.
- For Android, the same class/member is used to get the URL of the live
stream and to get the stream itself. The first succeeds, the second fails. I
guess it should be similar for iOS, but I haven't seen the code yet.

Any idea where to look for the real problem?

I have traced the network connection several times. Sometimes the TLS
handshake fails for the problematic session (it succeeds for all other
sessions from the same app using the same code), but not always. haproxy
logs  lines for the failed requests, no backend is connected. Like this
line here:

[09/Oct/2014:11:17:40.832] httpX~ httpX/ -1/-1/-1/-1/642 503 212 - -
SC-- 34/3/0/0/0 0/0 "POST
/portal/seam/resource/media?stream=1&streamGroupId=1001&streamId=20b227de-e7d7-4e26-8f92-350657413b5c&deviceId=228
HTTP/1.1"

the "same" POST from firefox/rest client:
[09/Oct/2014:11:20:16.292] httpX~ backend/server-1 9/0/1/510/75607 200
467801 - - --VN 34/3/3/3/0 0/0 "POST
/portal/seam/resource/media?stream=1&streamGroupId=1001&streamId=20b227de-e7d7-4e26-8f92-350657413b5c&deviceId=228
HTTP/1.1"

Regards,

Attila


Re: Freezing haproxy traffic with maxconn 0 and keepalive connections

2014-10-09 Thread Willy Tarreau
Hi Ivan,

On Thu, Oct 09, 2014 at 04:10:29PM +1300, Ivan Kurnosov wrote:
> Since `haproxy v1.5.0` it has been possible to temporarily stop reverse-proxying
> traffic to frontends using the
> 
> set maxconn frontend  0
> 
> command.
> 
> I've noticed that if haproxy is configured to maintain keepalive
> connections between haproxy and a client, then said connections will
> continue to be served, whereas new ones keep waiting for the frontend
> to be "un-paused".
> 
> The question is: is it possible to terminate the current keepalive connections
> *gracefully*, so that clients are required to establish new connections?

It's something I'd like to add also for the graceful shutdown, but for now
we don't have an easy way to navigate through the idle connections. However
something I was considering was to avoid keep-alive when serving a response
over a saturated frontend or when the process is stopping. That way existing
connections will fade out as new requests are sent over them. Do you think
that would already be acceptable in your case ?

> I've only found `shutdown session` and `shutdown sessions` commands but
> they are obviously not graceful at all.

Absolutely.

Willy




Re: [PATCH] BUG/MEDIUM: systemd: set KillMode to 'mixed'

2014-10-09 Thread Jason J. W. Williams

> I'm really amazed by the amount of breakage these new service managers are
> causing to a simple process management that has been working well for over
> 40 years of UNIX existence now, and the difficulty we have to work around
> this whole mess!

If there was a poster child for "knowing better" than the UNIX way and doing 
violence to it, it would be systemd. 


Re: [PATCH] BUG/MEDIUM: systemd: set KillMode to 'mixed'

2014-10-09 Thread Apollon Oikonomopoulos
Hi Willy,

On 11:26 Thu 09 Oct , Willy Tarreau wrote:
> Hi Apollon,
> 
> On Wed, Oct 08, 2014 at 03:14:41PM +0300, Apollon Oikonomopoulos wrote:
> > By default systemd will send SIGTERM to all processes in the service's
> > control group. In our case, this includes the wrapper, the master
> > process and all worker processes.
> > 
> > Since commit c54bdd2a the wrapper actually catches SIGTERM and survives
> > to see the master process getting killed by systemd and regard this as
> > an error, placing the unit in a failed state during "systemctl stop".
> 
> Then shouldn't we fix this by letting the wrapper die after receiving the
> SIGTERM ? Otherwise I'm happy to merge your patch, but I'd rather ensure
> that we don't encounter yet another issue.

The wrapper does exit on its own when the haproxy "master" process 
exits, which is done as soon as all "worker" processes exit. The problem 
is that the wrapper wants to control all worker processes on its own, 
while systemd second-guesses it by delivering SIGTERM to all processes 
by default.
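
For reference, the patch boils down to a single line in the unit's [Service]
section. A rough sketch of the resulting unit (paths as in contrib/systemd,
shown only for illustration):

    [Unit]
    Description=HAProxy Load Balancer
    After=network.target

    [Service]
    ExecStart=@SBINDIR@/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
    ExecReload=/bin/kill -USR2 $MAINPID
    # 'mixed' makes systemd deliver SIGTERM only to the main process (the
    # wrapper); the wrapper then shuts down the master and workers itself.
    KillMode=mixed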

> 
> I'm really amazed by the amount of breakage these new service managers are
> causing to a simple process management that has been working well for over
> 40 years of UNIX existence now, and the difficulty we have to work around
> this whole mess!

I guess every new system has its difficulties and learning curve, 
especially when it breaks implicit assumptions that hold for a long 
time.

Cheers,
Apollon



Re: Freezing haproxy traffic with maxconn 0 and keepalive connections

2014-10-09 Thread Ivan Kurnosov
> It's something I'd like to add also for the graceful shutdown, but for now
> we don't have an easy way to navigate through the idle connections. However
> something I was considering was to avoid keep-alive when serving a response
> over a saturated frontend or when the process is stopping. That way existing
> connections will fade out as new requests are sent over them. Do you think
> that would already be acceptable in your case ?

What about `disable server app-servers/${name}`?

I was just told that it will work, but I will only be able to check it
tomorrow in the office. From my perspective, if it does, then I miss
the whole point of introducing `set maxconn frontend  0`
in v1.5.0.

On 9 October 2014 22:31, Willy Tarreau  wrote:

> Hi Ivan,
>
> On Thu, Oct 09, 2014 at 04:10:29PM +1300, Ivan Kurnosov wrote:
> > Since `haproxy v1.5.0` it has been possible to temporarily stop
> > reverse-proxying traffic to frontends using the
> >
> > set maxconn frontend  0
> >
> > command.
> >
> > I've noticed that if haproxy is configured to maintain keepalive
> > connections between haproxy and a client, then said connections will
> > continue to be served, whereas new ones keep waiting for the frontend
> > to be "un-paused".
> >
> > The question is: is it possible to terminate the current keepalive
> > connections *gracefully*, so that clients are required to establish new
> > connections?
>
> It's something I'd like to add also for the graceful shutdown, but for now
> we don't have an easy way to navigate through the idle connections. However
> something I was considering was to avoid keep-alive when serving a response
> over a saturated frontend or when the process is stopping. That way
> existing
> connections will fade out as new requests are sent over them. Do you think
> that would already be acceptable in your case ?
>
> > I've only found `shutdown session` and `shutdown sessions` commands but
> > they are obviously not graceful at all.
>
> Absolutely.
>
> Willy
>
>


-- 
With best regards, Ivan Kurnosov


Re: [PATCH] BUG/MEDIUM: systemd: set KillMode to 'mixed'

2014-10-09 Thread Willy Tarreau
On Thu, Oct 09, 2014 at 12:35:10PM +0300, Apollon Oikonomopoulos wrote:
> Hi Willy,
> 
> On 11:26 Thu 09 Oct , Willy Tarreau wrote:
> > Hi Apollon,
> > 
> > On Wed, Oct 08, 2014 at 03:14:41PM +0300, Apollon Oikonomopoulos wrote:
> > > By default systemd will send SIGTERM to all processes in the service's
> > > control group. In our case, this includes the wrapper, the master
> > > process and all worker processes.
> > > 
> > > Since commit c54bdd2a the wrapper actually catches SIGTERM and survives
> > > to see the master process getting killed by systemd and regard this as
> > > an error, placing the unit in a failed state during "systemctl stop".
> > 
> > Then shouldn't we fix this by letting the wrapper die after receiving the
> > SIGTERM ? Otherwise I'm happy to merge your patch, but I'd rather ensure
> > that we don't encounter yet another issue.
> 
> The wrapper does exit on its own when the haproxy "master" process 
> exits, which is done as soon as all "worker" processes exit. The problem 
> is that the wrapper wants to control all worker processes on its own, 
> while systemd second-guesses it by delivering SIGTERM to all processes 
> by default.

OK, so I'm merging your patch if you think it's the best solution.

> > I'm really amazed by the amount of breakage these new service managers are
> > causing to a simple process management that has been working well for over
> > 40 years of UNIX existence now, and the difficulty we have to work around
> > this whole mess!
> 
> I guess every new system has its difficulties and learning curve, 
> especially when it breaks implicit assumptions that hold for a long 
> time.

Well, we're far away from the learning curve, we're writing a wrapper
to help a stupid service manager handle daemons, because the people who
wrote it did not know that in the unix world, there were some services
running in background. "ps aux" could have educated them by discovering
that there were other processes than "ps" and their shell :-/

Thanks,
Willy




Re: [PATCH] BUG/MEDIUM: systemd: set KillMode to 'mixed'

2014-10-09 Thread Apollon Oikonomopoulos
On 11:44 Thu 09 Oct , Willy Tarreau wrote:
> On Thu, Oct 09, 2014 at 12:35:10PM +0300, Apollon Oikonomopoulos wrote:
> > Hi Willy,
> > 
> > On 11:26 Thu 09 Oct , Willy Tarreau wrote:
> > > Hi Apollon,
> > > 
> > > On Wed, Oct 08, 2014 at 03:14:41PM +0300, Apollon Oikonomopoulos wrote:
> > > > By default systemd will send SIGTERM to all processes in the service's
> > > > control group. In our case, this includes the wrapper, the master
> > > > process and all worker processes.
> > > > 
> > > > Since commit c54bdd2a the wrapper actually catches SIGTERM and survives
> > > > to see the master process getting killed by systemd and regard this as
> > > > an error, placing the unit in a failed state during "systemctl stop".
> > > 
> > > Then shouldn't we fix this by letting the wrapper die after receiving the
> > > SIGTERM ? Otherwise I'm happy to merge your patch, but I'd rather ensure
> > > that we don't encounter yet another issue.
> > 
> > The wrapper does exit on its own when the haproxy "master" process 
> > exits, which is done as soon as all "worker" processes exit. The problem 
> > is that the wrapper wants to control all worker processes on its own, 
> > while systemd second-guesses it by delivering SIGTERM to all processes 
> > by default.
> 
> OK, so I'm merging your patch if you think it's the best solution.

Well, I think it's the most sane thing to do and is behaviour-compatible 
with the current wrapper version.

> 
> > > I'm really amazed by the amount of breakage these new service managers are
> > > causing to a simple process management that has been working well for over
> > > 40 years of UNIX existence now, and the difficulty we have to work around
> > > this whole mess!
> > 
> > I guess every new system has its difficulties and learning curve, 
> > especially when it breaks implicit assumptions that hold for a long 
> > time.
> 
> Well, we're far away from the learning curve, we're writing a wrapper
> to help a stupid service manager handle daemons, because the people who
> wrote it did not know that in the unix world, there were some services
> running in background. "ps aux" could have educated them by discovering
> that there were other processes than "ps" and their shell :-/

Truth is, we're writing a wrapper to handle gracefully reloading HAProxy 
by completely replacing the master process. Other than that, systemd is 
plain happy with just HAProxy running in the foreground using -Ds. I 
even have a suspicion that we don't need the wrapper at all to do 
graceful reloading. I have to do some experiments on that and I'll come 
back to you.
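
Just to make the idea concrete, an untested sketch of what a wrapper-less
[Service] section could look like (paths taken from the current unit; how to
reload gracefully without the wrapper is exactly the open question, so no
ExecReload is shown):

    [Service]
    # -Ds keeps the haproxy master process in the foreground so that systemd
    # can track it directly, without the haproxy-systemd-wrapper in between.
    ExecStart=@SBINDIR@/haproxy -Ds -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid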

Thanks,
Apollon



Re: [PATCH] BUG/MEDIUM: systemd: set KillMode to 'mixed'

2014-10-09 Thread Willy Tarreau
On Thu, Oct 09, 2014 at 12:55:25PM +0300, Apollon Oikonomopoulos wrote:
> On 11:44 Thu 09 Oct , Willy Tarreau wrote:
> > OK, so I'm merging your patch if you think it's the best solution.
> 
> Well, I think it's the most sane thing to do and is behaviour-compatible 
> with the current wrapper version.

It's already merged in both 1.5 and 1.6.

> > > > I'm really amazed by the amount of breakage these new service managers 
> > > > are
> > > > causing to a simple process management that has been working well for 
> > > > over
> > > > 40 years of UNIX existence now, and the difficulty we have to work 
> > > > around
> > > > this whole mess!
> > > 
> > > I guess every new system has its difficulties and learning curve, 
> > > especially when it breaks implicit assumptions that hold for a long 
> > > time.
> > 
> > Well, we're far away from the learning curve, we're writing a wrapper
> > to help a stupid service manager handle daemons, because the people who
> > wrote it did not know that in the unix world, there were some services
> > running in background. "ps aux" could have educated them by discovering
> > that there were other processes than "ps" and their shell :-/
> 
> Truth is, we're writing a wrapper to handle gracefully reloading HAProxy 
> by completely replacing the master process.

Yes, which seems normal to me. Otherwise how do you upgrade a service
without replacing the master process ? People are performing their seamless
version upgrades everywhere thanks to this.

> Other than that, systemd is 
> plain happy with just HAProxy running in the foreground using -Ds. I 
> even have a suspicion that we don't need the wrapper at all to do 
> graceful reloading. I have to do some experiments on that and I'll come 
> back to you.

Quite frankly, I don't see how it makes sense to run a *daemon* in the
foreground, except to hide the flaws of the service manager. It also
prevents running in multi-process mode. A daemon runs in the background,
with or without sub-processes, and may be replaced at any moment for
various reasons ranging from config changes and upgrades to operations
or mistakes by the admin.

Anyway we're not here to discuss the benefits or defects of systemd,
some major distros have adopted it and now we have to work around its
breakages so that users can continue to use their systems as if it was
still a regular, manageable UNIX system.

So thanks for your patch :-)
Willy




Re: [PATCH] BUG/MEDIUM: systemd: set KillMode to 'mixed'

2014-10-09 Thread Apollon Oikonomopoulos
On 12:07 Thu 09 Oct , Willy Tarreau wrote:
> Anyway we're not here to discuss the benefits or defects of systemd,
> some major distros have adopted it and now we have to work around its
> breakages so that users can continue to use their systems as if it was
> still a regular, manageable UNIX system.

Trust me, there are lots of things I don't like about systemd either.  
But given that it seems to be here to stay for a while, I don't think we 
have much choice :)

Regards,
Apollon



1.5.5 - Config with Disabled backend causes silent loss of configuration.

2014-10-09 Thread Paul Taylor
Hi,

I have some 1.5.3 configurations which contain a default_backend which is 
actually disabled.
Snippet below.
On upgrading to 1.5.5 - the first backend following the disabled line gets 
silently lost.

Frontend main *:80
  ...
default_backend default

#-
# round robin balancing between the various backends
#-
backend default
disabled

backend app1


backend app2


Easy to reproduce - and loss is visible in stats page.

Any thoughts ?

Best Regards,
Paul




Re: Freezing haproxy traffic with maxconn 0 and keepalive connections

2014-10-09 Thread Willy Tarreau
On Thu, Oct 09, 2014 at 10:37:04PM +1300, Ivan Kurnosov wrote:
> > It's something I'd like to add also for the graceful shutdown, but for now
> > we don't have an easy way to navigate through the idle connections. However
> > something I was considering was to avoid keep-alive when serving a response
> > over a saturated frontend or when the process is stopping. That way existing
> > connections will fade out as new requests are sent over them. Do you think
> > that would already be acceptable in your case ?
> 
> What about `disable server app-servers/${name}`?
> 
> I was just told that it will work, but I will only be able to check it
> tomorrow in the office.

If you disable all of your servers, users will end up getting a 503 when
they want to send a new request.

> From my perspective, if it does, then I miss the whole point of
> introducing `set maxconn frontend  0` in v1.5.0.

It's in order to limit the amount of concurrent conns on a frontend. 0 is
just a value among other ones. This is important in a number of situations,
for example when you have a shared load balancer between many hosted customers.
You then want to limit one of them on the fly because it's eating all of
your resources.
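
For illustration, assuming a stats socket configured with admin level (the
socket path, frontend name and restored limit are placeholders), the runtime
commands look roughly like this:

    # temporarily stop accepting new connections on one frontend
    echo "set maxconn frontend www 0" | socat stdio /var/run/haproxy.sock

    # later, restore a normal limit
    echo "set maxconn frontend www 2000" | socat stdio /var/run/haproxy.sock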

Regards,
Willy




Re: TLS handshake errors using mobile applications

2014-10-09 Thread Attila Heidrich
Finally it turned out NOT to be TLS related.
The problem was the HTTP acl matching in the frontend.

I used acl: hdr(host). The mobile client was the only one in the whole world
that specified :443 in https: URLs, so I had never noticed earlier that it
could be a problem.

Changed to hdr_dom(host), and happy now.
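
For anyone hitting the same thing, a minimal sketch of the change (ACL name,
backend and domain are placeholders, not our real config):

    # before: exact string match, fails when the client sends "Host: www.example.com:443"
    acl is_portal hdr(host) -i www.example.com

    # after: domain match, which also accepted the port-suffixed form in our case
    acl is_portal hdr_dom(host) -i www.example.com
    use_backend portal if is_portal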

Regards,

Attila


[PATCH] systemd: check config before starting.

2014-10-09 Thread Marcus Rueckert
as the patch name says ... systemd gives us a hook to run stuff before
the service is started, we can use that to test if the config is valid.

that's something that my old init script also did.

with kind regards

darix


-- 
   openSUSE - SUSE Linux is my linux
   openSUSE is good for you
   www.opensuse.org
From b940a258a735cdfd330a5d45c8f0e38be6a80534 Mon Sep 17 00:00:00 2001
From: Kristoffer Grönlund 
Date: Thu, 9 Oct 2014 16:51:29 +0200
Subject: [PATCH] systemd: Check configuration before start

Adds a configuration check before starting the haproxy service.
---
 contrib/systemd/haproxy.service.in | 1 +
 1 file changed, 1 insertion(+)

diff --git a/contrib/systemd/haproxy.service.in b/contrib/systemd/haproxy.service.in
index 0bc5420..85937e4 100644
--- a/contrib/systemd/haproxy.service.in
+++ b/contrib/systemd/haproxy.service.in
@@ -3,6 +3,7 @@ Description=HAProxy Load Balancer
 After=network.target
 
 [Service]
+ExecStartPre=@SBINDIR@/haproxy -f /etc/haproxy/haproxy.cfg -c -q
 ExecStart=@SBINDIR@/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
 ExecReload=/bin/kill -USR2 $MAINPID
 KillMode=mixed
-- 
1.8.4.5
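
The same check can of course be run by hand, e.g. before a reload; a sketch,
assuming the usual config path:

    haproxy -f /etc/haproxy/haproxy.cfg -c -q
    echo $?   # 0 means the configuration parsed and validated fine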



Re: 2 services (frontend+backend), both with cookies, failure

2014-10-09 Thread Jarno Huuskonen
Hi,

On Mon, Oct 06, Kari Mattsson wrote:
> (IP numbers are imaginary, not real.)
> When I go to http://200.200.200.111 and http://200.200.200.222, and press F5 
> (refresh) on Firefox for a few time, I end up with 4 cookies instead of 2.

For example, when you go to .111 and hit refresh a few times, do the
requests go to the same (backend) server or to both servers?

Couple of things to check:
- what do you get in the haproxy log (option httplog) when you do the
  firefox refresh test?
  your logs should show when haproxy inserts the cookie:
  http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#8.5

- you could also use tcpdump to see what cookies firefox <-> haproxy
  send/receive ?

- have you tried testing without the stick table / stick on cookie (for
  debugging purposes)? I think just the cookie SERVICE_1 insert and the
  cookie app* settings on the server lines should be enough to get session
  persistence (a minimal sketch follows after this list).

- what are you trying to store with the stick table ? I think you are
  going to have only two entries in the stick table:
  key=app101 and key=app102 ?
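
(The sketch mentioned above: the backend quoted below with the stick-table
lines removed, untested, just to isolate the cookie persistence.)

    backend service_1_inside
      mode http
      balance roundrobin
      cookie SERVICE_1 insert indirect maxlife 1h
      default-server maxconn 1000 weight 100 inter 2s fastinter 700ms downinter 10s fall 3 rise 2
      server App_101 10.10.10.101:80 cookie app101 check
      server App_102 10.10.10.102:80 cookie app102 check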

-Jarno

> backend service_1_inside
>   mode http
>   balance roundrobin   # source roundrobin leastconn ...
> 
>   stick-table type string len 32 size 100k expire 1h store 
> conn_cur,conn_rate(60s)
>   stick on cookie(SERVICE_1)
>   cookie SERVICE_1 insert indirect maxlife 1h
> 
>   default-server maxconn 1000 weight 100 inter 2s fastinter 700ms downinter 
> 10s fall 3 rise 2
>   server App_101 10.10.10.101:80 cookie app101 check
>   server App_102 10.10.10.102:80 cookie app102 check

-- 
Jarno Huuskonen



Re: 1.5.5 - Config with Disabled backend causes silent loss of configuration.

2014-10-09 Thread Bryan Talbot
I think I can reproduce this and a similar bug that causes a SEGFAULT (on
load or config check) when 'disabled' appears in a backend using the config
shown below.


defaults
  timeout client 5s
  timeout server 5s

frontend main :
  default_backend one

backend one

backend two
  disabled



A git bisect shows it breaking with commit

91b00c2194b728ccd61133cca83f03de3650b674 is the first bad commit
commit 91b00c2194b728ccd61133cca83f03de3650b674
Author: Willy Tarreau 
Date:   Tue Sep 16 13:41:21 2014 +0200

MEDIUM: config: compute the exact bind-process before listener's maxaccept

This is a continuation of previous patch, the listener's maxaccept is divided
by the number of processes, so it's best if we can swap the two blocks so that
the number of processes is already known when computing the maxaccept value.
(cherry picked from commit 419ead8eca9237f9cc2ec32630d96fde333282ee)



On Thu, Oct 9, 2014 at 3:12 AM, Paul Taylor  wrote:

>  Hi,
>
> I have some 1.5.3 configurations which contain a default_backend which is
> actually disabled.
> Snippet below.
> On upgrading to 1.5.5 – the first backend following the disabled line gets
> silently lost.
>
> Frontend main *:80
>   …
> default_backend default
>
> #-
> # round robin balancing between the various backends
> #-
> backend default
> disabled
>
> backend app1
> ….
>
> backend app2
> ….
>
> Easy to reproduce – and loss is visible in stats page.
>
> Any thoughts ?
>
> Best Regards,
> Paul


Re: 1.5.5 - Config with Disabled backend causes silent loss of configuration.

2014-10-09 Thread Willy Tarreau
Hi guys,

On Thu, Oct 09, 2014 at 11:57:03AM -0700, Bryan Talbot wrote:
> I think I can reproduce this and a similar bug that causes a SEGFAULT (on
> load or config check) when 'disabled' appears in a backend using the config
> shown below.
> 
> 
> defaults
>   timeout client 5s
>   timeout server 5s
> 
> frontend main :
>   default_backend one
> 
> backend one
> 
> backend two
>   disabled
> 

This is a serious bug. I don't understand how it could happen, which makes
me think that there's a side effect of the code move somewhere, but I can't
see which one. I have to analyse it.

> A git bisect shows it breaking with commit
> 
> 91b00c2194b728ccd61133cca83f03de3650b674 is the first bad commit
> commit 91b00c2194b728ccd61133cca83f03de3650b674
> Author: Willy Tarreau 
> Date:   Tue Sep 16 13:41:21 2014 +0200
> 
> MEDIUM: config: compute the exact bind-process before listener's
> maxaccept

(...)

Thank you Bryan, that's really useful, it saves me quite some time!
I'll check this tomorrow as this evening I'm exhausted.

Best regards,
Willy




Re: [PATCH] systemd: check config before starting.

2014-10-09 Thread Willy Tarreau
Hi Marcus,

On Thu, Oct 09, 2014 at 05:00:09PM +0200, Marcus Rueckert wrote:
> as the patch name says ... systemd gives us a hook to run stuff before
> the service is started, we can use that to test if the config is valid.
> 
> that's something that my old init script also did.
> 
> with kind regards

Thanks. I'd like the folks who use systemd to review this so we ensure
we don't break something again for some obscure setups. It seems fine
to me at first glance, but since every time I merge something in this
area someone else finds a side effect, it would be better to find the
side effects before merging :-)

If everyone remains silent tomorrow, I'll merge it as-is (and if I forget,
do not hesitate to hammer me).

Thanks,
Willy




Re: TLS handshake errors using mobile applications

2014-10-09 Thread Willy Tarreau
Hi,

On Thu, Oct 09, 2014 at 02:15:19PM +0200, Attila Heidrich wrote:
> Finally it turned out NOT to be TLS related.
> The problem was the HTTP acl matching in the frontend.
> 
> I used acl: hdr(host). The mobile client was the only one in the whole world
> that specified :443 in https: URLs, so I had never noticed earlier that it
> could be a problem.
> 
> Changed to hdr_dom(host), and happy now.

It's not the only one in the world; I happen to see a small fraction of
clients doing so from time to time. Similarly some clients send host:80,
presumably when the port is written on a link, but I could be wrong.

I tend to use hdr_dom() or hdr_beg() for this. I'd like to have a sample
converter which strips the port without affecting IPv6 addresses, it would
be more convenient for such use cases.

Regards,
Willy




SNI in logs

2014-10-09 Thread Eugene Istomin
Hello,

Can we log the SNI name (req_ssl_sni) or, more generally, SNI availability
(ssl_fc_has_sni) the same way we log the SSL version (%sslv)?
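
Something like the following is what I have in mind (just a sketch, assuming
the frontend terminates TLS and that the running version accepts sample
expressions (%[...]) in log-format):

    # custom log line: client, frontend, backend/server, status, bytes,
    # SSL version and the SNI name presented by the client (empty if none)
    log-format "%ci:%cp [%t] %ft %b/%s %ST %B %sslv sni=%[ssl_fc_sni]"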
--
Best regards,
Eugene Istomin



Re: Connect to SNI-only server (haproxy as a client)

2014-10-09 Thread Eugene Istomin
Hello,

Yesterday we were looking for a custom client-side SNI string for one of
our clients and chose stunnel (as outbound TLS termination) for two
reasons:
1) the ability to send a client certificate (client mode)
2) the ability to send a custom SNI value in client mode

We have been using haproxy as our main L7 routers for years, with a bit of
stunnel for client cert auth.
Do you have any plans to add these features in 1.6?

Thanks.
--
Best regards,
Eugene Istomin


> On Mon, Aug 18, 2014 at 05:46:14PM +0200, Baptiste wrote:
> > On Mon, Aug 18, 2014 at 2:40 PM, Willy Tarreau  wrote:
> > > Hi Benedikt,
> > > 
> > > On Mon, Aug 18, 2014 at 10:17:02AM +0200, Benedikt Fraunhofer wrote:
> > >> Hello List,
> > >> 
> > >> I'm trying to help a java6-app that can't connect to a server which
> > >> seems to support SNI-only.
> > >> 
> > >> I thought I could just add some frontend and backend stanzas
> > >> 
> > >> and include the sni-only server as a server in the backend-section like so:
> > >>    server a 1.2.3.4:443 ssl verify none force-tlsv12
> > >> 
> > >> (I had verify set, just removed it to keep it simple and rule it out)
> > >> 
> > >> But it seems the server in question insists on SNI, whatever force-* I
> > >> use, and the connection is tcp-reset by the server (a) right after the
> > >> Client-Hello from haproxy.
> > >> 
> > >> Is there a way to specify the "TLS SNI field" haproxy should use for
> > >> these outgoing connections?
> > > 
> > > Not yet. We identified multiple needs for this field which a single
> > > constant in the configuration will not solve. While some users will
> > > only need a constant value (which seems to be your case), others
> > > need to forward the SNI they got on the other side, or to build one
> > > from a Host header field.
> > > 
> > > So it's likely that we'll end up with a sample expression instead of
> > > a constant. Additionally that means that for health checks we need an
> > > extra setting (likely a constant this time).
> > > 
> > > But for now, the whole solution is not designed yet, let alone
> > > implemented.
> 
> Btw is this something you're actively looking at, to design/implement?
> 
> People on the list should be able to provide feedback about the planned
> expression to set the SNI field for client connections..
> > > regards,
> > > Willy
> > 
> > Hi,
> > 
> > Microsoft Lync seems to have the same requirement for SNI...
> > We need it in both traffic and health checks.
> 
> OK, good to know.
> 
> 
> Thanks,
> 
> -- Pasi
> 
> > Baptiste



Re: Connect to SNI-only server (haproxy as a client)

2014-10-09 Thread Willy Tarreau
Hello Eugene,

On Fri, Oct 10, 2014 at 08:13:43AM +0300, Eugene Istomin wrote:
> Hello,
> 
> Yesterday we were looking for a custom client-side SNI string for one of
> our clients and chose stunnel (as outbound TLS termination) for two
> reasons:
> 1) the ability to send a client certificate (client mode)
> 2) the ability to send a custom SNI value in client mode
> 
> We have been using haproxy as our main L7 routers for years, with a bit of
> stunnel for client cert auth.
> Do you have any plans to add these features in 1.6?

It is already possible to send the client certificate, you just have
to specify "crt " on the server line. There are some ongoing
discussions about SNI. We all want to have it but want to ensure we're
doing it correctly. Most users want to have a dynamic one, at least being
able to retrieve the one from the other side, and possibly extract it
from a Host header. And of course also from a static string. We're just
trying to find the best way to configure this so that it's easy for all
users.
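
For the first point, a minimal sketch (address and file name are placeholders;
the PEM file holds the client certificate and its private key):

    backend sni_only_upstream
      # present a client certificate to the server during the TLS handshake
      server up1 192.0.2.10:443 ssl verify none crt /etc/haproxy/client.pem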

I personally think that a sample expression would be appropriate, just
as for the "usesrc" keyword (which is currently limited). I'd rather
avoid the ugly logformat string at this point since I don't think we
need this complexity.

If you have any opinion on the subject, please chime in!

Best regards,
Willy