Re: redirect prefix, use variable host

2012-05-23 Thread Finn Arne Gangstad
On Thu, May 17, 2012 at 2:41 PM, Willy Tarreau  wrote:
> Hi,
>
> On Wed, May 16, 2012 at 05:05:05PM +0200, hapr...@serverphorums.com wrote:
>> I think I am in this exact same boat. I have a site with wildcard subdomains
>> as well.
>>
>> Is there an ETA on this pattern extraction? I browsed the changelogs up
>> through 5/14/2012 and it looks like there could be some possible headway on
>> this. Can someone please confirm?
>
> A lot of progress has indeed been made, but we still don't have such
> ability and it will take some time to implement.
>
>> Are there any other creative solutions for this? I need to redirect any
>> subdomain to https unless it is www.
>
> No idea right now. From what I understand you'd like to take the host
> header and put it into your redirects, prepended by "http://" or "https://"
> depending on the type of redirection, that's it ?
>
> Maybe it should not be too hard to implement some "http-host" and
> "https-host" options to the "redirect" statement, as alternatives
> to "prefix", and which would automatically concatenate a scheme
> ("http://"; or "https://";) with the extract of the Host header and
> the current URI. I must say I have not much studied the idea, but
> it should be doable without too much effort.

This is how we currently do https redirects with haproxy and nginx:

in haproxy:

  use_backend https-redirect if {  }

backend https-redirect
  server https-redir 127.2.0.1:80

in nginx:

server {
    listen 127.2.0.1:80;
    rewrite ^ https://$host$request_uri? permanent;
}

And we also do ssl termination with nginx, so the overhead of this
solution is minimal.
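
For illustration, the nginx termination side looks roughly like this
(certificate paths and the 127.0.0.1:8080 haproxy address are just
placeholders, not our real values):

server {
    # terminate SSL here and hand the plain HTTP request to haproxy
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}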

If haproxy could do both the ssl termination and the https redirect,
that would be something :)


- Finn Arne



Re: SSL farm

2012-05-23 Thread Hervé COMMOWICK
Or you may use the PROXY protocol and set send-proxy in your haproxy 
configuration, and ask stud to merge this: 
https://github.com/bumptech/stud/pull/81


Hervé.

On 05/22/2012 05:48 PM, Allan Wind wrote:

I read through the last 6 months of the archive and the usual answer
for SSL support is to put nginx/stunnel/stud in front.  This, as far
as I can tell, means a single server handling SSL, and this is
what  suggest is a non-scalable
solution.

You can obviously configure haproxy to route ssl connections to a
farm via tcp mode, but you then lose the client IP.  The
transparent keyword is promising but apparently requires the haproxy
box to be the gateway.  Not sure that is possible with our cloud
environment.

I understand from:

that session reuse (i.e. mod_gnutls in our case) would need to be
configured on the backend to permit ssl resume.

But how do you go about distributing traffic to an ssl farm
without losing the client IP?


/Allan


--
Hervé COMMOWICK
Ingénieur systèmes et réseaux.

http://www.rezulteo.com
by Lizeo Online Media Group 
42 quai Rambaud - 69002 Lyon (France) ⎮ ☎ +33 (0)4 63 05 95 30



Comments on proposed config for compartmentalised testing

2012-05-23 Thread Ben Tisdall
Dear list,

first of all, apologies if this is a re-post - I've had no reply since
sending this on 2012-05-21 and, having lurked on the list for a few
months now, that seems unusual. I can't check whether the message hit
the list because the archive contains no recent posts.

Anyway, I'd appreciate your collective wisdom on the following
problem. My company uses a 2-tier web architecture for those parts of
the website that are dynamic:

* Presentation (pres.) server (in our parlance the "frontend" service)
- responsible for content generation.
* API server (in our parlance the "backend" service) - performs
business logic, supplies business objects to the presentation server,
handles AJAX requests directly.

We wish to implement a means of deploying new code (pres. and API
versions that have been tested together, referred to as logical
versions 'X' and 'Y') to a subset of a group of servers balanced by a
single haproxy instance. We will then measure the live performance of
the new vs old code before completing the deployment or rolling back.

In summary:

* There should be user affinity to pres. version 'X' or 'Y'.
* An API request from the browser with affinity to pres. 'X' should
ideally go to an API server running 'X', falling back to any available
version. Ditto 'Y'.
* An API request from a pres server running version 'X' should ideally
go to an API server running 'X', falling back to any available
version. Ditto 'Y'.

We have implemented this as follows:

* Browser <-> server affinity managed using a cookie inserted by haproxy
* Server <-> server affinity managed via an HTTP header set by the pres.
server indicating the preferred API server version, as sketched just below.
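
For illustration, a rough haproxy sketch of those two mechanisms (server
names, addresses and the X-Api-Version header are hypothetical; our real
config is in the gist linked below):

frontend site
  bind :80
  # API calls carry a version header set by the pres. tier (or the browser)
  acl is_api path_beg /api
  acl wants_x hdr(X-Api-Version) -i X
  use_backend api_x if is_api wants_x
  use_backend api_any if is_api
  default_backend pres

backend pres
  # browser <-> pres. affinity via an inserted cookie
  cookie PRESVER insert indirect
  server presX1 10.0.0.1:8080 cookie X check
  server presY1 10.0.0.2:8080 cookie Y check

backend api_x
  server apiX1 10.0.1.1:8080 check

backend api_any
  server apiX1 10.0.1.1:8080 check
  server apiY1 10.0.1.2:8080 check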

The config representing the above is at
https://gist.github.com/f7075d6fb1b5d00ddf52 - note that this
represents the configuration in the old vs new state; the
configuration for each step of the release process is templated.

Does this solution seem reasonable?

Thanks,

Ben Tisdall.



No PID file when running in foreground

2012-05-23 Thread Chad Gatesman
Is there a major reason the -p option to generate a pid file is ignored
when running haproxy in the foreground (e.g. using -db)?  It would be nice if
this file was still generated when specified--even in foreground mode.
Could this be something that could be changed in a future release?

The change looks trivial, but I would rather not rely on having to use a
custom modified version of haproxy.  It would be nicer to see this behavior in
an official version of haproxy.

-Chad


Re: redirect prefix, use variable host

2012-05-23 Thread Baptiste
> If haproxy could do both the ssl termination and the https redirect,
> that would be something :)
>
>
> - Finn Arne
>


SSL termination is on its way ;)

cheers



Re: SSL farm

2012-05-23 Thread Allan Wind
On 2012-05-23 11:42:24, Hervé COMMOWICK wrote:
> Or you may use PROXY protocol and set send-proxy in your haproxy
> configuration and ask stud to merge this :
> https://github.com/bumptech/stud/pull/81

This is the single ssl server configuration that I explicitly 
wanted to avoid.  Right?


/Allan
-- 
Allan Wind
Life Integrity, LLC




Re: SSL farm

2012-05-23 Thread Hervé COMMOWICK

No, you may have multiple stud.

On 05/23/2012 04:12 PM, Allan Wind wrote:

On 2012-05-23 11:42:24, Hervé COMMOWICK wrote:

Or you may use PROXY protocol and set send-proxy in your haproxy
configuration and ask stud to merge this :
https://github.com/bumptech/stud/pull/81


This is the single ssl server configuration that I explicitly
wanted to avoid.  Right?


/Allan


--
Hervé COMMOWICK
Ingénieur systèmes et réseaux.

http://www.rezulteo.com
by Lizeo Online Media Group 
42 quai Rambaud - 69002 Lyon (France) ⎮ ☎ +33 (0)4 63 05 95 30



Re: SSL farm

2012-05-23 Thread Allan Wind
On 2012-05-23 16:21:35, Hervé COMMOWICK wrote:
> No, you may have multiple stud.

And how do you load balance between them?  DNS round robin is not 
good enough.


/Allan
-- 
Allan Wind
Life Integrity, LLC




Re: SSL farm

2012-05-23 Thread Baptiste
On Wed, May 23, 2012 at 4:27 PM, Allan Wind
 wrote:
> On 2012-05-23 16:21:35, Hervé COMMOWICK wrote:
>> No, you may have multiple stud.
>
> And how do you load balance between them?  DNS round robin is not
> good enough.
>
>

layer4 load-balancers (LVS).



Re: SSL farm

2012-05-23 Thread Hervé COMMOWICK
just use HAProxy to load balance to multiple stud instances, with send-proxy 
on the HAProxy side and --read-proxy on the stud side.
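
Something like this (addresses are just examples, and the stud instances 
need the --read-proxy support from the pull request above):

frontend https_in
  mode tcp
  bind :443
  default_backend ssl_farm

backend ssl_farm
  mode tcp
  balance roundrobin
  # the PROXY header carries the original client IP to each stud instance
  server stud1 10.0.0.11:8443 send-proxy check
  server stud2 10.0.0.12:8443 send-proxy check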


Hervé.

On 05/23/2012 04:27 PM, Allan Wind wrote:

On 2012-05-23 16:21:35, Hervé COMMOWICK wrote:

No, you may have multiple stud.


And how do you load balance between them?  DNS round robin is not
good enough.


/Allan


--
Hervé COMMOWICK
Ingénieur systèmes et réseaux.

http://www.rezulteo.com
by Lizeo Online Media Group 
42 quai Rambaud - 69002 Lyon (France) ⎮ ☎ +33 (0)4 63 05 95 30



RE: SSL farm

2012-05-23 Thread Jens Dueholm Christensen (JEDC)
Or put keepalived in front of 2 or more machines with stud/stunnel/nginx for 
SSL termination and HAProxy for distributing the traffic to all backends.

Keepalived can move a floating IP between multiple machines, and as long as 
each machine can do ssl termination and load balancing, you've got no single 
machine whose failure can take the service down.

A crude ASCII drawing should illustrate what I mean:

           ( floating IP, moved by keepalived )
              /                        \
   [ ssl term + haproxy #1 ]   [ ssl term + haproxy #2 ]
              \                        /
           ( shared pool of backend servers )
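
For reference, a minimal keepalived sketch for the floating IP part (the
interface name, virtual_router_id and the address itself are placeholders):

vrrp_instance SSL_LB {
    state MASTER               # BACKUP on the other machine(s)
    interface eth0
    virtual_router_id 51
    priority 100               # lower priority on the other machine(s)
    advert_int 1
    virtual_ipaddress {
        192.0.2.10/24
    }
}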


Regards,
Jens Dueholm Christensen


From: Hervé COMMOWICK [herve.commow...@lizeo-group.com]
Sent: 23 May 2012 16:37
To: haproxy@formilux.org
Subject: Re: SSL farm

just use HAProxy to load balance to multiple stud, with send-proxy on
HAProxy side, and --read-proxy on stud side.

Hervé.

On 05/23/2012 04:27 PM, Allan Wind wrote:
> On 2012-05-23 16:21:35, Hervé COMMOWICK wrote:
>> No, you may have multiple stud.
>
> And how do you load balance between them?  DNS round robin is not
> good enough.
>
>
> /Allan

--
Hervé COMMOWICK
Ingénieur systèmes et réseaux.

http://www.rezulteo.com
by Lizeo Online Media Group 
42 quai Rambaud - 69002 Lyon (France) ⎮ ☎ +33 (0)4 63 05 95 30

Re: SSL farm

2012-05-23 Thread Allan Wind
On 2012-05-23 16:37:53, Hervé COMMOWICK wrote:
> just use HAProxy to load balance to multiple stud, with send-proxy
> on HAProxy side, and --read-proxy on stud side.

Thanks for the pointers, Hervé.   stud is not in debian stable, 
and both haproxy and stunnel are too old to have this feature.  
mod_gnutls doesn't support the proxy protocol as far as I can 
tell.

What happens to a browser session when an ssl connection moves 
from one server to another mid-stream, without the resume feature?  
A reload, at least with stunnel, implies renegotiation of the 
ssl connection as far as I can tell.


/Allan
-- 
Allan Wind
Life Integrity, LLC




Re: SSL farm

2012-05-23 Thread Baptiste
Without SSL resume, the client forces the server through a full handshake,
including the expensive asymmetric crypto operations, which takes much more
resources than a resumed SSL session.

So it's better to be able to resume if your clients move from one LB
to another very often ;)

cheers



mysql failover and forcing disconnects

2012-05-23 Thread Justin Karneges
Hello list,

I'm using haproxy to handle failover between a mysql master and slave. The 
slave replicates from master and is read-only. I specify both mysql servers in 
my haproxy configuration, and use the "backup" option on the slave. 
Applications connect to haproxy instead of mysql directly. Haproxy routes all 
connections to the master, unless the master is down in which case it routes 
them all to the slave.

This actually works well enough, but a couple of peculiarities arise from the 
fact that haproxy doesn't disturb existing connections when servers go up and 
down:

1) Even if haproxy notices within seconds that the mysql master is down, 
existing connections remain pointed to the master. I set "timeout server 5m" 
so that within 5 minutes of inactivity, haproxy will eventually kill the 
connections, causing clients to reconnect and get routed to the slave. This 
means that in practice, the failover takes 5 minutes to fully complete. I 
could reduce this timeout value further but this does not feel like the ideal 
solution.

2) If the master eventually comes back, all connections that ended up routing 
to the slave will stay on the slave indefinitely. The only solution I have for 
this is to restart mysql on the slave, which kicks everyone off causing them to 
reconnect and get routed back to the master. This is acceptable if restoring 
master required some kind of manual maintenance, since I'd already be getting 
my hands dirty anyway. However, if the master disappears and comes back due to 
a brief network outage that resolves itself automatically, it's unfortunate that 
I'd still have to manually react to this by kicking everyone off the slave.

I wonder if both of these could be solved with an option that disconnects 
all clients whenever the master (not the slave!) goes up or down. 
Or maybe there are some consequences to this approach that I'm not aware of.

Thanks,
Justin



haproxy conditional healthchecks/failover

2012-05-23 Thread Zulu Chas

Hi!

I'm trying to use HAproxy to support the concepts of "offline", "in maintenance 
mode", and "not working" servers.  I have separate health checks for each 
condition and I have been trying to use ACLs to be able to switch between 
backends.  In addition to the fact that this doesn't seem to work, I'm also not 
loving having to repeat the server lists (which are the same) for each backend. 
 But perhaps I'm misunderstanding something fundamental here about how I should 
be tackling this.  As far as I can tell, having multiple httpchk's per backend 
doesn't work in an "if any of these fail, then mark this server offline" way 
-- I think it's more like "if any of these succeed, mark this server online" -- 
and that's what's making this scenario complex.  That is, the /check can pass 
but I might have marked the server offline manually or be in the process of 
deploying and so /maintenance.html exists -- it's not a strictly boolean 
(online/offline) issue.
Here's the setup:

global
  maxconn 1024
  log   127.0.0.1       local0 notice
  spread-checks 5
  daemon
  user haproxy

defaults
  log global
  mode http
  balance leastconn
  maxconn 500
  option httplog
  option abortonclose
  option httpclose
  option forwardfor
  retries 3
  option redispatch
  timeout client 1m
  timeout connect 30s
  timeout server 1m
  stats enable
  stats uri     /haproxy?stats
  stats auth    hauser:hapasswd
  monitor-uri /haproxy?monitor
  timeout check 1

frontend staging 0.0.0.0:8080
  # if the number of servers *not marked offline* is *less than the total number of app servers* (in this case, 2), then it is considered degraded
  acl degraded nbsrv(only_online) lt 2
  # if the number of servers *not marked offline* is *less than one*, the site is considered down
  acl down nbsrv(only_online) lt 1
  # if the number of servers without the maintenance page is *less than the total number of app servers* (in this case, 2), then it is considered maintenance mode
  acl mx_mode nbsrv(maintenance) lt 2
  # if the number of servers without the maintenance page is less than 1, we're down because everything is in maintenance mode
  acl down_mx nbsrv(maintenance) lt 1

  # if not running at full potential, use the backend that identified the degraded state
  use_backend only_online if degraded
  use_backend maintenance if mx_mode
  # if we are down for any reason, use the backend that identified that fact
  use_backend backup_only if down
  use_backend backup_only if down_mx
  # by default, use 'normal ops'
  default_backend normal

backend only_online
  # if /offline exists, the server has been intentionally marked as offline
  option httpchk HEAD /offline HTTP/1.0
  http-check expect status 404
  http-check send-state
  server App1 app1:8080 check inter 5000 rise 2 fall 2
  server App2 app2:8080 check inter 5000 rise 2 fall 2

backend maintenance
  # if /maintenance.html exists, the server is in maintenance mode
  option httpchk HEAD /maintenance.html HTTP/1.0
  http-check expect status 404
  http-check send-state
  server App1 app1:8080 check inter 2000 rise 2 fall 2
  server App2 app2:8080 check inter 2000 rise 2 fall 2

backend normal
  cookie SESSIONID insert indirect
  option httpchk HEAD /check HTTP/1.0
  http-check send-state
  server App1 app1:8080 cookie A check inter 1 rise 2 fall 2
  server App2 app2:8080 cookie B check inter 1 rise 2 fall 2
  server Backup1 app3:8080 cookie C check inter 1 rise 2 fall 2 backup

backend backup_only
  option httpchk HEAD /check HTTP/1.0
  http-check send-state
  server Backup1 app3:8080 check inter 2000 rise 2 fall 2

mysql failover and forcing disconnects

2012-05-23 Thread Justin Karneges
(Apologies if this comes through twice. The first time I sent was before 
subscription approval, and I don't think it went through.)

Hello list,

I'm using haproxy to handle failover between a mysql master and slave. The 
slave replicates from master and is read-only. I specify both mysql servers in 
my haproxy configuration, and use the "backup" option on the slave. 
Applications connect to haproxy instead of mysql directly. Haproxy routes all 
connections to the master, unless the master is down in which case it routes 
them all to the slave.

This actually works well enough, but a couple of peculiarities arise from the 
fact that haproxy doesn't disturb existing connections when servers go up and 
down:

1) Even if haproxy notices within seconds that the mysql master is down, 
existing connections remain pointed to the master. I set "timeout server 5m" 
so that within 5 minutes of inactivity, haproxy will eventually kill the 
connections, causing clients to reconnect and get routed to the slave. This 
means that in practice, the failover takes 5 minutes to fully complete. I 
could reduce this timeout value further but this does not feel like the ideal 
solution.

2) If the master eventually comes back, all connections that ended up routing 
to the slave will stay on the slave indefinitely. The only solution I have for 
this is to restart mysql on the slave, which kicks everyone off causing them to 
reconnect and get routed back to the master. This is acceptable if restoring 
master required some kind of manual maintenance, since I'd already be getting 
my hands dirty anyway. However, if the master disappears and comes back due to 
a brief network outage that resolves itself automatically, it's unfortunate that 
I'd still have to manually react to this by kicking everyone off the slave.

I wonder if both of these could be solved with an option that disconnects 
all clients whenever the master (not the slave!) goes up or down. 
Or maybe there are some consequences to this approach that I'm not aware of.

Thanks,
Justin



Re: haproxy conditional healthchecks/failover

2012-05-23 Thread Baptiste
Hi,

My questions and remarks inline.

On Wed, May 23, 2012 at 11:42 PM, Zulu Chas  wrote:
> Hi!
>
> I'm trying to use HAproxy to support the concepts of "offline", "in
> maintenance mode", and "not working" servers.

Any good reason to do that???
(I'm a bit curious)

>  I have separate health checks
> for each condition and I have been trying to use ACLs to be able to switch
> between backends.  In addition to the fact that this doesn't seem to work,
> I'm also not loving having to repeat the server lists (which are the same)
> for each backend.

Nothing weird here, this is how HAProxy configuration works.

>  But perhaps I'm misunderstanding something fundamental
> here about how I should be tackling this.  As far as I can tell, having
> multiple httpchk's per backend doesn't work in an "if any of these fail,
> then call mark this server offline"

You can only have a single health check per backend.


> -- I think it's more like "if any of
> these succeed, mark this server online" -- and that's what's making this
> scenario complex.

euh, I might be misunderstanding something.
There is nothing simpler than "if the health check is successful,
then the server is considered healthy"...

>  That is, the /check can pass but I might have marked the
> server offline manually or be in the process of deploying and so
> /maintenance.html exists -- it's not a strictly boolean (online/offline)
> issue.
>
> Here's the setup:
>
> global
>   maxconn 1024
>   log   127.0.0.1       local0 notice
>   spread-checks 5
>   daemon
>   user haproxy
>
> defaults
>   log global
>   mode http
>   balance leastconn
>   maxconn 500
>   option httplog
>   option abortonclose
>   option httpclose
>   option forwardfor
>   retries 3
>   option redispatch
>   timeout client 1m
>   timeout connect 30s
>   timeout server 1m
>   stats enable
>   stats uri     /haproxy?stats
>   stats auth    hauser:hapasswd
>   monitor-uri /haproxy?monitor
>   timeout check 1
>
> frontend staging 0.0.0.0:8080
>   # if the number of servers *not marked offline* is *less than the total
> number of app servers* (in this case, 2), then it is considered degraded
>   acl degraded nbsrv(only_online) lt 2
>

This will match 0 and 1

>   # if the number of servers *not marked offline* is *less than one*, the
> site is considered down
>   acl down nbsrv(only_online) lt 1
>

This will match 0, so both your down and degraded ACLs cover the
same value (0), which may lead to an issue later.

>   # if the number of servers without the maintenance page is *less than the
> total number of app servers* (in this case, 2), then it is
> considered maintenance mode
>   acl mx_mode nbsrv(maintenance) lt 2
>
>   # if the number of servers without the maintenance page is less than 1,
> we're down because everything is in maintenance mode
>   acl down_mx nbsrv(maintenance) lt 1
>

Same remark as above.


>   # if not running at full potential, use the backend that identified the
> degraded state
>   use_backend only_online if degraded
>   use_backend maintenance if mx_mode
>
>   # if we are down for any reason, use the backend that identified that fact
>   use_backend backup_only if down
>   use_backend backup_only if down_mx
>

Here is the problem (see above).
The 2 use_backend above will NEVER match, because the degraded and
mx_mode ACLs overlap their values!

>   # by default, use 'normal ops'
>   default_backend normal
>
>
> backend only_online
>   # if /offline exists, the server has been intentionally marked as offline
>   option httpchk HEAD /offline HTTP/1.0
>   http-check expect status 404
>   http-check send-state
>   server App1 app1:8080 check inter 5000 rise 2 fall 2
>   server App2 app2:8080 check inter 5000 rise 2 fall 2
>
>
> backend maintenance
>   # if /maintenance.html exists, the server is in maintance mode
>   option httpchk HEAD /maintenance.html HTTP/1.0
>   http-check expect status 404
>   http-check send-state
>   server App1 app1:8080 check inter 2000 rise 2 fall 2
>   server App2 app2:8080 check inter 2000 rise 2 fall 2
>
>
> backend normal
>   cookie SESSIONID insert indirect
>   option httpchk HEAD /check HTTP/1.0
>   http-check send-state
>   server App1 app1:8080 cookie A check inter 1 rise 2 fall 2
>   server App2 app2:8080 cookie B check inter 1 rise 2 fall 2
>   server Backup1 app3:8080 cookie C check inter 1 rise 2 fall 2 backup
>
>
> backend backup_only
>   option httpchk HEAD /check HTTP/1.0
>   http-check send-state
>   server Backup1 app3:8080 check inter 2000 rise 2 fall 2
>



Do you know the "disable-on-404" option?
it may help you make your configuration in the right way (not
considering a 404 as a healthy response).
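
For example, something like this (reusing your App1/App2 servers; the check
URL is up to you):

backend normal
  option httpchk HEAD /check HTTP/1.0
  # a 404 on the check URL puts the server in a maintenance-like state:
  # it stops taking new traffic but is not reported as dead
  http-check disable-on-404
  http-check send-state
  server App1 app1:8080 check inter 5000 rise 2 fall 2
  server App2 app2:8080 check inter 5000 rise 2 fall 2

So a single check per backend is enough: make /check return a 404 during a
deploy or when you take a server out on purpose, and haproxy drains it
without marking it down.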

cheers



Re: mysql failover and forcing disconnects

2012-05-23 Thread Willy Tarreau
Hi Justin,

On Wed, May 23, 2012 at 03:11:00PM -0700, Justin Karneges wrote:
> (Apologies if this comes through twice. The first time I sent was before 
> subscription approval, and I don't think it went through.)

It was OK, you don't need to be subscribed to post messages.

(...)
> 1) Even if haproxy notices within seconds that the mysql master is down, 
> existing connections remain pointed to the master. I set "timeout server 5m" 
> so that within 5 minutes of inactivity, haproxy will eventually kill the 
> connections, causing clients to reconnect and get routed to the slave. This 
> means that in practice, the failover takes 5 minutes to fully complete. I 
> could reduce this timeout value futher but this does not feel like the ideal 
> solution.

There is an option at the server level, "on-marked-down shutdown-sessions".
It achieves exactly what you want: it will kill all connections to a server
which is detected as down.
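
For example, on a recent enough 1.5-dev (addresses are only placeholders,
and the mysql-check line is optional):

listen mysql
    bind :3306
    mode tcp
    option mysql-check user haproxy_check
    server master 10.0.0.10:3306 check on-marked-down shutdown-sessions
    server slave  10.0.0.11:3306 check backup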

> 2) If the master eventually comes back, all connections that ended up routing 
> to the slave will stay on the slave indefinitely. The only solution I have 
> for 
> this is to restart mysql on the slave, which kicks everyone off causing them 
> to 
> reconnect and get routed back to the master. This is acceptable if restoring 
> master required some kind of manual maintenance, since I'd already be getting 
> my hands dirty anyway. However, if master disappears and comes back due to 
> brief network outage that resolves itself automatically, it's unfortunate 
> that 
> I'd still have to manually react to this by kicking everyone off the slave.

There is no universal solution for this. As haproxy doesn't inspect the mysql
traffic, it cannot know when a connection remains idle and unused. Making it
arbitrarily kill connections to a working server would be the worst thing to
do, as it would kill connections on which a transaction is waiting to be
completed.

I really think the best you can do is to have your slave declared as a backup
server and use short enough timeouts so that idle connections expire quickly
and are replaced by new connections to the master server.

Regards,
Willy