Re: recent haproxy dropping ftp connections

2012-02-22 Thread Baptiste
Hi Jérémy,

You can run two instances of haproxy on a single box, as long as they
don't listen on the same ports or IP.
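For example, something like this should work (the config file names, addresses
and ports below are only placeholders):

# first instance, e.g. started with: haproxy -f /etc/haproxy/haproxy-a.cfg
global
    daemon
    pidfile /var/run/haproxy-a.pid

listen app_a 192.0.2.10:80
    mode http
    server web1 192.0.2.21:80 check

# second instance, e.g. started with: haproxy -f /etc/haproxy/haproxy-b.cfg
global
    daemon
    pidfile /var/run/haproxy-b.pid

listen app_b 192.0.2.10:8080
    mode http
    server web2 192.0.2.22:80 check

Each instance needs its own pidfile (and stats socket, if you use one),
otherwise they will step on each other.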

Your issue is interesting and I'm looking forward to fixing it :)

cheers

On Tue, Feb 21, 2012 at 3:31 PM, Jérémy Longo  wrote:
> Will do that ASAP. Unfortunately, I can't do it at the moment, both clusters
> being in production.
> I will try to reproduce the problem in VMs locally.
> Anyway, thanks for the help!
>
> Best regards,
>
> Jérémy
>
> On 21/02/2012 12:40, John Marrett wrote:
>
>>
>> Jeremy,
>>
>> > but everything has worked perfectly with haproxy 1.4.15 for months now;
>> > however I am no tcp guru.
>>
>> Extremely odd.
>>
>> However, given directions, I can produce traces if that helps.
>>
>> Try enabling option tcplog on both the ftp frontend and backend; this
>> should produce detailed log output when you attempt the connection.
>>
>> I'm still quite mystified by what's happening.
>>
>> -JohnF
>>
>
>
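Regarding John's tcplog suggestion above, a minimal sketch of what that could
look like (the section and server names here are only placeholders):

listen ftp 0.0.0.0:21
    mode tcp
    option tcplog
    log global
    server ftp1 192.0.2.31:21 check

With a log directive in the global section, each connection is then logged
with its timers and termination flags, which usually tells where in the chain
the connection was dropped.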



Re: MINOR imap health check

2012-02-22 Thread Willy Tarreau
On Sat, Feb 04, 2012 at 03:44:45PM +0100, Patrick Mézard wrote:
> On 04/02/12 15:01, Baptiste wrote:
> > Hi everybody,
> > 
> > Please find in attachment a patch which provides IMAP health checking.
> > 
> > Note that it also provides a clean application logout :)
> > 
> > It applies to HAProxy git HEAD.
> > 
> > I've tested it on my courier IMAP server and it works as expected.
> > You can test it with the simple conf below:
> > listen imap
> > bind 0.0.0.0:1143
> > mode tcp
> > option imapchk
> > server imapserver :143 check
> > 
> > Your feedback is welcome.
> 
> Some remarks about the documentation:
> - It would be great if Willy could weigh in on the option naming: *chk /
> *-check / *-chk

I agree with Patrick here. We're trying to get rid of shortened names, which
are too confusing even if they are quicker to type. Baptiste, could you please
rename it "imap-check" so that it follows the new naming convention?
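If the rename goes through, the test config from the patch above would simply
become something like this (the "imap-check" spelling is only the naming
proposal being discussed here, not an existing keyword, and the server address
is a placeholder):

listen imap
    bind 0.0.0.0:1143
    mode tcp
    option imap-check
    server imapserver 192.0.2.15:143 check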

(...)
> Sidenote: it would be great to unify all the health check descriptions. All
> of them describe the check with more or less detail, then sometimes add
> considerations about logging and whatnot.

Yes, I think we should have a full section dedicated to health checks and
tracking in the doc. This would also reduce cross-references. Any volunteers
to contribute this? We need to address the following points in the check
section:

- health checks: introduction / why to perform health checks. Some of
  this is already present in other sections.
- describe the 4 (?) server states: UP, DOWN, NOLB, MAINT and their
  transitions
- describe the slowstart mechanism
- describe the disable-on-404 mechanism which introduces the NOLB state
- describe the server tracking mechanism and possibilities
- explain how timeout connect and timeout check influence the checks
- describe the impacts of the global "spread-checks" setting
- explain how a health check uses "source" and "port" parameters to
  establish a connection
- reminder that checks are per-process
- enumerate all the currently supported health check methods with their
  parameters

Right now, I *think* that all the text exists in other places in the doc
and has to be moved. There will obviously have to be some glue to tie all
this together, and to place references to this section in the places the
text was stolen from (at least to give pointers to full descriptions of
some options). What is really missing right now are examples for all
concepts and methods, but a dedicated section would be a much more suitable
place to provide them.
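To give an idea of the kind of examples such a section could host, here is a
rough sketch combining several of the mechanisms listed above (the addresses
and timings are made up):

global
    spread-checks 5                 # add some random jitter between checks

backend app
    option httpchk GET /alive       # layer 7 health check
    http-check disable-on-404       # a 404 puts the server in the NOLB state
    timeout check 2s                # cap the time allowed for a check response
    server web1 192.0.2.41:80 check port 8080 inter 2s rise 2 fall 3 slowstart 30s
    server web2 192.0.2.42:80 track app/web1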

If someone is interested in doing this work, it would be really great and
I would be happy to help with it.

Thanks,
Willy




Re: Testing Stick table replication with snapshot 20120222

2012-02-22 Thread Willy Tarreau
Hi,

On Wed, Feb 22, 2012 at 05:54:48PM +0100, Baptiste wrote:
> Hey,
> 
> Why not use "balance source" instead of stick tables to do IP
> source affinity?

The main difference between "balance source" and "stick on src" is that
with the former, when you lose a server, all clients are redistributed,
while with the latter only the clients attached to the failed server are
redistributed. Also, when the failed server comes back, clients are moved
again with "balance source". "balance source" + "hash-type consistent"
at least fixes the first issue, but the second one remains. I'm not fond of
stick tables at all, but I must admit they address real-world issues :-)
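For the record, the two approaches look roughly like this (addresses are
placeholders):

# affinity by source hashing; mappings move when the farm changes
listen app_hash 192.0.2.1:80
    mode tcp
    balance source
    hash-type consistent            # limits remapping when a server is lost
    server s1 192.0.2.51:80 check
    server s2 192.0.2.52:80 check

# affinity by stick table; only clients of a failed server are moved
listen app_stick 192.0.2.1:81
    mode tcp
    balance leastconn
    stick-table type ip size 200k expire 30m
    stick on src
    server s1 192.0.2.51:80 check
    server s2 192.0.2.52:80 check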

> Note that the behavior you're observing is not a bug, it's by design
> :) There is no master on the table.

Upon restart, there is a special state where the new peer connects to
the other ones and asks them to dump all of their table contents. So this
issue should not happen at all, or it's a bug. We observed this
behaviour during the development of the feature, but it has never been
observed since the feature was released. Maybe we recently broke something.
Mark, what version are you using? Do you have any patches applied?

Regards,
Willy




RE: HAProxy Support

2012-02-22 Thread Jens Dueholm Christensen (JEDC)
For what it's worth..

I think you are overcomplicating your setup here.

Unless the last "leg" of the connection, between haproxy and the backend https
server, runs over an insecure network (i.e. the internet or a large LAN with no
absolute control over the traffic flowing on the LAN), why insist that the
connections really are encrypted?

I mean - from the client to haproxy there's a pretty obvious reason for it, but
since you must be in full control of what happens to the traffic as soon as it
hits haproxy, there is no need for the overhead of talking https to the
backends.

I fully understand that your application requires logins etc to be encrypted, 
but as long as the connection between the client and haproxy has come through 
stunnel (you already have the listener on port 81 in place), you can have 
haproxy add a header that tells the backend that this request really was 
secure, and then have your app check that header instead of insisting on an SSL 
connection.
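Something along these lines, which is essentially what your port-81 listener
below already does with X-Forwarded-Proto (the exact header name your app
checks is up to you, and the backend address is taken from your config as an
example):

listen https_in 0.0.0.0:81
    mode http
    # requests reaching this listener came in through stunnel
    reqadd X-Forwarded-Proto:\ https
    server FE04 192.168.20.30:80 check   # plain http to the backend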


And... even if you had no control over the last leg, I wouldn't necessarily
rely on yet another stunnel encrypting every request to the backend. Why not
use an SSL-based VPN connection like OpenVPN?

That way your - perhaps far-away (network-wise) - backend would be quite local
to haproxy, with no risk of any middlemen intercepting the individual
connections or content.
Yes, there is overhead with any VPN, but there is with stunnel too.

Regards,
Jens Dueholm Christensen


From: Tulga Kalayci [tulga_kala...@hotmail.com]
Sent: 22 February 2012 19:25
To: er...@dekp.net; cyril.bo...@free.fr; bed...@gmail.com; haproxy@formilux.org
Subject: RE: HAProxy Support

Hi Cyril (& Baptiste),
My name is Tulga; I work with Erick on our haproxy / stunnel implementation.
As Erick mentioned before, we are trying to achieve the following:

a) https client -> stunnel -> haproxy -> stunnel -> https server
b) http client -> haproxy -> http server

In the end, if the page the customer wants to visit is an https-based page
like login, traffic should flow according to option (a). For regular http
pages, traffic should flow according to option (b). I understand from your
answer that flow (a) can be achieved using stunnel as a client. However, we
are not very sure how to configure stunnel for this scenario.

Our current implementation is based on examples from the internet. Basically:
stunnel listens on 443 and sends the decrypted traffic to local port 81.
haproxy listens on local ports 80 & 81 and, after inserting cookies, sends
traffic to remote ports 80 and 81.

However, since we lose SSL at the stunnel level, our IIS web server does not
behave correctly for SSL-required pages like login, etc.

The current configuration of stunnel is as follows:

---
; Protocol version (all, SSLv2, SSLv3, TLSv1)
sslVersion = SSLv3

; Some security enhancements for UNIX systems - comment them out on Win32
chroot = /var/lib/stunnel4/
setuid = stunnel4
setgid = stunnel4
pid = /stunnel4.pid

; Some performance tunings
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
;compression = zlib

; Some debugging stuff useful for troubleshooting
#debug = 7
output = /var/log/stunnel4/stunnel.log

; Use it for client mode
;client = yes

;Service-level configuration

[https]
cert=/etc/stunnel/wildcard_com.crt
key=/etc/stunnel/wildcard_com.key
accept=443
connect=81
xforwardedfor=yes
TIMEOUTclose = 0

=
and our haproxy config is as follows:
=

global
log /dev/log user debug
maxconn 2
#chroot /usr/share/haproxy
user haproxy
group haproxy
daemon
#debug
#quiet

defaults
log global
retries 3
option redispatch
maxconn 2000
option dontlognull
balance leastconn
clitimeout 6
srvtimeout 6
contimeout 5000


listen http 0.0.0.0:80
mode http
cookie WEBSERVERID insert
#option httplog
option tcplog
balance source
option forwardfor except 192.168.192.5
option httpclose
option redispatch
maxconn 1
reqadd X-Forwarded-Proto:\ http
server FE04 192.168.20.30:80 cookie A maxconn 5000

listen https 0.0.0.0:81
mode http
cookie WEBSERVERID insert
#option httplog
option tcplog
balance source
option forwardfor except 192.168.192.5
option httpclose
option redispatch
maxconn 1
reqadd X-Forwarded-Proto:\ https
server FE04 192.168.20.30:81 cookie A maxconn 5000



If you can provide some assistance, maybe an example of how to use stunnel as
a client (and a server at the same time, I guess), we would greatly appreciate it.
Thank you

Tulga, Erick




- Original Message -
From: Erick Chinchilla Berrocal [mailto:er...@dekp.net]
To: tulga_kala...@hotmail.com
Sent: Wed, 22 Feb 2012 10:40:08 -0600
Subject: Fwd: Re: HAProxy Support



 Original Message 
Subject:Re: HAProxy Support
Date:   Wed, 22 Feb 2012 12:29:07 +0100
From:   Baptiste 
To: Cyril Bonté 

RE: HAProxy Support

2012-02-22 Thread Tulga Kalayci

Hi Cyril (& Baptiste),
My name is Tulga; I work with Erick on our haproxy / stunnel implementation.
As Erick mentioned before, we are trying to achieve the following:

a) https client -> stunnel -> haproxy -> stunnel -> https server
b) http client -> haproxy -> http server

In the end, if the page the customer wants to visit is an https-based page
like login, traffic should flow according to option (a). For regular http
pages, traffic should flow according to option (b). I understand from your
answer that flow (a) can be achieved using stunnel as a client. However, we
are not very sure how to configure stunnel for this scenario.

Our current implementation is based on examples from the internet. Basically:
stunnel listens on 443 and sends the decrypted traffic to local port 81.
haproxy listens on local ports 80 & 81 and, after inserting cookies, sends
traffic to remote ports 80 and 81.

However, since we lose SSL at the stunnel level, our IIS web server does not
behave correctly for SSL-required pages like login, etc.

The current configuration of stunnel is as follows:

---
; Protocol version (all, SSLv2, SSLv3, TLSv1)
sslVersion = SSLv3

; Some security enhancements for UNIX systems - comment them out on Win32
chroot = /var/lib/stunnel4/
setuid = stunnel4
setgid = stunnel4
pid = /stunnel4.pid

; Some performance tunings
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
;compression = zlib

; Some debugging stuff useful for troubleshooting
#debug = 7
output = /var/log/stunnel4/stunnel.log

; Use it for client mode
;client = yes

;Service-level configuration

[https]
cert=/etc/stunnel/wildcard_com.crt
key=/etc/stunnel/wildcard_com.key
accept=443
connect=81
xforwardedfor=yes
TIMEOUTclose = 0

=
and our haproxy config is as follows:
=

global
log /dev/log user debug
maxconn 2
#chroot /usr/share/haproxy
user haproxy
group haproxy
daemon
#debug
#quiet

defaults
log global
retries 3
option redispatch
maxconn 2000
option dontlognull
balance leastconn
clitimeout 6
srvtimeout 6
contimeout 5000


listen http 0.0.0.0:80
mode http
cookie WEBSERVERID insert
#option httplog
option tcplog
balance source
option forwardfor except 192.168.192.5
option httpclose
option redispatch
maxconn 1
reqadd X-Forwarded-Proto:\ http
server FE04 192.168.20.30:80 cookie A maxconn 5000

listen https 0.0.0.0:81
mode http
cookie WEBSERVERID insert
#option httplog
option tcplog
balance source
option forwardfor except 192.168.192.5
option httpclose
option redispatch
maxconn 1
reqadd X-Forwarded-Proto:\ https
server FE04 192.168.20.30:81 cookie A maxconn 5000 



If you can provide some assistance, maybe an example of how to use stunnel as
a client (and a server at the same time, I guess), we would greatly appreciate it.
Thank you

Tulga, Erick



- Original Message -
From: Erick Chinchilla Berrocal [mailto:er...@dekp.net]
To: tulga_kala...@hotmail.com
Sent: Wed, 22 Feb 2012 10:40:08 -0600
Subject: Fwd: Re: HAProxy Support



  


  
  




 Original Message 
Subject:Re: HAProxy Support
Date:   Wed, 22 Feb 2012 12:29:07 +0100
From:   Baptiste 
To: Cyril Bonté 
CC: haproxy@formilux.org, Erick Chinchilla Berrocal

Hi Cyril,

I was not available these last days and I'm happy you were available to help people :)
Let me keep on helping Erick.

Basically, this kind of configuration is doable with stunnel.
Stunnel has a "client" mode, so it can be used to connect to a
backend server over SSL while you keep using all the layer 7 smart stuff
from haproxy and still provide SSL connections from client to server.
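A rough sketch of the stunnel side (the service names and the local port 8443
are only placeholders, and this assumes the IIS backend serves https on 443):
one server-mode service terminating SSL in front of haproxy, plus one
client-mode service re-encrypting haproxy's output towards the https backend:

; server mode: terminate SSL from clients, hand clear text to haproxy
[https-in]
cert = /etc/stunnel/wildcard_com.crt
key = /etc/stunnel/wildcard_com.key
accept = 443
connect = 127.0.0.1:81

; client mode: re-encrypt towards the https backend
[https-out]
client = yes
accept = 127.0.0.1:8443
connect = 192.168.20.30:443

The haproxy "https" listen section would then point its server line at
127.0.0.1:8443 instead of the backend's port 81.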

cheers

On Tue, Feb 21, 2012 at 10:36 PM, Cyril Bonté  wrote:
> Hi all,
>
> On 21/02/2012 21:10, Erick Chinchilla Berrocal wrote:
>
>> Hi
>> Thanks for your reply
>> I made this change but the problem continues
>> any other idea
>
>
> OK, after discussing with Erick, it appears that the configuration worked,
> but Erick needed to re-encrypt the haproxy output into HTTPS.
>
> Not sure it will still be a requirement, but in case it is, I pointed him
> to stunnel client mode for such a "complex" case:
> https client -> stunnel -> haproxy -> stunnel -> https server
>
> I won't be able to help him these next days, but maybe someone else can
> take over if he needs more help ;-)
>
>
> --
> Cyril Bonté
>
  

Re: Testing Stick table replication with snapshot 20120222

2012-02-22 Thread Baptiste
Hey,

Why not use "balance source" instead of stick tables to do IP
source affinity?

Note that the behavior you're observing is not a bug, it's by design
:) There is no master on the table.
Each process only pushes "writes" to its peers.
So an improvement could be to give HAProxy the ability to push a full
table's content to a peer, or at least, if a peer has no entry in its
table for a request, to ask its peers whether they have one.
But this would slow down your traffic.

Before you ask for it, be aware that only the IPs and the affected
server are synchronized in the tables. No counters are synced.
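If you want to check exactly what each peer holds, you can dump the table on
each instance's stats socket, for example (socket path taken from your config):

echo "show table VIP_Name" | socat unix-connect:/var/run/haproxy.stat stdio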

cheers


On Wed, Feb 22, 2012 at 2:32 PM, Mark Brooks  wrote:
> Hi all,
>
> We have been testing stick table replication and were wondering if
> we could get some clarification on its operation, and possibly make a
> feature request if what we think is happening is indeed happening.
>
> Our configuration is as follows -
>
>
> global
>        daemon
>        stats socket /var/run/haproxy.stat mode 600 level admin
>        pidfile /var/run/haproxy.pid
>        maxconn 4
>        ulimit-n 81000
> defaults
>        mode http
>        balance roundrobin
>        timeout connect 4000
>        timeout client 42000
>        timeout server 43000
> peers loadbalancer_replication
>        peer instance1 192.168.66.94:7778
>        peer instance2 192.168.66.95:7778
> listen VIP_Name
>        bind 192.100.1.2:80
>        mode tcp
>        balance leastconn
>        server backup 127.0.0.1:9081 backup  non-stick
>        stick-table type ip size 10240k expire 30m peers
> loadbalancer_replication
>        stick on src
>        option redispatch
>        option abortonclose
>        maxconn 4
>        server RIP_Name 192.168.66.50  weight 1  check port 80  inter
> 2000  rise 2  fall 3 minconn 0  maxconn 0 on-marked-down
> shutdown-sessions
>        server RIP_Name-1 192.168.66.51:80  weight 1  check   inter
> 2000  rise 2  fall 3 minconn 0  maxconn 0 on-marked-down
> shutdown-sessions
>
>
>
> I have replication working between the devices; our issues come when
> one of the nodes is lost and brought back online.
> For example -
>
> We have 2 copies of haproxy running on 2 machines called instance 1
> and instance 2
>
> Starting setup
> instance 1's persistence table
> entry 1
> entry 2
> entry 3
>
> instance 2's persistence table
> entry 1
> entry 2
> entry 3
>
>
> instance 1 now fails and is no longer communicating with instance 2.
> All the users are now connected to instance 2.
>
> Now instance 1 is brought back online.
>
> The users are still connecting to instance 2, but the persistence
> table entries from instance 2 are only copied to instance 1 when a
> connection is re-established (we see it as the persistence timeout
> counter resetting).
>
> So you can end up with
>
> instance 1's persistence table
> entry 1
>
>
> instance 2's persistence table
> entry 1
> entry 2
> entry 3
>
> If you were to then cause the connections to switch over from instance
> 2 to instance 1, you would be missing 2 persistence entries.
>
> Is this expected behaviour?
>
> If it is, would it be possible to request a feature for a socket
> command of some sort which you can run on a device to force a
> synchronisation of the persistence table with the other peers? So whichever
> instance it is run on takes its persistence table and pushes it to
> the other peers.
>
>
> Mark
>



Testing Stick table replication with snapshot 20120222

2012-02-22 Thread Mark Brooks
Hi all,

We have been testing stick table replication and were wondering if
we could get some clarification on its operation, and possibly make a
feature request if what we think is happening is indeed happening.

Our configuration is as follows -


global
daemon
stats socket /var/run/haproxy.stat mode 600 level admin
pidfile /var/run/haproxy.pid
maxconn 4
ulimit-n 81000
defaults
mode http
balance roundrobin
timeout connect 4000
timeout client 42000
timeout server 43000
peers loadbalancer_replication
peer instance1 192.168.66.94:7778
peer instance2 192.168.66.95:7778
listen VIP_Name
bind 192.100.1.2:80
mode tcp
balance leastconn
server backup 127.0.0.1:9081 backup  non-stick
stick-table type ip size 10240k expire 30m peers
loadbalancer_replication
stick on src
option redispatch
option abortonclose
maxconn 4
server RIP_Name 192.168.66.50  weight 1  check port 80  inter
2000  rise 2  fall 3 minconn 0  maxconn 0 on-marked-down
shutdown-sessions
server RIP_Name-1 192.168.66.51:80  weight 1  check   inter
2000  rise 2  fall 3 minconn 0  maxconn 0 on-marked-down
shutdown-sessions



I have replication working between the devices; our issues come when
one of the nodes is lost and brought back online.
For example -

We have 2 copies of haproxy running on 2 machines called instance 1
and instance 2

Starting setup
instance 1's persistence table
entry 1
entry 2
entry 3

instance 2's persistence table
entry 1
entry 2
entry 3


instance 1 now fails and is no longer communicating with instance 2.
All the users are now connected to instance 2.

Now instance 1 is brought back online.

The users are still connecting to instance 2, but the persistence
table entries from instance 2 are only copied to instance 1 when a
connection is re-established (we see it as the persistence timeout
counter resetting).

So you can end up with

instance 1's persistence table
entry 1


instance 2's persistence table
entry 1
entry 2
entry 3

If you were to then cause the connections to switch over from instance
2 to instance 1, you would be missing 2 persistence entries.

Is this expected behaviour?

If it is, would it be possible to request a feature for a socket
command of some sort which you can run on a device to force a
synchronisation of the persistence table with the other peers? So whichever
instance it is run on takes its persistence table and pushes it to
the other peers.


Mark



Re: Possible SOCAT command bug with "clear table"

2012-02-22 Thread Mark Brooks
On 21 February 2012 19:08, Cyril Bonté  wrote:
> Hi Mark,
>
> On 21/02/2012 15:59, Mark Brooks wrote:
>
>> We are working with HAProxy 1.5dev7 and using stick tables for
>> persistence.
>>
>> Connect with several clients to populate the contents of the table.
>>
>> We were trying to drop the contents of the stick table using the command -
>>
>> echo "clear table VIP_Name" | socat unix-connect:/var/run/haproxy.stat
>> stdio
>>
>> Instead of removing all entries as the manual suggests (" In the case
>> where no options arguments are given all entries will be removed."), it
>> appears to only remove 1 entry at a time.
>
>
> This bug is already fixed in the git repository :
> http://haproxy.1wt.eu/git?p=haproxy.git;a=commit;h=8fa52f4e0e73239ba84754c9b9568ed4571bc4af
>
> The bugfix will be available in 1.5-dev8 or you can already use the last
> snapshot, here :
> http://haproxy.1wt.eu/download/1.5/src/snapshot/
>
> --
> Cyril Bonté

Thanks very much Cyril, we updated our version of haproxy and it did
indeed solve the problem.

Thanks again

Mark



Re: Grouping servers for failover within a backend

2012-02-22 Thread wsq003

You can try 'use_backend' with ACLs.

For example, you can configure two backends named be1 and be2,
then in the frontend:
acl a1 path_reg some_regex1
acl a2 path_reg some_regex2
use_backend be1 if a1
use_backend be2 if a2

If you carefully design regex1 and regex2, it will work fine.
Notice that if be1 is down, half of the requests will fail (they cannot
switch to be2).
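A slightly fuller sketch of that idea (the backend names, paths and addresses
are only placeholders; the point is that each machine gets its own backend,
selected by an ACL on the path):

frontend www
    bind 0.0.0.0:80
    mode http
    acl a1 path_reg ^/group1/
    acl a2 path_reg ^/group2/
    use_backend be1 if a1
    use_backend be2 if a2
    default_backend be1

backend be1
    mode http
    server m1_web1 192.0.2.61:80 check
    server m1_web2 192.0.2.61:81 check

backend be2
    mode http
    server m2_web1 192.0.2.62:80 check
    server m2_web2 192.0.2.62:81 check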


From: Sachin Shetty
Date: 2012-02-22 18:49
To: haproxy@formilux.org
Subject: Grouping servers for failover within a backend
Hi,
 
We have four web servers in a single backend. Physically these four servers
are on two different machines. A new session is made sticky by hashing on one
of the headers.
 
Regular flow is OK, but when one of the webservers is down for an in-flight
session, the request should be re-dispatched to the webserver on the same
machine if available. I looked at various options in the config, but couldn't
figure out a way to do it. Has anybody achieved anything similar with some
config tweaks?
 
 
Thanks
Sachin

Re: HAProxy Support

2012-02-22 Thread Baptiste
Hi Cyril,

I was not available these last days and I'm happy you were available to help people :)
Let me keep on helping Erick.

Basically, this kind of configuration is doable with stunnel.
Stunnel has a "client" mode, so it can be used to connect to a
backend server over SSL while you keep using all the layer 7 smart stuff
from haproxy and still provide SSL connections from client to server.

cheers

On Tue, Feb 21, 2012 at 10:36 PM, Cyril Bonté  wrote:
> Hi all,
>
> On 21/02/2012 21:10, Erick Chinchilla Berrocal wrote:
>
>> Hi
>> Thanks for your reply
>> I made this change but the problem continues
>> any other idea
>
>
> OK, after discussing with Erick, it appears that the configuration worked,
> but Erick needed to re-encrypt the haproxy output into HTTPS.
>
> Not sure it will still be a requirement, but in case it is, I pointed him
> to stunnel client mode for such a "complex" case:
> https client -> stunnel -> haproxy -> stunnel -> https server
>
> I won't be able to help him these next days, but maybe someone else can
> take over if he needs more help ;-)
>
>
> --
> Cyril Bonté
>



Grouping servers for failover within a backend

2012-02-22 Thread Sachin Shetty
Hi,

 

We have four web servers in a single backend. Physically these four servers
are on two different machines. A new session is made sticky by hashing on one
of the headers.

 

Regular flow is OK, but when one of the webservers is down for an in-flight
session, the request should be re-dispatched to the webserver on the same
machine if available. I looked at various options in the config, but couldn't
figure out a way to do it. Has anybody achieved anything similar with some
config tweaks?

 

 

Thanks

Sachin