Re: http responses randomly getting RSTs

2014-02-24 Thread Klavs Klavsen
Willy Tarreau said the following on 02/20/2014 08:44 PM:
[CUT]
> Or you can also use this old script I used many years ago to detect such 
> issues :
> 
> #!/bin/bash
> old=$(date +%s);
> while : ; do
>   new=$(date +%s);
>   if [ $new -lt $old ]; then
>     echo "Time jumps backwards : $old => $new"
>   elif [ $new -gt $((old+1)) ]; then
>     echo "Time jumps forwards  : $old => $new"
>   fi
>   old=$new
> done
> 
I had this running for a few minutes - while I was able to reproduce the
408 issue.. and it echoed nothing. I do monitor time and normally catch
time jumps (I didn't even need to have 'tinker panic 0' set in ntpd.conf).
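Worth noting: the loop above polls once per second, so it can only catch steps of roughly a second or more. A finer-grained sketch (not from the original thread; it assumes Linux's /proc/uptime and GNU date with %N support) compares the wall clock against monotonic uptime, whose difference only moves when the clock is stepped:

```shell
#!/bin/bash
# Offset between wall-clock time and monotonic uptime; this value stays
# stable unless the system clock is stepped (e.g. by ntpdate -b).
clock_offset() {
    awk -v real="$(date +%s.%N)" '{printf "%.3f\n", real - $1}' /proc/uptime
}

old=$(clock_offset)
for i in 1 2 3 4 5; do
    sleep 0.2
    new=$(clock_offset)
    # report any step larger than 0.5s in either direction
    awk -v o="$old" -v n="$new" 'BEGIN {
        d = n - o; if (d < 0) d = -d;
        if (d > 0.5) printf "Clock stepped by %.3fs\n", d
    }'
    old=$new
done
```

On a machine whose clock is not being stepped, the loop prints nothing.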

[CUT]
> I just checked, and the only two situations where we may report a 408 are:
>   - when detecting a timeout waiting for a request ;
>   - when detecting a timeout waiting for a body in "balance url_param check-post"
> 
> But only the former will return "cR". So there's no risk of inheriting this
> from somewhere else.
> 
> The condition to enter this block is the following (proto_http.c:2486) :
> 
>   if (req->flags & CF_READ_TIMEOUT || tick_is_expired(req->analyse_exp, now_ms))
> 
> So I'm seeing two possibilities :
>   - the CF_READ_TIMEOUT flag was present on the channel
>   - the http-request delay was really expired.
> 
> Since the logs showed that the delay was very short, either it's the first
> case, or we're facing a case where analyse_exp is not properly set. This
> expiration is set from "timeout http-request" or from "timeout http-keep-alive"
> when processing the second and subsequent requests. So here I don't think it
> can be the issue.
> 
> Thus in my opinion, the only remaining possibility is that the CF_READ_TIMEOUT
> flag is set. It is normally set if the read timeout is reached. So here it will
> be "timeout client". But that doesn't seem to make much sense, especially if
> you manage to reproduce it easily.
> 

Last time it took me ~20 reloads of the page, in a fresh Firefox, to
reproduce it.. but on IE (on a buddy's machine) it reproduces more easily.

> One thing I'm noticing is that this flag is not cleared between two consecutive
> requests over the same connection. So we could imagine a scenario where a read
> timeout is detected on the first request, but an event arrives just in time, at
> the exact same moment, resulting in the data still being processed while the
> flag is set. The request completes and reinitializes to handle the next request
> over the connection, but the flag is not cleared. Thus the second request would
> be the victim of this inherited timeout. It seems a bit far-fetched, but it
> could explain what you're seeing.
> 

> Could you please add the following patch to proto_http.c (1.5.22)? It will log
> at debug level a few flags and timers to try to better find what is happening
> when this occurs. You'll need your syslog daemon to log debug-level entries;
> maybe they're stored in a different file (e.g. /var/log/debug). Otherwise you
> can change the level LOG_DEBUG to something else such as LOG_INFO. Warning: I
> copy-pasted both the patches below, so you'll have to copy the lines or they
> won't apply due to mangled spaces.
> 
I'll apply the patch and build a new rpm.. will report back later today.

Thank you very much for your assistance.

-- 
Regards,
Klavs Klavsen, GSEC - k...@vsen.dk - http://www.vsen.dk - Tlf. 61281200

"Those who do not understand Unix are condemned to reinvent it, poorly."
  --Henry Spencer




Re: http responses randomly getting RSTs

2014-02-24 Thread Klavs Klavsen
Finally came back here :)

Lukas Tribus said the following on 02/20/2014 07:16 PM:
[CUT]
> Try removing "timeout client" as well (never ever do this in production).
> You will see a startup warning, ignore it and test if you still can
> reproduce it.
> 
> Without "timeout http-request" and "timeout client" you probably don't see
> this issue.
> 

I tried removing timeout client - and it did complain loudly about it
when restarting, but I still get 408s. (You can also verify yourself:
it shows up pretty much consistently if I press "enter" on the URL and
then press F5 after it has loaded.)

I'll run the script to test timejumps in a second :)

-- 
Regards,
Klavs Klavsen, GSEC - k...@vsen.dk - http://www.vsen.dk - Tlf. 61281200

"Those who do not understand Unix are condemned to reinvent it, poorly."
  --Henry Spencer




inspecting incoming tcp content

2014-02-24 Thread anup katariya
Hi,

I want to inspect the incoming TCP request; I want to do something like
the following:

match payload(0, 100) against a string like 49=ABC.

Thanks,
Anup
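Something along these lines should work as a sketch (frontend/backend names and the port are illustrative; on older 1.4 versions the fetch is spelled payload(0,100) rather than req.payload(0,100)):

```
frontend fix_in
    mode tcp
    bind :9000
    # buffer payload for up to 5s before running content rules
    tcp-request inspect-delay 5s
    # look for "49=ABC" anywhere in the first 100 bytes
    acl tag_match req.payload(0,100) -m sub 49=ABC
    tcp-request content accept if tag_match
    tcp-request content reject
    default_backend fix_servers

backend fix_servers
    mode tcp
    server fix1 10.0.0.21:9000
```

If the string match is not accepted on the binary sample in your version, matching the bytes in hex with -m bin is an alternative.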



Re: consistent server status between config reloads

2014-02-24 Thread Baptiste
Hi,

Your script must flag your server as disabled in your configuration file.

Baptiste
 On 24 Feb 2014 19:06, "Craig Craig" wrote:

>   Hi,
>
>  I'm running some scripts that can disable a server for
> maintenance/application deployments. However a config reload enables the
> server again, and we have frequent changes to our haproxy config. Would it
> be possible to leave disabled servers in that state between reloads? Maybe
> with an additional config option like disable_permanent or the like?
>  Any opinions on this?
>
>  Best regards,
>
>  Craig
>
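Concretely, the configuration-file flag mentioned above is the per-server "disabled" keyword, so a maintenance script can rewrite the server line before any reload (a sketch; names and addresses are illustrative):

```
backend app
    # normal operation
    server web1 10.0.0.11:80 check
    # during maintenance the script rewrites this line as:
    #   server web1 10.0.0.11:80 check disabled
    # so the server comes up in maintenance mode after every reload
```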


Re: When is it needed to reload HAProxy process?

2014-02-24 Thread Behrooz Nobakht
Thanks Jonathan for clarifying the problem further for me.

So, the way I would go about what I want is to:

  1.  Apply the necessary changes to haproxy.cfg so that on the next
restart/reload they are picked up and in a consistent state
  2.  Use socat to apply the same changes at runtime so they take effect
immediately

Now, for me the question becomes, about the runtime part:

  1.  Does using socat completely allow me to add, remove, enable, and disable
endpoints and rules at runtime?
  2.  Is there another way besides socat to achieve this at runtime?

Thanks in advance,
Behrooz
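As a sketch of the runtime side (the socket path and backend/server names are illustrative, and assume an admin-level stats socket is configured): in 1.5 the stats socket can enable/disable and re-weight existing servers, but it cannot add or remove servers or change rules, so structural changes still require editing haproxy.cfg and reloading.

```shell
# haproxy.cfg must expose an admin-level stats socket:
#   global
#       stats socket /var/run/haproxy.sock level admin

# put a server into maintenance, then bring it back
echo "disable server mybackend/web1" | socat stdio /var/run/haproxy.sock
echo "enable server mybackend/web1"  | socat stdio /var/run/haproxy.sock

# inspect the current state
echo "show stat" | socat stdio /var/run/haproxy.sock
```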

On 02/24/2014 08:18 PM, Jonathan Matthews wrote:

On 24 February 2014 18:31, Behrooz Nobakht wrote:


Hello there,

I am not an expert in HAProxy and tried to find an answer in the docs, but
it is not really clear to me yet.

Here is my situation, I have a script that modifies haproxy.cfg; e.g. it may
add new endpoints, remove them, enable or disable them.

I want to verify for which of the situations above the HAProxy process must
be reloaded/restarted so that the configuration takes effect.


To answer your exact question: absolutely no changes to the config
file on disk will be picked up by the HAProxy process(es) without you
first restarting/reloading the process.

NB Do note that that doesn't answer the wider question about making
changes to a running process' *idea* of what its configuration should
be. That can be done in a variety of ways, but not via the config file
itself. I'm not best placed to help you with that, however.

Jonathan



--
Behrooz Nobakht | Senior Software Engineer | SDL | (m) +31 (0) 611770610 | 
Twitter: @behruz






Re: When is it needed to reload HAProxy process?

2014-02-24 Thread Jonathan Matthews
On 24 February 2014 18:31, Behrooz Nobakht  wrote:
> Hello there,
>
> I am not an expert in HAProxy and tried to find an answer in the docs, but
> it is not really clear to me yet.
>
> Here is my situation, I have a script that modifies haproxy.cfg; e.g. it may
> add new endpoints, remove them, enable or disable them.
>
> I want to verify for which of the situations above the HAProxy process must
> be reloaded/restarted so that the configuration takes effect.

To answer your exact question: absolutely no changes to the config
file on disk will be picked up by the HAProxy process(es) without you
first restarting/reloading the process.

NB Do note that that doesn't answer the wider question about making
changes to a running process' *idea* of what its configuration should
be. That can be done in a variety of ways, but not via the config file
itself. I'm not best placed to help you with that, however.

Jonathan



When is it needed to reload HAProxy process?

2014-02-24 Thread Behrooz Nobakht
Hello there,

I am not an expert in HAProxy and tried to find an answer in the docs, but it
is not really clear to me yet.

Here is my situation, I have a script that modifies haproxy.cfg; e.g. it may 
add new endpoints, remove them, enable or disable them.

I want to verify for which of the situations above the HAProxy process must be
reloaded/restarted so that the configuration takes effect.

Thanks in advance,
Behrooz




consistent server status between config reloads

2014-02-24 Thread Craig Craig
Hi,

I'm running some scripts that can disable a server for maintenance/application
deployments. However a config reload enables the server again, and we have
frequent changes to our haproxy config. Would it be possible to leave disabled
servers in that state between reloads? Maybe with an additional config option
like disable_permanent or the like?
Any opinions on this?

Best regards,

Craig

AW: AW: Release Date of HAProxy version 1.5

2014-02-24 Thread Andreas Mock
Hi Kobus,

so, the most interesting part is missing...  :-)

I had really hoped you would use a clonable resource agent
running a haproxy instance on every node, transmitting
the current status via the peers config, so that a switchover
wouldn't lose state. In this scenario it would be interesting
to see which IP resource is used, as haproxy would otherwise try to
bind to a nonexistent IP. Probably a ClusterIP resource.

Thank you anyway.

Best regards
Andreas Mock


From: Kobus Bensch [mailto:kobus.ben...@trustpayglobal.com]
Sent: Monday, 24 February 2014 16:53
To: Andreas Mock; haproxy@formilux.org
Subject: Re: AW: Release Date of HAProxy version 1.5

Hi Andreas

Sure. Here is my pacemaker config. I have tried to use the HAProxy resource,
but for some reason it does not switch properly when one node fails. So for
now, due to time constraints, I have written a script to check which node is
master, start haproxy on that node, and stop it on the slave. I will have
another go at getting the HAProxy resource to work, unless of course someone
on the list is already doing it and is willing to share some info.

node dc1ntgalb0101.domain.com
node dc1ntgalb0102.domain.com
primitive dc1ntgaip01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.110" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntgblv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.119" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntgilv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.125" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntgrlv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.122" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntgxlv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.116" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntiblv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.121" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntiilv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.127" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntirlv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.124" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntixlv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.118" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntmblv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.120" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntmilv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.126" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntmrlv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.123" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntmxlv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.117" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
colocation proxyservices inf: dc1ntgaip01 dc1ntgxlv01 dc1ntmxlv01 dc1ntixlv01 
dc1ntgblv01 dc1ntmblv01 dc1ntiblv01 dc1ntgrlv01 dc1ntmrlv01 dc1ntirlv01 
dc1ntgilv01 dc1ntmilv01 dc1ntiilv01
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6_5.2-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
On 24/02/2014 15:38, Andreas Mock wrote:
Hi Kobus,

I would be interested in your pacemaker agent config?
Can you post your pacemaker snippet?
What resource agent are you using?

Thank you in advance.

Regards
Andreas Mock

From: Kobus Bensch [mailto:kobus.ben...@trustpayglobal.com]
Sent: Monday, 24 February 2014 10:55
To: haproxy@formilux.org
Subject: Re: Release Date of HAProxy version 1.5

Hi

I asked the same question and was advised to at least start testing. I have 
done so and can say that we have not experienced any downtime due to haproxy. I 
use it in conjunction with corosync/pacemaker.

I would probably advise the same. Start testing now.

Kobus
On 24/02/2014 09:29, Haroon Asher wrote:
Hello,
We have been using HAProxy for the past 3 years in our production environment.
Now we are looking to use an SSL certificate on our site, for which we will
require haproxy version 1.5. Can you please let us know when its release is
planned, or by when it will be stable enough to be used in production?
I appreciate your response on that.

Re: AW: Release Date of HAProxy version 1.5

2014-02-24 Thread Kobus Bensch

Hi Andreas

Sure. Here is my pacemaker config. I have tried to use the HAProxy 
resource, but for some reason it does not switch properly when one node 
fails. So for now, due to time constraints, I have written a script to 
check which node is master, start haproxy on that node, and stop it on 
the slave. I will have another go at getting the HAProxy resource to 
work, unless of course someone on the list is already doing it and is 
willing to share some info.


node dc1ntgalb0101.domain.com
node dc1ntgalb0102.domain.com
primitive dc1ntgaip01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.110" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntgblv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.119" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntgilv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.125" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntgrlv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.122" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntgxlv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.116" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntiblv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.121" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntiilv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.127" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntirlv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.124" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntixlv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.118" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntmblv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.120" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntmilv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.126" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntmrlv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.123" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
primitive dc1ntmxlv01 ocf:heartbeat:IPaddr2 \
params ip="10.11.115.117" cidr_netmask="24" \
op monitor interval="30s" \
meta target-role="Started"
colocation proxyservices inf: dc1ntgaip01 dc1ntgxlv01 dc1ntmxlv01 
dc1ntixlv01 dc1ntgblv01 dc1ntmblv01 dc1ntiblv01 dc1ntgrlv01 dc1ntmrlv01 
dc1ntirlv01 dc1ntgilv01 dc1ntmilv01 dc1ntiilv01

property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6_5.2-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"

On 24/02/2014 15:38, Andreas Mock wrote:


Hi Kobus,

I would be interested in your pacemaker agent config?

Can you post your pacemaker snippet?

What resource agent are you using?

Thank you in advance.

Regards

Andreas Mock

*From:* Kobus Bensch [mailto:kobus.ben...@trustpayglobal.com]
*Sent:* Monday, 24 February 2014 10:55
*To:* haproxy@formilux.org
*Subject:* Re: Release Date of HAProxy version 1.5

Hi

I asked the same question and was advised to at least start testing. I 
have done so and can say that we have not experienced any downtime due 
to haproxy. I use it in conjunction with corosync/pacemaker.


I would probably advise the same. Start testing now.

Kobus

On 24/02/2014 09:29, Haroon Asher wrote:

Hello,

We have been using HAProxy for the past 3 years in our production environment.

Now we are looking to use an SSL certificate on our site, for which we
will require haproxy version 1.5. Can you please let us know when its
release is planned, or by when it will be stable enough to be used in
production?

I appreciate your response on that.

Regards,

Haroon

--
Kobus Bensch
Senior Systems Administrator
Address:  22 & 24 | Frederick Sanger Road | Guildford | Surrey | GU2 7YD
DDI:  0207 871 3958
Tel:  0207 871 3890
Email: kobus.ben...@trustpayglobal.com 



Trustpay Global Limited is an authorised Electronic Money Institution 
regulated by the Financial Conduct Authority registration number 
900043. Company No 07427913 Registered in England and Wales with 
registered address 130 Wood Street, London, EC2V 6DL, United Kingdom.


For further details please v

AW: Release Date of HAProxy version 1.5

2014-02-24 Thread Andreas Mock
Hi Kobus,

I would be interested in your pacemaker agent config?
Can you post your pacemaker snippet?
What resource agent are you using?

Thank you in advance.

Regards
Andreas Mock

From: Kobus Bensch [mailto:kobus.ben...@trustpayglobal.com]
Sent: Monday, 24 February 2014 10:55
To: haproxy@formilux.org
Subject: Re: Release Date of HAProxy version 1.5

Hi

I asked the same question and was advised to at least start testing. I have 
done so and can say that we have not experienced any downtime due to haproxy. I 
use it in conjunction with corosync/pacemaker.

I would probably advise the same. Start testing now.

Kobus
On 24/02/2014 09:29, Haroon Asher wrote:
Hello,
We have been using HAProxy for the past 3 years in our production environment.
Now we are looking to use an SSL certificate on our site, for which we will
require haproxy version 1.5. Can you please let us know when its release is
planned, or by when it will be stable enough to be used in production?
I appreciate your response on that.
Regards,
Haroon

--
Kobus Bensch
Senior Systems Administrator
Address:  22 & 24 | Frederick Sanger Road | Guildford | Surrey | GU2 7YD
DDI:  0207 871 3958
Tel:  0207 871 3890
Email:  kobus.ben...@trustpayglobal.com

Keeping statistics after a reload

2014-02-24 Thread Andreas Mock
Hi all,

is there a way to reload a haproxy config without resetting the
statistics shown on the stats page?

I used

haproxy -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

to make such a reload. But after that all statistics are reset.

Best regards
Andreas Mock
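As far as I know, 1.5 offers no way to carry counters across a reload (the new process always starts with fresh statistics). A common workaround is to snapshot the CSV stats just before reloading (a sketch; paths are illustrative and assume a stats socket is configured):

```shell
# save the old process's counters, then reload
echo "show stat" | socat stdio /var/run/haproxy.sock \
    > /var/tmp/haproxy-stats.$(date +%s).csv
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid \
    -sf $(cat /var/run/haproxy.pid)
```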




Re: Regression in 503 handling in v1.5-dev19-345-g6b726ad

2014-02-24 Thread Willy Tarreau
On Mon, Feb 24, 2014 at 03:38:44PM +0100, Finn Arne Gangstad wrote:
> Thanks, verified both in the trivial test case and on a production server,
> now seems to work as before.

cool, thanks for testing.

Willy




Re: Regression in 503 handling in v1.5-dev19-345-g6b726ad

2014-02-24 Thread Finn Arne Gangstad
Thanks, verified both in the trivial test case and on a production server,
now seems to work as before.

- Finn Arne



On Mon, Feb 24, 2014 at 2:56 PM, Willy Tarreau  wrote:

> Hi Finn Arne,
>
> On Mon, Feb 24, 2014 at 01:44:46PM +0100, Finn Arne Gangstad wrote:
> > For clients that are keeping their connection open, 503 replies from
> > haproxy seem to be silenced. The connection is just closed without
> sending
> > the 503 reply, if an earlier request has already been served on the same
> > connection.
> >
> > git bisect tells me this appeared in
> > 6b726adb35d998eb55671c0d98ef889cb9fd64ab, MEDIUM: http: do not report
> > connection errors for second and further requests
> >
> > This kills haproxy in all our production setups since we base pretty much
> > all monitoring and small time static file serving on empty haproxy
> backends
> > with more or less clever 503 error files. As far as I can see, none of
> the
> > clients (will) retry the request.
>
> Indeed, I forgot about this usage... I think that we'll really need the
> return directive to get rid of this definitively.
>
> Anyway, could you please try with the following patch ? It limits
> the test to errors resulting from a reuse of the *same* server-side
> connection, which does not happen when you "abuse" the 503 to return
> static contents. Normally it should be OK.
>
> I'm waiting for your confirmation before merging it.
>
> Thanks,
> Willy
>
> diff --git a/include/types/session.h b/include/types/session.h
> index 9b5a5bf..02772a8 100644
> --- a/include/types/session.h
> +++ b/include/types/session.h
> @@ -89,6 +89,7 @@
>  #define SN_IGNORE_PRST 0x0008  /* ignore persistence */
> 
>  #define SN_COMP_READY   0x0010 /* the compression is initialized */
> +#define SN_SRV_REUSED   0x0020 /* the server-side connection was reused */
> 
>  /* WARNING: if new fields are added, they must be initialized in session_accept()
>   * and freed in session_free() !
> diff --git a/src/backend.c b/src/backend.c
> index e561919..4d0dda1 100644
> --- a/src/backend.c
> +++ b/src/backend.c
> @@ -1072,6 +1072,7 @@ int connect_server(struct session *s)
>  	else {
>  		/* the connection is being reused, just re-attach it */
>  		si_attach_conn(s->req->cons, srv_conn);
> +		s->flags |= SN_SRV_REUSED;
>  	}
> 
>  	/* flag for logging source ip/port */
> diff --git a/src/proto_http.c b/src/proto_http.c
> index 1f31dd0..ceae4b9 100644
> --- a/src/proto_http.c
> +++ b/src/proto_http.c
> @@ -992,7 +992,7 @@ void http_return_srv_error(struct session *s, struct stream_interface *si)
>  				  http_error_message(s, HTTP_ERR_503));
>  	else if (err_type & SI_ET_CONN_ERR)
>  		http_server_error(s, si, SN_ERR_SRVCL, SN_FINST_C,
> -				  503, (s->txn.flags & TX_NOT_FIRST) ? NULL :
> +				  503, (s->flags & SN_SRV_REUSED) ? NULL :
>  				  http_error_message(s, HTTP_ERR_503));
>  	else if (err_type & SI_ET_CONN_RES)
>  		http_server_error(s, si, SN_ERR_RESOURCE, SN_FINST_C,
> @@ -4461,7 +4461,7 @@ void http_end_txn_clean_session(struct session *s)
>  	s->req->flags &= ~(CF_SHUTW|CF_SHUTW_NOW|CF_AUTO_CONNECT|CF_WRITE_ERROR|CF_STREAMER|CF_STREAMER_FAST|CF_NEVER_WAIT);
>  	s->rep->flags &= ~(CF_SHUTR|CF_SHUTR_NOW|CF_READ_ATTACHED|CF_READ_ERROR|CF_READ_NOEXP|CF_STREAMER|CF_STREAMER_FAST|CF_WRITE_PARTIAL|CF_NEVER_WAIT);
>  	s->flags &= ~(SN_DIRECT|SN_ASSIGNED|SN_ADDR_SET|SN_BE_ASSIGNED|SN_FORCE_PRST|SN_IGNORE_PRST);
> -	s->flags &= ~(SN_CURR_SESS|SN_REDIRECTABLE);
> +	s->flags &= ~(SN_CURR_SESS|SN_REDIRECTABLE|SN_SRV_REUSED);
> 
>  	if (s->flags & SN_COMP_READY)
>  		s->comp_algo->end(&s->comp_ctx);
> 


Re: Release Date of HAProxy version 1.5

2014-02-24 Thread Gabriel Sosa
I've been using 1.5-dev19 for a while now without issues. Again, like I
said in the past, I'm aware dev21 comes with keep-alive, which I haven't
tested.

regards


On Mon, Feb 24, 2014 at 7:24 AM, Ghislain  wrote:

> On 24/02/2014 10:29, Haroon Asher wrote:
>
>  Hello,
>>
>> We have been using HAProxy for the past 3 years in our production
>> environment.
>>
>> Now we are looking to use an SSL certificate on our site, for which we will
>> require haproxy version 1.5. Can you please let us know when its release is
>> planned, or by when it will be stable enough to be used in production?
>>
>>
>>  hi,
>
> I actually use it in production, and have done so without any issue for
> months. This is very basic usage, but it works fine for us with no glitches
> so far.
>
> For 1.5 we do not have big-traffic web sites, only moderate ones, where we
> use haproxy just to protect apache from basic attacks.
>
> regards,
> Ghislain.
>
>


-- 
Gabriel Sosa
Sometimes the questions are complicated and the answers are simple. -- Dr.
Seuss


Re: Regression in 503 handling in v1.5-dev19-345-g6b726ad

2014-02-24 Thread Willy Tarreau
Hi Finn Arne,

On Mon, Feb 24, 2014 at 01:44:46PM +0100, Finn Arne Gangstad wrote:
> For clients that are keeping their connection open, 503 replies from
> haproxy seem to be silenced. The connection is just closed without sending
> the 503 reply, if an earlier request has already been served on the same
> connection.
> 
> git bisect tells me this appeared in
> 6b726adb35d998eb55671c0d98ef889cb9fd64ab, MEDIUM: http: do not report
> connection errors for second and further requests
> 
> This kills haproxy in all our production setups since we base pretty much
> all monitoring and small time static file serving on empty haproxy backends
> with more or less clever 503 error files. As far as I can see, none of the
> clients (will) retry the request.

Indeed, I forgot about this usage... I think that we'll really need the
return directive to get rid of this definitively.

Anyway, could you please try with the following patch ? It limits
the test to errors resulting from a reuse of the *same* server-side
connection, which does not happen when you "abuse" the 503 to return
static contents. Normally it should be OK.

I'm waiting for your confirmation before merging it.

Thanks,
Willy

diff --git a/include/types/session.h b/include/types/session.h
index 9b5a5bf..02772a8 100644
--- a/include/types/session.h
+++ b/include/types/session.h
@@ -89,6 +89,7 @@
 #define SN_IGNORE_PRST 0x0008  /* ignore persistence */
 
 #define SN_COMP_READY   0x0010 /* the compression is initialized */
+#define SN_SRV_REUSED   0x0020 /* the server-side connection was reused */
 
 /* WARNING: if new fields are added, they must be initialized in session_accept()
  * and freed in session_free() !
diff --git a/src/backend.c b/src/backend.c
index e561919..4d0dda1 100644
--- a/src/backend.c
+++ b/src/backend.c
@@ -1072,6 +1072,7 @@ int connect_server(struct session *s)
 	else {
 		/* the connection is being reused, just re-attach it */
 		si_attach_conn(s->req->cons, srv_conn);
+		s->flags |= SN_SRV_REUSED;
 	}
 
 	/* flag for logging source ip/port */
diff --git a/src/proto_http.c b/src/proto_http.c
index 1f31dd0..ceae4b9 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -992,7 +992,7 @@ void http_return_srv_error(struct session *s, struct stream_interface *si)
 				  http_error_message(s, HTTP_ERR_503));
 	else if (err_type & SI_ET_CONN_ERR)
 		http_server_error(s, si, SN_ERR_SRVCL, SN_FINST_C,
-				  503, (s->txn.flags & TX_NOT_FIRST) ? NULL :
+				  503, (s->flags & SN_SRV_REUSED) ? NULL :
 				  http_error_message(s, HTTP_ERR_503));
 	else if (err_type & SI_ET_CONN_RES)
 		http_server_error(s, si, SN_ERR_RESOURCE, SN_FINST_C,
@@ -4461,7 +4461,7 @@ void http_end_txn_clean_session(struct session *s)
 	s->req->flags &= ~(CF_SHUTW|CF_SHUTW_NOW|CF_AUTO_CONNECT|CF_WRITE_ERROR|CF_STREAMER|CF_STREAMER_FAST|CF_NEVER_WAIT);
 	s->rep->flags &= ~(CF_SHUTR|CF_SHUTR_NOW|CF_READ_ATTACHED|CF_READ_ERROR|CF_READ_NOEXP|CF_STREAMER|CF_STREAMER_FAST|CF_WRITE_PARTIAL|CF_NEVER_WAIT);
 	s->flags &= ~(SN_DIRECT|SN_ASSIGNED|SN_ADDR_SET|SN_BE_ASSIGNED|SN_FORCE_PRST|SN_IGNORE_PRST);
-	s->flags &= ~(SN_CURR_SESS|SN_REDIRECTABLE);
+	s->flags &= ~(SN_CURR_SESS|SN_REDIRECTABLE|SN_SRV_REUSED);
 
 	if (s->flags & SN_COMP_READY)
 		s->comp_algo->end(&s->comp_ctx);



Re: Just a simple thought on health checks after a soft reload of HAProxy....

2014-02-24 Thread Patrick Hemmer
Unfortunately retry doesn't work in our case, as we run haproxy on 2
layers: frontend servers and backend servers (to distribute traffic
among multiple processes on each server). When an app on a server
goes down, the haproxy on that server is still up and accepting
connections, so the layer 7 http checks from the frontend haproxy
fail, but the retry option never triggers because the backend haproxy
still accepts the TCP connection.
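
A minimal sketch of the front-layer side of such a setup (all names and addresses here are hypothetical, not taken from Patrick's config) shows why retries can't help: the L7 check probes the app through the back-layer haproxy, while a plain TCP retry is still happily accepted by that haproxy even when the app behind it is dead:

```
# front-layer haproxy (hypothetical sketch): each "node" is the
# back-layer haproxy on an app server, not the app itself. A TCP
# connect to a node succeeds even when its app is down, so only
# the L7 httpchk ever sees the failure.
frontend fe
    bind :80
    default_backend apps

backend apps
    option httpchk GET /health
    server node1 192.0.2.30:80 check
    server node2 192.0.2.31:80 check
```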

-Patrick


*From: *Baptiste 
*Sent: * 2014-02-24 07:18:00 E
*To: *Malcolm Turnbull 
*CC: *Neil , Patrick Hemmer
, HAProxy 
*Subject: *Re: Just a simple thought on health checks after a soft
reload of HAProxy

> Hi Malcolm,
>
> Hence the retry and redispatch options :)
> I know it's a dirty workaround.
>
> Baptiste
>
>
> On Sun, Feb 23, 2014 at 8:42 PM, Malcolm Turnbull
>  wrote:
>> Neil,
>>
>> Yes, peers are great for passing stick tables to the new HAProxy
>> instance and any current connections bound to the old process will be
>> fine.
>> However  any new connections will hit the new HAProxy process and if
>> the backend server is down but haproxy hasn't health checked it yet
>> then the user will hit a failed server.
>>
>>
>>
>> On 23 February 2014 10:38, Neil  wrote:
>>> Hello
>>>
>>> Regarding restarts, rather that cold starts, if you configure peers the
>>> state from before the restart should be kept. The new process haproxy
>>> creates is automatically a peer to the existing process and gets the state
>>> as was.
>>>
>>> Neil
>>>
>>> On 23 Feb 2014 03:46, "Patrick Hemmer"  wrote:



 
 From: Sok Ann Yap 
 Sent: 2014-02-21 05:11:48 E
 To: haproxy@formilux.org
 Subject: Re: Just a simple thought on health checks after a soft reload of
 HAProxy

 Patrick Hemmer  writes:

   From: Willy Tarreau  1wt.eu>

   Sent:  2014-01-25 05:45:11 E

 Till now that's exactly what's currently done. The servers are marked
 "almost dead", so the first check gives the verdict. Initially we had
 all checks started immediately. But it caused a lot of issues at several
 places where there were a high number of backends or servers mapped to
 the same hardware, because the rush of connection really caused the
 servers to be flagged as down. So we started to spread the checks over
 the longest check period in a farm.

 Is there a way to enable this behavior? In my
 environment/configuration, it causes absolutely no issue that all
 the checks be fired off at the same time.
 As it is right now, when haproxy starts up, it takes it quite a
 while to discover which servers are down.
 -Patrick

 I faced the same problem in http://thread.gmane.org/gmane.comp.web.haproxy/14644

 After much contemplation, I decided to just patch away the initial spread
 check behavior: https://github.com/sayap/sayap-overlay/blob/master/net-proxy/haproxy/files/haproxy-immediate-first-check.diff



 I definitely think there should be an option to disable the behavior. We
 have an automated system which adds and removes servers from the config, and
 then bounces haproxy. Every time haproxy is bounced, we have a period where
 it can send traffic to a dead server.

 There's also a related bug on this.
 The bug is that when I have a config with "inter 30s fastinter 1s" and no
 httpchk enabled, when haproxy first starts up, it spreads the checks over
 the period defined as fastinter, but the stats output says "UP 1/3" for the
 full 30 seconds. It also says "L4OK in 30001ms", when I know it doesn't take
 the server 30 seconds to simply accept a connection.
 Yet you get different behavior when using httpchk. When I add "option
 httpchk", it still spreads the checks over the 1s fastinter value, but the
 stats output goes full "UP" immediately after the check occurs, not "UP
 1/3". It also says "L7OK/200 in 0ms", which is what I expect to see.

 -Patrick
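
For reference, a minimal config fragment matching the behavior described above (the server address is hypothetical; toggling the commented httpchk line switches between the two reported behaviors):

```
backend check-demo
    # without httpchk: stats reportedly show "UP 1/3" for the full 30s
    # with httpchk:    stats go straight to "UP" after the first spread check
    #option httpchk GET /status
    server srv1 192.0.2.20:80 check inter 30s fastinter 1s
```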

>>
>>
>> --
>> Regards,
>>
>> Malcolm Turnbull.
>>
>> Loadbalancer.org Ltd.
>> Phone: +44 (0)870 443 8779
>> http://www.loadbalancer.org/
>>



Re: Fix for rare EADDRNOTAVAIL error

2014-02-24 Thread Denis Malyshkin

Hi Willy,

Thank you very much for your help; it's the best support I've ever seen.
We will test 'source' in our environment to see whether it helps resolve
the issue.



Alternately, you can use the "source" parameter either on each server
or in the backend to fix a port range. Haproxy will then use an explicit
bind. This is normally used when you want to have more than 64k conns on
multiple servers. But here you could try this :

  source 0.0.0.0:32678-61000
  
Great! As I understand it, we can set the port range so that it excludes
the listening ports, and thereby eliminate such errors? That may be a good
workaround. Thank you again for the idea.


Yes, but the range is contiguous, so you cannot punch holes in it.
  
I tried to configure it but have some questions. We have several 'listen'
sections with the 'server' keyword in our config file (and so there are no
'backend' sections).


A "listen" is in fact the union of a "frontend" and a "backend", so
whatever you're suggested to put in a "backend" will also apply to a
"listen".
  
1. As I understand it, 'source' should be put into each of our 'listen'
sections. Should we divide the port range between the several 'listen'
sections, or may we use the same wide range for all of them?


It depends if you have some servers in common or not. The system will
always allow multiple outgoing connections to share the same local
source ip:port as long as they don't go to the same destination ip:ports
since a connection is defined by (proto,srcip,sport,dstip,dport).

So if the servers in your "listen" sections are all different, there
is no problem. Otherwise you might indeed have to use separate ranges.
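
To illustrate (the addresses and ranges below are made up, not from the original config): if two listen sections forward to the same server, each needs its own non-overlapping range; if their servers all differ, the same range could be reused by every section.

```
# both proxies reach the SAME destination server, so their
# source port ranges must not overlap
listen app1
    bind :8001
    source 0.0.0.0:32768-46000
    server srv 192.0.2.10:80

listen app2
    bind :8002
    source 0.0.0.0:46001-61000
    server srv 192.0.2.10:80
```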
  
2. Will there be EADDRNOTAVAIL errors if several 'listen' sections use
the same source port range and two connections choose the same port?



Only if they try to connect to the same destination ip:port. This subject
comes back from time to time in fact. It was planned to work on a
per-destination port range, but this is a bit complex, at least in terms of
configuration, so nothing has been done in this direction yet. This would
allow multiple similar servers from different backends to share the same
source ip:port ranges without ever colliding. But this creates new issues
(resource sharing).
  



--
Best regards,
 Denis Malyshkin,
Senior C++ Developer
of ISS Art, Ltd., Omsk, Russia.
Mobile Phone: +7 913 669 2896
Office tel/fax +7 3812 396959
Yahoo Messenger: dmalyshkin
Web: http://www.issart.com
E-mail: dmalysh...@issart.com



Regression in 503 handling in v1.5-dev19-345-g6b726ad

2014-02-24 Thread Finn Arne Gangstad
For clients that are keeping their connection open, 503 replies from
haproxy seem to be silenced. The connection is just closed without sending
the 503 reply, if an earlier request has already been served on the same
connection.

git bisect tells me this appeared in
6b726adb35d998eb55671c0d98ef889cb9fd64ab, MEDIUM: http: do not report
connection errors for second and further requests

This kills haproxy in all our production setups since we base pretty much
all monitoring and small time static file serving on empty haproxy backends
with more or less clever 503 error files. As far as I can see, none of the
clients (will) retry the request.

Command to reproduce: echo -n -e 'POST /push HTTP/1.1\r\nContent-Length:
2\r\n\r\n[]\r\nGET /status HTTP/1.1\r\n\r\n' | nc localhost 

Somewhat minimal config:

global
    log  local6
    maxconn 262144

defaults
    log     global
    mode    http
    option  httplog
    timeout queue   120s
    timeout connect 5s
    timeout client  50s
    timeout server  50s
    timeout check   5s
    option http-server-close

frontend fe
    bind :
    use_backend ana-events if { url_beg /push }
    use_backend status-ok if { url_beg /status }

backend status-ok
    # empty backend to force 503 response
    #errorfile 503 errors/ok.json

backend ana-events
    option httpchk GET /status
    server  :13362 maxconn 4 check

Expected result: 2 replies; actual result: only one reply.


Re: Just a simple thought on health checks after a soft reload of HAProxy....

2014-02-24 Thread Baptiste
Hi Malcolm,

Hence the retry and redispatch options :)
I know it's a dirty workaround.

Baptiste


On Sun, Feb 23, 2014 at 8:42 PM, Malcolm Turnbull
 wrote:
> Neil,
>
> Yes, peers are great for passing stick tables to the new HAProxy
> instance and any current connections bound to the old process will be
> fine.
> However  any new connections will hit the new HAProxy process and if
> the backend server is down but haproxy hasn't health checked it yet
> then the user will hit a failed server.
>
>
>
> On 23 February 2014 10:38, Neil  wrote:
>> Hello
>>
>> Regarding restarts, rather that cold starts, if you configure peers the
>> state from before the restart should be kept. The new process haproxy
>> creates is automatically a peer to the existing process and gets the state
>> as was.
>>
>> Neil
>>
>> On 23 Feb 2014 03:46, "Patrick Hemmer"  wrote:
>>>
>>>
>>>
>>>
>>> 
>>> From: Sok Ann Yap 
>>> Sent: 2014-02-21 05:11:48 E
>>> To: haproxy@formilux.org
>>> Subject: Re: Just a simple thought on health checks after a soft reload of
>>> HAProxy
>>>
>>> Patrick Hemmer  writes:
>>>
>>>   From: Willy Tarreau  1wt.eu>
>>>
>>>   Sent:  2014-01-25 05:45:11 E
>>>
>>> Till now that's exactly what's currently done. The servers are marked
>>> "almost dead", so the first check gives the verdict. Initially we had
>>> all checks started immediately. But it caused a lot of issues at several
>>> places where there were a high number of backends or servers mapped to
>>> the same hardware, because the rush of connection really caused the
>>> servers to be flagged as down. So we started to spread the checks over
>>> the longest check period in a farm.
>>>
>>> Is there a way to enable this behavior? In my
>>> environment/configuration, it causes absolutely no issue that all
>>> the checks be fired off at the same time.
>>> As it is right now, when haproxy starts up, it takes it quite a
>>> while to discover which servers are down.
>>> -Patrick
>>>
>>> I faced the same problem in http://thread.gmane.org/gmane.comp.web.haproxy/14644
>>>
>>> After much contemplation, I decided to just patch away the initial spread
>>> check behavior: https://github.com/sayap/sayap-overlay/blob/master/net-proxy/haproxy/files/haproxy-immediate-first-check.diff
>>>
>>>
>>>
>>> I definitely think there should be an option to disable the behavior. We
>>> have an automated system which adds and removes servers from the config, and
>>> then bounces haproxy. Every time haproxy is bounced, we have a period where
>>> it can send traffic to a dead server.
>>>
>>>
>>> There's also a related bug on this.
>>> The bug is that when I have a config with "inter 30s fastinter 1s" and no
>>> httpchk enabled, when haproxy first starts up, it spreads the checks over
>>> the period defined as fastinter, but the stats output says "UP 1/3" for the
>>> full 30 seconds. It also says "L4OK in 30001ms", when I know it doesn't take
>>> the server 30 seconds to simply accept a connection.
>>> Yet you get different behavior when using httpchk. When I add "option
>>> httpchk", it still spreads the checks over the 1s fastinter value, but the
>>> stats output goes full "UP" immediately after the check occurs, not "UP
>>> 1/3". It also says "L7OK/200 in 0ms", which is what I expect to see.
>>>
>>> -Patrick
>>>
>>
>
>
>
> --
> Regards,
>
> Malcolm Turnbull.
>
> Loadbalancer.org Ltd.
> Phone: +44 (0)870 443 8779
> http://www.loadbalancer.org/
>



Re: Release Date of HAProxy version 1.5

2014-02-24 Thread Ghislain

On 24/02/2014 10:29, Haroon Asher wrote:

Hello,

We have been using haproxy in our production environment for the past
3 years.

Now we are looking to use an SSL certificate on our site, for which we
will require haproxy version 1.5. Can you please let us know when its
release is planned, or when it will be stable enough to be used in
production.




Hi,

I have actually been using it in production for months without any issue.
It is very basic usage, but it works fine for us with no glitches so far.

With 1.5 we do not have high-traffic web sites, only moderate ones, where
we use haproxy mainly to protect Apache from basic attacks.


regards,
Ghislain.



Re: Release Date of HAProxy version 1.5

2014-02-24 Thread Kobus Bensch

Hi

I asked the same question and was advised to at least start testing. I 
have done so and can say that we have not experienced any downtime due 
to haproxy. I use it in conjunction with corosync/pacemaker.


I would probably advise the same. Start testing now.

Kobus
On 24/02/2014 09:29, Haroon Asher wrote:

Hello,

We have been using haproxy in our production environment for the past
3 years.

Now we are looking to use an SSL certificate on our site, for which we
will require haproxy version 1.5. Can you please let us know when its
release is planned, or when it will be stable enough to be used in
production.


I appreciate your response on that.

Regards,
Haroon


--
Kobus Bensch
Senior Systems Administrator
Address:  22 & 24 | Frederick Sanger Road | Guildford | Surrey | GU2 7YD
DDI:  0207 871 3958
Tel:  0207 871 3890
Email: kobus.ben...@trustpayglobal.com 




Release Date of HAProxy version 1.5

2014-02-24 Thread Haroon Asher
Hello,

We have been using haproxy in our production environment for the past
3 years.

Now we are looking to use an SSL certificate on our site, for which we
will require haproxy version 1.5. Can you please let us know when its
release is planned, or when it will be stable enough to be used in
production.

I appreciate your response on that.

Regards,
Haroon