Official check_haproxy.pl source

2010-10-26 Thread Timh B
Hi,

Could someone point me to the official and latest source of
check_haproxy.pl, the Nagios plugin mentioned on this list before?

Thanks!
-- 
//Timh




Looking for a graphic designer?

2010-10-26 Thread Bobex France
Bobex.fr

Receive quotes for graphic design work

BOBEX is a marketplace that improves collaboration between
buyers and suppliers. +250,000 users, +90,000
projects per year.

More information: 
http://track.effiliation.com/servlet/effi.redir?id_compteur=11378052url=http://www.bobex.fr/bobexfr/control/campaign_general?campaign.id=105aff=effiliationaction=focusutm_source=mailing%20201061utm_medium=emailpage=homeutm_campaign=FR%3A%20Mailingutm_term=homeutm_content=home


In accordance with the French "informatique et libertés" data protection
law of 6 January 1978, I have a right of access, rectification and
objection regarding the personal data concerning me.
This commercial message is sent to you by “Team Leaders”. You are
receiving this message because you registered on one of the
partner sites of “Team Leaders”. Your personal data
has not been passed on to the advertiser. If you no longer wish to
receive our newsletter, fill in this form: 
http://87.255.69.213/unsubscribe/index.php?q=hapr...@formilux.org



VM benchmarks

2010-10-26 Thread Ariel
Does anyone know of studies done comparing haproxy on dedicated hardware vs 
virtual machine?  Or perhaps some virtual machine specific considerations?
-a


Slow TCP open on haproxy

2010-10-26 Thread Maxime Ducharme

Hi guys

I am new to haproxy and this list; my experience with this software has been
very good so far.

Got a question about socket tuning. We have a web site running on 10
different httpd servers with 2 haproxy instances in front.

We configured 3 IPs on each haproxy; we get about 2200 req/s each, and peak
time is 3500 req/s each.

Current load on the haproxy boxes is actually very low, but we have noticed
some slow access to the website. Doing analysis we found out that
sometimes opening a TCP socket to the haproxy box is slower than opening a
socket directly to one of the httpd servers behind.

The actual configuration is quite simple, here is a snippet:

global
maxconn 32768
nbproc 8

defaults
log global
retries 3
maxconn 32768
contimeout 5000
clitimeout 5
srvtimeout 5

listen weblb1 1.1.1.1:80
bind 1.1.1.2:80
bind 1.1.1.3:80

mode http
balance roundrobin  

option forwardfor
option httpchk HEAD / HTTP/1.0
option httpclose
stats enable
server web1 1.1.2.1:80 weight 10 check port 80
..
server web10 1.1.2.10:80 weight 10 check port 80


We set nbproc to the same number of CPU cores we have.

We noticed the problem by tracing an HTTP request with curl, e.g.:

15:14:13.684549 * About to connect() to www.website.com port 80 (#0)
15:14:13.685620 *   Trying 1.1.1.1... connected
-- 3 seconds here to open TCP connection
15:14:16.796281 * Connected to www.website.com (1.1.1.1) port 80 (#0)
15:14:16.797173 > GET / HTTP/1.1
-- httpd replies here in less than 1 second

This issue happens sometimes, not always.

My question: can someone point me in a direction to look at for socket
optimization / debugging? I am currently unable to explain why it is
slow; I know this is not hardware related since it is a very powerful box.
I believe some tuning will make a big difference. Maybe we have some kernel
tuning to do here; if someone can enlighten me it would be much
appreciated.

Another question:

Can I enable stats on a particular IP?

Thanks and have a nice day

-- 
Maxime Ducharme
Systems Architect
Techboom Inc







Re: Slow TCP open on haproxy

2010-10-26 Thread Willy Tarreau
Hi Maxime,

On Tue, Oct 26, 2010 at 12:47:37PM -0400, Maxime Ducharme wrote:
 
 Hi guys
 
 I am new to haproxy and this list; my experience with this software has been
 very good so far.
 
 Got a question about socket tuning. We have a web site running on 10
 different httpd servers with 2 haproxy instances in front.

 We configured 3 IPs on each haproxy; we get about 2200 req/s each, and peak
 time is 3500 req/s each.

 Current load on the haproxy boxes is actually very low, but we have noticed
 some slow access to the website. Doing analysis we found out that
 sometimes opening a TCP socket to the haproxy box is slower than opening a
 socket directly to one of the httpd servers behind.

 The actual configuration is quite simple, here is a snippet:
 
 global
 maxconn 32768
 nbproc 8
 
 defaults
 log global
 retries 3
 maxconn 32768
 contimeout 5000
 clitimeout 5
 srvtimeout 5
 
 listen weblb1 1.1.1.1:80
 bind 1.1.1.2:80
 bind 1.1.1.3:80
 
 mode http
 balance roundrobin  
 
 option forwardfor
 option httpchk HEAD / HTTP/1.0
 option httpclose
 stats enable
 server web1 1.1.2.1:80 weight 10 check port 80
 ..
 server web10 1.1.2.10:80 weight 10 check port 80
 
 
 We set nbproc to the same number of CPU cores we have.

 We noticed the problem by tracing an HTTP request with curl, e.g.:
 
 15:14:13.684549 * About to connect() to www.website.com port 80 (#0)
 15:14:13.685620 *   Trying 1.1.1.1... connected
 -- 3 seconds here to open TCP connection
 15:14:16.796281 * Connected to www.website.com (1.1.1.1) port 80 (#0)
 15:14:16.797173 > GET / HTTP/1.1
 -- httpd replies here in less than 1 second

A 3 second delay is a typical SYN retransmit.
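
One generic way to confirm that SYNs are being dropped on the haproxy box itself
(rather than lost on the network) is to watch the kernel's TCP counters while the
problem happens; this is only a rough sketch, and the exact counter wording
varies between kernel versions:

  # cumulative since boot: run it twice a few seconds apart and compare
  netstat -s | grep -i -E 'listen|syn'
  # the interesting lines are the ones about SYNs to LISTEN sockets being
  # dropped/ignored and about the listen queue of a socket overflowing

If those counters grow while the slow connects occur, the listen backlog or the
conntrack table (see below) is the first place to look.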

 This issue happens sometimes, not always.
 
 My question: can someone point me in a direction to look at for socket
 optimization / debugging? I am currently unable to explain why it is
 slow; I know this is not hardware related since it is a very powerful box.
 I believe some tuning will make a big difference. Maybe we have some kernel
 tuning to do here; if someone can enlighten me it would be much
 appreciated.

Two things to look for:
  - if you have ip_conntrack / nf_conntrack loaded, either unload it or
properly tune it for your usage (I'd recommend the former; it's easier).

  - check net.core.somaxconn. If it's 128, then your TCP stack is not
tuned for a high connection rate, and you're surely dropping incoming
connections from time to time. Try first increasing that single
parameter to 1, restart haproxy and check if it changes anything.
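
As a rough illustration of both checks from a shell, with the caveat that the
1024 below is only an example value, not a recommendation for your exact load:

  # is connection tracking loaded at all?
  lsmod | egrep 'ip_conntrack|nf_conntrack'

  # current listen backlog limit (128 is the usual untuned default)
  sysctl net.core.somaxconn

  # raise it, and make the change persistent in /etc/sysctl.conf afterwards
  sysctl -w net.core.somaxconn=1024

(Restarting haproxy matters here because the backlog limit is applied when the
listening socket is created.)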

Note that you don't need 8 processes with that load, it will be harder to
debug, health checks will not be synced, and stats will only be per-process.

 Another question:

 Can I enable stats on a particular IP?

yes, simply put the stats enable statement in its own listen section.
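
Something along these lines, for example (the address, port and credentials
below are only placeholders):

  listen stats-only 1.1.1.4:8080
      mode http
      stats enable
      stats uri /
      stats auth admin:changeme

and drop "stats enable" from the weblb1 section, so the stats page is only
reachable on that dedicated IP and port.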

Last, with version 1.4, you can also reduce the connection rate by using
option http-server-close instead of option httpclose. It will enable
keep-alive on the client side. Do that only when you have fixed your
issues, because doing so can mask the problem without fixing it, and you'll
get it again later.
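
For example, once the connection drops above are solved, the change in the
weblb1 section (or in defaults) would look roughly like this:

  # old behaviour: close both sides after every request
  #option httpclose
  # haproxy 1.4: keep-alive towards the client, close towards the servers
  option http-server-close

"option forwardfor" continues to work with it.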

Regards,
Willy




RE: Strange latency

2010-10-26 Thread Simon Green - Centric IT Ltd
I don't think there's been any traffic on this thread, so I thought I'd
just chip in and say we run HAProxy on ESX 4.1 with Stunnel in front on the same 
server and Apache servers behind, and we don't experience anything like the latency 
you mention below.

-Original Message-
From: Ariel [mailto:ar...@bidcactus.com] 
Sent: 25 October 2010 18:45
To: haproxy
Subject: Strange latency

I am using Rackspace cloud servers and trying to convince my boss that we 
should be using haproxy instead of apache at our frontend doing load balancing. 
For the most part I have set up what I consider a fairly successful staging 
environment (I have working ACLs and cookie-based routing).  The problem, 
however, is that when I use haproxy as my load balancer, the round-trip time for a 
request goes up by about 50ms.  With apache as the proxy every request has an RTT 
of ~50ms, but now they are over 100ms.

I am using the same backend servers to test both apache and haproxy, with all 
configuration rules as close to identical as I could make them (client-side 
keep-alive enabled).  For comparison I also set up a quick nginx server to do its 
(very dumb) load balancing, and its results are at the same speed as or better 
than apache's.  Also, even when apache is terminating SSL and forwarding it 
on, the RTT does not go up.  All three pieces of software run (one at a time) on 
the same virtual server, so I don't think it is that I got a bad VPS slice or 
something like that.

Also, when I use stunnel in front of haproxy to terminate https requests, it 
adds another ~50ms to the total RTT.  And if I have to make the request go 
through another stunnel to the backend (a requirement for PCI compliance), it 
adds another ~50ms again.  So now using the site with SSL is over 300ms per 
request just from the start.  That may not be *terrible* but the site is very 
interactive and calls one AJAX request per second to keep lots of things 
updated.  For general users around the internet the site is going to appear 
unresponsive and slow...

I was wondering if anyone using haproxy in a virtualized environment has ever 
experienced something like this?  Or maybe some configuration options to try to 
debug this?

-a



RE: VM benchmarks

2010-10-26 Thread Simon Green - Centric IT Ltd
Hi,

I'd be interested to see the same test with all devices in the same location. 
There shouldn't be any reason for this much difference! We run HAProxy on ESX 
so I might take a spare server to the DC and V2P the servers over to that for 
testing.

Will let you know on this one...



-Original Message-
From: Daniel Storjordet [mailto:dan...@desti.no] 
Sent: 26 October 2010 22:00
To: Ariel; haproxy@formilux.org
Subject: Re: VM benchmarks


We just moved HAProxy from ESXi servers onto two dedicated Atom servers.

In the first setup the HAProxy installations balanced two webservers in the 
same ESXi environment. The web access times for this config were between 
120-150ms (Connect, Request, Download).

In the new config the dedicated HAProxy boxes are located in a separate 
datacenter 500km away from the same ESXi web servers. With this config we get 
lower web access times, between 110-130ms (Connect, Request, Download).

I expect that also moving the web servers to the new datacenter will result in 
even better results.

--
Regards,

Daniel Storjordet

D E S T ! N O :: Strandgata 117 :: 4307 Sandnes
Mob 45 51 73 71 :: Tel 51 62 50 14
dan...@desti.no :: http://www.desti.no
www.destinet.no - Web publishing online
www.func.no - Flight search online



On 26.10.2010 16:38, Ariel wrote:
 Does anyone know of studies done comparing haproxy on dedicated hardware vs 
 virtual machine?  Or perhaps some virtual machine specific considerations?
 -a





Re: VM benchmarks

2010-10-26 Thread Hank A. Paulson
I don't have benchmarks, but I have sites running haproxy on Xen VMs with apache 
on Xen VMs, and I can pump 120 Mbps and 80 million hits a day through one haproxy 
VM. That is with haproxy rsyslogging all requests to 2 remote rsyslog servers, 
on top of serving the requests, with some layer 7 ACLs to route requests to 
different backends. Only 50-75 backend servers total, though.


HTTP keepalive helped a lot with the type of requests that haproxy serves, so it 
reduced the workload somewhat compared to the non-keepalive version.


I also use auto-splice (option splice-auto) there to reduce overhead somewhat.
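
For anyone curious, the kind of setup described above looks roughly like the
sketch below; the addresses, names and ACL are made up, and option splice-auto
needs a kernel with working splicing:

  global
      log 192.0.2.10 local0        # first remote rsyslog server
      log 192.0.2.11 local0        # second remote rsyslog server

  defaults
      mode http
      log global
      option httplog               # log every request

  frontend www
      bind :80
      option splice-auto           # let haproxy splice data when it helps
      acl is_api path_beg /api     # example layer 7 routing rule
      use_backend api_servers if is_api
      default_backend web_servers

  backend api_servers
      server api1 10.0.0.1:80 check

  backend web_servers
      server web1 10.0.0.2:80 check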

On 10/26/10 7:38 AM, Ariel wrote:

Does anyone know of studies done comparing haproxy on dedicated hardware vs 
virtual machine?  Or perhaps some virtual machine specific considerations?
-a




Re: Strange latency

2010-10-26 Thread Hank A. Paulson
Just a guess, but is there something that might be doing reverse DNS lookups 
for each request when using haproxy? I find that when I turn on tcpdump on port 53 
on a firewall or router, I and others are surprised at how much reverse-lookup 
traffic is going on in any given environment.
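
A capture restricted to DNS traffic on the box running haproxy is usually enough
to check this; the interface name here is just an example:

  # watch live for DNS queries leaving the box while requests are being served
  tcpdump -n -i eth0 port 53

Reverse lookups show up as PTR queries; if none appear while the latency is
visible, DNS can be ruled out on that host.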






Re: Strange latency

2010-10-26 Thread Ariel
That's interesting; I would never have thought of that.  I did run `tcpdump -i 
eth0 -w dns.pcap` (eth0 is the internet-facing interface) and ran my site for a 
while, but nothing matched a DNS request.  I don't have anything in front of 
the proxy towards the internet to listen on at the moment either, but I will 
definitely keep that in mind for later. Thanks.

-a


On Oct 26, 2010, at 5:52 PM, Hank A. Paulson wrote:

 Just a guess, but is there something that might be doing reverse dns lookups 
 for each request when using haproxy? I find when I turn on tcpdump on port 53 
 on a firewall or router, I and others are surprised at how much reverse 
 lookup traffic there is going on in any given environment.
 