Re: [haproxy]: Performance of haproxy-to-4-nginx vs direct-to-nginx

2015-04-29 Thread Krishna Kumar (Engineering)
Dear all,

Sorry, my lab systems were down for many days and I could not get back on
this earlier. After new systems were allocated, I managed to get all the
requested information with a fresh run (sorry, this is a long mail too!).
There are now 4 physical servers, running Debian (kernel 3.2.0-4-amd64),
connected directly to a common switch:

server1: Run 'ab' in a container, no cpu/memory restriction.
server2: Run haproxy in a container, configured with 4 nginx's,
cpu/memory configured as
  shown below.
server3: Run 2 different nginx containers, no cpu/mem restriction.
server4: Run 2 different nginx containers, for a total of 4 nginx, no
cpu/mem restriction.

The servers have 2 sockets, each with 24 cores. Socket 0 has cores
0,2,4,..,46 and Socket 1 has cores 1,3,5,..,47. The NIC (ixgbe) is bound
to CPU 0. Haproxy is started on CPUs 2,4,6,8,10,12,14,16, so that it runs
on the same socket as the NIC and shares its cache (nginx is run on
different servers as explained above). No tuning was done on the nginx
servers, as the comparison is between 'ab' -> nginx and
'ab' -> haproxy -> nginx(s). The CPUs are Intel(R) Xeon(R) CPU E5-2670 v3
@ 2.30GHz. The containers are all configured with 8GB of memory; each
server has 128GB.
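
For illustration, pinning of this kind can be done along these lines (a
sketch only, not necessarily the exact command used):

# pin the haproxy processes to the listed cores (illustrative)
taskset -c 2,4,6,8,10,12,14,16 haproxy -f /etc/haproxy/haproxy.cfg -D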

mpstat and iostat were captured during the test; the capture started after
'ab' started and ended just before 'ab' finished, so as to get warm numbers.
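
For illustration, a capture of this kind can be driven roughly like this
(a sketch; the actual commands and intervals may have differed):

# start the collectors once 'ab' is running, stop them before it finishes
mpstat -P ALL 5 > mpstat.out &
pidstat -u -p $(pidof haproxy | tr ' ' ',') 5 > pidstat.out &
# ... let 'ab' run ...
kill %1 %2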


Request directly to 1 nginx backend server, size=256 bytes:

Command: ab -k -n 10 -c 1000 nginx:80/256
Requests per second:    69749.02 [#/sec] (mean)
Transfer rate:          34600.18 [Kbytes/sec] received

Request to haproxy configured with 4 nginx backends (nbproc=4), size=256
bytes:

Command: ab -k -n 10 -c 1000 haproxy:80/256
Requests per second:    19071.55 [#/sec] (mean)
Transfer rate:          9461.28 [Kbytes/sec] received

mpstat (first 4 processors only, rest are almost zero):
Average:  CPU   %usr  %nice   %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
Average:  all   0.44   0.00   1.59     0.00  0.00   2.96    0.00    0.00    0.00  95.01
Average:    0   0.25   0.00   0.75     0.00  0.00  98.01    0.00    0.00    0.00   1.00
Average:    1   1.26   0.00   5.28     0.00  0.00   2.51    0.00    0.00    0.00  90.95
Average:    2   2.76   0.00   8.79     0.00  0.00   5.78    0.00    0.00    0.00  82.66
Average:    3   1.51   0.00   6.78     0.00  0.00   3.02    0.00    0.00    0.00  88.69

pidstat:
Average:  UID   PID   %usr  %system  %guest   %CPU  CPU  Command
Average:  105   471   5.00    33.50    0.00  38.50    -  haproxy
Average:  105   472   6.50    44.00    0.00  50.50    -  haproxy
Average:  105   473   8.50    40.00    0.00  48.50    -  haproxy
Average:  105   475   2.50    14.00    0.00  16.50    -  haproxy

Request directly to 1 nginx backend server, size=64K

Command: ab -k -n 10 -c 1000 nginx:80/64K
Requests per second:    3342.56 [#/sec] (mean)
Transfer rate:          214759.11 [Kbytes/sec] received

Request to haproxy configured with 4 nginx backends (nbproc=4), size=64K

Command: ab -k -n 10 -c 1000 haproxy:80/64K

Requests per second:    1283.62 [#/sec] (mean)
Transfer rate:          82472.35 [Kbytes/sec] received

mpstat (first 4 processors only, rest are almost zero):
Average:  CPU   %usr  %nice   %sys  %iowait  %irq   %soft  %steal  %guest  %gnice  %idle
Average:  all   0.08   0.00   0.74     0.01  0.00    2.62    0.00    0.00    0.00  96.55
Average:    0   0.00   0.00   0.00     0.00  0.00  100.00    0.00    0.00    0.00   0.00
Average:    1   1.03   0.00   9.98     0.21  0.00    7.67    0.00    0.00    0.00  81.10
Average:    2   0.70   0.00   6.32     0.00  0.00    4.50    0.00    0.00    0.00  88.48
Average:    3   0.15   0.00   2.04     0.06  0.00    1.73    0.00    0.00    0.00  96.03

pidstat:
Average:  UID   PID   %usr  %system  %guest   %CPU  CPU  Command
Average:  105   471   0.93    14.70    0.00  15.63    -  haproxy
Average:  105   472   1.12    21.55    0.00  22.67    -  haproxy
Average:  105   473   1.41    20.95    0.00  22.36    -  haproxy
Average:  105   475   0.22     4.85    0.00   5.07    -  haproxy
--
Build information:

HA-Proxy version 1.5.8 2014/10/31
Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux2628
  


Re: SMTPS and L7 health-checks

2015-04-29 Thread iain
On 29/04/15 04:26, Baptiste wrote:

 Hi,
 You need to enable the check-ssl on the server line.
 In your case haproxy sends a check in clear, while the server expects a
 ciphered connexion.

That's correct, because I am trying to keep the health checks on the
cleartext TCP/25 port.

However, I did try your suggestion to kick it down to SSL. I changed the
server lines to:

---CUT---8---CUT---
server MTA1 xx.xx.xx.xx:465 check-send-proxy send-proxy check-ssl verify
none
server MTA2 xx.xx.xx.xx:465 check-send-proxy send-proxy check-ssl verify
none
---CUT---8---CUT---

...but got the same results: the connection fails to establish, and as it
terminates the following appears in the logs:

---CUT---8---CUT---
Apr 29 08:57:58 lb1 haproxy[21820]: 172.23.0.197:35845
[29/Apr/2015:08:57:38.331] MTASSL MTASSL/MTA1 1/-1/20005 0 sC 1/0/0/0/3 0/0
Apr 29 08:57:58 lb1 haproxy[21820]: 172.23.0.197:35845
[29/Apr/2015:08:57:38.331] MTASSL MTASSL/MTA1 1/-1/20005 0 sC 1/0/0/0/3 0/0
---CUT---8---CUT---

The MTA's logs contain only the following repeating entries:

---CUT---8---CUT---
2015-04-29 09:11:15 SMTP connection from [xx.xx.xx.xx]:46670
I=[xx.xx.xx.xx]:25 (TCP/IP connection count = 1)
2015-04-29 09:11:15 SMTP connection from [xx.xx.xx.xx]:60941
I=[xx.xx.xx.xx]:25 (TCP/IP connection count = 2)
2015-04-29 09:11:15 SMTP connection from lb2.example.org
[xx.xx.xx.xx]:46670 I=[xx.xx.xx.xx]:25 lost (error: Connection reset by
peer)
2015-04-29 09:11:15 SMTP connection from lb1.example.org
[xx.xx.xx.xx]:60941 I=[xx.xx.xx.xx]:25 lost (error: Connection reset by
peer)
---CUT---8---CUT---

I should perhaps have mentioned that I'm running this on Debian 7 with
HAProxy version 1.5.8.




Re: [haproxy]: Performance of haproxy-to-4-nginx vs direct-to-nginx

2015-04-29 Thread Pavlos Parissis
On 29/04/2015 12:56 μμ, Krishna Kumar (Engineering) wrote:
 Dear all,
 
 Sorry, my lab systems were down for many days and I could not get back
 on this earlier. After
 new systems were allocated, I managed to get all the requested
 information with a fresh run
 (Sorry, this is a long mail too!). There are now 4 physical servers,
 running Debian 3.2.0-4-amd64,
 connected directly to a common switch:
 
 server1: Run 'ab' in a container, no cpu/memory restriction.
 server2: Run haproxy in a container, configured with 4 nginx's,
 cpu/memory configured as
   shown below.
 server3: Run 2 different nginx containers, no cpu/mem restriction.
 server4: Run 2 different nginx containers, for a total of 4 nginx,
 no cpu/mem restriction.
 
 The servers have 2 sockets, each with 24 cores. Socket 0 has cores
 0,2,4,..,46 and Socket 1 has
 cores 1,3,5,..,47. The NIC (ixgbe) is bound to CPU 0. 

It is considered a bad thing to bind all queues of the NIC to 1 CPU, as it
creates a major bottleneck: HAProxy has to wait for the interrupts to be
processed by a single CPU, which is saturated.

 Haproxy is started
 on cpu's:
 2,4,6,8,10,12,14,16, so that is in the same cache line as the nic (nginx
 is run on different servers
 as explained above). No tuning on nginx servers as the comparison is between

How many workers is nginx configured to run?

 'ab' - 'nginx' and 'ab' and 'haproxy' - nginx(s). The cpus are
 Intel(R) Xeon(R) CPU E5-2670 v3
 @ 2.30GHz. The containers are all configured with 8GB, server having
 128GB memory.
 
 mpstat and iostat were captured during the test, where the capture
 started after 'ab' started and
 capture ended just before 'ab' finished so as to get warm numbers.
 
 
 Request directly to 1 nginx backend server, size=256 bytes:
 
 Command: ab -k -n 10 -c 1000 nginx:80/256
 Requests per second:69749.02 [#/sec] (mean)
 Transfer rate:  34600.18 [Kbytes/sec] received
 
 Request to haproxy configured with 4 nginx backends (nbproc=4), size=256
 bytes:
 
 Command: ab -k -n 10 -c 1000 haproxy:80/256
 Requests per second:19071.55 [#/sec] (mean)
 Transfer rate:  9461.28 [Kbytes/sec] received
 
 mpstat (first 4 processors only, rest are almost zero):
 Average:  CPU   %usr  %nice   %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
 Average:  all   0.44   0.00   1.59     0.00  0.00   2.96    0.00    0.00    0.00  95.01
 Average:    0   0.25   0.00   0.75     0.00  0.00  98.01    0.00    0.00    0.00   1.00

All network interrupts are processed by CPU 0, which is saturated.
You need to spread the queues of the NIC across different CPUs. Either use
irqbalance or the following 'ugly' script, which you will need to modify a
bit since I have 2 NICs and you have only 1. You also need to adjust the
number of queues; run 'grep eth /proc/interrupts' to find out how many
you have.

#!/bin/sh

awk '
function get_affinity(cpus) {
    split(cpus,list,/,/)
    mask=0
    for (val in list) {
        mask+=lshift(1,list[val])
    }
    return mask
}
BEGIN {
    # Interrupt -> CPU core(s) mapping
    map["eth0-q0"]=0
    map["eth0-q1"]=1
    map["eth0-q2"]=2
    map["eth0-q3"]=3
    map["eth0-q4"]=4
    map["eth0-q5"]=5
    map["eth0-q6"]=6
    map["eth0-q7"]=7
    map["eth1-q0"]=12
    map["eth1-q1"]=13
    map["eth1-q2"]=14
    map["eth1-q3"]=15
    map["eth1-q4"]=16
    map["eth1-q5"]=17
    map["eth1-q6"]=18
    map["eth1-q7"]=19
}
/eth/ {
    irq=substr($1,0,length($1)-1)
    queue=$NF
    printf "%s (%s) -> %s (%08X)\n",queue,irq,map[queue],get_affinity(map[queue])
    system(sprintf("echo %08X > /proc/irq/%s/smp_affinity\n",get_affinity(map[queue]),irq))
}
' /proc/interrupts

 Average:    1   1.26   0.00   5.28     0.00  0.00   2.51    0.00    0.00    0.00  90.95
 Average:    2   2.76   0.00   8.79     0.00  0.00   5.78    0.00    0.00    0.00  82.66
 Average:    3   1.51   0.00   6.78     0.00  0.00   3.02    0.00    0.00    0.00  88.69
 
 pidstat:
 Average:  UID   PID   %usr  %system  %guest   %CPU  CPU  Command
 Average:  105   471   5.00    33.50    0.00  38.50    -  haproxy
 Average:  105   472   6.50    44.00    0.00  50.50    -  haproxy
 Average:  105   473   8.50    40.00    0.00  48.50    -  haproxy
 Average:  105   475   2.50    14.00    0.00  16.50    -  haproxy
 
 Request directly to 1 nginx backend server, size=64K
 

I would like to see pidstat and mpstat while you test nginx.

Cheers,
Pavlos





Re: SMTPS and L7 health-checks

2015-04-29 Thread Baptiste
On Wed, Apr 29, 2015 at 9:18 AM, iain expat.i...@gmail.com wrote:
 On 29/04/15 04:26, Baptiste wrote:

 Hi,
 You need to enable the check-ssl on the server line.
 In your case haproxy sends a check in clear, while the server expects a
 ciphered connexion.

 That's correct, because I am trying to keep the health checks on the
 cleartext TCP/25 port.

 However, I did try your suggestion to kick it down to SSL. I changed the
 server lines to:

 ---CUT---8---CUT---
 server MTA1 xx.xx.xx.xx:465 check-send-proxy send-proxy check-ssl verify
 none
 server MTA2 xx.xx.xx.xx:465 check-send-proxy send-proxy check-ssl verify
 none
 ---CUT---8---CUT---

 ...but got the same results, connection fails to establish and as it
 terminates, the following appears in the logs:

 ---CUT---8---CUT---
 Apr 29 08:57:58 lb1 haproxy[21820]: 172.23.0.197:35845
 [29/Apr/2015:08:57:38.331] MTASSL MTASSL/MTA1 1/-1/20005 0 sC 1/0/0/0/3 0/0
 Apr 29 08:57:58 lb1 haproxy[21820]: 172.23.0.197:35845
 [29/Apr/2015:08:57:38.331] MTASSL MTASSL/MTA1 1/-1/20005 0 sC 1/0/0/0/3 0/0
 ---CUT---8---CUT---

 The MTA's logs contain only the follow repeating entries:

 ---CUT---8---CUT---
 2015-04-29 09:11:15 SMTP connection from [xx.xx.xx.xx]:46670
 I=[xx.xx.xx.xx]:25 (TCP/IP connection count = 1)
 2015-04-29 09:11:15 SMTP connection from [xx.xx.xx.xx]:60941
 I=[xx.xx.xx.xx]:25 (TCP/IP connection count = 2)
 2015-04-29 09:11:15 SMTP connection from lb2.example.org
 [xx.xx.xx.xx]:46670 I=[xx.xx.xx.xx]:25 lost (error: Connection reset by
 peer)
 2015-04-29 09:11:15 SMTP connection from lb1.example.org
 [xx.xx.xx.xx]:60941 I=[xx.xx.xx.xx]:25 lost (error: Connection reset by
 peer)
 ---CUT---8---CUT---

 I should perhaps have mentioned that I'm running this on Debian 7 with
 HAproxy version 1.5.8.




Hi Iain,

You were right, sorry, my fault.
Could you try a tcpdump (capturing whole packets) while the health check
runs on port 25?
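
For example (interface and output file are just placeholders):

# capture full packets for the SMTP health checks
tcpdump -i any -s 0 -w smtp-check.pcap port 25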

What does HAProxy report in its logs?

Baptiste



Recommendations for a new haproxy installation

2015-04-29 Thread Shawn Heisey
I have an existing load balancer installation that I have been slowly
migrating from IPVS to haproxy.  It's CentOS 6, so many components are
out of date, such as TLS support.

Once that migration is done, I would like to entirely replace the
hardware and load an ideal software environment for haproxy.

The new machines have Ubuntu 14, so the openssl version is fairly new,
but not the newest available.  The CPU is an Intel Xeon E5-2430, which
has built-in TLS acceleration.  It has 16GB of memory.  The machine is
dedicated for load balancing.

How can I be sure that openssl is compiled with support for TLS
acceleration in the CPU?  I am compiling haproxy from source.  Would you
recommend that I install a separate and newer openssl from source for
explicit use with haproxy, and tweak its config for the specific
hardware it's on?

The CPU has 6 hyperthreaded CPU cores.  I know that haproxy can be run
in multiprocess mode to take advantage of multiple CPU cores, but is
that a recommended and stable config?  If it is, then I will do it just
so I'm taking full advantage of the hardware.  I know from the list
history that stats don't aggregate across processes, but as long as I
can figure out how to look at all the stats, that shouldn't be a problem.
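
Something along these lines is what I have in mind (only a sketch based on
my reading of the docs; core numbers and socket paths are arbitrary):

global
    nbproc 4
    cpu-map 1 0
    cpu-map 2 1
    cpu-map 3 2
    cpu-map 4 3
    stats socket /var/run/haproxy-1.sock process 1
    stats socket /var/run/haproxy-2.sock process 2
    stats socket /var/run/haproxy-3.sock process 3
    stats socket /var/run/haproxy-4.sock process 4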

Is there anything else I should be aware of or think about as I work on
the OS and software for this replacement hardware?

Thanks,
Shawn



Re: [PATCH 1/2] MEDIUM: Do not send email alerts corresponding to log-health-checks messages

2015-04-29 Thread Simon Horman
On Tue, Apr 28, 2015 at 09:24:42AM +0200, Willy Tarreau wrote:
 On Tue, Apr 28, 2015 at 02:25:02PM +0900, Simon Horman wrote:
  On Tue, Apr 28, 2015 at 06:43:38AM +0200, Willy Tarreau wrote:
   Hi Simon,
   
   On Tue, Apr 28, 2015 at 10:58:56AM +0900, Simon Horman wrote:
This seems only to lead to excessive verbosity which seems
much more appropriate for logs than email.

Signed-off-by: Simon Horman ho...@verge.net.au
---
 src/checks.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/src/checks.c b/src/checks.c
index 3702d9a4b0fe..efcaff20219b 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -316,7 +316,6 @@ static void set_server_check_status(struct check *check, short status, const cha
 
 		Warning("%s.\n", trash.str);
 		send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
-		send_email_alert(s, LOG_NOTICE, "%s", trash.str);
   
   Just a question, shouldn't we keep it and send it as LOG_INFO instead ?
   That way users can choose whether to have them or not. Just a suggestion,
   otherwise I'm fine with this as well.
  
  Good idea, I'll re-spin.
  
  In the mean time could you look at the second patch of the series?
  It is (currently) independent of this one.
 
 Sorry, I wasn't clear, I did so and found it fine. I can merge it
 if you want but just like you I know that merging only parts of a
 series causes more trouble than it solves.

Understood, I'll resubmit the entire series.



Re: [haproxy]: Performance of haproxy-to-4-nginx vs direct-to-nginx

2015-04-29 Thread Willy Tarreau
Hi,

On Wed, Apr 29, 2015 at 04:26:56PM +0530, Krishna Kumar (Engineering) wrote:
 
 Request directly to 1 nginx backend server, size=256 bytes:
 
 Command: ab -k -n 10 -c 1000 nginx:80/256
 Requests per second:69749.02 [#/sec] (mean)
 Transfer rate:  34600.18 [Kbytes/sec] received
 
 Request to haproxy configured with 4 nginx backends (nbproc=4), size=256
 bytes:
 
 Command: ab -k -n 10 -c 1000 haproxy:80/256
 Requests per second:19071.55 [#/sec] (mean)
 Transfer rate:  9461.28 [Kbytes/sec] received

These numbers are extremely low and very likely indicate an http
close mode combined with an untuned nf_conntrack.

 mpstat (first 4 processors only, rest are almost zero):
 Average:  CPU   %usr  %nice   %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
 Average:  all   0.44   0.00   1.59     0.00  0.00   2.96    0.00    0.00    0.00  95.01
 Average:    0   0.25   0.00   0.75     0.00  0.00  98.01    0.00    0.00    0.00   1.00

This CPU is spending its time in softirq, probably due to conntrack
spending a lot of time looking for the session for each packet in too
small a hash table.
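
If conntrack has to stay, enlarging the table and its hash usually helps;
for example (values are purely illustrative, size them for your load):

sysctl -w net.netfilter.nf_conntrack_max=1048576
echo 262144 > /sys/module/nf_conntrack/parameters/hashsize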

 
 Request directly to 1 nginx backend server, size=64K
 
 Command: ab -k -n 10 -c 1000 nginx:80/64K
 Requests per second:3342.56 [#/sec] (mean)
 Transfer rate:  214759.11 [Kbytes/sec] received
 

Note, this is about 2 Gbps. How is your network configured ? You should
normally see either 1 Gbps with a gig NIC or 10 Gbps with a 10G NIC,
because retrieving a static file is very cheap. Would you happen to be
using bonding in round-robin mode maybe ? If that's the case, it's a
performance disaster due to out-of-order packets and could explain some
of the high %softirq.

 Request to haproxy configured with 4 nginx backends (nbproc=4), size=64K
 
 Command: ab -k -n 10 -c 1000 haproxy:80/64K
 
 Requests per second:1283.62 [#/sec] (mean)
 Transfer rate:  82472.35 [Kbytes/sec] received

That's terribly low. I'm doing more than that on a dockstar that fits
in my hand and is powered over USB!

 pidstat:
 Average:  UID   PID   %usr  %system  %guest   %CPU  CPU  Command
 Average:  105   471   0.93    14.70    0.00  15.63    -  haproxy
 Average:  105   472   1.12    21.55    0.00  22.67    -  haproxy
 Average:  105   473   1.41    20.95    0.00  22.36    -  haproxy
 Average:  105   475   0.22     4.85    0.00   5.07    -  haproxy

Far too much time is spent in the system; the TCP stack is waiting for
the softirqs on CPU0 to do their job.

 --
 Configuration file:
 global
 daemon
 maxconn  6
 quiet
 nbproc 4
 maxpipes 16384
 user haproxy
 group haproxy
 stats socket /var/run/haproxy.sock mode 600 level admin
 stats timeout 2m
 
 defaults
 option forwardfor
 option http-server-close

Please retry without http-server-close to maintain keep-alive to the
servers; that will avoid the session setup/teardown. If that becomes
better, there's definitely something to fix in the conntrack or maybe
in iptables rules if you have some. But in any case don't put such a
system in production like this, it almost does not work; you should
see roughly 10 times the numbers you're currently getting.
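
For example, a defaults section along these lines keeps the server-side
connections alive (a sketch, not your full configuration; timeouts are
illustrative):

defaults
    mode http
    option forwardfor
    option http-keep-alive
    timeout connect 5s
    timeout client 30s
    timeout server 30s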

It would also be interesting to see what ab directly to nginx does without
'-k', as it will then do part of the job haproxy is doing with nginx and
can help troubleshoot the issue in a simplified setup first.

Willy




Updating a stick table from the HTTP response

2015-04-29 Thread Holger Just
Hello all,

with HAProxy 1.5.11, we have implemented rate limiting based on some
aspects of the request (Host header, path, ...). In our implementation,
we delay limited requests by forcing a WAIT_END in order to prevent
brute-force attacks against e.g. passwords or login tokens:


acl bruteforce_slowdown sc2_http_req_rate gt 20
acl limited_path path_beg /sensitive/stuff

stick-table type ip size 100k expire 30m store http_req_rate(300s)
tcp-request content track-sc2 src if METH_POST limited_path

# Delay the request for 10 seconds if we have too many requests
tcp-request inspect-delay 10s
tcp-request content accept unless bruteforce_slowdown limited_path
tcp-request content accept if WAIT_END


As you can see above, we track only certain requests to sensitive
resources and delay further requests after 20 req / 300 s without taking
the actual response into account. This is good enough for e.g. a web
form to login or change a password.

Now, unfortunately we have some endpoints which are protected with Basic
Auth which is validated by the application. If the password is
incorrect, we return an HTTP 401.

In order to prevent brute-forcing of passwords against these endpoints,
we would like to employ a similar delay mechanism. Unfortunately, we
can't detect from the request headers alone if we have a bad request but
have to inspect the response and increase the sc2 counter only if we
have seen a 401.

In the end, I would like to use a fetch similar to sc1_http_err_rate but
reduced to only specific cases, i.e. 401 responses on certain paths or
Host names.

Now the problem is that we apparently can't manipulate the stick table
from an HTTP response, or more precisely: I have not found a way to do it.

We would like to do something like


tcp-request content track-sc2 src if { status 401 }


which would allow us to track these error-responses similar to the first
approach and handle the next requests the same way as above.

Now my questions are:

* Is something like this possible/feasible right now?
* Is there some other way to implement rate limiting based on certain
  server responses?
* If this is not possible right now, would it be feasible to implement
  the possibility to track responses similar to what is possible with
  requests right now?

Thank you for your feedback,
Holger Just



Re: [PATCH v2 0/3] MEDIUM: Change verbosity of email alerts

2015-04-29 Thread Willy Tarreau
Hi Simon,

On Thu, Apr 30, 2015 at 01:10:32PM +0900, Simon Horman wrote:
 Hi,
 
 the aim of this series is to make haproxy send more email alerts when
 they are likely to be useful and fewer when they are likely to be
 unwanted.
(...)

Whole series applied, thank you very much!

Willy




Re: Show outgoing headers when full debug enabled

2015-04-29 Thread Willy Tarreau
On Mon, Apr 27, 2015 at 06:56:23PM -0400, CJ Ess wrote:
 When you run HAProxy in full debugging mode there is a debug_hdrs() call
 that displays all of the http headers read from the frontend, I'd also like
 to be able to see the headers being sent to the backend.
 
 So far I haven't pinpointed where the headers are being sent from so that I
 can add another debug_hdrs() call. Anyone point me to the right place?

There's no single place; a request leaves once all request analysers
are removed. Also, even after that, a last change may be operated due
to the http-send-name-header option. If this is just for debugging, you
can add some printf calls in connect_server(), that might be the easiest
way to do so.

Hoping this helps,
Willy




Re: [PATCH v2 0/3] MEDIUM: Change verbosity of email alerts

2015-04-29 Thread Simon Horman
On Thu, Apr 30, 2015 at 07:31:28AM +0200, Willy Tarreau wrote:
 Hi Simon,
 
 On Thu, Apr 30, 2015 at 01:10:32PM +0900, Simon Horman wrote:
  Hi,
  
  the aim of this series is to make haproxy send more email alerts when
  they are likely to be useful and fewer when they are likely to be
  unwanted.
 (...)
 
 Whole series applied, thank you very much!

Thanks!



Re: Recommendations for a new haproxy installation

2015-04-29 Thread Shawn Heisey
On 4/29/2015 3:00 PM, Shawn Heisey wrote:
 How can I be sure that openssl is compiled with support for TLS
 acceleration in the CPU?  I am compiling haproxy from source.  Would you
 recommend that I install a separate and newer openssl from source for
 explicit use with haproxy, and tweak its config for the specific
 hardware it's on?

Followup on the openssl part of my email.

I built and installed openssl 1.0.2a from source, with this config line:

./config no-shared enable-ec_nistp_64_gcc_128 threads

Then I built haproxy using this command:

make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 CPU=native
SSL_INC=/usr/local/ssl/include SSL_LIB=/usr/local/ssl/lib ADDLIB=-ldl

Here's the 'haproxy -vv' and 'uname -a' output:

---
HA-Proxy version 1.5.11 2015/01/31
Copyright 2000-2015 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux2628
  CPU = native
  CC  = gcc
  CFLAGS  = -O2 -march=native -g -fno-strict-aliasing
  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.2a 19 Mar 2015
Running on OpenSSL version : OpenSSL 1.0.2a 19 Mar 2015
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.31 2012-07-06
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.
---
Linux lb1 3.13.0-49-generic #83-Ubuntu SMP Fri Apr 10 20:11:33 UTC 2015
x86_64 x86_64 x86_64 GNU/Linux
---

Can anyone who's knowledgeable about this look over what I've done and
tell me if they'd do something different?  I also still need some
assistance with the rest of my original email.

Side issue, mentioning in case it's important, though I suspect it
isn't:  When I built openssl with the indicated config, 'make test'
failed, but it passed on an earlier build with 'shared' instead of
'no-shared'.  I rebuilt with no-shared because haproxy was dynamically
linking the older openssl library installed from ubuntu packages,
instead of the newer library used for compile.
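
For reference, this is how I plan to double-check the result (paths assume
the default /usr/local/ssl prefix and my install location, so adjust as
needed):

# confirm haproxy is not picking up the distro's shared libssl/libcrypto
ldd /usr/local/sbin/haproxy | grep -Ei 'ssl|crypto'

# compare AES through the EVP interface (which can use AES-NI) against the
# plain software path; a large gap suggests AES-NI is being used
/usr/local/ssl/bin/openssl speed -evp aes-128-cbc
/usr/local/ssl/bin/openssl speed aes-128-cbc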

Thanks,
Shawn




[PATCH v2 0/3] MEDIUM: Change verbosity of email alerts

2015-04-29 Thread Simon Horman
Hi,

the aim of this series is to make haproxy send more email alerts when
they are likely to be useful and fewer when they are likely to be
unwanted.

Changes in v2:

* As suggested by Willy Tarreau, lower the priority at which email alerts
  for of log-health-checks messages are sent rather never sending them
* Added documentation patch

Simon Horman (3):
  MEDIUM: Lower priority of email alerts for log-health-checks messages
  MEDIUM: Send email alerts when servers are marked as UP or enter the
drain state
  MEDIUM: Document when email-alerts are sent

 doc/configuration.txt | 9 +
 src/checks.c  | 2 +-
 src/server.c  | 2 ++
 3 files changed, 12 insertions(+), 1 deletion(-)

-- 
2.1.4




[PATCH v2 3/3] MEDIUM: Document when email-alerts are sent

2015-04-29 Thread Simon Horman
Document the influence of email-alert level and other configuration
parameters on when email-alerts are sent.

Signed-off-by: Simon Horman ho...@verge.net.au
---
 doc/configuration.txt | 9 +
 1 file changed, 9 insertions(+)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index f72339a04588..780d6b505408 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -2786,6 +2786,15 @@ email-alert level <level>
   "email-alert to" to be set and if so sending email alerts is enabled
   for the proxy.
 
+  Alerts are sent when :
+
+  * An un-paused server is marked as down and level is alert or lower
+  * A paused server is marked as down and level is notice or lower
+  * A server is marked as up or enters the drain state and level
+is notice or lower
+  * option log-health-checks is enabled, level is info or lower,
+ and a health check status update occurs
+
   See also : "email-alert from", "email-alert mailers",
  "email-alert myhostname", "email-alert to",
  section 3.6 about mailers.
-- 
2.1.4




[PATCH v2 2/3] MEDIUM: Send email alerts when servers are marked as UP or enter the drain state

2015-04-29 Thread Simon Horman
This is similar to the way email alerts are sent when servers are marked as
DOWN.

Like the log messages corresponding to these state changes, the messages
have log level notice. Thus they are suppressed by the default email-alert
level of 'alert'. To allow these messages the email-alert level should
be set to 'notice', 'info' or 'debug'. e.g.:

email-alert level notice

email-alert mailers and email-alert to settings are also required in
order for any email alerts to be sent.
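
For example (a minimal sketch; the mailer address and recipients are
made up):

mailers mymailers
    mailer smtp1 192.0.2.10:25

backend app
    email-alert mailers mymailers
    email-alert from haproxy@example.com
    email-alert to admins@example.com
    email-alert level notice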

A follow-up patch will document the above.

Signed-off-by: Simon Horman ho...@verge.net.au
---
 src/server.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/server.c b/src/server.c
index a50f9e123741..ee6b8508dac0 100644
--- a/src/server.c
+++ b/src/server.c
@@ -332,6 +332,7 @@ void srv_set_running(struct server *s, const char *reason)
 	srv_append_status(&trash, s, reason, xferred, 0);
 	Warning("%s.\n", trash.str);
 	send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+	send_email_alert(s, LOG_NOTICE, "%s", trash.str);
 
 	for (srv = s->trackers; srv; srv = srv->tracknext)
 		srv_set_running(srv, NULL);
@@ -484,6 +485,7 @@ void srv_set_admin_flag(struct server *s, enum srv_admin mode)
 
 	Warning("%s.\n", trash.str);
 	send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
+	send_email_alert(s, LOG_NOTICE, "%s", trash.str);
 
 	if (prev_srv_count && s->proxy->srv_bck == 0 && s->proxy->srv_act == 0)
 		set_backend_down(s->proxy);
-- 
2.1.4




[PATCH v2 1/3] MEDIUM: Lower priority of email alerts for log-health-checks messages

2015-04-29 Thread Simon Horman
Lower the priority of email alerts for log-health-checks messages from
LOG_NOTICE to LOG_INFO.

This is to allow set-ups with log-health-checks enabled to disable email
for health check state changes while leaving other email alerts enabled.

In order for email alerts to be sent for health check state changes
log-health-checks needs to be set and email-alert level needs to be 'info'
or lower. email-alert mailers and email-alert to settings are also
required in order for any email alerts to be sent.
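
For example (a sketch with made-up names; a matching mailers section is
also needed):

backend app
    option log-health-checks
    email-alert mailers mymailers
    email-alert from haproxy@example.com
    email-alert to admins@example.com
    email-alert level info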

A follow-up patch will document the above.

Signed-off-by: Simon Horman ho...@verge.net.au
---
 src/checks.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/checks.c b/src/checks.c
index 8a0231deb0a8..32c992195ec1 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -316,7 +316,7 @@ static void set_server_check_status(struct check *check, short status, const cha
 
 		Warning("%s.\n", trash.str);
 		send_log(s->proxy, LOG_NOTICE, "%s.\n", trash.str);
-		send_email_alert(s, LOG_NOTICE, "%s", trash.str);
+		send_email_alert(s, LOG_INFO, "%s", trash.str);
 	}
 }
 
-- 
2.1.4