Re: Config option for staging/dev backends?

2015-04-30 Thread Pavlos Parissis
On 30/04/2015 08:31 μμ, Shawn Heisey wrote:
 I have a number of backend configs that handle requests to dev and
 staging webservers.  These backend configs only have one server.  If
 that server goes down briefly because the server process is restarted,
 which happens frequently precisely because they are for dev/staging, I
 get a console notification from syslog.
 
 I definitely DO want this kind of console notification if one of the
 production backends has no server available, but I don't want the
 interruption for staging or dev.  If a config option to reduce the
 severity of the no server available notification on an individual
 backend isn't available currently, can one be added?
 
 Thanks,
 Shawn
 


Just disable health checking for those backends.
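For reference, a minimal sketch of what that looks like (backend and server names are hypothetical): health checks only run when the server line carries the 'check' keyword, so omitting it stops up/down events for that backend.

```
backend dev_web
    # no 'check' keyword -> no health checks, so no
    # "has no server available" notifications for this backend
    server dev-01 10.0.0.10:80
```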

Cheers,
Pavlos



signature.asc
Description: OpenPGP digital signature


Re: Config option for staging/dev backends?

2015-04-30 Thread Pavlos Parissis
On 30/04/2015 09:57 μμ, Shawn Heisey wrote:
 On 4/30/2015 1:03 PM, Pavlos Parissis wrote:
 On 30/04/2015 08:31 μμ, Shawn Heisey wrote:
 I definitely DO want this kind of console notification if one of the
 production backends has no server available, but I don't want the
 interruption for staging or dev.  If a config option to reduce the
 severity of the no server available notification on an individual
 backend isn't available currently, can one be added?

 Just disable health checking for those backends.
 
 There are a couple of reasons that I include dev/staging sites in the
 haproxy config.
 
 1) It ensures that haproxy is *always* part of the equation, since it
 will be part of the equation when the code is in production.  Site
 behavior might change in subtle ways if we don't connect in exactly the
 same way for dev, staging, and production.
 
 2) I need to verify that health checks actually work.
 
 If health checks are disabled on my dev/staging back ends, then I can't
 verify that those health checks actually work unless we deploy the new
 website code to a production server, which defeats part of the purpose
 of having a staging server in the first place.
 
 One thing that I can do is increase the 'fall' parameter for checks on
 the dedicated dev/staging servers, but there's a downside: haproxy won't
 notice that a server is down very quickly.  I don't mind that haproxy
 *logs* the server going down and the entire backend being unavailable
 ... in fact, that's a good thing ... I just don't want to see it on the
 console or in ssh sessions.  A message that's logged to the console
 implies that there's a problem requiring immediate attention.  A dev
 server rebooting does NOT require immediate attention.
 

ah ok, so you do want health checking; I misunderstood your initial
question. Then it is a matter of configuring your log daemon to emit
haproxy logs to the console, but that will emit all messages, not just
the ones from the staging backend, and I guess you do want to get
messages on the console for failures on other backends.

I guess if you use syslog-ng you can set a filter to skip log messages
matching a specific pattern (the backend names, in your case).
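As a rough, untested sketch of that idea (the filter, source, and destination names are all hypothetical), a syslog-ng fragment could look like:

```
# Keep haproxy messages off the console when they mention
# staging/dev backends; the pattern is an example only.
filter f_not_dev {
    not message("(staging|dev)_[^ ]+ has no server available");
};
log {
    source(s_local);
    filter(f_haproxy);
    filter(f_not_dev);
    destination(d_console);
};
```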

Cheers,
Pavlos






Re: [haproxy]: Performance of haproxy-to-4-nginx vs direct-to-nginx

2015-04-29 Thread Pavlos Parissis
On 29/04/2015 12:56 μμ, Krishna Kumar (Engineering) wrote:
 Dear all,
 
 Sorry, my lab systems were down for many days and I could not get back
 on this earlier. After
 new systems were allocated, I managed to get all the requested
 information with a fresh run
 (Sorry, this is a long mail too!). There are now 4 physical servers,
 running Debian 3.2.0-4-amd64,
 connected directly to a common switch:
 
 server1: Run 'ab' in a container, no cpu/memory restriction.
 server2: Run haproxy in a container, configured with 4 nginx's,
 cpu/memory configured as
   shown below.
 server3: Run 2 different nginx containers, no cpu/mem restriction.
 server4: Run 2 different nginx containers, for a total of 4 nginx,
 no cpu/mem restriction.
 
 The servers have 2 sockets, each with 24 cores. Socket 0 has cores
 0,2,4,..,46 and Socket 1 has
 cores 1,3,5,..,47. The NIC (ixgbe) is bound to CPU 0. 

It is considered a bad thing to bind all queues of a NIC to one CPU, as
it creates a major bottleneck: HAProxy has to wait for the interrupts to
be processed by a single CPU, which is saturated.

 Haproxy is started
 on cpu's:
 2,4,6,8,10,12,14,16, so that is in the same cache line as the nic (nginx
 is run on different servers
 as explained above). No tuning on nginx servers as the comparison is between

How many workers does nginx run?

 'ab' -> 'nginx' and 'ab' -> 'haproxy' -> nginx(s). The cpus are
 Intel(R) Xeon(R) CPU E5-2670 v3
 @ 2.30GHz. The containers are all configured with 8GB, server having
 128GB memory.
 
 mpstat and iostat were captured during the test, where the capture
 started after 'ab' started and
 capture ended just before 'ab' finished so as to get warm numbers.
 
 
 Request directly to 1 nginx backend server, size=256 bytes:
 
 Command: ab -k -n 10 -c 1000 nginx:80/256
 Requests per second:    69749.02 [#/sec] (mean)
 Transfer rate:          34600.18 [Kbytes/sec] received
 
 Request to haproxy configured with 4 nginx backends (nbproc=4), size=256
 bytes:
 
 Command: ab -k -n 10 -c 1000 haproxy:80/256
 Requests per second:    19071.55 [#/sec] (mean)
 Transfer rate:          9461.28 [Kbytes/sec] received
 
 mpstat (first 4 processors only, rest are almost zero):
 Average:  CPU   %usr  %nice   %sys  %iowait  %irq  %soft  %steal  %guest  %gnice  %idle
 Average:  all   0.44   0.00   1.59     0.00  0.00   2.96    0.00    0.00    0.00  95.01
 Average:    0   0.25   0.00   0.75     0.00  0.00  98.01    0.00    0.00    0.00   1.00

All network interrupts are processed by CPU 0, which is saturated.
You need to spread the NIC's queues across different CPUs. Either use
irqbalance or the following 'ugly' script, which you need to modify a
bit since I have 2 NICs and you have only 1. You also need to adjust the
number of queues; run 'grep eth /proc/interrupts' to find out how many
you have.

#!/bin/sh

awk '
function get_affinity(cpus) {
    split(cpus, list, /,/)
    mask = 0
    for (val in list) {
        mask += lshift(1, list[val])
    }
    return mask
}
BEGIN {
    # Interrupt -> CPU core(s) mapping
    map["eth0-q0"]=0
    map["eth0-q1"]=1
    map["eth0-q2"]=2
    map["eth0-q3"]=3
    map["eth0-q4"]=4
    map["eth0-q5"]=5
    map["eth0-q6"]=6
    map["eth0-q7"]=7
    map["eth1-q0"]=12
    map["eth1-q1"]=13
    map["eth1-q2"]=14
    map["eth1-q3"]=15
    map["eth1-q4"]=16
    map["eth1-q5"]=17
    map["eth1-q6"]=18
    map["eth1-q7"]=19
}
/eth/ {
    # $1 is the IRQ number with a trailing colon, e.g. "45:"
    irq = substr($1, 1, length($1) - 1)
    queue = $NF
    printf "%s (%s) -> %s (%08X)\n", queue, irq, map[queue], get_affinity(map[queue])
    system(sprintf("echo %08X > /proc/irq/%s/smp_affinity\n", get_affinity(map[queue]), irq))
}
' /proc/interrupts
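The smp_affinity value the script writes is a bitmask with one bit per CPU core, which is exactly what get_affinity() computes; the same arithmetic in plain shell (cores 2 and 4 are just an example):

```shell
# Build the smp_affinity mask for CPU cores 2 and 4:
# bit 2 (0x04) + bit 4 (0x10) = 0x14
printf '%08X\n' $(( (1 << 2) | (1 << 4) ))
# prints 00000014
```

Writing that hex value to /proc/irq/<n>/smp_affinity pins the interrupt to those cores.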

 Average:    1   1.26   0.00   5.28     0.00  0.00   2.51    0.00    0.00    0.00  90.95
 Average:    2   2.76   0.00   8.79     0.00  0.00   5.78    0.00    0.00    0.00  82.66
 Average:    3   1.51   0.00   6.78     0.00  0.00   3.02    0.00    0.00    0.00  88.69
 
 pidstat:
 Average:  105   471   5.00   33.50   0.00   38.50  -  haproxy
 Average:  105   472   6.50   44.00   0.00   50.50  -  haproxy
 Average:  105   473   8.50   40.00   0.00   48.50  -  haproxy
 Average:  105   475   2.50   14.00   0.00   16.50  -  haproxy
 
 Request directly to 1 nginx backend server, size=64K
 

I would like to see the pidstat and mpstat output while you test nginx directly.

Cheers,
Pavlos





Re: Achieving Zero Downtime Restarts at Yelp

2015-04-14 Thread Pavlos Parissis


On 13/04/2015 07:24 μμ, Joseph Lynch wrote:
 Hello,
 
 I published an article today on Yelp's engineering blog
 (http://engineeringblog.yelp.com/2015/04/true-zero-downtime-haproxy-reloads.html)
 that shows a technique we use for low latency, zero downtime restarts
 of HAProxy. This solves the "when I restart HAProxy some of my clients
 get RSTs" problems that can occur. We built it to solve the RSTs in
 our internal load balancing, so there is a little more work to be
 done to modify the method to work with external traffic, which I
 talk about in the post.
 

thanks for sharing this very detailed article.

You wrote that
'As of version 1.5.11, HAProxy does not support zero downtime restarts
or reloads of configuration. Instead, it supports fast...'

Was zero downtime supported before 1.5.11? I believe not.

Cheers,
Pavlos



Re: AW: forward client disconnects in http mode

2015-04-09 Thread Pavlos Parissis
On 09/04/2015 02:52 μμ, Dieter van Zeder wrote:
 Here's the stripped-down configuration. http-server-close is required in
 order to use leastconn. The frontend actually contains various acl rules,
 thus mode http.
 

I had a look at the doc and it isn't mentioned that http-server-close is
required by the leastconn balance method. Am I missing something here?

Hold on a second. HAProxy 1.5 (I assume you use that version) runs in
keep-alive mode by default, which means your app will see the TCP
connection on the server side closed as soon as the client closes the
connection. Unless default timeouts play a role here.

Remove 'option http-server-close' and recheck with curl and Ctrl+C.

Cheers,
Pavlos






Re: AW: forward client disconnects in http mode

2015-04-09 Thread Pavlos Parissis
On 09/04/2015 02:11 μμ, Dieter van Zeder wrote:
 It's not about idle connections, it's about connections closed by the client
 before the server fully sent the response. I have an apache module which can
 detect client disconnects and then stops processing. Having haproxy before
 those servers, a process keeps running, even though the client has
 disconnected (by sending FIN,ACK?). I wonder if haproxy can forward it, even
 in http mode.
 

Can you share your conf?
Pavlos





Re: forward client disconnects in http mode

2015-04-09 Thread Pavlos Parissis
On 09/04/2015 12:52 μμ, Dieter van Zeder wrote:
 Hi there, is it possible to forward packets indicating a client
 disconnect, with haproxy running in http mode? The webserver is able to
 cancel long running requests, but the disconnect cannot be detected at
 the backend.
 

I don't quite understand what you want to achieve with this.
Are you concerned about idle TCP connections on the backend?

Cheers,
Pavlos







Re: how to make HAproxy itself reply to a health check from another load balancer?

2015-04-07 Thread Pavlos Parissis
On 07/04/2015 09:55 μμ, Florin Andrei wrote:
 Let's say HAproxy is used for a second layer of load balancers, with the
 first layer being AWS ELBs.
 
 When you create an ELB, you can specify a health check. This should
 actually check the health of the HAproxy instances that the ELB is
 pointing at.
 
 Is there a way to make HAproxy answer a health check from an ELB? This
 health check cannot be passed all the way to the backend web servers,
 because they all answer different URL prefixes.
 
 

You can use monitor-uri, here is an example

acl site_dead nbsrv(foo_backend) lt 2
monitor-uri   /site_alive
monitor fail  if site_dead

then point the health check from the ELB to ip/site_alive
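For completeness, a hedged sketch of where those lines live ('monitor-uri' and 'monitor fail' belong in a frontend or listen section; the frontend name here is hypothetical):

```
frontend www
    bind *:80
    acl site_dead nbsrv(foo_backend) lt 2
    monitor-uri   /site_alive
    monitor fail  if site_dead
    default_backend foo_backend
```

With this, HAProxy itself answers /site_alive with 200 while at least 2 servers in foo_backend are up, and 503 otherwise, so the ELB check never has to reach the web servers.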

Cheers,
Pavlos








server-side connection pool manager

2015-04-06 Thread Pavlos Parissis
Hoi,

While I was reading commit descriptions I saw in
REORG/MAJOR: session: rename the session entity to stream

[..snip..]
Some more cleanup is needed because some code was already far from
being clean. The server queue management still refers to sessions at
many places while comments talk about connections. This will have to
be cleaned up once we have a server-side connection pool manager.

I was wondering if we are going to see server-side connection pooling in
1.6. I know that HTTP/2 will bring it in on the client side.

Cheers,
Pavlos





Re: 1.5, reload and zero downtime

2015-04-06 Thread Pavlos Parissis
On 06/04/2015 08:41 μμ, Brian Fleming wrote:
 I can do reload and there will be no downtime?

Yes, reload is a safe operation. But don't be surprised if you see the
old process alive for a long time (days). This behavior is caused by
insane timeout values on the client side used by some people (including
myself).

Cheers,
Pavlos






using a fetcher in wrong context, performance tip

2015-03-30 Thread Pavlos Parissis
Hi all,

During a stress test I discovered a 5% performance drop at a rate of
380K req/s when the following 3 statements were added in a frontend
where HTTPS is not used:

http-request add-header X-Cipher-Name %sslc
http-request add-header X-Cipher-Version %sslv
http-request add-header X-Cipher-Bits %[ssl_fc_use_keysize]

Here is the stress result
# wrk --timeout 3s --latency -c 1000 -d 5m -t 24
http://10.190.3.1/
Running 5m test @ http://10.190.3.1/
  24 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.31ms  815.14us  27.06ms   74.32%
    Req/Sec    16.98k     2.25k   32.00k    85.12%
  Latency Distribution
     50%    2.43ms
     75%    2.71ms
     90%    3.15ms
     99%    3.88ms
  115019521 requests in 5.00m, 16.50GB read
  Socket errors: connect 0, read 0, write 0, timeout 13264
Requests/sec: 383420.54
Transfer/sec: 56.31MB

After I removed only the ssl_fc_use_keysize fetcher
http-request add-header X-Cipher-Bits %[ssl_fc_use_keysize]

performance was improved by 5%, see below
# wrk --timeout 3s --latency -c 1000 -d 5m -t 24
http://10.190.3.1/
Running 5m test @ http://10.190.3.1/
  24 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.12ms  831.01us 206.61ms   74.86%
    Req/Sec    17.88k     2.22k   31.56k    80.62%
  Latency Distribution
     50%    2.30ms
     75%    2.62ms
     90%    2.88ms
     99%    3.72ms
  120947683 requests in 5.00m, 17.35GB read
  Socket errors: connect 0, read 0, write 0, timeout 17255
Requests/sec: 403180.76
Transfer/sec: 59.21MB

When I added it back, but with a condition that the traffic is HTTPS,
performance at that high request rate was restored:
 http-request add-header X-Cipher-Bits %[ssl_fc_use_keysize] if https_traffic

stress results:
# wrk --timeout 3s --latency -c 1000 -d 5m -t 24
http://10.190.3.1/
Running 5m test @ http://10.190.3.1/
  24 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.07ms  823.41us  32.08ms   75.64%
    Req/Sec    17.86k     2.27k   29.56k    81.81%
  Latency Distribution
     50%    2.27ms
     75%    2.54ms
     90%    2.76ms
     99%    3.80ms
  120945989 requests in 5.00m, 17.35GB read
  Socket errors: connect 0, read 0, write 0, timeout 19828
Requests/sec: 403177.77
Transfer/sec: 59.21MB


I also added the same condition to the other 2 variables, accessed as log
formatters, and performance improved even more.

stress results with
 http-request add-header X-Cipher-Name %sslc if https_traffic
 http-request add-header X-Cipher-Version %sslv if https_traffic
 http-request add-header X-Cipher-Bits %[ssl_fc_use_keysize] if https_traffic

# wrk --timeout 3s --latency -c 1000 -d 5m -t 24
http://10.190.3.1/
Running 5m test @ http://10.190.3.1/
  24 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.12ms    9.64ms 607.23ms   99.79%
    Req/Sec    19.43k     3.28k   33.56k    82.82%
  Latency Distribution
     50%    1.95ms
     75%    2.20ms
     90%    2.41ms
     99%    3.36ms
  131646991 requests in 5.00m, 18.88GB read
  Socket errors: connect 0, read 0, write 0, timeout 30179
Requests/sec: 438828.20
Transfer/sec: 64.45MB

The lesson learned here is to either condition all your statements, or
pay attention to the context in which you apply such logic.


Cheers,
Pavlos





Re: [haproxy]: Performance of haproxy-to-4-nginx vs direct-to-nginx

2015-03-30 Thread Pavlos Parissis
On 30/03/2015 07:13 πμ, Krishna Kumar Unnikrishnan (Engineering) wrote:
 Hi all,
 
 I am testing haproxy as follows:
 
 System1: 24 Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz, 64 GB. This system
 is running 3.19.0 kernel, and hosts the following servers:
 1. nginx1 server - cpu 1-2, 1G memory, runs as a Linux
 container using cpuset.cpus feature.
 2. nginx2 server - cpu 3-4, 1G memory, runs via LXC.
 3. nginx3 server - cpu 5-6, 1G memory, runs via LXC.
 4. nginx4 server - cpu 7-8, 1G memory, runs via LXC.
 5. haproxy - cpu 9-10, 1G memory runs via LXC. Runs haproxy
 ver 1.5.8: configured with above 4 container's ip
 addresses as the backend.
 
 System2: 56 Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz, 128 GB. This system
 is running 3.19.0, and run's 'ab' either to the haproxy node, or
 directly to an nginx container. System1  System2 are locally
 connected via a switch with Intel 10G cards.
 
 With very small packets of 64 bytes, I am getting the following results:
 
 A. ab -n 10 -c 4096 http://nginx1:80/64
 -
 
 Concurrency Level:  4096
 Time taken for tests:   3.232 seconds
 Complete requests:  10
 Failed requests:0
 Total transferred:  2880 bytes
 HTML transferred:   640 bytes
 Requests per second:    30943.26 [#/sec] (mean)
 Time per request:   132.371 [ms] (mean)
 Time per request:   0.032 [ms] (mean, across all concurrent requests)
 Transfer rate:  8702.79 [Kbytes/sec] received
 
 Connection Times (ms)
   min  mean[+/-sd] median   max
 Connect:        9   65  137.4     45   1050
 Processing:     4   52   25.3     51    241
 Waiting:        3   37   19.2     35    234
 Total:         16  117  146.1    111   1142
 
 Percentage of the requests served within a certain time (ms)
   50%    111
   66%    119
   75%    122
   80%    124
   90%    133
   95%    215
   98%    254
   99%   1126
  100%   1142 (longest request)
 
 B. ab -n 10 -c 4096 http://haproxy:80/64
 --
 
 Concurrency Level:  4096
 Time taken for tests:   5.503 seconds
 Complete requests:  10
 Failed requests:0
 Total transferred:  2880 bytes
 HTML transferred:   640 bytes
 Requests per second:    18172.96 [#/sec] (mean)
 Time per request:   225.390 [ms] (mean)
 Time per request:   0.055 [ms] (mean, across all concurrent requests)
 Transfer rate:  5111.15 [Kbytes/sec] received
 
 Connection Times (ms)
   min  mean[+/-sd] median   max
 Connect:        0  134  358.3     23   3033
 Processing:     2   61   47.7     51    700
 Waiting:        2   50   43.0     42    685
 Total:          7  194  366.7     79   3122
 
 Percentage of the requests served within a certain time (ms)
   50%     79
   66%    105
   75%    134
   80%    159
   90%    318
   95%   1076
   98%   1140
   99%   1240
  100%   3122 (longest request)
 
 I expected haproxy to deliver better results with multiple connections,
 since
 haproxy will round-robin between the 4 servers. I have done no tuning,
 and have
 used the config file at the end of this mail. With 256K file size, the times
 are slightly better for haproxy vs nginx. I notice that %requests served is
 similar for both cases till about 90%.
 
 Any help is very much appreciated.
 

You haven't mentioned the CPU load on the host and on the guest systems.
Use 'pidstat -p $(pgrep -d ',' haproxy) -u 1' to monitor the CPU stats
of the haproxy processes, and 'mpstat -P ALL 1' to check the CPU load
from software interrupts.


Cheers,
Pavlos






Long living TCP connections

2015-03-02 Thread Pavlos Parissis
Hi,

Today I noticed after a reload that the previous process stayed alive
for a long time (> 8 hours). This HAProxy runs in HTTP mode in front of
a few squid servers; the conf is quite simple[1] and the version is
1.5.6[2].

I had an lsof watcher on the old pid, and the number of connections has
been very slowly dropping from 2K, down to 200 right now.

For a few of the connections that were in established state (for the old
process) I ran tcpdump and saw no activity at all. I have attached a
network trace from one of those, and you can see that the client
periodically sends 5 bytes every 10 minutes. This HAProxy is used by
normal browsers but also by cronjobs in various languages (Perl, Python,
C, Go etc.).

I was surprised by this very long inactivity period for a TCP
connection on a system which has reasonable settings for TCP keepalive[3].

But 'timeout tunnel' is not set, and since this HAProxy is serving proxy
traffic to squid, all client/server connections are treated as tunnels.
Am I right?

My question is about TCP keepalive and tunnels. Are system keepalive
settings ignored when HAProxy treats client/server connections as
tunnels?
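If tunnels are indeed in play here, a hedged note: HAProxy has an explicit 'timeout tunnel' directive that takes over from the client/server timeouts once a connection is switched to tunnel mode. A sketch against the defaults section shown below in [1] (the 1h value is arbitrary):

```
defaults
    timeout client  30m
    timeout server  30m
    # applies once a connection is switched to tunnel mode,
    # overriding 'timeout client' and 'timeout server' for it
    timeout tunnel  1h
```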

Cheers,
Pavlos

[1]
global
log 127.0.0.1 local2
chroot  /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 65536
tune.bufsize    65536
user    haproxy
group   haproxy
daemon


stats socket /var/lib/haproxy/stats uid 0 gid 0 mode 0440 level admin


defaults
mode    http
log global
option  httplog clf
option  dontlognull
option  forwardfor except 10.0.0.0/8
option  redispatch
option  http-server-close
option  http-use-proxy-header
option  tcp-smart-accept
option  tcp-smart-connect
no option   checkcache
retries 3
maxconn 65536
timeout queue   1m
timeout connect 4s
timeout client  30m
timeout server  30m
timeout check   10s
timeout http-request    10s
timeout http-keep-alive 10s
errorfile 408   /dev/null

listen haproxy :8080
mode    http
stats   enable
stats   uri /
stats   show-node
stats   refresh 10s


frontend http_in *:3128
default_backend squid_http

backend squid_http
balance leastconn
server  squid-01  squid-01:3128
server  squid-02  squid-02:3128
server  squid-03  squid-03:3128
server  squid-04  squid-04:3128


[2]
haproxy -vv
HA-Proxy version 1.5.6 2014/10/18
Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  =
  OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1
USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.3
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 7.8 2008-09-05
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.


[3]
sudo sysctl -a|grep keepalive
net.ipv4.tcp_keepalive_time = 30
net.ipv4.tcp_keepalive_probes = 2
net.ipv4.tcp_keepalive_intvl = 1
 tcpdump host 10.155.96.64 and port 64473
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
18:56:52.528548 IP 10.155.96.64.64473 > haproxyserver.squid: Flags [P.], seq 
2900223928:2900223933, ack 3414209439, win 8210, options [nop,nop,TS val 
1306751675 ecr 1014245525], length 5
18:56:52.548765 IP haproxyserver.squid > 10.155.96.64.64473: Flags [P.], seq 
1:6, ack 5, win 31, options [nop,nop,TS val 1014785499 ecr 1306751675], length 5
18:56:52.575487 IP 10.155.96.64.64473 > haproxyserver.squid: Flags [.], ack 6, 
win 8209, options [nop,nop,TS val 1306751720 ecr 1014785499], length 0
19:06:00.052136 IP 10.155.96.64.64473 > haproxyserver.squid: Flags [P.], seq 
5:10, ack 6, win 8209, options [nop,nop,TS val 1307290366 ecr 1014785499], 
length 5
19:06:00.069963 IP haproxyserver.squid > 10.155.96.64.64473: Flags [P.], seq 
6:11, ack 10, win 31, options [nop,nop,TS val 1015333020 ecr 1307290366], 
length 5
19:06:00.094985 IP 10.155.96.64.64473 > 

Re: [PATCH 2/2] DOC: Document the new tls-ticket-keys bind keyword

2015-02-25 Thread Pavlos Parissis
On 24/02/2015 04:57 μμ, Nenad Merdanovic wrote:
 Hello Vincent, Lucas
 
 On 2/24/2015 4:56 PM, Lukas Tribus wrote:
 It would be nice to add a note that without proper rotation, PFS is
 compromised by the use of TLS tickets. People may not understand why
 they need to put 3 keys in this file and may never change them.

 Agreed, we have to clarify that a never changing tls-tickets-keys
 file is worse than no file at all.

 
 Done! I'll wait for more comments from ML before sending the updated patchset.
 


-- Use the stats socket to update the key list without a reload

-- Update the 'Session state at disconnection' log schema to include
something useful in case the server receives a ticket which was encrypted
with a key that is no longer in the list. Debugging SSL problems is a
nightmare by definition, and having a lot of debug information is very
much appreciated by sysadmins

-- Possibly use the peers logic to sync the list to others; tricky, but
it is required when you have several LBs. Alternatively, users can deploy
the logic that Twitter has used


Excellent work guys, thank you.
Pavlos







Re: NOSRV/BADREQ from some Java based clients [SSL handshake issue]

2015-02-23 Thread Pavlos Parissis
On 23/02/2015 10:55 μμ, NuSkooler wrote:
 Attached is the information you requested -- and hopefully performed
 correctly :)
 
 * no_haproxy.pcap: This is a successful connection + POST to the
 original Mochiweb server. Note that here the port is 8443 not 443
 (IP=10.3.3.3)
 * ha_self_signed.pcap: Failed attempt against HAProxy with a self
 signed certificate  key.
 * TEST_cert_and_key.pem: The self signed cert/key from above.
 
 The bind line for ha_self_signed.pcap looks like this:
 bind *:443 ssl crt /home/bashby/Lukas/TEST_cert_and_key.pem ciphers AES128-SHA
 
 Thanks again to you and everyone here taking the time to look at this!
 

I am not an expert, but from the following I can understand
that the client and server agreed to use the
TLS_RSA_WITH_AES_128_CBC_SHA cipher, but over SSLv3. I am wondering if
the AES cipher suite is supported on SSLv3.

ssldump -k TEST_cert_and_key.pem -r ha_self_signed.pcap
New TCP connection #1: 10.1.1.93(56835) <-> 10.3.2.74(443)
1 1  0.0138 (0.0138)  C>S  Handshake
  ClientHello
Version 3.1
cipher suites
TLS_RSA_WITH_RC4_128_MD5
TLS_RSA_WITH_RC4_128_SHA
TLS_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_256_CBC_SHA
Unknown value 0xc002
Unknown value 0xc004
Unknown value 0xc005
Unknown value 0xc00c
Unknown value 0xc00e
Unknown value 0xc00f
Unknown value 0xc007
Unknown value 0xc009
Unknown value 0xc00a
Unknown value 0xc011
Unknown value 0xc013
Unknown value 0xc014
TLS_DHE_RSA_WITH_AES_128_CBC_SHA
TLS_DHE_RSA_WITH_AES_256_CBC_SHA
TLS_DHE_DSS_WITH_AES_128_CBC_SHA
TLS_DHE_DSS_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_3DES_EDE_CBC_SHA
Unknown value 0xc003
Unknown value 0xc00d
Unknown value 0xc008
Unknown value 0xc012
TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
TLS_RSA_WITH_DES_CBC_SHA
TLS_DHE_RSA_WITH_DES_CBC_SHA
TLS_DHE_DSS_WITH_DES_CBC_SHA
TLS_RSA_EXPORT_WITH_RC4_40_MD5
TLS_RSA_EXPORT_WITH_DES40_CBC_SHA
TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
TLS_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA
Unknown value 0xff
compression methods
  NULL
1 2  0.0181 (0.0043)  S>C  Handshake
  ServerHello
Version 3.1
session_id[32]=
  61 c5 71 7e 28 35 69 4e b4 de 72 ff c1 18 e4 d4
  6f f3 af 24 7c fc ab f4 51 2e c8 be e9 84 58 c1
cipherSuite TLS_RSA_WITH_AES_128_CBC_SHA
compressionMethod   NULL
1 3  0.0181 (0.)  S>C  Handshake
  Certificate
1 4  0.0181 (0.)  S>C  Handshake
  ServerHelloDone
1 5  0.0240 (0.0058)  C>S  Handshake
  ClientKeyExchange
1 6  0.0240 (0.)  C>S  ChangeCipherSpec
1 7  0.0240 (0.)  C>S  Handshake
1    0.0245 (0.0005)  C>S  TCP FIN
1 8  0.1077 (0.0832)  S>C  ChangeCipherSpec
1 9  0.1077 (0.)  S>C  Handshake
1 10 0.1885 (0.0807)  S>C  application_data
1 11 0.1890 (0.0005)  S>C  Alert
1    0.1891 (0.0001)  S>C  TCP FIN






Re: Active/Active

2015-02-17 Thread Pavlos Parissis
On 17/02/2015 01:11 μμ, Mariusz Gronczewski wrote:
 On Mon, 16 Feb 2015 12:41:06 +0100, Klavs Klavsen k...@vsen.dk wrote:
 
 As I understand anycast and ECMP (and I only know guys who use it and 
 know what they are doing ;) - it needs to be two different routes (ie. 
 routers) that are active/active.. ie. multiple location.. but I guess 
 one could do it in the same datacenter as well..

 
 our setup(1 DC):
 
 * active-active ECMP
 * 4 loadbalancers + bird OSPF
 * 2 routers + OSPF
 * IPs are on loopback interface, added and removed when haproxy service
 starts/stops
 * OSPF distributes routes to these IPs to routers
 * routers route by source IP so same IP always lands on same
 loadbalancer
 
 works pretty well ;) you just have to make sure that when you stop
 haproxy (maintenance etc) you also down IPs that haproxy used so routers
 stop sending traffic to that node
 
 

I have a similar setup here, with the following differences:

* BGP instead of OSPF
* BFD in use for fast removal of prefixes when server/bird/switches are dead
* Bonding in use on load balancers
* Traffic is coming from multiple locations, local and remote (branches /
Internet)
* Anycast between DCs
  -- traffic generated in DC is served locally unless all local LBs are dead
  -- Traffic generated remotely goes to the nearest DC
  -- Traffic generated remotely travels over dedicated links/MPLS, from PoPs
and branches.

Path MTU discovery is an issue, but we haven't seen it happen yet,
because remote users use our global network infrastructure, which we
control; for users coming from the Internet it is a different story, as
you can't control the Internet :-)

A more detailed description of the problem can be found here:
https://blog.cloudflare.com/path-mtu-discovery-in-practice/

Cheers,
Pavlos








Re: HAProxy 1.5.10 on FreeBSD 9.3 - status page questions

2015-02-17 Thread Pavlos Parissis
On 10/02/2015 10:56 πμ, Tobias Feldhaus wrote:
 
 
 On Thu, Feb 5, 2015 at 9:38 PM, Pavlos Parissis
 pavlos.paris...@gmail.com wrote:
 
 On 04/02/2015 11:38 πμ, Tobias Feldhaus wrote:
  Hi,
 
  To refresh the page did not help (the number of seconds the PRIMARY
  backend was considered to be down increased continuously, but not the
  number of Bytes or the color).
 
  [deploy@haproxy-tracker-one /var/log] /usr/local/sbin/haproxy -vv
  HA-Proxy version 1.5.10 2014/12/31
  Copyright 2000-2014 Willy Tarreau w...@1wt.eu
 
  Build options :
TARGET  = freebsd
CPU = generic
CC  = cc
CFLAGS  = -O2 -pipe -fstack-protector -fno-strict-aliasing
 -DFREEBSD_PORTS
OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1
 USE_STATIC_PCRE=1
  USE_PCRE_JIT=1
 
  Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 8192,
 maxpollevents = 200
 
  Encrypted password support via crypt(3): yes
  Built with zlib version : 1.2.8
  Compression algorithms supported : identity, deflate, gzip
  Built with OpenSSL version : OpenSSL 0.9.8za-freebsd 5 Jun 2014
  Running on OpenSSL version : OpenSSL 0.9.8za-freebsd 5 Jun 2014
  OpenSSL library supports TLS extensions : yes
  OpenSSL library supports SNI : yes
  OpenSSL library supports prefer-server-ciphers : yes
  Built with PCRE version : 8.35 2014-04-04
  PCRE library supports JIT : yes
  Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
 
  Available polling systems :
   kqueue : pref=300,  test result OK
 poll : pref=200,  test result OK
   select : pref=150,  test result OK
  Total: 3 (3 usable), will use kqueue.
 
 
  - haproxy.conf -
 
  global
daemon
stats socket /var/run/haproxy.sock level admin
log /var/run/log local0 notice
 
  defaults
mode http
stats enable
stats hide-version
stats uri /lbstats
global log
 
  frontend LBSTATS *:
mode http
 
  frontend KAFKA *:8090
mode tcp
default_backend KAFKA_BACKEND
 
  backend KAFKA_BACKEND
mode tcp
log global
option tcplog
option dontlog-normal
option httpchk GET /
 
 httpchk in tcp mode? Have you managed to load HAProxy with this setting
 without getting an error like
 [ALERT] 035/213450 (17326) : Unable to use proxy 'foo_com' with wrong
 mode, required: http, has: tcp.
 [ALERT] 035/213450 (17326) : You may want to use 'mode http'.
 
 
 The KAFKA v0.6 service speaks only TCP and it does not allow direct
 checking by HAProxy. (HAProxy does not check if data is _really_ flowing
 through the sockets e.g. it does not speak the KAFKA protocol.)  This is
 why we have a local app on the machine that checks KAFKA's functionality
 and communicates it to HAProxy on port 9093. Is there any better way of
 doing this?
  

Yes, I believe you can use agent-check:
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-agent-check
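
For the archives, a minimal sketch of what that could look like for the backend quoted above (agent parameters illustrative, untested): the local health app keeps listening on 9093, but instead of being a plain check port it answers each agent connection with a status word that drives the server state.

```haproxy
backend KAFKA_BACKEND
    mode tcp
    # The agent on 9093 replies with "up", "down", "50%", etc.; its answer
    # sets the server state, so no httpchk is needed in tcp mode.
    server KAFKA_PRIMARY kafka-primary.acc:9092 check agent-check agent-port 9093 agent-inter 2s rise 2 fall 5
    server KAFKA_SECONDARY kafka-overflow.acc:9092 check agent-check agent-port 9093 agent-inter 2s rise 2 fall 5 backup
```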


Cheers,
Pavlos





Re: Load Problem with v1.5.5+

2015-02-16 Thread Pavlos Parissis
On 16/02/2015 09:45 μμ, Michael Holmes wrote:
[...snip..]
   * @ 9:05 a.m. stopping and starting HAProxy v1.5.11 didn't resolve the
 problem. Waited six minutes for processing which didn't catch up.
   * @ 9:12 a.m. I downgraded HAProxy from v1.5.11 to v1.5.3 and
 everything normalized in less than a minute.
   * @ 9:16 a.m. I upgraded HAProxy from v1.5.3 to v1.5.5 and the problem
 surfaced again and didn't heal in five minutes' time.
   * @ 9:22 a.m. I downgraded HAProxy from v1.5.5 to v1.5.4 and
 everything normalized in less than a minute. It has been stable all
 day so far.
 
 Each time I would build HAProxy I would
 
   * wget http://haproxy.1wt.eu/download/1.5/src/haproxy-1.x.x.tar.gz
   * tar -xf haproxy-1.x.x.tar.gz
   * cd haproxy-1.x.x
   * service haproxy stop
   * make TARGET=linux2628 CPU=generic USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1
   * make install
   * service haproxy start
 
 I've reviewed the ChangeLog found here:
 http://www.haproxy.org/download/1.5/src/CHANGELOG, but I haven't been
 able to pinpoint any specific change in v1.5.5 which might be affecting
 my deployment based on my configuration.
 


Is it possible for you to replay or generate traffic on a test system and
use git bisect on the 1.5.5 release?

Cheers,
Pavlos







TCP Fast Open towards to backend servers

2015-02-06 Thread Pavlos Parissis
Hi,

I see the tfo setting for the bind directive, but it isn't clear to me
whether HAProxy will use TCP Fast Open towards the backend servers.
Shall I assume that if the client uses TCP Fast Open, HAProxy will do the
same on the server side?

Cheers,
Pavlos





Re: HAproxy constant memory leak

2015-02-06 Thread Pavlos Parissis
On 06/02/2015 11:19 πμ, Georges-Etienne Legendre wrote:
 Hi Willy,
 
 Yes, please send me the script.
 

Willy,
If it isn't against the policies of this ML to send attachments and the
script is a few kilobytes in size, could you please send it to the list?

Thanks,
Pavlos






Re: HAProxy backend server AWS S3 Static Web Hosting

2015-02-05 Thread Pavlos Parissis
On 03/02/2015 02:02 πμ, Thomas Amsler wrote:
 Hello,
 
 Is it possible to front AWS S3 Static Web Hosting with HAProxy? I have
 tried to setup a backend to proxy requests to
 SomeHost.s3-website-us-east-1.amazonaws.com:80
 http://SomeHost.s3-website-us-east-1.amazonaws.com:80. But I am
 getting an error from S3 indicating that the bucket SomeHost does not
 exist. Has anybody tried to do that?
 
 Best,
 Thomas Amsler

Please provide more information on what you are trying to achieve and
paste your HAProxy configuration.

Cheers,
Pavlos






Re: Global ACLs

2015-02-05 Thread Pavlos Parissis
On 02/02/2015 05:31 μμ, Willy Tarreau wrote:
 Hi Christian,
 

[...snip...]

 
 We've been considering this for a while now without any elegant solution.
 Recently while discussing with Emeric we got an idea to implement scopes,
 and along these lines I think we could instead try to inherit ACLs from
 other frontends/backends/defaults sections. Currently defaults sections
 support having a name, though this name is not internally used, admins
 often put some notes there such as tcp or a customer's id.
 
 Here we could have something like this :
 
 defaults foo
 acl local src 127.0.0.1
 
 frontend bar
 acl client src 192.168.0.0/24
 use_backend c1 if client
 use_backend c2 if foo/local
 
 It would also bring the extra benefit of allowing complex shared configs
 to use their own global ACLs regardless of what is being used in other
 sections.
 
 That's just an idea, of course.
 

That sounds awesome, please bring in on :-)

Cheers,
Pavlos






Re: [PATCH/RFC 0/8] Email Alerts

2015-02-05 Thread Pavlos Parissis
On 04/02/2015 01:26 πμ, Simon Horman wrote:
 On Tue, Feb 03, 2015 at 05:13:02PM +0100, Baptiste wrote:
 On Tue, Feb 3, 2015 at 4:59 PM, Pavlos Parissis
 pavlos.paris...@gmail.com wrote:
 On 01/02/2015 03:15 μμ, Willy Tarreau wrote:
 Hi Simon,

 On Fri, Jan 30, 2015 at 11:22:52AM +0900, Simon Horman wrote:
 Hi Willy, Hi All,

 the purpose of this email is to solicit feedback on an implementation
 of email alerts for haproxy the design of which is based on a discussion
 in this forum some months ago.


 It would be great if we could use something like this
 acl low_capacity nbsrv(foo_backend) lt 2
 mail alert if low_capacity

 In some environments you only care to wake up the on-call sysadmin if you
 are in real trouble and not because 1-2 servers failed.

 Nice work,
 Pavlos




 This might be doable using monitor-uri and monitor fail directives in
 a dedicated listen section which would fail if the number of servers in a
 monitored farm goes below a threshold.

 That said, this is a dirty hack.
 
 I agree entirely that there is a lot to be said for providing a facility
 for alert suppression and escalation. To my mind the current implementation,
 which internally works with a queue, lends itself to these kinds of
 extensions. The key question in my mind is how to design advanced features
 such as the one you have suggested in such a way that they can be useful in a
 wide range of use-cases.
 
 So far there seem to be three semi-related ideas circulating
 on this list. I have added a fourth:
 
 1. Suppressing alerts based on priority.
   e.g. Only send alerts for events whose priority is > x.
 
 2. Combining alerts into a single message.
e.g. If n alerts are queued up to be sent within time t
 then send them in one message rather than n.
 
 3. Escalate alerts
e.g. Only send alerts of priority x if more than n have occurred within
 time t.
This seems to be a combination of 1 and 2.
This may or not involve raising the priority of the resulting combined
alert (internally or otherwise)
 
An extra qualification may be that the events need to relate to something
common:
e.g. servers of the same proxy
  Losing one may not be bad; losing all of them I may wish
    to get out of bed for
 
 4. Suppressing transient alerts
e.g. I may not care if server s goes down then comes back up again
 within time t.
But I may if it keeps happening. This part seems like a variant of 3.
 
 
 I expect we can grow this list of use-cases. I also think things
 may become quite complex quite quickly. But it would be nice to implement
 something not overly convoluted yet useful.
 


What you have done so far provides the basic 'monitoring' alert
functionality, and it is the first step towards something that can become
bigger and better, but also complex, as you say.

The functionality you have listed is covered by several monitoring
systems, either dumb ones like Nagios or 'smart' ones which apply
real-time anomaly detection (Skyline, etc.) by either actively probing
services or passively receiving events.

HAProxy is just another service inside a data center which produces
events: servers go down/up, traffic dips/spikes, and so on.

In small companies which can't afford a centralized monitoring system and
prefer to just receive various e-mails from ~10 systems, having some
monitoring intelligence (aggregation, alerts based on thresholds) built
in is perfect and very much appreciated.

But in large installations where you have 10K servers and 400 services,
you want to receive raw events without any aggregation, and the 'smart'
monitoring system will figure out what to do before it wakes up the
on-call sysadmin (I am one of them).

To sum up, the data currently exposed over the stats socket satisfies the
needs of large installations; I know that because I am quite happy with
the amount of data HAProxy exposes, and I work in an environment where we
utilize these 'smart' monitoring systems.

At my friend's start-up company, which has 8 services, I don't want to
develop scripts/tools to pull info from the stats socket; just mail me
and I will alert myself based on the amount of e-mail I receive, and if
HAProxy can do some kind of aggregation/thresholding then my mailbox will
thank HAProxy a lot.

I hope it helps and once again thanks for your hard work,
Pavlos










Re: HAProxy 1.5.10 on FreeBSD 9.3 - status page questions

2015-02-05 Thread Pavlos Parissis
On 04/02/2015 11:38 πμ, Tobias Feldhaus wrote:
 Hi,
 
 To refresh the page did not help (the number of seconds the PRIMARY
 backend was considered to be down increased continuously, but not the
 number of Bytes or the color).
 
 [deploy@haproxy-tracker-one /var/log] /usr/local/sbin/haproxy -vv
 HA-Proxy version 1.5.10 2014/12/31
 Copyright 2000-2014 Willy Tarreau w...@1wt.eu mailto:w...@1wt.eu
 
 Build options :
   TARGET  = freebsd
   CPU = generic
   CC  = cc
   CFLAGS  = -O2 -pipe -fstack-protector -fno-strict-aliasing -DFREEBSD_PORTS
   OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_OPENSSL=1 USE_STATIC_PCRE=1
 USE_PCRE_JIT=1
 
 Default settings :
   maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
 
 Encrypted password support via crypt(3): yes
 Built with zlib version : 1.2.8
 Compression algorithms supported : identity, deflate, gzip
 Built with OpenSSL version : OpenSSL 0.9.8za-freebsd 5 Jun 2014
 Running on OpenSSL version : OpenSSL 0.9.8za-freebsd 5 Jun 2014
 OpenSSL library supports TLS extensions : yes
 OpenSSL library supports SNI : yes
 OpenSSL library supports prefer-server-ciphers : yes
 Built with PCRE version : 8.35 2014-04-04
 PCRE library supports JIT : yes
 Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
 
 Available polling systems :
  kqueue : pref=300,  test result OK
poll : pref=200,  test result OK
  select : pref=150,  test result OK
 Total: 3 (3 usable), will use kqueue.
 
 
 - haproxy.conf -
 
 global
   daemon
   stats socket /var/run/haproxy.sock level admin
   log /var/run/log local0 notice
 
 defaults
   mode http
   stats enable
   stats hide-version
   stats uri /lbstats
   global log
 
 frontend LBSTATS *:
   mode http
 
 frontend KAFKA *:8090
   mode tcp
   default_backend KAFKA_BACKEND
 
 backend KAFKA_BACKEND
   mode tcp
   log global
   option tcplog
   option dontlog-normal
   option httpchk GET /

httpchk in tcp mode? Have you managed to load HAProxy with this setting
without getting an error like
[ALERT] 035/213450 (17326) : Unable to use proxy 'foo_com' with wrong
mode, required: http, has: tcp.
[ALERT] 035/213450 (17326) : You may want to use 'mode http'.

   server KAFKA_PRIMARY kafka-primary.acc:9092 check port 9093 inter 2000
 rise 302400 fall 5

rise 302400!! Are you sure? With inter 2000, HAProxy will have to see
302400 consecutive successful checks (302400 * 2 seconds, i.e. 7 days)
before it marks the server up again.

   server KAFKA_SECONDARY kafka-overflow.acc:9092 check port 9093 inter
 2000 rise 2 fall 5 backup
   


I can't reproduce your problem, even when I use your server settings but
with the backend in http mode.

Cheers,
Pavlos






Re: nbproc 1 and stats in ADMIN mode?

2015-02-05 Thread Pavlos Parissis
On 05/02/2015 03:01 μμ, Klavs Klavsen wrote:
 Hi guys,
 
 Just to check.. if I set nbproc to f.ex. 4 - then I understand I need to
 define 4xstats.. and when I visit the webinterface.. I'll actually only
 get stats from one of the 4 processes..
 
 But we have ADMIN enabled for stats - so we can disable backend servers
 etc.. will we have to do that for each of the 4 stats editions -
 before it's actually active or is that state shared among them all?
 

Yes, you have to repeat all your admin operations on each web interface.
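
To illustrate the fan-out this implies, here is a small self-contained Python sketch (no real haproxy involved; socket paths and the command are invented): two dummy Unix sockets stand in for the per-process stats sockets, and the client replays the same admin command on each of them, which is exactly what you have to do with nbproc > 1.

```python
import os
import socket
import tempfile
import threading

received = {}  # command captured per fake "process" socket


def serve_one(srv, path):
    # Accept a single connection and record everything sent on it.
    conn, _ = srv.accept()
    data = b""
    while True:
        chunk = conn.recv(1024)
        if not chunk:
            break
        data += chunk
    received[path] = data.decode()
    conn.close()
    srv.close()


tmpdir = tempfile.mkdtemp()
paths = [os.path.join(tmpdir, "stats%d" % i) for i in (1, 2)]
threads = []
for path in paths:
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)  # listening before any client connects
    t = threading.Thread(target=serve_one, args=(srv, path))
    t.start()
    threads.append(t)

# The admin pattern: the same command must go to every process's socket.
command = b"disable server bk_web/srv1\n"
for path in paths:
    client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    client.connect(path)
    client.sendall(command)
    client.close()

for t in threads:
    t.join()
print(sorted(received.values()))
```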

Cheers,
Pavlos







Re: [PATCH/RFC 0/8] Email Alerts

2015-02-03 Thread Pavlos Parissis
On 01/02/2015 03:15 μμ, Willy Tarreau wrote:
 Hi Simon,
 
 On Fri, Jan 30, 2015 at 11:22:52AM +0900, Simon Horman wrote:
 Hi Willy, Hi All,

 the purpose of this email is to solicit feedback on an implementation
 of email alerts for haproxy the design of which is based on a discussion
 in this forum some months ago.


It would be great if we could use something like this
acl low_capacity nbsrv(foo_backend) lt 2
mail alert if low_capacity

In some environments you only care to wake up the on-call sysadmin if you are
in real trouble and not because 1-2 servers failed.
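
Until such a directive exists, a rough approximation with existing 1.5 directives (section and backend names hypothetical, untested) is a dedicated monitor section that an external poller watches and mails on:

```haproxy
listen capacity_watch
    bind :8888
    mode http
    acl low_capacity nbsrv(foo_backend) lt 2
    monitor-uri /capacity
    # The poller gets 200 while capacity is fine and 503 once fewer than
    # 2 servers in foo_backend are up; the poller then sends the e-mail.
    monitor fail if low_capacity
```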

Nice work,
Pavlos








Re: connection is rejected when using ipad with send-proxy option

2015-02-01 Thread Pavlos Parissis
On 15/01/2015 09:16 μμ, Alex Wu wrote:
 We enable send-proxy for ssl connections, and have the patched apache
 module to deal with proxyprotocol.
 
 From Mac OS, we see it works as designed. But when we repeat the same
 test using ipad, then we the connection rejected. iPad cannot establish
 the connection to haproxy over ssl.
 


Are you getting a TCP RST or an SSL error? It could be that you are missing
the intermediate certificate chain in your Apache setup, and the iPad's
trust store doesn't contain the certificate of the CA which issued your
certificate.

Cheers,
Pavlos






Re: Possible to send backend host and port in healthcheck?

2015-02-01 Thread Pavlos Parissis
On 01/02/2015 03:03 μμ, Willy Tarreau wrote:
 On Sun, Feb 01, 2015 at 08:25:24AM +0100, Pavlos Parissis wrote:
 If I understood Bhaskar's suggestion correctly, we could delegate health
 check for backend servers to a single server which does all the health
 checking. Am I right ?
 
 Yes that was the idea.
 
 If this is the case then the downside of multiple
 health checks when nbproc > 1 is gone! But, I would like to see a
 fail-back mechanism as we have with agent check in case that single
 server is gone. Alternatively, we could have Bhaskar's suggestion
 implemented in the agent check.
 
 ... or you can use a local proxy which load-balances between multiple
 servers.
 

Very interesting idea.

 I am re-heating the request of delegate health checks to a central
 service with a fall-back mechanism in place because
 * Reduces checks in setups where you have servers in multiple backends
 * Reduces checks in setups where you have more than 1 HAProxy active
 server(HAProxy servers behind a Layer 4 load balancer - ECMP and etc)
 * Reduces checks when multi-process model is used
 * Reduces CPU stress on firewalls, when they are present between HAProxy
 and backend servers.
 
 Absolutely. And keeps state across reloads, and ensures that all LBs have
 the same view of the service when servers are flapping.
 

Exactly, another good reason to use this solution.

 This assumes that there are enough resources on the 'health-checker'
 server to sustain huge amount of requests. Which is not a big deal if
 'health-checker' solution is designed correctly, meaning that backend
 servers push their availability to that 'health-checker' server and etc.
 Furthermore, 'health-checker' server should have a check in place to
 detect backend servers not sending their health status and declare them
 down after a certain period of inactivity.
 
 We used to work on exactly such a design a few years ago at HAPTech, and
 the principle for it was to be a cache for health checks. That provided
 all the benefits of what you mentioned above, including a more consistent
 state between LBs when servers are flapping. The idea is that each check
 result is associated with a maxage and that any check received while the
 last result's age has not maxed out would be returned from the cache. It
 happens that all the stuff added to health checks since then had complicated
 things significantly (eg: capture of last response, send of the local info,
 etc). We've more or less abandoned that work for lack of time and need for
 a redesign. So I could say that the design is far from being obvious, but
 the gains to expect are very important. Also such a checker should be
 responsible for notifications, and possibly for aggregating states before
 returning composite statuses (that may be one point to reconsider in the
 future to limit complexity though).
 

Well, let's first see if we can get the basic functionality from HAProxy
to send health checks to a server. The design and the implementation
details of a centralized health-checker solution can be worked out by
combining available solutions (ZooKeeper, etc.) with some custom parts as
well.
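
As a side note, the check-cache idea Willy describes above (each result carries a maxage; while the last result is younger than that, queries are answered from the cache instead of re-probing) is easy to prototype. A minimal sketch, with class and attribute names invented for illustration:

```python
import time


class CheckCache:
    """Cache a health-check result until it is older than maxage seconds."""

    def __init__(self, probe, maxage):
        self.probe = probe            # the real (expensive) health check
        self.maxage = maxage          # seconds a result stays valid
        self._result = None
        self._stamp = float("-inf")   # force a probe on first query
        self.probes = 0               # for demonstration only

    def status(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self._stamp >= self.maxage:
            self._result = self.probe()
            self._stamp = now
            self.probes += 1
        return self._result


cache = CheckCache(probe=lambda: "UP", maxage=2.0)
# Three LBs asking within the maxage window trigger a single real probe.
for _ in range(3):
    cache.status(now=100.0)
assert cache.probes == 1
# Once the result has aged out, the next query re-probes the server.
cache.status(now=102.5)
assert cache.probes == 2
```

With many LBs (or many processes) in front of flapping servers, this is what keeps their views consistent: they all see the same cached answer for up to maxage seconds.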

Cheers,
Pavlos





Re: Possible to send backend host and port in healthcheck?

2015-01-31 Thread Pavlos Parissis
On 01/02/2015 07:35 πμ, Willy Tarreau wrote:
 Hello Joseph,
 
 I'm CCing Bhaskar since he was the one proposing the first solution, he
 may have some useful insights. Other points below.
 
 On Thu, Jan 15, 2015 at 01:23:59PM -0800, Joseph Lynch wrote:
 Hello,

 I am trying to set up a health check service similar to the inetd solutions
 suggested in the documentation. Unfortunately, my backends run on different
 ports because they are being created dynamically and as far as I can tell I
 cannot include the server port in my healthcheck either as part of the
 server declaration, a header, or as part of the healthcheck uri itself.

 I have been trying to come up with potential solutions that are not overly
 invasive, and I think that the simplest solution is to include the server
 host and port in the existing send-state header. I have included a patch
 that I believe does this at the end of this email. Before I go off
 maintaining a local fork, I wanted to ask if the haproxy devs would be
 sympathetic to me trying to upstream this patch?
 
 I'm personally fine with it. As you say, it's really not invasive, so we
 could merge it and even backport it into 1.5-stable. I'd slightly change
 something however, I'd use address instead of host in the field, since
 that's what you're copying there. Host could be used later to copy the
 equivalent of a host name, so let's not misuse the field name.
 
 As for prior art, I found a few posts on this mailing list about the
 ability to add headers to http checks. I believe that something like
 http://marc.info/?l=haproxym=139181606417120w=2 would be more then what
 we need to solve this problem, but that thread seems to have died. I do
 believe that a general ability to add headers to healthchecks would be
 superior to my patch, but the general solution is significantly harder to
 pull off.
 
 I'd like to re-heat that thread. I didn't even remember about it, indeed
 we were busy finalizing 1.5. Bhaskar, I still think your work makes sense
 for 1.6, so if you still have your patch, it's probably time to resend it :-)
 

If I understood Bhaskar's suggestion correctly, we could delegate health
checks for backend servers to a single server which does all the health
checking. Am I right? If this is the case then the downside of multiple
health checks when nbproc > 1 is gone! But, I would like to see a
fail-back mechanism as we have with agent check in case that single
server is gone. Alternatively, we could have Bhaskar's suggestion
implemented in the agent check.

I am re-heating the request of delegate health checks to a central
service with a fall-back mechanism in place because
* Reduces checks in setups where you have servers in multiple backends
* Reduces checks in setups where you have more than 1 HAProxy active
server(HAProxy servers behind a Layer 4 load balancer - ECMP and etc)
* Reduces checks when multi-process model is used
* Reduces CPU stress on firewalls, when they are present between HAProxy
and backend servers.

This assumes that there are enough resources on the 'health-checker'
server to sustain a huge amount of requests, which is not a big deal if
the 'health-checker' solution is designed correctly, meaning that backend
servers push their availability to that 'health-checker' server, etc.
Furthermore, the 'health-checker' server should have a check in place to
detect backend servers not sending their health status and declare them
down after a certain period of inactivity.

In the case of servers located across multiple VLANs, there is an edge
case where backend servers are reported as healthy but HAProxy fails to
send traffic to them due to missing network routes, firewall holes, etc.

The main gain of this solution is that you make backend servers
responsible for announcing their availability. It is a mindset change, as
we are used to having LBs perform the health checks and be the
authoritative source of such information.

Cheers,
Pavlos










errorfile on bakend

2015-01-13 Thread Pavlos Parissis
Hoi,

I am trying to return a specific 200 response when the URL matches an ACL,
but I get back a 503. Where is my mistake?

frontend mpla
acl robots.txt path_beg /robots.txt

use_backend bk_robots if robots.txt

default_backend foo_com

backend bk_robots
mode http
errorfile 200 /etc/haproxy/pages/robots.http


cat /etc/haproxy/pages/robots.http
HTTP/1.0 200 OK
Cache-Control: no-cache
Connection: close
User-Agent: *
Disallow: /


curl output
* HTTP 1.0, assume close after body
 HTTP/1.0 503 Service Unavailable
 Cache-Control: no-cache
 Connection: close
 Content-Type: text/html

haproxy log
mpla bk_robots/NOSRV 0/-1/-1/-1/0 503 212 - - SC-- 0/0/0/0/0 0/0 {} GET
/robots.txt HTTP/1.1 -

Cheers,
Pavlos





Re: errorfile on bakend

2015-01-13 Thread Pavlos Parissis
On 13/01/2015 12:36 μμ, Jarno Huuskonen wrote:
 Hi,
 
 On Tue, Jan 13, Pavlos Parissis wrote:
 Hoi,

 I am trying to return a specific 200 response when URL matches a ACL but I 
 get
 back 503. Where is my mistake?

 frontend mpla
 acl robots.txt path_beg /robots.txt

 use_backend bk_robots if robots.txt

 default_backend foo_com

 backend bk_robots
 mode http
 errorfile 200 /etc/haproxy/pages/robots.http
 
 Does it work if you replace errorfile 200... with:
 errorfile 503 /etc/haproxy/pages/robots.http ?
 

Yeap that was the trick, thanks a lot, going to re-read the doc
in order to understand why.

 http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4-errorfile
 (Code 200 is emitted in response to requests matching a monitor-uri
 rule.). So this might work (untested):
 
 frontend mpla
   errorfile 200 /etc/haproxy/pages/robots.http
   monitor-uri /robots.txt

No, it didn't.
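
For the archives, the variant that did work: bk_robots has no servers, so every request to it "fails" with a 503, and the 503 errorfile substitutes the canned 200 response (paths and names from the thread):

```haproxy
frontend mpla
    acl robots.txt path_beg /robots.txt
    use_backend bk_robots if robots.txt
    default_backend foo_com

backend bk_robots
    mode http
    # No servers on purpose: the generated 503 is replaced wholesale by
    # the file below, which carries its own "HTTP/1.0 200 OK" status line.
    errorfile 503 /etc/haproxy/pages/robots.http
```

One detail worth double-checking in the pasted robots.http: an errorfile must be a complete HTTP response, so a blank line should separate the headers from the User-Agent/Disallow body (and a Content-Type: text/plain header wouldn't hurt).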

Thanks a lot Jarno,
Pavlos





Re: ftp load balancing

2015-01-08 Thread Pavlos Parissis
On 8 Jan 2015, at 4:39 PM, Alfredo Gutierrez
alfredo.gutierrez...@gmail.com wrote:

 I am trying to setup a LB for one of my clients that is for two WS_FTP
windows servers. I have configured HAProxy already but I am not getting any
redirecting when I ftp to the LB server. I have searched through the net
and forums for any help on this; are there any write-ups for this type of
setup?


The FTP protocol uses two ports for communication between endpoints. It
uses port 21 to establish a control channel and another, random port for
the data transfer. The selection of that random port, and which endpoint
initiates the connection for the data transfer, depends on the running
mode of the FTP server. There are two running modes for FTP servers:
active mode, in which the client informs the server about the port it is
listening on for a connection, and passive mode, where the server informs
the client about the port it is listening on.

I don't know if HAProxy has the ability to inspect the data exchanged over
the control channel and balance traffic for the data channel.
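
In tcp mode a commonly cited (and somewhat fragile) workaround is to pin the servers to passive mode with a fixed passive-port range and balance on source, so control and data connections from the same client land on the same server. A sketch only, with invented addresses and an assumed 50000-50050 passive range configured identically on both WS_FTP servers:

```haproxy
listen ftp_control
    bind :21
    mode tcp
    balance source      # keep a given client on one server
    server ftp1 10.0.0.11:21 check
    server ftp2 10.0.0.12:21 check

listen ftp_data
    bind :50000-50050   # must match the servers' passive port range
    mode tcp
    balance source      # same hash, so data follows the control channel
    server ftp1 10.0.0.11 check port 21
    server ftp2 10.0.0.12 check port 21
```

The servers also have to advertise the load balancer's address in their PASV replies; without that, clients will try to connect to a backend directly and bypass the LB.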

Cheers,
Pavlos


Re: using environment variable in headers

2015-01-07 Thread Pavlos Parissis
On 06/01/2015 08:42 μμ, Cyril Bonté wrote:
 Hi Pavlos,
 
 Le 06/01/2015 20:17, Pavlos Parissis a écrit :
 Hi,

 According to the docs I can have the following snippet

 http-request add-header Nodename %[env(HOSTNAME)]

 to set the hostname as the value of a header. But it doesn't work. A
 network trace and Nginx logs show no value.
 
 Please ensure that you exported the environment variable first, to make it
 available to the process.
 

Oh boy, I am an idiot:-(
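
The underlying pitfall, for the archives: a plain shell variable is not part of the environment that child processes such as haproxy inherit until it is exported. A neutral variable name and value are used below (some shells auto-set HOSTNAME, which would mask the effect):

```shell
NODE_NAME=lb01.example.com                # shell variable only
sh -c 'echo "child sees: [$NODE_NAME]"'   # child prints: child sees: []
export NODE_NAME                          # now part of the environment
sh -c 'echo "child sees: [$NODE_NAME]"'   # child prints: child sees: [lb01.example.com]
```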

Thanks a lot Cyril,
Pavlos





using environment variable in headers

2015-01-06 Thread Pavlos Parissis
Hi,

According to the docs I can have the following snippet

http-request add-header Nodename %[env(HOSTNAME)]

to set the hostname as the value of a header. But it doesn't work. A
network trace and Nginx logs show no value.

While the following works.
http-request add-header Nodename %H

I am using 1.5.10 version.

I also failed to find in the doc the list of environment variables that
can be used. Any ideas where I should look in the code?

Cheers,
Pavlos





Re: Multiple backend sets

2015-01-05 Thread Pavlos Parissis
On 05/01/2015 12:04 μμ, Thomas Heil wrote:
 Hi,
 
 On 03.01.2015 16:31, Ram Chander wrote:
 Hi,

 I have a requirement like below:

 Consider there are two sets of backends.  Each has some servers in it
 One is default , other is backup
 Haproxy should try second set  if  first  set  returns 404.
 
 You mean all servers in the first backend return 404? If so, the option
 http-check disable-on-404
 is your best friend.
 
 I assume you have two backends backend be_one and backend be_two.
 
 In the frontend section you need to declare an acl like
 
 --
 acl be_one_available nbsrv(be_one) ge 1
 
 use_backend be_two if ! be_one_available
 default_backend be_one
 --
 

I suspect that Ram wants HAProxy to 'catch' 404 responses for normal
traffic and not for a health-check response.

Cheers,
Pavlos






Re: Multiple backend sets

2015-01-05 Thread Pavlos Parissis
On 05/01/2015 12:28 μμ, Thomas Heil wrote:
 Hi,
 On 05.01.2015 12:18, Pavlos Parissis wrote:
 On 05/01/2015 12:04 μμ, Thomas Heil wrote:
 Hi,

 On 03.01.2015 16:31, Ram Chander wrote:
 Hi,

 I have a requirement like below:

 Consider there are two sets of backends.  Each has some servers in it
 One is default , other is backup
 Haproxy should try second set  if  first  set  returns 404.
 You mean all servers in the first backend return 404? If so, the option
 http-check disable-on-404
 is your best friend.

 I assume you have two backends backend be_one and backend be_two.

 In the frontend section you need to declare an acl like

 --
 acl be_one_available nbsrv(be_one) ge 1

 use_backend be_two if ! be_one_available
 default_backend be_one
 --

 I suspect that Ram wants HAProxy to 'catch' 404 responses for normal
 traffic and not for a health-check response.
 so as HAProxy does not have any clue about files, this is not possible.

Yeap, it doesn't have a clue about what data is going through.

A note about catching 404s for normal traffic: it may sound like a great
idea, but it could easily bring a site down, since any user requesting a
broken or dead link could trigger the switch.

Cheers,
Pavlos







Re: HAProxy and MS Remote Desktop Gateway

2014-12-21 Thread Pavlos Parissis
On 19 December 2014 at 12:02, Kevin COUSIN ki...@kiven.fr wrote:

 Hi all,

 I installed an HAProxy instance to load balance a Remote Desktop Gateway
 2012 R2. It works fine in Layer 7 with this configuration and a Windows
 8.1 client, but it doesn't work with xfreerdp. I see a difference in the
 logs: a Windows client sends MS-RDGateway/1.0, while the xfreerdp app
 sends MSRPC, so I think HAProxy cannot process the MSRPC requests. Must I
 switch to Layer 4 load balancing?


As far as I know HAProxy doesn't support this protocol, which means you
need to use tcp mode[1], if MSRPC runs over TCP.

Cheers,
Pavlos

[1] I didn't mention Layer 4 load balancing, as UDP is also one of the
Layer 4 protocols but it is not supported.


Re: Multiprocess and backends

2014-12-18 Thread Pavlos Parissis
On 18/12/2014 05:24 πμ, Baptiste wrote:
 On Wed, Dec 17, 2014 at 10:39 PM, Pavlos Parissis
 pavlos.paris...@gmail.com wrote:
 Hi,

 I remember someone (maybe Baptiste) saying that in multi-process mode
 backends will be picked up by the process to which the frontend is bound.
 But I found this not to be the case in 1.5.9.
 I also remember that this works only when you have a 1-to-1 relationship
 between frontends and backends, which is my case.

 In the following output of stat sockets I see both backends to be
 monitored by both processes. If I bind graphite_example.com_SSL backend
 to the some process as the graphite_example.com_SSL frontend, it works
 as expected where graphite_example.com_SSL is monitored only by process 2.

 It isn't a problem to use bind-process in backend settings and I am just
 asking out of curiosity.


[...snip..]
 
 Hi Pavlos,
 
 Your test is not relevant.
 Since you have no bind-process on your SSL backend, HAProxy starts it
 up on both processes you started up.

OK, then I remembered wrong; please accept my apologies for my bad memory.


 Please try adding a bind-process 1 in your SSL backend and report us the 
 result.

I have done that and as I wrote it works.
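
For the archives, the one-line change is the commented-out directive in the pasted config, enabled:

```haproxy
backend graphite_example.com_SSL
    # Same process as "bind-process 2" in the matching frontend, so only
    # process 2 runs this backend and its health checks.
    bind-process 2
```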

Thanks a lot for the clarification,
Pavlos






Multiprocess and backends

2014-12-17 Thread Pavlos Parissis
Hi,

I remember someone (maybe Baptiste) saying that in multi-process mode
backends will be picked up by the process to which the frontend is bound.
But I found this not to be the case in 1.5.9.
I also remember that this works only when you have a 1-to-1 relationship
between frontends and backends, which is my case.

In the following output of the stats sockets I see both backends
monitored by both processes. If I bind the graphite_example.com_SSL backend
to the same process as the graphite_example.com_SSL frontend, it works
as expected, with graphite_example.com_SSL monitored only by process 2.

It isn't a problem to use bind-process in backend settings and I am just
asking out of curiosity.

Cheers,
Pavlos


 echo 'show stat'|nc -U  /var/lib/haproxy/stats1
haproxy,FRONTEND,,,0,0,5,0,0,0,0,0,0,OPEN,1,2,00,0,2000
haproxy,BACKEND,0,0,0,0,5000,0,0,0,0,0,,0,0,0,0,UP,0,0,0,,0,19,0,,1,2,0,,0,
graphite_example.com,FRONTEND,,,0,1,5,6,1008,7290,0,0,0,OPEN,,,
graphite_example.com,server-101.example.com,0,0,0,1,,3,504,3645,,0,,0,0,
graphite_example.com,server-102.example.com,0,0,0,1,,3,504,3645,,0,,0,
graphite_example.com,BACKEND,0,0,0,1,5000,6,1008,7290,0,0,,0,0,0,0,UP,2,2,0
graphite_example.com_SSL,server-103.example.com,0,0,0,0,,0,0,0,,0,,0,0,0
graphite_example.com_SSL,server-104.example.com,0,0,0,0,,0,0,0,,0,,0,0,
graphite_example.com_SSL,BACKEND,0,0,0,0,5000,0,0,0,0,0,,0,0,0,0,UP,2,2,

 echo 'show stat'|nc -U  /var/lib/haproxy/stats2
haproxy,FRONTEND,,,0,1,5,1,122,1955,0,0,0,OPEN,2,2,00,
haproxy,BACKEND,0,0,0,0,5000,0,122,1955,0,0,,0,0,0,0,UP,0,0,0,,0,28,0,,2,2
graphite_example.com,server-101.example.com,0,0,0,0,,0,0,0,,0,,0,0,0,0,U
graphite_example.com,server-102.example.com,0,0,0,0,,0,0,0,,0,,0,0,0,0,U
graphite_example.com,BACKEND,0,0,0,0,5000,0,0,0,0,0,,0,0,0,0,UP,2,2,0,,0,2
graphite_example.com_SSL,FRONTEND,,,0,0,5,0,0,0,0,0,0,OPEN
graphite_example.com_SSL,server-103.example.com,0,0,0,0,,0,0,0,,0,,0,0,0
graphite_example.com_SSL,server-104.example.com,0,0,0,0,,0,0,0,,0,,0,0,0
graphite_example.com_SSL,BACKEND,0,0,0,0,5000,0,0,0,0,0,,0,0,0,0,UP,2,2,0,


global
log 127.0.0.1 local2
chroot  /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 10
user    haproxy
group   haproxy
daemon

stats socket /var/lib/haproxy/stats uid 0 gid 0 mode 0440 level admin

ssl-server-verify none
tune.ssl.default-dh-param 2048

stats socket /var/lib/haproxy/stats1 uid 0 gid 0 mode 0440 level
admin process 1
stats socket /var/lib/haproxy/stats2 uid 0 gid 0 mode 0440 level
admin process 2
nbproc 2
cpu-map 1 0
cpu-map 2 1

defaults
maxconn 5
rate-limit sessions 2000
mode    http
log global
option  contstats
option  tcplog
option  httplog
no option  dontlognull
option  tcp-smart-accept
option  tcp-smart-connect
option  http-keep-alive
option  redispatch
balance roundrobin
timeout http-request15s
timeout http-keep-alive 15s
retries 2
timeout queue   1m
timeout connect 10s
timeout client  15s
timeout server  15s
timeout check   5s
option forwardfor header F5SourceIP
listen haproxy
bind :8080
stats uri /
stats show-node
stats refresh 10s
stats show-legends
no log

frontend graphite_example.com
bind 10.189.200.1:80
bind-process 1
default_backend graphite_example.com

backend graphite_example.com
#bind-process 1
default-server inter 10s
option httpchk GET / HTTP/1.1\r\nHost:\
graphite.example.com\r\nUser-Agent:\ HAProxy
server server-101.example.com 10.96.70.65:80 check
server server-102.example.com 10.96.70.66:80 check


frontend graphite_example.com_SSL
bind 10.189.200.1:443 ssl crt /somepath/pem
bind-process 2
default_backend graphite_example.com_SSL

backend graphite_example.com_SSL
default-server inter 10s
#bind-process 2
option httpchk GET / HTTP/1.1\r\nHost:\
graphite.example.com\r\nUser-Agent:\ HAProxy
server server-103.example.com 10.96.70.109:443 ssl check check-ssl
server server-104.example.com 10.96.70.160:443 ssl check check-ssl
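For reference, the fix discussed in this thread is simply to uncomment
that bind-process line so the SSL backend is pinned to the same process
as its frontend; a sketch based on the config above:

```
backend graphite_example.com_SSL
    bind-process 2    # same process as the graphite_example.com_SSL frontend
    default-server inter 10s
    option httpchk GET / HTTP/1.1\r\nHost:\ graphite.example.com\r\nUser-Agent:\ HAProxy
    server server-103.example.com 10.96.70.109:443 ssl check check-ssl
    server server-104.example.com 10.96.70.160:443 ssl check check-ssl
```

With this in place the backend (and its health checks) runs only on
process 2, as the stats output showed once the line was added.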




signature.asc
Description: OpenPGP digital signature


connection pooling

2014-12-09 Thread Pavlos Parissis
Hi,

It has been mentioned that version 1.5 doesn't support connection
pooling, meaning one TCP session to a backend server serving multiple
HTTP requests originating from more than one client.

Do you guys have plans to introduce this functionality in 1.6 release?

Cheers,
Pavlos
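For what it's worth, HAProxy 1.6 did gain a form of server-side
connection reuse via the http-reuse directive; a minimal sketch (server
address is hypothetical, and the exact reuse semantics should be
verified against the 1.6 documentation):

```
backend app
    mode http
    http-reuse safe              # allow idle server connections to be reused
    server app1 10.0.0.10:80 check   # hypothetical backend server
```

Note this is connection reuse, not a full connection pool: an idle
server-side connection can be picked up by another stream, subject to
the chosen policy (safe/aggressive/always/never).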





signature.asc
Description: OpenPGP digital signature


Re: Adding HSTS or custom headers on redirect

2014-12-02 Thread Pavlos Parissis
On 2 December 2014 at 09:17, Samuel Reed samuel.trace.r...@gmail.com
wrote:

 I'm running the latest 1.5 release.

 Our site runs primarily on the `www` subdomain, but we want to enable HSTS
 for
 all subdomains (includeSubdomains). Unfortunately, due to the way HSTS
 works,
 the HSTS header MUST be present on the redirect from https://example.com
 to
 https://www.example.com. I am using configuration like:

 rspadd Strict-Transport-Security:\ max-age=31536000;\ includeSubDomains
 redirect prefix https://www.example.com code 301 if \
 { hdr(host) -i example.com }

 For whatever reason, even when the rspadd line is before the redirect, no
 headers are added to the redirect, making this impossible. I've considered
 a fake backend with a fake 503 file to get around this - something like:

 HTTP/1.1 301 Moved Permanently
 Cache-Control: no-cache
 Content-Length: 0
 Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
 Location: https://www.example.com/
 Connection: close

 While this will work, it feels really hacky. Is there a better way to add a
 header on a redirect?


Have a look at the thread 'add response header based on presence of request
header', your case matches the case I mentioned there.

Cheers,
Pavlos
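A sketch of the "fake backend" workaround described above: a backend
with no servers, whose 503 errorfile is replaced by a hand-written 301
response carrying the HSTS header (paths and backend name are
hypothetical; the file body is the raw response quoted in the message):

```
frontend https_in
    bind :443 ssl crt /etc/haproxy/example.pem    # hypothetical path
    use_backend redirect_www if { hdr(host) -i example.com }

backend redirect_www
    # no servers defined, so haproxy always serves the 503 errorfile,
    # which we override with a 301 that includes Strict-Transport-Security
    errorfile 503 /etc/haproxy/errors/301-www.http
```

It is hacky, as the poster says, but it does get a header onto an
HAProxy-generated redirect in 1.5.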


Re: add response header based on presence of request header

2014-12-01 Thread Pavlos Parissis
On 1 Dec 2014 at 2:53 PM, Baptiste bed...@gmail.com wrote:

  Thanks for the solution Baptiste, but why is it considered a dirty
  hack? I must assume that it may cause problems in more complex setups.
 

 Hi Pavlos,

 I considered it a dirty hack because I diverted a feature from its
 original purpose, and I knew there would be features in the new
 release dedicated to what you want to do.
 And so, you should then update your configuration accordingly.
 That's what Willy mentioned: http-request capture rules from 1.6.

 Baptiste

Valid point.

Cheers,
Pavlos


Re: add response header based on presence of request header

2014-11-30 Thread Pavlos Parissis
On 28/11/2014 02:44 μμ, Pavlos Parissis wrote:
 Hi,
 
 I want HAProxy to add a response header if request includes a specific
 header. I implemented the logic [1] but I get the following
 
  parsing [/etc/haproxy/haproxy.cfg:77] : acl 'lb_debug' will never match
 because it only involves keywords that are incompatible with 'frontend
 http-response header rule'
 [WARNING] 331/135906 (6390) : config : log format ignored for proxy
 'haproxy' since it has no log address.
 
 Found few references on Internet and if I understood them correctly it
 fails because at the moment rspadd is evaluated HAProxy doesn't know
 request information like headers. Am I right? and if I am right , do we
 have solution?  Willy mentioned in a similar thread about a dirty way to
 get it but I failed to find it.
 


Baptiste provided a solution which captures the mentioned header in the
request and checks whether it exists during the response. But he also
made a note about it being a dirty hack.

Here it is
frontend for_bar_com
capture request header User-Agent len 120
capture request header Host   len 32
capture request header LBDEBUGlen 5
bind 10.189.200.1:80
http-response set-header LBNODE uuid if { capture.req.hdr(2) -i yes }
default_backend for_bar_com


After I sent my e-mail, I changed it to always return the header and
use the system UUID, which is somewhat more secure in terms of not
exposing any information to all users. That requires having an easy
mapping mechanism in place to map UUIDs to actual hostnames, which can
easily be done when you have puppet/salt/REST APIs available.

Thanks for the solution Baptiste, but why is it considered a dirty
hack? I must assume that it may cause problems in more complex setups.

Cheers,
Pavlos




signature.asc
Description: OpenPGP digital signature


Re: Better understanding of nbproc vs distributing interrupts for cpu load management

2014-11-30 Thread Pavlos Parissis
On 28/11/2014 01:19 μμ, Baptiste wrote:
 On Wed, Nov 26, 2014 at 9:48 PM, Pavlos Parissis
 pavlos.paris...@gmail.com wrote:
 On 25/11/2014 07:08 μμ, Lukas Tribus wrote:
 Hi,
 Thanks for your reply. We have tried this approach and while it gives
 some benefit, the haproxy process itself remains cpu-bound, with no
 idle time at all, with both pidstat and perf reporting that it uses
 close to 100% of available cpu while running.
 I think SSL/TLS termination is the only use case where HAProxy
 saturates a CPU core of a current-generation 3.4GHz+ CPU, which is
 why scaling SSL/TLS is more complex, requiring nbproc > 1.
 Lukas

 I am experiencing the same 'expected' behavior, where SSL computation
 drives HAProxy CPU user level to high numbers.

 Using SSL tweaks like ECDSA/ECDH algorithms and TLS session
 IDs/tickets helps, but it is not the ultimate solution. The HAProxy
 guys had a webinar about HAProxy and SSL a few weeks ago, and they
 mentioned using multiple processes. They also mentioned the SSL cache
 being shared between all these processes, which is very efficient.

 Cheers,
 Pavlos

 
 Hi Pavlos,
 
 you're right.
 If you need to scale *a lot* your SSL processing capacity in HAProxy,
 you must use multiple processes.
 That said, the multiproc model has some drawbacks (stats, server
 status and health checks are local to each process, stick-tables
 can't be synchronized, etc.).
 With HAProxy 1.5, we can now start multiple stats socket and stats
 pages and bind them to each process, lowering the impact.

I don't see it as a problem having multiple stats sockets available. I
have written a Python lib (which I need to find, polish a bit and
upload to github) which aggregates stats from multiple stats sockets
(show info, enable/disable/weight-change commands). But it could be
tricky when you have a complex map between CPUs and frontends/backends
with one-to-many or even many-to-many relationships.

 That said, if stats, peers, etc matters and you still need a huge SSL
 processing capacity, then the best way is to use a first layer of
 HAProxy multi-process to decipher the traffic and make it point to a
 second layer of HAProxy in single process mode.
 

This is a bit of a complex setup.

Pavlos
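The two-layer idea Baptiste describes can be sketched roughly like this
(addresses and paths are hypothetical; layer 1 runs with nbproc > 1 in
its global section, layer 2 as a single process):

```
# layer 1: multi-process, only terminates TLS
frontend tls_in
    bind :443 ssl crt /etc/haproxy/site.pem    # hypothetical path
    default_backend to_layer2

backend to_layer2
    server l2 127.0.0.1:8080                   # plain HTTP to layer 2

# layer 2: single process, so stats, health checks and stick-tables
# all live in one place
frontend plain_in
    bind 127.0.0.1:8080
    default_backend app_servers
```

The multi-process layer scales the CPU-heavy handshakes, while the
single-process layer keeps the features that don't work well across
processes.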





signature.asc
Description: OpenPGP digital signature


Re: Better understanding of nbproc vs distributing interrupts for cpu load management

2014-11-30 Thread Pavlos Parissis
On 28/11/2014 05:19 μμ, Lukas Tribus wrote:
 Hi,
 
 
 you're right.
 If you need to scale *a lot* your SSL processing capacity in HAProxy,
 you must use multiple processes.
 That said, multiproc model has some counter parts (stats, server
 status, health checks are local to each process, stick-tables can't be
 synchronized, etc..).
 With HAProxy 1.5, we can now start multiple stats socket and stats
 pages and bind them to each process, lowering the impact.
 That said, if stats, peers, etc matters and you still need a huge SSL
 processing capacity, then the best way is to use a first layer of
 HAProxy multi-process to decipher the traffic and make it point to a
 second layer of HAProxy in single process mode.
 
 
 If that still isn't enough and you need full horizontal scalability:
 Handle the SSL load with a two-layered load-balancing approach. The
 first layer of load-balancers only forwards in TCP mode (with
 source-IP stickiness or something like that) and you terminate
 SSL/TLS at the second load-balancing layer.
 
 That way you achieve horizontal scalability in the second, SSL/TLS
 terminating layer.
 
 

I have this setup in a different way, where X HAProxies participate in
BGP peering (with the BFD protocol enabled as well) and upstream
routers use ECMP with per-flow round-robin balancing enabled.

But terminating SSL at multiple end points without any kind of peer
information about TLS tickets and session IDs causes problems when you
want to implement server-side TLS session resumption. Other people
have accomplished this [1] and I am hoping to see support for this
setup in the 1.6 release :-)

 I hope that one day we can move the SSL handshake to dedicated threads,
 completely eliminating the event loop blocking and allowing a single
 process to forward all the traffic while some parallel threads do all
 the heavy SSL handshake lifting.
 

I was always under the impression that SSL sucks up all your CPU
resources, and therefore that it should be used only when really
necessary and when vertical scaling is not a major issue. After the
past VelocityConf in Barcelona, I changed my opinion about it. There
are several things that can be done to eliminate the heavy SSL
handshake lifting you mentioned. I have mentioned before [2] that
tuning the cipher suite can reduce the CPU load a lot. Other
techniques are available as well.
By doing all this nice stuff (better products, SSL tuning techniques)
we not only save CPU cycles but, most importantly, make the user
experience better and faster.


[1] https://blog.twitter.com/2013/forward-secrecy-at-twitter
[2] http://article.gmane.org/gmane.comp.web.haproxy/17663/match=pavlos



signature.asc
Description: OpenPGP digital signature


Re: http-keep-alive with SSL backend

2014-11-30 Thread Pavlos Parissis
On 30/11/2014 01:17 μμ, Cyril Bonté wrote:
 
 Hi again Sachin,
 
 Le 30/11/2014 13:01, Sachin Shetty a écrit :
 Thanks Cyril, but no luck, I still see no connection reuse. For every
 new connection from the same client, haproxy makes a new connection
 to the server and terminates it right after.
 
 Then, ensure that it isn't due to an explicit behaviour requested by
 the client or the server.
 
 Lukas, as per the documentation, the 1.5 dev version does support
 server-side pooling.
 http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-option%20http-keep-alive
 
 No, Lukas is right, there's no pooling yet in haproxy.
 haproxy will only try to reuse the previous server-side connection
 for the same client connection. This is not a pool of connections.
 Once the client closes its connection, there won't be any connection
 reuse on the server side.
 
 

and I was about to open a similar thread, as I was wondering about the
same thing.

Shall we expect connection pooling in 1.6?

This feature would improve performance for clients, HAProxy and
backend servers, especially in setups with mini POPs around the globe
and backends in a few centralized places.



signature.asc
Description: OpenPGP digital signature


add response header based on presence of request header

2014-11-28 Thread Pavlos Parissis
Hi,

I want HAProxy to add a response header if request includes a specific
header. I implemented the logic [1] but I get the following

 parsing [/etc/haproxy/haproxy.cfg:77] : acl 'lb_debug' will never match
because it only involves keywords that are incompatible with 'frontend
http-response header rule'
[WARNING] 331/135906 (6390) : config : log format ignored for proxy
'haproxy' since it has no log address.

Found a few references on the Internet, and if I understood them
correctly it fails because at the moment rspadd is evaluated HAProxy
doesn't know request information like headers. Am I right? And if I am
right, do we have a solution? Willy mentioned in a similar thread a
dirty way to get it, but I failed to find it.

[1]
frontend foo_bar_com
capture request header User-Agent len 120
capture request header Host   len 32
bind 10.189.200.1:80
acl lb_debug req.hdr(LBBEBUG) -i true
rspadd LBNODENAME:\ haproxylb-201.lhr4.qds.booking.com if lb_debug
default_backend foo_bar_com

backend foo_bar_com
default-server inter 10s
option httpchk GET / HTTP/1.1\r\nHost:\ foo.bar.com\r\nUser-Agent:\ HAProxy
server server1 10.12.10.65:80 check



Cheers,
Pavlos
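As noted elsewhere in this thread, the 1.6 http-request capture rules
make this doable without abusing rspadd; a sketch under that
assumption (header and node names taken from the config above, capture
slot numbering assumed to start at 0 since no other captures are
declared):

```
frontend foo_bar_com
    bind 10.189.200.1:80
    # capture the request header so it is still visible at response time
    http-request capture req.hdr(LBDEBUG) len 5
    http-response set-header LBNODENAME lbnode1 if { capture.req.hdr(0) -i true }
    default_backend foo_bar_com
```

This keeps the request/response correlation inside HAProxy instead of
relying on rspadd, which cannot see request headers.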


add response header based on presence of request header

2014-11-28 Thread Pavlos Parissis
Hi,

I want HAProxy to add a response header if request includes a specific
header. I implemented the logic [1] but I get the following

 parsing [/etc/haproxy/haproxy.cfg:77] : acl 'lb_debug' will never match
because it only involves keywords that are incompatible with 'frontend
http-response header rule'
[WARNING] 331/135906 (6390) : config : log format ignored for proxy
'haproxy' since it has no log address.

Found a few references on the Internet, and if I understood them
correctly it fails because at the moment rspadd is evaluated HAProxy
doesn't know request information like headers. Am I right? And if I am
right, do we have a solution? Willy mentioned in a similar thread a
dirty way to get it, but I failed to find it.

[1]
frontend foo_bar_com
capture request header User-Agent len 120
capture request header Host   len 32
bind 10.189.200.1:80
acl lb_debug req.hdr(LBBEBUG) -i true
rspadd LBNODENAME:\ lbnode1 if lb_debug
default_backend foo_bar_com

backend foo_bar_com
default-server inter 10s
option httpchk GET / HTTP/1.1\r\nHost:\ foo.bar.com\r\nUser-Agent:\ HAProxy
server server1 10.12.10.65:80 check



Cheers,
Pavlos


Re: Better understanding of nbproc vs distributing interrupts for cpu load management

2014-11-26 Thread Pavlos Parissis
On 25/11/2014 07:08 μμ, Lukas Tribus wrote:
 Hi,
 Thanks for your reply. We have tried this approach and while it gives
 some benefit, the haproxy process itself remains cpu-bound, with no
 idle time at all, with both pidstat and perf reporting that it uses
 close to 100% of available cpu while running.
 I think SSL/TLS termination is the only use case where HAProxy
 saturates a CPU core of a current-generation 3.4GHz+ CPU, which is
 why scaling SSL/TLS is more complex, requiring nbproc > 1.
 Lukas

I am experiencing the same 'expected' behavior, where SSL computation
drives HAProxy CPU user level to high numbers.

Using SSL tweaks like ECDSA/ECDH algorithms and TLS session
IDs/tickets helps, but it is not the ultimate solution. The HAProxy
guys had a webinar about HAProxy and SSL a few weeks ago, and they
mentioned using multiple processes. They also mentioned the SSL cache
being shared between all these processes, which is very efficient.

Cheers,
Pavlos




signature.asc
Description: OpenPGP digital signature


sslcachelookups - sslcachecemisses = ssl cache hits?

2014-11-25 Thread Pavlos Parissis
Hi,

Looking at the output of 'show info' on stats socket I see

[...snip...]
SslFrontendKeyRate: 0
SslFrontendMaxKeyRate: 31
SslFrontendSessionReuse_pct: 100
SslBackendKeyRate: 0
SslBackendMaxKeyRate: 6
SslCacheLookups: 698093
SslCacheMisses: 417817
[...snip...]

Would it be an accurate measurement of SSL cache hits if I subtract
SslCacheMisses from SslCacheLookups?
In our setup we use session IDs and TLS session tickets, see below, so
I assume that the cache counters are used for both.
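Working that subtraction out with the counters quoted above:

```
SslCacheLookups - SslCacheMisses = 698093 - 417817 = 280276 cache hits
hit ratio = 280276 / 698093 ≈ 40%
```

So, assuming the hits-equals-lookups-minus-misses reading is right,
roughly 40% of cache lookups were served from the shared SSL cache.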

openssl s_client -connect foo.bar.com:443 -tls1 -tlsextdebug -status
[...snip...]
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol  : TLSv1
Cipher: ECDHE-RSA-AES256-SHA
Session-ID:
3125F0852942082B52942BC0F432F7FFCFAFB540F23EB0E57CBDC1135728F0AF
Session-ID-ctx:
Master-Key:
27A9699E9F72831E2BA2D66BB59044A47FD91C55A1CC7A82715B5A8A290BE1E007C477A0EC0193D5C869FDED6F49B646
Key-Arg   : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
TLS session ticket lifetime hint: 300 (seconds)
TLS session ticket:
 - f1 d1 d3 e1 dd c0 3d 83-d9 1e c0 89 df c5 f5 9b
..=.


[...snip...]

What does SslFrontendSessionReuse_pct measure? I failed to find any
info about it in the docs or on the Internet; I haven't checked the code yet :-)

Cheers,
Pavlos


Re: [ANNOUNCE] haproxy-1.5.8

2014-10-31 Thread Pavlos Parissis
Git tag 1.5.8 is missing:-)


Re: [ANNOUNCE] haproxy-1.5.8

2014-10-31 Thread Pavlos Parissis
On 31 October 2014 11:33, Willy Tarreau w...@1wt.eu wrote:

 On Fri, Oct 31, 2014 at 11:30:14AM +0100, Pavlos Parissis wrote:
  Git tag 1.5.8 is missing:-)

 Ah indeed, I used Ctrl-R to recall the last history command line
 to push the new version, so I pushed only v1.5.7 as found on the
 previous command line :-)

 Fixed now, thanks Pavlos!
 willy


OK building it and pushing to Production ... on Monday:-) as it is going to
be windy here in NL and want to get some windsurfing done, not that I don't
trust the code:-)

Thanks,
Pavlos


Re: Running multiple haproxy instances to use multiple cores efficiently

2014-10-29 Thread Pavlos Parissis
On 29 October 2014 08:52, Baptiste bed...@gmail.com wrote:

 On Mon, Oct 27, 2014 at 7:41 PM, Chris Allen ch...@cjx.com wrote:
  We're running haproxy on a 2x4 core Intel E5-2609 box. At present
 haproxy is
  running on
  a single core and saturating that core at about 15,000 requests per
 second.
 
  Our application has four distinct front-ends (listening on four separate
  ports) so it would be
  very easy for us to run four haproxy instances, each handling one of the
  four front-ends.
 
  This should then allow us to use four of our eight cores. However we
 won't
  be able to tie hardware
  interrupts to any particular core.
 
  Is this arrangement likely to give us a significant performance boost? Or
  are we heading for trouble because
  we can't tie interrupts to any particular core?
 
  Any advice would be much appreciated. Many thanks,
 
  Chris.
 
 

 Hi Chris,

 You can use nbproc, cpu-map and bind-process keywords to startup
 multiple processes and bind frontends and backends to multiple CPU
 cores.


If a backend is used by only one FE and that FE is bound to certain
CPU(s), do we still need to bind the backend to the same CPU set?


Cheers,
Pavlos


Re: Running multiple haproxy instances to use multiple cores efficiently

2014-10-29 Thread Pavlos Parissis
On 29 October 2014 13:49, Baptiste bed...@gmail.com wrote:

  If a backend is used only by 1 FE and that FE is bound to a certain
 CPU(s),
  do we still need to bind the backend to the same CPU(s) set ?
 
 
  Cheers,
  Pavlos

 Yes, this is a requirement and will be performed by HAProxy automatically.


OK, as long as there is a 1-to-1 relationship between FEs and BEs,
binding only the FE to a set of CPUs is enough, and HAProxy will take
care of binding the BE to the same set of CPUs.

Thanks a lot for the answer,
Pavlos
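Putting the keywords Baptiste mentioned together, a minimal sketch of
pinning one frontend (and, per the answer above, implicitly its sole
backend) per core; ports and names are hypothetical:

```
global
    nbproc 2
    cpu-map 1 0        # process 1 -> CPU core 0
    cpu-map 2 1        # process 2 -> CPU core 1

frontend fe_a
    bind :8001
    bind-process 1     # its 1-to-1 backend follows automatically

frontend fe_b
    bind :8002
    bind-process 2
```

This mirrors the nbproc/cpu-map/bind-process layout shown in the
multiprocess config earlier in this digest.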


Re: no-sslv3 in default

2014-10-20 Thread Pavlos Parissis
On 16/10/2014 12:12 μμ, Olivier wrote:
 Hi,
 
 2014-10-16 10:34 GMT+02:00 Neil - HAProxy List
 maillist-hapr...@iamafreeman.com:
 
 I'd go further. SSLv3 is an obsolete protocol; does anyone disagree
 with that?
 
 For a start make no-sslv3 the default and have a
 enable-obsolete-sslv3 option.
 Or better make enabling it a compile time option.
 
 Or maybe just get rid of it altogether?
 
 
 I do not agree. Backward compatibility is really important for software
 like HAProxy. So if you start disabling this feature, it would lead to
 tons of bug reports.
 Moreover, I do not agree that disabling Sslv3 is absolutely necessary.
 There are still plenty of websites around that must keep support for
 WinXP+IE6. Even Google did not deactivate sslv3 on their server (they
 are using a mitigating solution instead).
 
 In my own opinion, being able to deactivate it on defaults section might
 help, but don't change default behaviour. 
 
 Olivier

I second this. Disabling SSLv3 by default will go unnoticed in an
upgrade process and will cause outages on services.

Oh yes, SSLv3 is old, but it is used by a lot of software (legacy or
not) and upgrading it to TLS can take years.

Did you know that the requests library on Python 2.7 uses SSLv3 by
default on some recent distributions (RedHat 6, for instance)? And if
you want to use TLS you have to write 5-6 lines of code, using an
HTTPAdapter, etc.

I am one of those people who love to use the latest and greatest
technologies, but in a way that will not break the business.

Please don't disable SSLv3; just make the code warn about it in the
log as a reminder.

Cheers,
Pavlos
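For anyone who does want to turn SSLv3 off explicitly rather than by
default, a sketch (the global default option may not exist in every
1.5 build, so verify against your version's docs; the crt path is
hypothetical):

```
global
    # global default applied to all ssl binds, where supported
    ssl-default-bind-options no-sslv3

frontend https_in
    # or disable it per bind line
    bind :443 ssl crt /etc/haproxy/site.pem no-sslv3
```

This keeps the opt-in spirit of the thread: the admin chooses to
disable SSLv3, rather than an upgrade silently doing it.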








signature.asc
Description: OpenPGP digital signature


maxconnrate VS maxsessrate

2014-10-05 Thread Pavlos Parissis
Hi,

The doc is a bit confusing, at least to me. The former is about TCP
connections and the latter about HTTP requests; am I completely wrong?

Cheers,
Pavlos
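As far as I can read the docs, both are global limits expressed per
second: maxconnrate caps the rate at which incoming connections are
accepted, while maxsessrate caps the rate at which sessions are
created. A sketch (values are hypothetical):

```
global
    maxconnrate 1000    # accept at most ~1000 new TCP connections/s
    maxsessrate 1000    # create at most ~1000 new sessions/s
```

The distinction matters mostly with keep-alive, where a single
accepted connection can carry several requests; the exact semantics
should be checked against the configuration manual for your version.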



signature.asc
Description: OpenPGP digital signature


Re: Binding http and https on same port

2014-10-01 Thread Pavlos Parissis
On 01/10/2014 04:30 μμ, Alexander Olsson wrote:
 Is it possible to bind both HTTP and HTTPS on the same port with haproxy?
 Something like this:
 
 frontend data-in
   mode http
   bind 0.0.0.0:8080
   crt if ssl /path/to/crt
 
 Obviously above doesn't work. Is there something similar? It's generally easy 
 to see if it is TLS (starts with 0x16) on the port or anything else.
 
 It is important that it is the same port, so the general solution to this 
 problem where two bind statements is used does not work for me.
 
 Regards,
 Alexander
 
 
 


Have you tried adding a second bind, like:
bind 0.0.0.0:443

Cheers,
Pavlos





signature.asc
Description: OpenPGP digital signature


Re: asking

2014-09-30 Thread Pavlos Parissis
On 29 Sep 2014 at 1:56 PM, Bot Budi roboteb...@gmail.com
wrote:

 Can I use haproxy as a caching server? Does it have a feature for
caching?

 thanks.

Nope, HAProxy is not a caching engine.

Pavlos


Re: retry new backend on http errors?

2014-09-30 Thread Pavlos Parissis
On 26/09/2014 11:46 πμ, JCM wrote:
 On 25 September 2014 14:47, Klavs Klavsen k...@vsen.dk wrote:
 Any way to make haproxy retry requests with certain http response codes
 X times (or just until all backends have been tried) ?
 
 Nope. You really don't want to do this. And I'd be sad if the devs
 added anything in to HAProxy to enable this.
 

I don't find his request unreasonable. There are cases where a short
burst of 500s could lead to a successful request upon a retry.

But I have to say that it is very tricky to decide under which
conditions you want HAProxy to retry or to let the 500 go back to the
client.


Pavlos




signature.asc
Description: OpenPGP digital signature


SSL private key and Certificate in a separated files

2014-09-29 Thread Pavlos Parissis
Hi,

Is it possible to have the SSL private key in one file, and the SSL
certificate of the server together with all intermediate certificates
in a second file?

I tried
bind 10.1.1.1:443 ssl crt file.key crt certificate-bundle.pem no-sslv3
ciphers .


but it fails with "unable to load SSL private key from PEM file".

Cheers,
Pavlos
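In 1.5 the crt argument expects a single PEM file containing both the
key and the certificate chain, which would explain that error; the
usual workaround is to concatenate the files first (the order shown is
a common convention, and the paths are hypothetical):

```
# build the combined PEM haproxy 1.5 expects, e.g.:
#   cat server.crt intermediates.crt server.key > /etc/haproxy/combined.pem
frontend https_in
    bind 10.1.1.1:443 ssl crt /etc/haproxy/combined.pem no-sslv3
```

Later HAProxy versions gained ways to keep the key separate, but for
1.5 the single combined file is the safe assumption.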


Re: About the haproxy proces/thread number

2014-09-23 Thread Pavlos Parissis
On 16 September 2014 03:23, Zebra max...@unitedstack.com wrote:

 Hi,all

  I configure one frontend named https_proxy and one backend named
 httpservers. When I start haproxy on my machine, which has 2 CPUs, I
 see the log below.

 Sep 16 01:03:34 localhost haproxy[30429]: Proxy https_proxy started.
 Sep 16 01:03:34 localhost haproxy[30429]: Proxy https_proxy started.
 Sep 16 01:03:34 localhost haproxy[30429]: Proxy httpservers started.
 Sep 16 01:03:34 localhost haproxy[30429]: Proxy httpservers started.

 I know it is recommended to keep nbproc at 1, so does the log make sense?


What happens if you bind the frontends to different CPUs?

Cheers,
Pavlos


Re: HAProxy 1.5 incorrectly marks servers as DOWN

2014-09-10 Thread Pavlos Parissis
On 10/09/2014 07:02 πμ, Juho Mäkinen wrote:
 Thanks Pavlos for your help. Fortunately (and embarrassingly for me)
 the mistake was not anywhere near haproxy; instead, my haproxy config
 template system had a bug which mixed up the backend name and IP
 address. Because of this haproxy showed different names for the
 servers which were actually down, and that threw me way off when I
 investigated this issue, blinded by the actual problem which was
 always so near to my sight. :(

This is one of the reasons I use hostnames rather than IPs. I know
people say that a DNS lookup has some cost, but in my environment,
with ~300 pools and ~2K servers, we didn't notice any major problem.
But I have to say that I never looked at possible slowdowns due to DNS
lookups.

Other load balancers, F5 for instance, strongly suggest using IPs.


 haproxy shows the server name in the server log when it reports health
 check statuses. Example:
 Health check for server comet/comet-172.16.4.209:3500 succeeded,
 reason: Layer7 check passed, code: 200, info: OK, check duration: 2ms,
 status: 3/3 UP.
 
 This could be improved by also showing the actual ip and port in the
 log. Suggestion:
  Health check for server comet/comet-172.16.4.209:3500
  (172.16.4.209:3500) succeeded, reason: Layer7 check passed, code:
  200, info: OK, check duration: 2ms, status: 3/3 UP.
 

I don't know C, but I think it should be relatively easy to implement.

  As a side question: the documentation was a bit unclear. If I have
  nbproc > 1 and I use the admin socket to turn servers' administrative
  status down or up, do I need to do it to separated admin sockets per
  haproxy process, or can I just use one admin socket?
 
 
 You need a different socket. Each process can only be managed by a
 dedicated stats socket. There isn't any kind of aggregation where you
 issue a command to 1 stats socket and this command is pushed to all
 processes. Next release will address this kind of issues.
 
 
 Thank you, good to know!
 
  - Garo
  




signature.asc
Description: OpenPGP digital signature


Re: tcp reset errors

2014-09-10 Thread Pavlos Parissis
On 10/09/2014 03:31 μμ, Franky Van Liedekerke wrote:
 Hi,
 
 
[..snip..]

 Any hints are very much appreciated. If more info is needed, let me know.
 


Is it possible to run tcpdump on both servers and see who is sending
the RSTs? What about LDAP logs? Do you know if you get this problem
for all LDAP queries or only a subset? It could be that LDAP queries
take too much time to process due to a missing index, heavy IO, etc.
I know LDAP can provide quite a lot of information.

Cheers,
Pavlos





signature.asc
Description: OpenPGP digital signature


Re: HAProxy 1.5 incorrectly marks servers as DOWN

2014-09-09 Thread Pavlos Parissis
On 08/09/2014 10:30 πμ, Juho Mäkinen wrote:
 
 On Thu, Sep 4, 2014 at 11:35 PM, Pavlos Parissis
 pavlos.paris...@gmail.com wrote:
 
 On 04/09/2014 08:55 πμ, Juho Mäkinen wrote:
  I'm upgrading my old 1.4.18 haproxies to 1.5.4 and I have a mysterious
  problem where haproxy marks some backend servers as being DOWN with a
  message L4TOUT in 2000ms. 
 Are you sure that you haven't reached any sort of limits on your backend
 servers? Number of open files and etc...
 
 
 Quite sure because I can always use curl from the haproxy machine to the
 backend machine and I get the response to the check command always
 without any delays. 
 
 Are you sure that backend servers return a response with HTTP status 200
 on healtchecks?
 
 
 Yes. I also ran strace on a single haproxy process when the haproxy
 marked multiple backends as being down. Here's an example output:
 
 08:06:07.302582 connect(30, {sa_family=AF_INET, sin_port=htons(3500),
 sin_addr=inet_addr(172.16.6.102)}, 16) = -1 EINPROGRESS (Operation now
 in progress)
 08:06:07.303024 recvfrom(30, 0x1305494, 16384, 0, 0, 0) = -1 EAGAIN
 (Resource temporarily unavailable)
 08:06:07.303097 getsockopt(30, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
 08:06:07.303167 sendto(30, GET /check HTTP/1.0\r\n\r\n, 23,
 MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 23
 08:06:07.304522 recvfrom(30, HTTP/1.1 200 OK\r\nX-Powered-By:
 Express\r\nAccess-Control-Allow-Origin:
 *\r\nAccess-Control-Allow-Methods: GET, HEAD, POST, PUT, DELE...,
 16384, 0, NULL, NULL) = 503
 08:06:07.304603 setsockopt(30, SOL_SOCKET, SO_LINGER, {onoff=1,
 linger=0}, 8) = 0
 08:06:07.304666 close(30)   = 0
 
 So the server clearly sends an HTTP 200 OK response, in just 1.9 ms. I
 analysed around 20 different checks via the strace to the same backend
 (which is marked down by haproxy) and none of them was over one second.
 

Are you sure that the above response was for a health check that
marked the server down? It is quite difficult to find this out; I have
been in your position and it took me some time to find the actual
problem.


 Here's an example from haproxy logging what happens when the problem starts:
 
 Sep  8 07:22:25 localhost haproxy[24282]: [08/Sep/2014:07:22:24.615]
 https comet-getcampaigns/comet-172.16.2.97:3500 423/0/1/3/427 200 502 -
 -  1577/1577/3/1/0 0/0 GET /mobile HTTP/1.1
 Sep  8 07:22:25 localhost haproxy[24284]: [08/Sep/2014:07:22:24.280]
 https~ comet-getcampaigns/comet-172.16.2.97:3500 771/0/2/346/1121 200
 40370 - -  2769/2769/6/0/0 0/0 GET /mobile HTTP/1.1Sep  8 07:22:25
 localhost haproxy[24284]: [08/Sep/2014:07:22:25.090] https~
 comet-getcampaigns/comet-172.16.2.97:3500 379/0/2/-1/804 502 204 - -
 SH-- 2733/2733/7/0/0 0/0 GET /mobile HTTP/1.1
 Sep  8 07:22:25 localhost haproxy[24280]: Health check for server
 comet-getcampaigns/comet-172.16.2.97:3500 failed, reason: Socket error,
 info: Connection reset by peer, check duration: 231ms, status: 2/3 UP.
 Sep  8 07:22:25 localhost haproxy[24281]: Health check for server
 comet-getcampaigns/comet-172.16.2.97:3500 failed, reason: Socket error,
 info: Connection reset by peer, check duration: 217ms, status: 2/3 UP.
 Sep  8 07:22:25 localhost haproxy[24282]: Health check for server
 comet-getcampaigns/comet-172.16.2.97:3500 failed, reason: Socket error,
 info: Connection reset by peer, check duration: 137ms, status: 2/3 UP.
 Sep  8 07:22:25 localhost haproxy[24284]: Health check for server
 comet-getcampaigns/comet-172.16.2.97:3500 failed, reason: Socket error,
 info: Connection reset by peer, check duration: 393ms, status: 2/3 UP.
 Sep  8 07:22:25 localhost haproxy[24284]: [08/Sep/2014:07:22:25.661]
 https comet-getcampaigns/comet-172.16.2.97:3500 305/0/1/-1/314 -1 0 - -
 SD-- 2718/2718/5/0/0 0/0 GET /mobile HTTP/1.1

The above means that the processes received a TCP RST packet on the open
socket towards the backend. Have you run tcpdump on the HAProxy host to
see if your backends send TCP RST?
Do you have any kind of firewall (network or host based) between HAProxy
and the backends?
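The SH-- and SD-- fields in the log lines quoted above are HAProxy's session termination states (first character: cause, second: session state at that moment). A quick way to triage a batch of such logs is to extract and count that field; a rough sketch, where the regex and the sample lines are illustrative rather than a full log-format parser:

```python
import re
from collections import Counter

# The termination state is the 4-character field (e.g. "SH--", "SD--",
# "----") that precedes the actconn/feconn/... counters in the HTTP log.
TERM_RE = re.compile(r"\s(?P<flags>[a-zA-Z-]{2}--)\s+\d+/\d+")

sample_log = """\
haproxy[24284]: [08/Sep/2014:07:22:25.090] https~ be/srv 379/0/2/-1/804 502 204 - - SH-- 2733/2733/7/0/0 0/0 "GET /mobile HTTP/1.1"
haproxy[24284]: [08/Sep/2014:07:22:25.661] https be/srv 305/0/1/-1/314 -1 0 - - SD-- 2718/2718/5/0/0 0/0 "GET /mobile HTTP/1.1"
"""

def count_termination_states(log_text):
    """Count occurrences of each termination-state field."""
    return Counter(m.group("flags") for m in TERM_RE.finditer(log_text))

print(count_termination_states(sample_log))
```

A spike of SH (server aborted during headers) or SD (server aborted during data) states alongside failing checks usually points at the server side rather than at HAProxy.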

 Sep  8 07:22:27 localhost haproxy[24278]: Health check for server
 comet-getcampaigns/comet-172.16.2.97:3500 failed, reason: Layer4
 connection problem, info: Connection refused, check duration: 0ms,
 status: 2/3 UP.
 Sep  8 07:22:27 localhost haproxy[24279]: Health check for server
 comet-getcampaigns/comet-172.16.2.97:3500 failed, reason: Layer4
 connection problem, info: Connection refused, check duration: 0ms,
 status: 2/3 UP.
 Sep  8 07:22:28 localhost haproxy[24280]: Health check for server
 comet-getcampaigns/comet-172.16.2.97:3500 failed, reason: Layer4
 connection problem, info: Connection refused, check duration: 2ms,
 status: 1/3 UP.
 Sep  8 07:22:28 localhost haproxy[24284]: Health check for server
 comet-getcampaigns/comet-172.16.2.97:3500 failed, reason: Layer4
 connection problem, info: Connection refused, check duration: 1ms,
 status: 1/3 UP.
 Sep  8 07:22:28 localhost haproxy[24282

Re: HAProxy 1.5 incorrectly marks servers as DOWN

2014-09-04 Thread Pavlos Parissis
On 04/09/2014 08:55 πμ, Juho Mäkinen wrote:
 I'm upgrading my old 1.4.18 haproxies to 1.5.4 and I have a mysterious
 problem where haproxy marks some backend servers as being DOWN with the
 message "L4TOUT in 2000ms". Sometimes the message also has a star:
 "* L4TOUT in 2000ms" (I didn't find what the star means in the docs).
 Also the reported timeout varies between 2000ms and 2003ms.
 

An L4TOUT status while you have httpchk enabled means that HAProxy
failed to establish a TCP connection within 2 seconds.

Are you sure that you haven't reached any sort of limit on your backend
servers, such as the number of open files?
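One quick thing to check on a backend host is the open-file limit in effect for the server process. A minimal stdlib sketch (run it in the same environment as the server, or inspect /proc/&lt;pid&gt;/limits for an already-running process; the 4096 threshold below is just an illustrative guess):

```python
import resource

# Soft and hard limits on open file descriptors for the current process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open files: soft={soft} hard={hard}")

# Each established connection (and each health-check socket) consumes one
# descriptor, so the soft limit roughly bounds concurrent connections 1:1.
if soft < 4096:
    print("soft limit looks low for a busy backend")
```

When the limit is exhausted, new connections are refused or time out, which shows up on the HAProxy side exactly as L4TOUT / connection-refused check failures.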

 This does not happen to every backend and it doesn't happen immediately.
 After restart every backend is green and a few backends starts to get
 marked DOWN after about 30 minutes or so. I'm also running two instances
 in two different servers and they both suffer the same problem but the
 DOWN servers aren't same. So server A might be marked DOWN on haproxy-1
 and server B marked down on haproxy-2 (or vice versa).
 
 This seems to happen regardless how much traffic I run into the
 haproxies. I can always ssh into the haproxies and run curl against the
 check url and it always works, so this problem seems to be inside haproxy.
 

Are you sure that the backend servers return a response with HTTP status
200 on health checks?

 My haproxy config is a kind of long so I copied it here:
 http://koti.kapsi.fi/garo/nobackup/haproxy.cfg (I've sanitised it a bit,
 but only hostnames).
 

You have only one stats socket while you have 7 processes. You need to
enable a stats socket for each process; here is an example from a
24-process setup:

stats socket /var/lib/haproxy/stats1 uid 0 gid 0 mode 0440 level admin process 1
stats socket /var/lib/haproxy/stats2 uid 0 gid 0 mode 0440 level admin process 2
stats socket /var/lib/haproxy/stats3 uid 0 gid 0 mode 0440 level admin process 3
stats socket /var/lib/haproxy/stats4 uid 0 gid 0 mode 0440 level admin process 4
stats socket /var/lib/haproxy/stats5 uid 0 gid 0 mode 0440 level admin process 5
stats socket /var/lib/haproxy/stats6 uid 0 gid 0 mode 0440 level admin process 6
stats socket /var/lib/haproxy/stats7 uid 0 gid 0 mode 0440 level admin process 7
stats socket /var/lib/haproxy/stats8 uid 0 gid 0 mode 0440 level admin process 8
stats socket /var/lib/haproxy/stats9 uid 0 gid 0 mode 0440 level admin process 9
stats socket /var/lib/haproxy/stats10 uid 0 gid 0 mode 0440 level admin process 10
stats socket /var/lib/haproxy/stats11 uid 0 gid 0 mode 0440 level admin process 11
stats socket /var/lib/haproxy/stats12 uid 0 gid 0 mode 0440 level admin process 12
stats socket /var/lib/haproxy/stats13 uid 0 gid 0 mode 0440 level admin process 13
stats socket /var/lib/haproxy/stats14 uid 0 gid 0 mode 0440 level admin process 14
stats socket /var/lib/haproxy/stats15 uid 0 gid 0 mode 0440 level admin process 15
stats socket /var/lib/haproxy/stats16 uid 0 gid 0 mode 0440 level admin process 16
stats socket /var/lib/haproxy/stats17 uid 0 gid 0 mode 0440 level admin process 17
stats socket /var/lib/haproxy/stats18 uid 0 gid 0 mode 0440 level admin process 18
stats socket /var/lib/haproxy/stats19 uid 0 gid 0 mode 0440 level admin process 19
stats socket /var/lib/haproxy/stats20 uid 0 gid 0 mode 0440 level admin process 20
stats socket /var/lib/haproxy/stats21 uid 0 gid 0 mode 0440 level admin process 21
stats socket /var/lib/haproxy/stats22 uid 0 gid 0 mode 0440 level admin process 22
stats socket /var/lib/haproxy/stats23 uid 0 gid 0 mode 0440 level admin process 23
stats socket /var/lib/haproxy/stats24 uid 0 gid 0 mode 0440 level admin process 24

nbproc 24
cpu-map odd 0-5 12-17
cpu-map even 6-11 18-23

listen haproxy1
bind :8081 process 1
bind :8082 process 2
bind :8083 process 3
bind :8084 process 4
bind :8085 process 5
bind :8086 process 6
bind :8087 process 7
bind :8088 process 8
bind :8089 process 9
bind :8090 process 10
bind :8091 process 11
bind :8092 process 12
bind :8093 process 13
bind :8094 process 14
bind :8095 process 15
bind :8096 process 16
bind :8097 process 17
bind :8098 process 18
bind :8099 process 19
bind :8100 process 20
bind :8101 process 21
bind :8102 process 22
bind :8103 process 23
bind :8104 process 24
stats uri /
stats show-node
stats refresh 10s
stats show-legends


and then check all of them to find which process marks the server down.
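With one stats socket per process you can pull the CSV stats from each (e.g. `echo "show stat" | socat stdio /var/lib/haproxy/statsN`) and compare the server status column per process. A sketch of just the comparison step, fed with illustrative CSV fragments (a real "show stat" dump has many more columns, but it carries a `# `-prefixed header row, so the columns can be resolved by name):

```python
import csv, io

def down_servers(csv_text):
    """Return {(backend, server)} pairs whose status column is DOWN."""
    # "show stat" prefixes the header row with "# "; strip it so that
    # DictReader can map the column names.
    reader = csv.DictReader(io.StringIO(csv_text.lstrip("# ")))
    return {(r["pxname"], r["svname"]) for r in reader if r["status"] == "DOWN"}

# Illustrative per-process dumps keyed by process number.
dumps = {
    1: "# pxname,svname,status\nbck1,srv1,UP\nbck1,srv2,UP\n",
    2: "# pxname,svname,status\nbck1,srv1,DOWN\nbck1,srv2,UP\n",
}

for proc, text in sorted(dumps.items()):
    for px, sv in down_servers(text):
        print(f"process {proc} sees {px}/{sv} as DOWN")
```

Since each process runs its own checks, it is normal for one process to see a server DOWN while the others still see it UP.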
 I've run the logging with verbose debugging to check if that gives any
 clues on the health check issue, but the logs did not reveal anything to
 my eye. I can however gather a new log sample of the health checks, but
 the haproxies are now receiving production traffic so the log volume
 would be too much to gather at the moment.
 
 I've also gathered some tcpdump traffic to the hosts marked DOWN and
 

Re: Is this in the specifications?(HTTP responses randomly getting RST)

2014-08-01 Thread Pavlos Parissis
On 01/08/2014 08:00 πμ, cloudpack 川原 洋平 wrote:
 Hi,
 
 I am setting up HAProxy 1.5.3.
 I randomly observed an RST in the HTTP response when verifying the
 following settings.
 Is this RST behaviour part of the specification?
 
 ## tcpdump result
 
 
 05:31:17.738871 IP ${haproxy-host}.49167  ${Apache-host}.http: Flags
 [R.], seq 275, ack 269, win 149, options [nop,nop,TS val 3764976 ecr
 369760047], length 0
 
 

I believe HAProxy closes a TCP session by sending RST, rather than
following the typical TCP close sequence of FIN/ACK, when it is the
initiator of the TCP termination. If the other end initiates the
termination, HAProxy follows the typical FIN/ACK sequence. This is my
understanding after analyzing some network traces.
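The strace excerpt in the earlier thread shows the mechanism: setting SO_LINGER with a zero timeout before close() makes the kernel abort the connection with RST instead of the normal FIN/ACK sequence. A self-contained sketch of that behaviour on a loopback socket pair (the 0.2 s sleep is just to let the RST reach the peer before we read):

```python
import socket, struct, time

def close_with_rst():
    """Abortive close: SO_LINGER(on, 0) makes close() send RST, not FIN."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.create_connection(srv.getsockname())
    conn, _ = srv.accept()
    cli.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                   struct.pack("ii", 1, 0))  # l_onoff=1, l_linger=0
    cli.close()                              # kernel emits RST here
    time.sleep(0.2)                          # let the RST reach the peer
    try:
        conn.recv(16)   # an empty read here would mean a normal FIN close
        reset = False
    except ConnectionResetError:
        reset = True
    conn.close()
    srv.close()
    return reset

print("peer observed RST:", close_with_rst())
```

An abortive close avoids leaving sockets in TIME_WAIT, which is presumably why a busy proxy would prefer it when it initiates the termination.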


Cheers,
Pavlos





signature.asc
Description: OpenPGP digital signature


Re: Roadmap for 1.6

2014-07-29 Thread Pavlos Parissis
On 29/07/2014 10:55 πμ, Willy Tarreau wrote:
 Hi Pavlos,
 
 On Mon, Jul 28, 2014 at 12:07:37AM +0200, Pavlos Parissis wrote:
 On 25/07/2014 07:28 , Willy Tarreau wrote:
 Hi all,

 [..snip..]


   - hot reconfiguration : some users are abusing the reload mechanism to
 extreme levels, but that does not void their requirements. And many
 other users occasionally need to reload for various reasons such as
 adding a new server or backend for a specific customer. While in the
 past it was not possible to change a server address on the fly, we
 could now do it easily, so we could think about provisionning a few
 extra servers that could be configured at run time to avoid a number
 of reloads. Concerning the difficulty to bind the reloaded processes,
 Simon had done some work in this area 3 years ago with the master-
 worker model. Unfortunately we never managed to stabilize it because
 of the internal architecture that was hard to adapt and taking a lot
 of time. It could be one of the options to reconsider though, along
 with FD passing across processes. Similarly, persistent server states
 across reloads is often requested and should be explored.


 Let's take this to another level and support on-line configuration
 changes for Frontends, backends and servers which don't require restart
 
 We've already improved things significantly in this direction. We're at a
 point where it should be easy to support on-the-fly server address change.
 However there are still a large number of things that cannot be easily
 changed. All those which have many implications are in this area. For
 example, people think that adding a server is easy, but it clearly is not.
 The table-based LB algorithms already compute the largest table size when
 all servers are up, according to their respective weights. Changing one
 weight or adding one server can increase their least common multiple and
 require to reallocate and rebuild a complete table. Also, servers are
 checked, and for the checks we reserve file descriptors. We cannot easily
 change the max number of file descriptors on the fly either. What can be
 done however is to reserve some spare slots for adding new servers into an
 existing backend.
 
 Also, for having worked many years with various products which support
 on-line configuration changes, I don't count anymore the number of days,
 weeks or months of troubleshooting of strange issues only caused by side
 effect of these on-line changes, that simply went away after a reboot. I'm
 not even blaming them because it's very hard to propagate changes correctly.
 It always reminds me of a math professor I had at the uni who was able to
 spot a mistake in an equation as large as the blackboard, who would fix it
 there at the top of the blackboard and propagate the fix down to other lines.
 The covered area looked like a pyramid. Here it's the same, performing a
 minor change at the top of the configuration needs to take care of many
 tiny implications far away from where the change is performed. And I'm
 definitely not going to reproduce the lack of reliability that many products
 can have just for the sake of allowing on-line reconfiguration.
 
 I'd rather invest more time ensuring that we can seamlessly reload (eg: not
 lose stick-tables, stats nor server checks) to ensure that sensible changes
 are done this way and not the tricky one.
 
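Willy's point above about table-based LB algorithms can be made concrete: the map size depends on the least common multiple of the server weights, so adding one server or changing one weight can force a full table reallocation and rebuild. A toy illustration of the arithmetic (not HAProxy's actual implementation):

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def table_size(weights):
    # The LCM of the weights is the smallest table length that keeps
    # every server's share integral after normalisation.
    return reduce(lcm, weights)

print(table_size([50, 50]))      # 50
print(table_size([50, 50, 37]))  # 1850
```

Going from two weight-50 servers to adding a single weight-37 server inflates the LCM from 50 to 1850, which is the kind of hidden cost behind an apparently trivial "just add a server" runtime change.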

If you manage to implement this, especially the server checks, it would
be a MAJOR improvement. It will probably reduce the requests for on-line
changes as well, since people (including myself) will just say "reload,
dude, it is for free".

Thanks Willy for taking the time to respond to my mail, very much
appreciated.

Cheers,
Pavlos






Re: Roadmap for 1.6

2014-07-29 Thread Pavlos Parissis
On 28/07/2014 11:54 πμ, Apollon Oikonomopoulos wrote:
 Hi Willy,
 
 On 19:28 Fri 25 Jul , Willy Tarreau wrote:

 Concerning the new features, no promises, but we know that we need to
 progress in the following areas :

   - multi-process : better synchronization of stats and health checks,
 and find a way to support peers in this mode. I'm still thinking a
 lot that due to the arrival of latency monsters that are SSL and
 compression, we could benefit from having a thread-based architecture
 so that we could migrate tasks to another CPU when they're going to
 take a lot of time. The issue I'm seeing with threads is that
 currently the code is highly dependent on being alone to modify any
 data. Eg: a server state is consistent between entering and leaving
 a health check function. We don't want to start adding huge mutexes
 everywhere.
 
 How about using shared memory segments for stats, health checks and 
 peers?
 

 If anyone has any comment / question / suggestion, as usual feel free to
 keep the discussion going on.
 
 Could I also add shared SSL session cache over multiple boxes (like 
 stud), to aid SSL scalability behind LVS directors? It has been asked 
 for before in the mailing list if I recall correctly.
 

A bit off topic, but sometimes tuning the cipher suite reduces the CPU
cost of encryption. Today, I managed to save 5% CPU by moving to an
ECDHE cipher suite, see https://db.tt/N9auU9cg.

I just recompiled HAProxy against OpenSSL 1.0.1, where ECDHE is
available, and the default cipher changed from DHE to ECDHE, which is
still a CPU-intensive cipher set but much cheaper than DHE. I should
mention that the server uses an Intel CPU and the OpenSSL Intel AES-NI
engine is enabled by default, as OpenSSL 1.0.1 can detect processors
that support AES-NI.
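If you want to pin this preference explicitly rather than rely on OpenSSL's default ordering, the cipher list can be set on the bind line. A hedged sketch (the exact cipher string is illustrative and should be tuned to the clients you need to support):

```
frontend main_s
    bind *:443 ssl crt /etc/ssl/wildcard.foo.com.pem ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA:AES128-SHA
```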

Cheers,
Pavlos










Re: Roadmap for 1.6

2014-07-27 Thread Pavlos Parissis
On 25/07/2014 07:28 μμ, Willy Tarreau wrote:
 Hi all,

[..snip..]


   - hot reconfiguration : some users are abusing the reload mechanism to
 extreme levels, but that does not void their requirements. And many
 other users occasionally need to reload for various reasons such as
 adding a new server or backend for a specific customer. While in the
 past it was not possible to change a server address on the fly, we
 could now do it easily, so we could think about provisionning a few
 extra servers that could be configured at run time to avoid a number
 of reloads. Concerning the difficulty to bind the reloaded processes,
 Simon had done some work in this area 3 years ago with the master-
 worker model. Unfortunately we never managed to stabilize it because
 of the internal architecture that was hard to adapt and taking a lot
 of time. It could be one of the options to reconsider though, along
 with FD passing across processes. Similarly, persistent server states
 across reloads is often requested and should be explored.
 

Let's take this to another level and support on-line configuration
changes for frontends, backends and servers which don't require a
restart, and at the same time *dump* the new configuration to
haproxy.conf, with a haproxy.conf.OK created on startup, the same way
OpenLDAP manages its configuration. This would be very useful in
environments where servers register themselves to a service (a backend
in this case) based on health checks which run locally or by a
centralized service. Oh yes, I am talking about Zookeeper integration.

In setups where you have N HAProxy servers serving the same site [1],
reducing the number of health checks is very important.
We have been running HAProxy with ~450 backends and ~3000 total servers.
The number of health checks was so high that it was causing issues on
firewalls; oh yes, we had firewalls between HAProxy and the servers.

Once again, we all need to say a big thank you to everyone working on
this excellent piece of software.

Cheers,
Pavlos


[1]
TCP Anycast setup, where iBGP ECMP balances traffic to N HAProxy
servers. Bird runs on HAProxy servers which establishes BGP and BFD
sessions to upstream routers, and a service health-check triggers route
advertisements if and only if HAProxy runs.







session limit on backend

2014-07-24 Thread Pavlos Parissis
Hi,

I have a question about the session limit on backends. With the
following conf, and without any parameters in the frontends/backends
about sessions/connections, I see that backends have a 5000 session
limit (slim in the CSV output).

How is this number calculated?


global
log 127.0.0.1 local2 notice

chroot  /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 10
user    haproxy
group   haproxy
daemon


stats socket /var/lib/haproxy/stats uid 0 gid 0 mode 0440 level admin

ssl-server-verify none
tune.ssl.default-dh-param 2048

defaults
maxconn 5
rate-limit sessions 2000

mode    http
log global
option  contstats
option  tcplog
option  dontlognull
option  tcp-smart-accept
option  tcp-smart-connect
option  http-keep-alive
option  redispatch
balance roundrobin
timeout http-request15s
timeout http-keep-alive 15s
retries 2
timeout queue   1m
timeout connect 10s
timeout client  15s
timeout server  15s
timeout check   5s


Thanks,
Pavlos


Re: Strange health check behavior

2014-07-20 Thread Pavlos Parissis
On 18/07/2014 08:33 μμ, Szelcsányi Gábor wrote:
 Hi,
 
 I've been reading the documentation and searching the mailing list, but
 one thing is not clear to me. I have nbproc 2, 2 frontends pinned to
 separate CPU cores and 1 backend each. The bind-process options of these
 backends are inherited from their parent frontend. Thus, are both
 processes supposed to do health checks for the backend servers, or just
 the designated process?
 
 example:
 
 nbproc 2
 cpu-map 1 0
 cpu-map 2 1
 ...
 
 frontend frn1
 bind 10.0.0.10:80 process 1 name frn1
 bind-process 1
 ...
 default_backend bck1
 
 frontend frn2
 bind 10.0.0.10:81 process 2 name frn2
 bind-process 2
 ...
 default_backend bck2
 
 backend bck1
 option httpchk HEAD /healthcheck HTTP/1.1\r\n
 ...
 server  srv1 10.0.0.1:80 maxconn 5000
 weight 50 check inter 5s fall 2 rise 1 slowstart 15s
 server  srv2 10.0.0.2:80 maxconn 5000
 weight 50 check inter 5s fall 2 rise 1 slowstart 15s
 
 backend bck2
 option httpchk HEAD /healthcheck HTTP/1.1\r\n
 ...
 server  srv3 10.0.0.3:80 maxconn 5000
 weight 50 check inter 5s fall 2 rise 1 slowstart 15s
 server  srv4 10.0.0.4:80 maxconn 5000
 weight 50 check inter 5s fall 2 rise 1 slowstart 15s
 
 So the question is should both haproxy processes send health check
 queries to srv1 and srv2 or only the first process is designated to do this?
 In my setup I see traffic from both processes. If I set 6 or more pinned
 frontends with different backends then the health checks can saturate
 the backend servers. I thought only the right process should check the
 status. The rest could never send traffic to the servers anyway. Am I
 wrong, or am I just missing something?
 
 I'm using 1.5.2 stable. (released 2014/07/12)
 HA-Proxy version 1.5.2 2014/07/12
 Copyright 2000-2014 Willy Tarreau w...@1wt.eu
 
 Build options :
   TARGET  = linux26
   CPU = generic
   CC  = gcc
   CFLAGS  = -O2 -g -fno-strict-aliasing
   OPTIONS = USE_LINUX_SPLICE=1 USE_LINUX_TPROXY=1 USE_LIBCRYPT=1
 USE_GETADDRINFO=1 USE_ZLIB=1 USE_EPOLL=1 USE_CPU_AFFINITY=1
 USE_OPENSSL=1 USE_STATIC_PCRE=1 USE_TFO=1
 
 
 Regards,
 Gabor


I can't reproduce the behavior you describe. Below is the test conf I
used, where I set a different User-Agent for the health check on each
backend in order to make it easier to see whether process 2 sends checks
to foo-server1.

nbproc 2
cpu-map 1 0
cpu-map 2 1

frontend  main
bind *:80
bind-process 1
default_backend foo

backend foo
default-server inter 10s
option httpchk GET / HTTP/1.1\r\nHost:\ foo.example.com\r\nUser-Agent:\ HAProxy
server foo-server1 21.229.28.251:80 check


frontend  main2
bind *:81
bind-process 2
default_backend foo2

backend foo2
default-server inter 10s
option httpchk GET / HTTP/1.1\r\nHost:\ foo.example.com\r\nUser-Agent:\ HAProxy2
server foo-server2 20.229.28.252:80 check


# haproxy -vv
HA-Proxy version 1.5.2 2014/07/12
Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  =
  OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1
USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.3
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.0-fips 29 Mar 2010
Running on OpenSSL version : OpenSSL 1.0.0-fips 29 Mar 2010
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 7.8 2008-09-05
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.


Cheers,
Pavlos







Re: ACL ordering/processing

2014-07-16 Thread Pavlos Parissis
On 16/07/2014 08:31 πμ, Baptiste wrote:
 On Tue, Jul 15, 2014 at 7:14 PM, Pavlos Parissis
 pavlos.paris...@gmail.com wrote:
 On 15/07/2014 05:49 μμ, Baptiste wrote:
 On Tue, Jul 15, 2014 at 12:40 AM, bjun...@gmail.com bjun...@gmail.com 
 wrote:
 Hi folks,


 I've a question regarding the ordering/processing of ACL's.



 Example (HAProxy 1.4.24):


 

 frontend http_in
 .
 .


 acl  is_example.com  hdr_beg(host) -i example.com

 acl  check_id  url_reg   code=(1001|1002|)

 acl  check_id  url_reg   code=(3000|4001|)

 use_backend  node01 if  is_example.com  check_id



 acl  is_example.de  hdr_beg(host) -i example.de

 acl  check_id  url_reg   code=(6573|7890)

 use_backend  node02 if  is_example.de  check_id


 



 I assumed that the check_id - ACL from the second block wouldn't be
 combined/OR'ed with the 2 check_id - ACL's from the first block
 (because of the other configuration statements in between).



 But they are combined/OR'ed, is this behavior intended ?



 Thanks,
 ---

 Bjoern


 Hi Bjoern,

 ACLs are processed only if they are called by a directive.
 When many ACLs are called by a directive, an implicit logical AND is 
 applied.
 an explicit logical OR can be declared as well
 when a AND is applied between many ACLs, HAProxy stops processing them
 as soon as one is wrong
 when a OR is applied between many ACLs, HAProxy stops processing them
 as soon as one is true

 some ACLs are cheaper to run than other, make your choice :)

 Side note, to avoid any mistake in your conf:
   acl  is_example.de  hdr_beg(host) -i example.de
 => this will match http://example.de/path/path/blah.php
  or  http://example.de.google.com/path/path/blah.php

 you might want to match this:
   acl  is_example.de  hdr_end(host) -i example.de



 Is the URI part of the Host header?

 Cheers,
 Pavlos



 
 Hi Pavlos,
 
 not at all, sorry for the confusion.

I wasn't confused, just checking that there isn't any specific 'thing'
in HAProxy which would add the URI to that particular header; I never
thought there would be such a thing.


 Your browser should split your URL in 2 parts:
 - Host header containing the hostname of the service
 - url path
 
 http://my.domain.tld/path will be sent as
 
 GET /path HTTP/1.1
 Host: my.domain.tld
 
 
 Baptiste
 






Re: ACL ordering/processing

2014-07-15 Thread Pavlos Parissis
On 15/07/2014 05:49 μμ, Baptiste wrote:
 On Tue, Jul 15, 2014 at 12:40 AM, bjun...@gmail.com bjun...@gmail.com wrote:
 Hi folks,


 I've a question regarding the ordering/processing of ACL's.



 Example (HAProxy 1.4.24):


 

 frontend http_in
 .
 .


 acl  is_example.com  hdr_beg(host) -i example.com

 acl  check_id  url_reg   code=(1001|1002|)

 acl  check_id  url_reg   code=(3000|4001|)

 use_backend  node01 if  is_example.com  check_id



 acl  is_example.de  hdr_beg(host) -i example.de

 acl  check_id  url_reg   code=(6573|7890)

 use_backend  node02 if  is_example.de  check_id


 



 I assumed that the check_id - ACL from the second block wouldn't be
 combined/OR'ed with the 2 check_id - ACL's from the first block
 (because of the other configuration statements in between).



 But they are combined/OR'ed, is this behavior intended ?



 Thanks,
 ---

 Bjoern

 
 Hi Bjoern,
 
 ACLs are processed only if they are called by a directive.
 When many ACLs are called by a directive, an implicit logical AND is applied.
 an explicit logical OR can be declared as well
 when a AND is applied between many ACLs, HAProxy stops processing them
 as soon as one is wrong
 when a OR is applied between many ACLs, HAProxy stops processing them
 as soon as one is true
 
 some ACLs are cheaper to run than other, make your choice :)
 
 Side note, to avoid any mistake in your conf:
   acl  is_example.de  hdr_beg(host) -i example.de
 => this will match http://example.de/path/path/blah.php
  or  http://example.de.google.com/path/path/blah.php
 
 you might want to match this:
   acl  is_example.de  hdr_end(host) -i example.de
 


Is the URI part of the Host header?

Cheers,
Pavlos







Re: Difference between Disable and soft stop

2014-07-07 Thread Pavlos Parissis
On 07/07/2014 11:49 πμ, David wrote:
 Hello,
 
 I have installed HAProxy 1.5 in my RDS farm. But when I check the
 disable option for one server, the server is still active in my farm
 and users can connect to it?
 

I assume you mean that it took a while for the server to stop receiving
traffic after it was disabled, am I right?

I have observed this only when I used TCP mode; in my case it took some
time (20 minutes) for a server to stop getting traffic. I switched (for
other reasons) to HTTP mode with keep-alive enabled, and this particular
behavior doesn't occur. Have you tried enabling 'option forceclose'? I
have no clue if it will do the trick.


 May i have to use soft stop instead ? What is the difference between these 
 two options ?
 
 Thank you by advance for your answer.
 
 David.
 
 
 
 






SSL backend question

2014-07-06 Thread Pavlos Parissis
Hi,

I read the news about native SSL support in the 1.5.1 release, so I said
I need to try it out :-)

But either I don't understand how an SSL backend should be configured or
there is a mismatch in expectations.

I want HTTPS traffic to HAProxy to be load-balanced to a backend without
stripping out the SSL part; basically, HAProxy will decrypt the incoming
request and encrypt it again on the way out to the backend.

My conf [1] is quite simple and HAProxy has support for SSL [2]. What I
observe (using tcpdump) is that health checks are in SSL mode (SSL
handshake followed by an HTTP request), but incoming requests over HTTPS
go to the backend without any SSL handshake, which results in the famous
HTTP error from nginx:
---
400 Bad Request
The plain HTTP request was sent to HTTPS port
---

I changed the mode to tcp on backend examplefe_s, but then I realized
that I wouldn't be able to have HTTP checks, am I right?

Any ideas if what I try to achieve is possible?

Cheers,
Pavlos



[1]
global
log 127.0.0.1 local2 debug

chroot  /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 10
user    haproxy
group   haproxy
daemon

# turn on stats unix socket
stats socket /var/lib/haproxy/stats uid 0 gid 0 mode 0440 level admin process 1

# 2 Processes
nbproc 2
# Process ID 1 goes to CPU 0
cpu-map 1 0
# Process ID 2 goes to CPU 1
cpu-map 2 1

# Don't verify servers certificates.
ssl-server-verify none

#-
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#-
defaults
mode    http
log global
option  contstats
option  tcplog
option  dontlognull
option  tcp-smart-accept
option  tcp-smart-connect
option  http-keep-alive
option  redispatch
balance roundrobin
timeout http-request15s
timeout http-keep-alive 15s
retries 2
timeout queue   1m
timeout connect 10s
timeout client  15s
timeout server  15s
timeout check   5s
# TODO change that to HAProxySourceIP
option forwardfor header F5SourceIP

#-
# built-in status webpage
#-
listen haproxy :8080
stats enable
stats uri /
stats show-node
stats refresh 10s
stats show-legends

#-
# frontends which proxy to the backends
#-
frontend  main
bind *:80
# CPU0
bind-process 1
default_backend examplefe
frontend  main_s
bind *:443 ssl crt /etc/ssl/wildcard.foo.com.pem
# CPU1
bind-process 2
default_backend examplefe_s

#-
# round robin balancing between the various backends
#-
backend examplefe
default-server inter 10s
option httpchk GET / HTTP/1.1\r\nHost:\ example.foo.com\r\nUser-Agent:\ HAProxy
server examplefe-203.foo.com examplefe-203.foo.com:80 check disabled
server examplefe-204.foo.com examplefe-204.foo.com:80 check disabled

backend examplefe_s
default-server inter 10s
option httpchk GET / HTTP/1.1\r\nHost:\ example.foo.com\r\nUser-Agent:\ HAProxy
server examplefe-203.foo.com examplefe-203.foo.com:443 check check-ssl
server examplefe-204.foo.com examplefe-204.foo.com:443 check check-ssl disabled


[2]
haproxy -vv
HA-Proxy version 1.5.1 2014/06/24
Copyright 2000-2014 Willy Tarreau w...@1wt.eu

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  =
  OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1
USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.3
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.0-fips 29 Mar 2010
Running on OpenSSL version : OpenSSL 1.0.0-fips 29 Mar 2010
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 7.8 2008-09-05
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.




Re: SSL backend question

2014-07-06 Thread Pavlos Parissis
On 06/07/2014 04:27 μμ, Jarno Huuskonen wrote:
 Hi,
 
 On Sun, Jul 06, Pavlos Parissis wrote:
 My conf[1] is quite simple and HAProxy has support for SSL [2]. What I
 observe(using tcpdump) is that health checks are in SSL mode(SSL
 handshake followed by a HTTP request) but incoming request over HTTPS
 goes to backend without any SSL handshake which results to famous HTTP
 status error from nginx

 Any ideas if what I try to achieve is possible?
 
 I think you're missing ssl keyword from your server configs:
 http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-ssl
 

oh bummer, I am blind:-)
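For the record, the fix is the `ssl` keyword on the server lines of the HTTPS backend. A sketch of the corrected section, based on the config quoted earlier in this thread:

```
backend examplefe_s
    default-server inter 10s
    option httpchk GET / HTTP/1.1\r\nHost:\ example.foo.com\r\nUser-Agent:\ HAProxy
    server examplefe-203.foo.com examplefe-203.foo.com:443 ssl check check-ssl
    server examplefe-204.foo.com examplefe-204.foo.com:443 ssl check check-ssl disabled
```

With `ssl` set on the server line, `check-ssl` becomes redundant since checks inherit the server's SSL setting, but it does no harm.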

 (Also check verify / ssl-server-verify:
 http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#5.2-verify
 http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.1-ssl-server-verify)
 

Yep, I know about those settings; I will enable them at a later stage,
as right now I want to get the basic functionality in place and later
tune the SSL part (less CPU-intensive ciphers, caching, session re-use,
etc.).

Thanks a lot Jarno,
Pavlos






Multi-processes and stats

2014-07-06 Thread Pavlos Parissis
Hoi again,

I am trying to squeeze the most out of my CPUs, but I ran into a problem
with stats sockets and multiple processes, see below:

Starting haproxy: [WARNING] 186/183809 (33970) : Proxy 'haproxy': in
multi-process mode, stats will be limited to process assigned to the
current request.
[WARNING] 186/183809 (33970) : Proxy 'haproxy2': in multi-process mode,
stats will be limited to process assigned to the current request.
[WARNING] 186/183809 (33970) : stats socket will not work as expected in
multi-process mode (nbproc > 1), you should force process binding
globally using 'stats bind-process' or per socket using the 'process'
attribute.

The CPU topology of the box is the typical one you find on an HP Gen8 blade [1].

My idea is to have 12 CPUs for the 1st front-end and the remaining 12
CPUs for the 2nd front-end. I did the cpu-mapping[2] in such a way that
the CPU utilization of each front-end goes to a different physical CPU;
I have 2 physical CPUs, each with 2 cores, plus hyper-threading.

It works and I can get up to 34K transactions/sec as reported by siege,
which I am quite happy with. But the statistics are not correct: the
stats page reports 1/12th of the sessions.

Any ideas what I am doing wrong (again):-)

Cheers,
Pavlos


[1]
CPU 0 CORE 0
CPU 1 CORE 0
CPU 2 CORE 0
CPU 3 CORE 0
CPU 4 CORE 0
CPU 5 CORE 0
CPU 6 CORE 1
CPU 7 CORE 1
CPU 8 CORE 1
CPU 9 CORE 1
CPU 10 CORE 1
CPU 11 CORE 1
CPU 12 CORE 0
CPU 13 CORE 0
CPU 14 CORE 0
CPU 15 CORE 0
CPU 16 CORE 0
CPU 17 CORE 0
CPU 18 CORE 1
CPU 19 CORE 1
CPU 20 CORE 1
CPU 21 CORE 1
CPU 22 CORE 1
CPU 23 CORE 1


[2]
global
log 127.0.0.1 local2 notice

chroot  /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 10
user    haproxy
group   haproxy
daemon

# turn on stats unix socket
stats socket /var/lib/haproxy/stats uid 0 gid 0 mode 0440 level admin process odd
stats socket /var/lib/haproxy/stats2 uid 0 gid 0 mode 0440 level admin process even

# 24 Processes
nbproc 24
# Make sure we are on the same physical CPU core
cpu-map odd 0-5 12-17
cpu-map even 6-11 18-23

# Don't verify servers certificates.
ssl-server-verify none
tune.ssl.default-dh-param 2048

#-
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#-
defaults
mode    http
log global
option  contstats
option  tcplog
option  dontlognull
option  tcp-smart-accept
option  tcp-smart-connect
option  http-keep-alive
option  redispatch
balance roundrobin
timeout http-request15s
timeout http-keep-alive 15s
retries 2
timeout queue   1m
timeout connect 10s
timeout client  15s
timeout server  15s
timeout check   5s
# TODO change that to HAProxySourceIP
option forwardfor header F5SourceIP
#-
# built-in status webpage
#-
listen haproxy :8080
bind-process odd
stats uri /
stats show-node
stats refresh 10s
stats show-legends
listen haproxy2 :8082
bind-process even
stats uri /
stats show-node
stats refresh 10s
stats show-legends

#-
# frontends which proxy to the backends
#-
frontend  main
bind *:80
bind-process odd
default_backend examplefe
frontend  main_s
bind *:443 ssl crt /etc/ssl/wildcard.foo.com.pem
bind-process even
default_backend examplefe_s

#-
# round robin balancing between the various backends
#-
backend examplefe
default-server inter 10s
option httpchk GET / HTTP/1.1\r\nHost:\ example.foo.com\r\nUser-Agent:\ HAProxy
server examplefe-203.foo.com examplefe-203.foo.com:80 check
server examplefe-204.foo.com examplefe-204.foo.com:80 check

backend examplefe_s
default-server inter 10s
option httpchk GET / HTTP/1.1\r\nHost:\ example.foo.com\r\nUser-Agent:\ HAProxy
server examplefe-203.foo.com examplefe-203.foo.com:443 ssl check check-ssl
server examplefe-204.foo.com examplefe-204.foo.com:443 ssl check check-ssl






Re: Multi-processes and stats

2014-07-06 Thread Pavlos Parissis
On 06/07/2014 10:35 μμ, Vincent Bernat wrote:
  ❦  6 July 2014 19:00 +0200, Pavlos Parissis pavlos.paris...@gmail.com :
 
 It works and I can get up to 34K transactions/sec as reported by siege,
 I am quite happy with that. But the statistics are not correct. The
 stats pages reports 1/12th of sessions.
 
 With your configuration, a request to the statistics socket will be bound
 to one of the processes, which will answer only with its own statistics.
 You need to declare a specific statistics frontend for each CPU (and bind
 it to that CPU). Then, you need to iterate over all the sockets.
 

Thanks Vincent for the clarification.

After some more coffee, a better reading of the manual, and some
googling, I figured out that I had a poor understanding of how processes
work together with the UNIX sockets.

There must be a one-to-one relationship between a UNIX stats socket and
the process which feeds stats/status information into it.

So the following line, which I used, can't work:
stats socket /var/lib/haproxy/stats uid 0 gid 0 mode 0440 level admin process odd


So, I came up with the following[1], which worked, but I still get the
warning. I did a quick stress test and got 70K trans/sec, with only 12
CPUs close to 80%. In this particular test I didn't pin either of the
frontends to a specific set of CPUs; quite impressive, I would say.
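Vincent's advice to iterate over all the sockets can be sketched like
this: an illustrative Python helper (not from the thread) that sends
`show stat` to each per-process socket and sums the per-proxy session
counters. The socket paths and the heavily reduced CSV columns in the
demo are assumptions; real `show stat` output has many more columns.

```python
import csv
import io
import socket

def show_stat(sock_path):
    """Send 'show stat' to one haproxy stats socket and return its CSV output.
    (Needs a live haproxy behind sock_path, e.g. /var/lib/haproxy/stats1.)"""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    s.sendall(b"show stat\n")
    chunks = []
    while True:
        data = s.recv(4096)
        if not data:
            break
        chunks.append(data)
    s.close()
    return b"".join(chunks).decode()

def aggregate_stot(csv_outputs):
    """Sum the 'stot' (cumulative sessions) counter per (proxy, server)
    pair across the 'show stat' output of several processes."""
    totals = {}
    for output in csv_outputs:
        # haproxy prefixes the CSV header line with '# '
        reader = csv.DictReader(io.StringIO(output.lstrip("# ")))
        for row in reader:
            key = (row["pxname"], row["svname"])
            totals[key] = totals.get(key, 0) + int(row["stot"])
    return totals

# Demo with hand-made, reduced CSV instead of live sockets:
proc1 = "# pxname,svname,stot\nmain,FRONTEND,100\n"
proc2 = "# pxname,svname,stot\nmain,FRONTEND,40\n"
print(aggregate_stot([proc1, proc2])[("main", "FRONTEND")])  # 140
```

In a real deployment one would call show_stat() once per
/var/lib/haproxy/statsN path and feed the list of outputs to
aggregate_stot() to get cluster-wide numbers.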


I am very impressed with the feature set of this release, great work guys.

Cheers,
Pavlos

[1]
stats socket /var/lib/haproxy/stats1 uid 0 gid 0 mode 0440 level admin process 1
stats socket /var/lib/haproxy/stats2 uid 0 gid 0 mode 0440 level admin process 2
stats socket /var/lib/haproxy/stats3 uid 0 gid 0 mode 0440 level admin process 3
stats socket /var/lib/haproxy/stats4 uid 0 gid 0 mode 0440 level admin process 4
stats socket /var/lib/haproxy/stats5 uid 0 gid 0 mode 0440 level admin process 5
stats socket /var/lib/haproxy/stats6 uid 0 gid 0 mode 0440 level admin process 6
stats socket /var/lib/haproxy/stats7 uid 0 gid 0 mode 0440 level admin process 7
stats socket /var/lib/haproxy/stats8 uid 0 gid 0 mode 0440 level admin process 8
stats socket /var/lib/haproxy/stats9 uid 0 gid 0 mode 0440 level admin process 9
stats socket /var/lib/haproxy/stats10 uid 0 gid 0 mode 0440 level admin process 10
stats socket /var/lib/haproxy/stats11 uid 0 gid 0 mode 0440 level admin process 11
stats socket /var/lib/haproxy/stats12 uid 0 gid 0 mode 0440 level admin process 12
stats socket /var/lib/haproxy/stats13 uid 0 gid 0 mode 0440 level admin process 13
stats socket /var/lib/haproxy/stats14 uid 0 gid 0 mode 0440 level admin process 14
stats socket /var/lib/haproxy/stats15 uid 0 gid 0 mode 0440 level admin process 15
stats socket /var/lib/haproxy/stats16 uid 0 gid 0 mode 0440 level admin process 16
stats socket /var/lib/haproxy/stats17 uid 0 gid 0 mode 0440 level admin process 17
stats socket /var/lib/haproxy/stats18 uid 0 gid 0 mode 0440 level admin process 18
stats socket /var/lib/haproxy/stats19 uid 0 gid 0 mode 0440 level admin process 19
stats socket /var/lib/haproxy/stats20 uid 0 gid 0 mode 0440 level admin process 20
stats socket /var/lib/haproxy/stats21 uid 0 gid 0 mode 0440 level admin process 21
stats socket /var/lib/haproxy/stats22 uid 0 gid 0 mode 0440 level admin process 22
stats socket /var/lib/haproxy/stats23 uid 0 gid 0 mode 0440 level admin process 23
stats socket /var/lib/haproxy/stats24 uid 0 gid 0 mode 0440 level admin process 24

nbproc 24
cpu-map odd 0-5 12-17
cpu-map even 6-11 18-23

listen haproxy1
bind :8081 process 1
bind :8082 process 2
bind :8083 process 3
bind :8084 process 4
bind :8085 process 5
bind :8086 process 6
bind :8087 process 7
bind :8088 process 8
bind :8089 process 9
bind :8090 process 10
bind :8091 process 11
bind :8092 process 12
bind :8093 process 13
bind :8094 process 14
bind :8095 process 15
bind :8096 process 16
bind :8097 process 17
bind :8098 process 18
bind :8099 process 19
bind :8100 process 20
bind :8101 process 21
bind :8102 process 22
bind :8103 process 23
bind :8104 process 24
stats uri /
stats show-node
stats refresh 10s
stats show-legends
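As a side note for anyone reproducing this: the 24 near-identical socket
and bind lines are tedious to hand-write. A small illustrative Python
sketch (not from the thread) that generates them, with paths and ports
mirroring config [1] above:

```python
def stats_socket_lines(nbproc, base="/var/lib/haproxy/stats"):
    """One 'stats socket ... process N' line per process, as in [1]."""
    return [
        f"stats socket {base}{n} uid 0 gid 0 mode 0440 level admin process {n}"
        for n in range(1, nbproc + 1)
    ]

def stats_bind_lines(nbproc, base_port=8080):
    """One 'bind :PORT process N' line per process for the stats listener."""
    return [f"bind :{base_port + n} process {n}" for n in range(1, nbproc + 1)]

print(stats_socket_lines(24)[0])
print(stats_bind_lines(24)[-1])  # bind :8104 process 24
```

Pasting the generated lines into the global and listen sections keeps the
config and nbproc in sync when the process count changes.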





