Re: nbproc 1 vs >1 performance

2016-04-13 Thread Lukas Tribus

Hi Willy,


On 14.04.2016 at 07:08, Willy Tarreau wrote:

Hi Lukas,

On Thu, Apr 14, 2016 at 12:14:15AM +0200, Lukas Tribus wrote:
  

For example, the following configuration load balances the traffic across
all 40 processes, expected or not?

frontend haproxy_test
  bind-process 1-40
  bind :12345 process 1


It's not expected. What is indicated above is that the frontend will
exist for the first 40 processes, and that port 12345 will be bound
only in process 1. Processes 2..40 will thus have no listener at all.


Well, we only bind once here, not 40 times, but the traffic is then
handled by all 40 processes.



Let me put it this way:

frontend haproxy_test
 bind-process 1-8
 bind :12345 process 1
 bind :12345 process 2
 bind :12345 process 3
 bind :12345 process 4


This leads to 8 processes, and the master process (PID 16509) binds the
socket 4 times:


lukas@ubuntuvm:~/haproxy-1.5$ ps auxww | grep haproxy
haproxy  16509  0.0  0.0  18460   320 ?        Ss   08:41   0:00 ./haproxy -f ../cert/ruppert-nbproc-stress.cfg -D
haproxy  16510  0.0  0.0  18460   320 ?        Ss   08:41   0:00 ./haproxy -f ../cert/ruppert-nbproc-stress.cfg -D
haproxy  16511  0.0  0.0  18460   320 ?        Ss   08:41   0:00 ./haproxy -f ../cert/ruppert-nbproc-stress.cfg -D
haproxy  16512  0.0  0.0  18460   320 ?        Ss   08:41   0:00 ./haproxy -f ../cert/ruppert-nbproc-stress.cfg -D
haproxy  16513  0.0  0.0  18460   320 ?        Ss   08:41   0:00 ./haproxy -f ../cert/ruppert-nbproc-stress.cfg -D
haproxy  16514  0.0  0.0  18460   320 ?        Ss   08:41   0:00 ./haproxy -f ../cert/ruppert-nbproc-stress.cfg -D
haproxy  16515  0.0  0.0  18460   320 ?        Ss   08:41   0:00 ./haproxy -f ../cert/ruppert-nbproc-stress.cfg -D
haproxy  16516  0.0  0.0  18460   320 ?        Ss   08:41   0:00 ./haproxy -f ../cert/ruppert-nbproc-stress.cfg -D

lukas@ubuntuvm:~/haproxy-1.5$ sudo netstat -tlp | grep hap
tcp        0      0 *:12345       *:*           LISTEN      16509/haproxy
tcp        0      0 *:12345       *:*           LISTEN      16509/haproxy
tcp        0      0 *:12345       *:*           LISTEN      16509/haproxy
tcp        0      0 *:12345       *:*           LISTEN      16509/haproxy
lukas@ubuntuvm:~/haproxy-1.5$



SO_REUSEPORT is almost irrelevant here. In fact it *happens* to perform
some load balancing, but the first use was simply to permit new processes
to rebind before unbinding older ones during reloads.


I know, haproxy has unconditionally enabled SO_REUSEPORT since 2006 because
of OpenBSD reloads/restarts (OpenBSD does not load-balance). But I'm not
aware that we ever changed anything after Linux introduced SO_REUSEPORT with
load-balancing support in 3.9, so the fact that we bind everything to
the master process is, in my opinion, simply a leftover from the time when
SO_REUSEPORT was just a fix for reload/restart problems.





But I'm rather surprised; maybe we recently broke something, because
we have a lot of people successfully running such configurations.


I can reproduce this in v1.5.0 too; I think it has always been like that.




cheers,

lukas




Re: nbproc 1 vs >1 performance

2016-04-13 Thread Willy Tarreau
Hi Lukas,

On Thu, Apr 14, 2016 at 12:14:15AM +0200, Lukas Tribus wrote:
 
> For example, the following configuration load balances the traffic across
> all 40 processes, expected or not?
> 
> frontend haproxy_test
>  bind-process 1-40
>  bind :12345 process 1


It's not expected. What is indicated above is that the frontend will
exist for the first 40 processes, and that port 12345 will be bound
only in process 1. Processes 2..40 will thus have no listener at all.

> The docs [1] are not very clear and kind of contradict themselves:
> > does not enforce any process but eliminates those which do not match
> > [...]
> > If the frontend uses a "bind-process" setting, the intersection between the
> > two is applied
> 
> 
> We mention in multiple places (docs, commits, ml) that we use the kernel's
> SO_REUSEPORT to loadbalance incoming traffic if supported, however due to
> the behavior mentioned above, when we do something like this:
> 
> frontend haproxy_test
>  bind :12345 process 1
>  bind :12345 process 2
>  bind :12345 process 3
>  bind :12345 process 4
> 
> 
> all those sockets are bound to the master process, not its forks:
> 
> lukas@ubuntuvm:~/haproxy$ ps auxww | grep haproxy
> haproxy  14457  0.0  0.1  16928   592 ?        Ss   22:41   0:00 ./haproxy -D -f ../cert/ruppert-nbproc-stress.cfg
> haproxy  14458  0.0  0.1  16928   592 ?        Ss   22:41   0:00 ./haproxy -D -f ../cert/ruppert-nbproc-stress.cfg
> haproxy  14459  0.0  0.1  16928   592 ?        Ss   22:41   0:00 ./haproxy -D -f ../cert/ruppert-nbproc-stress.cfg
> haproxy  14460  0.0  0.1  16928   592 ?        Ss   22:41   0:00 ./haproxy -D -f ../cert/ruppert-nbproc-stress.cfg
> lukas@ubuntuvm:~/haproxy$ sudo netstat -tlp | grep 12345
> tcp        0      0 *:12345       *:*           LISTEN      14457/haproxy
> tcp        0      0 *:12345       *:*           LISTEN      14457/haproxy
> tcp        0      0 *:12345       *:*           LISTEN      14457/haproxy
> tcp        0      0 *:12345       *:*           LISTEN      14457/haproxy
> lukas@ubuntuvm:~/haproxy$

Then there is a bug here.

> So I'm not sure how SO_REUSEPORT is supposed to load-balance between
> processes, if all sockets point to the master process (it will just pass the
> request around, like without SO_REUSEPORT). The docs [1] really suggest this
> configuration for SO_REUSEPORT based kernel side load-balancing.

SO_REUSEPORT is almost irrelevant here. In fact it *happens* to perform
some load balancing, but the first use was simply to permit new processes
to rebind before unbinding older ones during reloads.

The way SO_REUSEPORT works in the kernel is that it hashes the IPs and ports
and picks a socket queue based on the result of the hash. So any process
which is bound may be picked. That makes me realize that, by the way, this
behaviour will definitely kill the principle of a master socket server,
since it will not be possible to keep a socket alive without taking one's
share of the traffic.
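
To make that concrete, here is a minimal sketch (not haproxy's actual code)
of what every worker process would have to do itself for the kernel to spread
connections across processes; a single socket bound once and merely inherited
by the children gives the kernel only one queue to pick from:

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

/* Called by each worker; both options must be set before bind(). */
int bind_reuseport_listener(int port)
{
    int one = 1;
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0)
        return -1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
#ifdef SO_REUSEPORT
    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
#endif
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(fd, 1024) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}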

> It should look like this, imho:
> 
> lukas@ubuntuvm:~/haproxy$ sudo netstat -tlp | grep 12345
> tcp0  0 *:12345 *:* LISTEN  14713/haproxy
> tcp0  0 *:12345 *:* LISTEN  14712/haproxy
> tcp0  0 *:12345 *:* LISTEN  14711/haproxy
> tcp0  0 *:12345 *:* LISTEN  14710/haproxy
> lukas@ubuntuvm:~/haproxy$

Yes I agree.

> But that will only work across frontends:
> 
> frontend haproxy_test1
>  bind-process 1
>  bind :12345
>  default_backend backend_test
> frontend haproxy_test2
>  bind-process 2
>  bind :12345
>  default_backend backend_test
> frontend haproxy_test3
>  bind-process 3
>  bind :12345
>  default_backend backend_test
> frontend haproxy_test4
>  bind-process 4
>  bind :12345
>  default_backend backend_test

That's not normal. The principle of bind-process is that it limits the
processes on which a frontend stays enabled (in practice, it stops the
frontend after the fork on all processes where it's not bound). Then
the "process" directive on the "bind" line further refines this by
stopping the listeners on processes where they're not bound. Hence
the intersection mentioned in the doc.
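
To illustrate the intended intersection (a sketch of the documented
behaviour, not of the buggy result observed above):

frontend example
    bind-process 1-4           # frontend kept on processes 1 to 4 only
    bind :12345 process 1-2    # listener kept where both match: processes 1 and 2
    bind :12346                # no "process" here: listener kept on all of 1 to 4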

But I'm rather surprised; maybe we recently broke something, because
we have a lot of people successfully running such configurations.

Willy




Re: nbproc 1 vs >1 performance

2016-04-13 Thread Lukas Tribus

Hi Christian, Willy,


On 13.04.2016 at 12:58, Christian Ruppert wrote:


With the first config I get around ~30-33k requests/s on my test
system; with the second config (only the bind-process in the frontend
section has been changed!) I just get around 26-28k requests per second.


I could get similar differences when playing with nbproc 1 and >1 as
well as the default "bind-process" and/or the "process 1" on the
actual bind.
Is it really just the multi-process overhead causing the performance
drop here, even though the bind uses the first / only process anyway?




With the "bind-process 1-40" in the frontend 
(haproxy-lessperformant.cfg), you are forcing haproxy to use all 40 
processes for this frontend traffic.


Therefore you get all the multi-process penalties (process 1 will
load-balance the requests to all 39 child processes). That's why performance
sucks: the single-process configuration benefits from your testcase
because the cost of the error response is negligible.


With the "bind-process 1" in the frontend (inherited from the default 
section or the actual haproxy default: bind to the union of all the 
listeners' processes -  haproxy-moreperformant.cfg) just a single 
process is used for that frontend (and if no other processes are needed, 
no other processes will in fact be forked). Because your testcase favors 
single process mode, it outperforms multiple processes easily (no inter 
process overhead).



Try adding a dummy frontend to your moreperformant config:
frontend dummy_allproc
 bind-process 2-40
 bind :


You will see that the haproxy_test frontend still performs fine, although 40
processes are running, because only one process handles your ab test
traffic (again, no inter-process overhead).



So that is what's happening here; the question is whether this is really
expected behavior.


For example, the following configuration load balances the traffic 
across all 40 processes, expected or not?


frontend haproxy_test
 bind-process 1-40
 bind :12345 process 1

The docs [1] are not very clear and kind of contradict themselves:

does not enforce any process but eliminates those which do not match
[...]
If the frontend uses a "bind-process" setting, the intersection between the
two is applied



We mention in multiple places (docs, commits, ml) that we use the
kernel's SO_REUSEPORT to load-balance incoming traffic if supported;
however, due to the behavior mentioned above, when we do something like this:


frontend haproxy_test
 bind :12345 process 1
 bind :12345 process 2
 bind :12345 process 3
 bind :12345 process 4


all those sockets are bound to the master process, not its forks:

lukas@ubuntuvm:~/haproxy$ ps auxww | grep haproxy
haproxy  14457  0.0  0.1  16928   592 ?        Ss   22:41   0:00 ./haproxy -D -f ../cert/ruppert-nbproc-stress.cfg
haproxy  14458  0.0  0.1  16928   592 ?        Ss   22:41   0:00 ./haproxy -D -f ../cert/ruppert-nbproc-stress.cfg
haproxy  14459  0.0  0.1  16928   592 ?        Ss   22:41   0:00 ./haproxy -D -f ../cert/ruppert-nbproc-stress.cfg
haproxy  14460  0.0  0.1  16928   592 ?        Ss   22:41   0:00 ./haproxy -D -f ../cert/ruppert-nbproc-stress.cfg

lukas@ubuntuvm:~/haproxy$ sudo netstat -tlp | grep 12345
tcp        0      0 *:12345       *:*           LISTEN      14457/haproxy
tcp        0      0 *:12345       *:*           LISTEN      14457/haproxy
tcp        0      0 *:12345       *:*           LISTEN      14457/haproxy
tcp        0      0 *:12345       *:*           LISTEN      14457/haproxy
lukas@ubuntuvm:~/haproxy$


So I'm not sure how SO_REUSEPORT is supposed to load-balance between
processes if all sockets point to the master process (it will just pass
the request around, like without SO_REUSEPORT). The docs [1] really
suggest this configuration for SO_REUSEPORT-based kernel-side
load-balancing.



It should look like this, imho:

lukas@ubuntuvm:~/haproxy$ sudo netstat -tlp | grep 12345
tcp        0      0 *:12345       *:*           LISTEN      14713/haproxy
tcp        0      0 *:12345       *:*           LISTEN      14712/haproxy
tcp        0      0 *:12345       *:*           LISTEN      14711/haproxy
tcp        0      0 *:12345       *:*           LISTEN      14710/haproxy
lukas@ubuntuvm:~/haproxy$


But that will only work across frontends:

frontend haproxy_test1
 bind-process 1
 bind :12345
 default_backend backend_test
frontend haproxy_test2
 bind-process 2
 bind :12345
 default_backend backend_test
frontend haproxy_test3
 bind-process 3
 bind :12345
 default_backend backend_test
frontend haproxy_test4
 bind-process 4
 bind :12345
 default_backend backend_test



I think the process keyword on the bind line is either buggy or simply
not designed to be used with SO_REUSEPORT load-balancing, although the
documentation suggests it.



Any insights here, Willy?



[1] http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#5.1-process



[PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

2016-04-13 Thread David Martin
This is my first attempt at a patch; I'd love to get some feedback on it.

Adds support for SSL_CTX_set_ecdh_auto which is available in OpenSSL 1.0.2.
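
For example, with the patch applied a bind line could (hypothetically) select
a curve list instead of a single named curve; the certificate path and curve
names below are only placeholders:

bind :443 ssl crt /etc/haproxy/certs/site.pem ecdhe secp384r1:prime256v1
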
From 05bee3e95e5969294998fb9e2794ef65ce5a6c1f Mon Sep 17 00:00:00 2001
From: David Martin 
Date: Wed, 13 Apr 2016 15:09:35 -0500
Subject: [PATCH] use SSL_CTX_set_ecdh_auto() for ecdh curve selection

Use SSL_CTX_set_ecdh_auto if the OpenSSL version supports it, this
allows the server to negotiate ECDH curves much like it does ciphers.
Preferred curves can be specified using the existing ecdhe bind options
(ecdhe secp384r1:prime256v1)
---
 src/ssl_sock.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/src/ssl_sock.c b/src/ssl_sock.c
index 0d35c29..a1af8cd 100644
--- a/src/ssl_sock.c
+++ b/src/ssl_sock.c
@@ -2756,7 +2756,13 @@ int ssl_sock_prepare_ctx(struct bind_conf *bind_conf, SSL_CTX *ctx, struct proxy
 	SSL_CTX_set_tlsext_servername_callback(ctx, ssl_sock_switchctx_cbk);
 	SSL_CTX_set_tlsext_servername_arg(ctx, bind_conf);
 #endif
-#if defined(SSL_CTX_set_tmp_ecdh) && !defined(OPENSSL_NO_ECDH)
+#if !defined(OPENSSL_NO_ECDH)
+#if defined(SSL_CTX_set_ecdh_auto)
+	{
+		SSL_CTX_set1_curves_list(ctx, bind_conf->ecdhe);
+		SSL_CTX_set_ecdh_auto(ctx, 1);
+	}
+#elif defined(SSL_CTX_set_tmp_ecdh)
 	{
 		int i;
 		EC_KEY  *ecdh;
@@ -2774,6 +2780,7 @@ int ssl_sock_prepare_ctx(struct bind_conf *bind_conf, SSL_CTX *ctx, struct proxy
 		}
 	}
 #endif
+#endif
 
 	return cfgerr;
 }
-- 
1.9.1



Re: Config order -- when will it matter?

2016-04-13 Thread Shawn Heisey
On 4/13/2016 10:46 AM, Shawn Heisey wrote:
> I'm working on some changes to a frontend, one of which is moving the
> port 80 bind into the same frontend as port 443.
> 
> Which of the many directives that I'm using will be evaluated in order,
> and which of them will take effect first no matter where they are?
> 
> Specific questions:
> 
> Will the "blockit" ACL in the config below kill a matching connection on
> port 80 before the redirect to HTTPS happens, or is "redirect scheme"
> handled out of order with the rest of what I've got configured?
> 
> Are the "use_backend X if" statements evaluated in order?  What I'm
> trying to do would require this.

Self-followup after trying the config out:

The use_backend order does what I hoped it would do.  That was what I
was most worried about.

It looks like the connection is redirected to https before the
"http-request deny if blockit" line is evaluated, even though the config
order lists the deny before the redirect.  I had hoped it would follow
the config order.  I definitely want to combine the port 80 and port 443
front ends to avoid config duplication, especially the ACLs -- but block
evil http:// requests before getting encryption involved.  I'm content
with the situation I've got, but if it's easy to change...
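
For the record, one way to make the ordering explicit (untested on my side,
and assuming the haproxy version in use supports "http-request redirect")
would be to express the redirect as an http-request rule as well, since
http-request rules are evaluated in the order they are written:

    http-request deny if blockit
    http-request redirect scheme https if !{ ssl_fc }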

Thanks,
Shawn





Config order -- when will it matter?

2016-04-13 Thread Shawn Heisey
I'm working on some changes to a frontend, one of which is moving the
port 80 bind into the same frontend as port 443.

Which of the many directives that I'm using will be evaluated in order,
and which of them will take effect first no matter where they are?

Specific questions:

Will the "blockit" ACL in the config below kill a matching connection on
port 80 before the redirect to HTTPS happens, or is "redirect scheme"
handled out of order with the rest of what I've got configured?

Are the "use_backend X if" statements evaluated in order?  What I'm
trying to do would require this.

Any insight is appreciated.

Thanks,
Shawn


---

frontend fe-spark
description Front end that accepts production spark requests.
bind 70.102.230.78:80
bind 70.102.230.78:443 ssl crt
/etc/ssl/certs/local/spark.REDACTED.com.pem crt
/etc/ssl/certs/local/wildcard.REDACTED.com.pem crt
/etc/ssl/certs/local/spark.OTHERDOMAIN.com.pem crt
/etc/ssl/certs/local/wildcard.stg_dev0-9.REDACTED.com.pem crt
/etc/ssl/certs/local/ssl-spark.dev.REDACTED.com.pem crt
/etc/ssl/certs/local/spark.white.REDACTED.com.pem no-sslv3 alpn http/1.1
npn http/1.1
acl host_stg hdr_beg(host) -i spark.stg.REDACTED.com
acl host_dev hdr_beg(host) -i spark.dev.REDACTED.com
acl host_dev0 hdr_beg(host) -i spark.dev0.REDACTED.com
acl host_white hdr_beg(host) -i spark.white.REDACTED.com
    acl mwsi_path   path_beg /services
acl bot hdr_cnt(User-Agent) 0
acl bot hdr_sub(User-Agent) -i baiduspider ia_archiver
jeeves googlebot mediapartners-google msnbot slurp zyborg yandexnews
fairshare.cc yandex bingbot crawler everyonesocialbot feed\ crawler
google-http-java-client java/1.6.0_38 owlin\ bot sc\ news wikioimagesbot
xenu\ link\ sleuth yahoocachesystem
acl facebook  hdr_sub(User-Agent) -i facebookexternalhit
acl socialbot hdr_sub(User-Agent) -i twitterbot
acl socialbot hdr_sub(User-Agent) -i feedfetcher-google
acl blockit hdr_sub(User-Agent) -i torrent
    acl blockit path_beg -i /announc
    acl blockit path_beg -i /v2.0
    acl blockit path_beg -i /v2.1
    acl blockit path_beg -i /v2.2
    acl blockit path_beg -i /fr
    acl blockit path_beg -i /tr
    acl blockit path_beg -i /connect
    acl blockit path_beg -i /feeds
    acl blockit path_beg -i /desktop
    acl blockit path_beg -i /ios
    acl blockit path_beg -i /ipad
    acl blockit path_beg -i /magento
    acl blockit path_beg -i /method
    acl blockit path_beg -i /news
    acl blockit path_beg -i /cipgl
    acl blockit path_beg -i /stats
    acl blockit path_beg -i /mobile
    acl blockit path_beg -i /network_ads
    acl blockit path_reg ^/\d+
http-request deny if blockit
reqadd X-Forwarded-Proto:\ https if { ssl_fc }
redirect scheme https if !{ ssl_fc }
redirect prefix https://spark.REDACTED.com code 301 if {
hdr(host) -i OTHERDOMAIN.com }
redirect prefix https://spark.REDACTED.com code 301 if {
hdr(host) -i www.OTHERDOMAIN.com }
use_backend be-mwsi-stg-8444 if mwsi_path { ssl_fc_sni -i
spark.stg.REDACTED.com }
use_backend be-mwsi-stg-8444 if mwsi_path host_stg
use_backend be-mwsi-8444 if mwsi_path
use_backend be-stg-spark-443 if { ssl_fc_sni -i
spark.stg.REDACTED.com }
use_backend be-spark-dev-2443 if { ssl_fc_sni -i
spark.dev.REDACTED.com }
use_backend be-spark-dev0-443 if { ssl_fc_sni -i
spark.dev0.REDACTED.com }
use_backend be-spark-white-443 if { ssl_fc_sni -i
spark.white.REDACTED.com }
use_backend be-stg-spark-443 if host_stg
use_backend be-spark-dev-2443 if host_dev
use_backend be-spark-dev0-443 if host_dev0
use_backend be-spark-white-443 if host_white
default_backend be-spark-443
rspadd Strict-Transport-Security:\ max-age=31536000;\
includeSubDomains if { ssl_fc }



Re: Haproxy & Kubernetes, dynamic backend configuration

2016-04-13 Thread Smain Kahlouch
Ok, thank you,
I'll have a look at SmartStack.

2016-04-13 16:03 GMT+02:00 B. Heath Robinson :

> SmartStack was mentioned earlier in the thread.  It does a VERY good job
> of doing this.  It rewrites the haproxy configuration and performs a reload
> based on changes in a source database by a polling service on each
> instance.  The canonical DB is zookeeper.
>
> We have been using this in production for over 2 years and have had very
> little trouble with it.  In the next year we will be moving to docker
> containers and plan to make some changes to our configuration and move
> forward with SmartStack.
>
> That said, your polling application could write instructions to the stats
> socket; however, it currently does not allow adding/removing servers but
> only enabling/disabling.  See
> https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2 for
> more info.  BTW, this section is missing from the 1.6 manual.  You might
> also see https://github.com/flores/haproxyctl.
>
> On Wed, Apr 13, 2016 at 7:42 AM Smain Kahlouch  wrote:
>
>> Sorry to answer to this thread so late :p
>>
>>
>> due to the fact that this will be changed when the pod is recreated!
>>>
>>
>> Alexis, as i mentionned earlier the idea is to detect these changes by
>> polling in a regular basis the API and change the backend configuration
>> automatically.
>> Using the DNS (addon) is not what i would like to achieve because it
>> still uses kubernetes internal loadbalancing system.
>> Furthermore it seems to me easier to use the NodePort,
>> This is
>> what i use today.
>>
>> Nginx Plus has now such feature :
>>
>>> With APIs – This method uses the NGINX Plus on-the-fly reconfiguration
>>> API
>>> 
>>> to add and remove entries for Kubernetes pods in the NGINX Plus
>>> configuration, and the Kubernetes API to retrieve the IP addresses of the
>>> pods. This method requires us to write some code, and we won’t discuss it
>>> in depth here. For details, watch Kelsey Hightower’s webinar, Bringing
>>> Kubernetes to the Edge with NGINX Plus
>>> ,
>>> in which he explores the APIs and creates an application that utilizes them.
>>>
>>
>> Please let me know if you are considering this feature in the future.
>> Alternatively perhaps you can guide me to propose a plugin. Actually
>> python is the language i used to play with but maybe that's not possible.
>>
>> Regards,
>> Smana
>>
>> 2016-02-25 18:29 GMT+01:00 Aleksandar Lazic :
>>
>>> Hi.
>>>
>>> Am 25-02-2016 16:15, schrieb Smain Kahlouch:
>>>
 Hi !

 Sorry to bother you again with this question, but still i think it would
 be a great feature to loadbalance directly to pods from haproxy :)
 Is there any news on the roadmap about that ?

>>>
>>> How about DNS as mentioned below?
>>>
>>>
>>> https://github.com/kubernetes/kubernetes/blob/v1.0.6/cluster/addons/dns/README.md
>>> http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#5.3
>>>
>>> ### oc rsh -c ng-socklog nginx-test-2-6em5w
>>> cat /etc/resolv.conf
>>> nameserver 172.30.0.1
>>> nameserver 172.31.31.227
>>> search nginx-test.svc.cluster.local svc.cluster.local cluster.local
>>> options ndots:5
>>>
>>> ping docker-registry.default.svc.cluster.local
>>> 
>>>
>>> 
>>> oc describe svc docker-registry -n default
>>> Name:   docker-registry
>>> Namespace:  default
>>> Labels: docker-registry=default
>>> Selector:   docker-registry=default
>>> Type:   ClusterIP
>>> IP: 172.30.38.182
>>> Port:   5000-tcp5000/TCP
>>> Endpoints:  10.1.5.52:5000
>>> Session Affinity:   None
>>> No events.
>>> 
>>>
>>> Another option is that you startup script adds the A record into skydns
>>>
>>> https://github.com/skynetservices/skydns
>>>
>>> But I don't see benefit to conncect directly to the endpoint, due to the
>>> fact that this will be changed when the pod is recreated!
>>>
>>> BR Aleks
>>>
>>> Regards,
 Smana

 2015-09-22 20:21 GMT+02:00 Joseph Lynch :

 Disclaimer: I help maintain SmartStack and this is a shameless plug
>
> You can also achieve a fast and reliable dynamic backend system by
> using something off the shelf like airbnb/Yelp SmartStack
> (http://nerds.airbnb.com/smartstack-service-discovery-cloud/).
>
> Basically there is nerve that runs on every machine healthchecking
> services, and once they pass healthchecks they get registered in a
> centralized registration system which is pluggable (zookeeper is the
> default but DNS is another option, and we're working on DNS SRV
> support). Then there is synapse which runs on every client machine and
> handles re-configuring HAProxy for you aut

Re: Haproxy & Kubernetes, dynamic backend configuration

2016-04-13 Thread B. Heath Robinson
SmartStack was mentioned earlier in the thread.  It does a VERY good job of
doing this.  It rewrites the haproxy configuration and performs a reload
based on changes in a source database by a polling service on each
instance.  The canonical DB is zookeeper.

We have been using this in production for over 2 years and have had very
little trouble with it.  In the next year we will be moving to docker
containers and plan to make some changes to our configuration and move
forward with SmartStack.

That said, your polling application could write instructions to the stats
socket; however, it currently does not allow adding/removing servers but
only enabling/disabling.  See
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2 for more
info.  BTW, this section is missing from the 1.6 manual.  You might also
see https://github.com/flores/haproxyctl.
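
For example, a rough sketch of what such a polling agent could send (this
assumes a "stats socket /var/run/haproxy.sock level admin" line in the global
section; the backend and server names are made up):

echo "disable server backend_test/pod1" | socat stdio unix-connect:/var/run/haproxy.sock
echo "enable server backend_test/pod1" | socat stdio unix-connect:/var/run/haproxy.sock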

On Wed, Apr 13, 2016 at 7:42 AM Smain Kahlouch  wrote:

> Sorry to answer to this thread so late :p
>
>
> due to the fact that this will be changed when the pod is recreated!
>>
>
> Alexis, as i mentionned earlier the idea is to detect these changes by
> polling in a regular basis the API and change the backend configuration
> automatically.
> Using the DNS (addon) is not what i would like to achieve because it still
> uses kubernetes internal loadbalancing system.
> Furthermore it seems to me easier to use the NodePort,
> This is
> what i use today.
>
> Nginx Plus has now such feature :
>
>> With APIs – This method uses the NGINX Plus on-the-fly reconfiguration
>> API
>> 
>> to add and remove entries for Kubernetes pods in the NGINX Plus
>> configuration, and the Kubernetes API to retrieve the IP addresses of the
>> pods. This method requires us to write some code, and we won’t discuss it
>> in depth here. For details, watch Kelsey Hightower’s webinar, Bringing
>> Kubernetes to the Edge with NGINX Plus
>> ,
>> in which he explores the APIs and creates an application that utilizes them.
>>
>
> Please let me know if you are considering this feature in the future.
> Alternatively perhaps you can guide me to propose a plugin. Actually
> python is the language i used to play with but maybe that's not possible.
>
> Regards,
> Smana
>
> 2016-02-25 18:29 GMT+01:00 Aleksandar Lazic :
>
>> Hi.
>>
>> Am 25-02-2016 16:15, schrieb Smain Kahlouch:
>>
>>> Hi !
>>>
>>> Sorry to bother you again with this question, but still i think it would
>>> be a great feature to loadbalance directly to pods from haproxy :)
>>> Is there any news on the roadmap about that ?
>>>
>>
>> How about DNS as mentioned below?
>>
>>
>> https://github.com/kubernetes/kubernetes/blob/v1.0.6/cluster/addons/dns/README.md
>> http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#5.3
>>
>> ### oc rsh -c ng-socklog nginx-test-2-6em5w
>> cat /etc/resolv.conf
>> nameserver 172.30.0.1
>> nameserver 172.31.31.227
>> search nginx-test.svc.cluster.local svc.cluster.local cluster.local
>> options ndots:5
>>
>> ping docker-registry.default.svc.cluster.local
>> 
>>
>> 
>> oc describe svc docker-registry -n default
>> Name:   docker-registry
>> Namespace:  default
>> Labels: docker-registry=default
>> Selector:   docker-registry=default
>> Type:   ClusterIP
>> IP: 172.30.38.182
>> Port:   5000-tcp5000/TCP
>> Endpoints:  10.1.5.52:5000
>> Session Affinity:   None
>> No events.
>> 
>>
>> Another option is that you startup script adds the A record into skydns
>>
>> https://github.com/skynetservices/skydns
>>
>> But I don't see benefit to conncect directly to the endpoint, due to the
>> fact that this will be changed when the pod is recreated!
>>
>> BR Aleks
>>
>> Regards,
>>> Smana
>>>
>>> 2015-09-22 20:21 GMT+02:00 Joseph Lynch :
>>>
>>> Disclaimer: I help maintain SmartStack and this is a shameless plug

 You can also achieve a fast and reliable dynamic backend system by
 using something off the shelf like airbnb/Yelp SmartStack
 (http://nerds.airbnb.com/smartstack-service-discovery-cloud/).

 Basically there is nerve that runs on every machine healthchecking
 services, and once they pass healthchecks they get registered in a
 centralized registration system which is pluggable (zookeeper is the
 default but DNS is another option, and we're working on DNS SRV
 support). Then there is synapse which runs on every client machine and
 handles re-configuring HAProxy for you automatically, handling details
 like doing socket updates vs reloading HAProxy correctly. To make this
 truly reliable on some systems you have to do some tricks to
 gracefully reload HAProxy for picking up new backends; sea

Re: Haproxy & Kubernetes, dynamic backend configuration

2016-04-13 Thread Smain Kahlouch
Sorry to reply to this thread so late :p

due to the fact that this will be changed when the pod is recreated!
>

Alexis, as I mentioned earlier, the idea is to detect these changes by
polling the API on a regular basis and changing the backend configuration
automatically.
Using the DNS (addon) is not what I would like to achieve, because it still
uses the kubernetes-internal loadbalancing system.
Furthermore, it seems easier to me to use the NodePort; this is what
I use today.
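
As a rough sketch of what I mean (hypothetical names, a single polling
iteration; it assumes a config template that ends with the backend section
and the usual -sf soft reload):

kubectl get endpoints my-service \
    -o jsonpath='{.subsets[*].addresses[*].ip}' | tr ' ' '\n' \
    | awk 'NF { printf "    server pod%d %s:8080 check\n", NR, $0 }' > /tmp/servers.cfg
cat /etc/haproxy/haproxy.tpl /tmp/servers.cfg > /etc/haproxy/haproxy.cfg
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)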

Nginx Plus has now such feature :

> With APIs – This method uses the NGINX Plus on-the-fly reconfiguration API
> 
> to add and remove entries for Kubernetes pods in the NGINX Plus
> configuration, and the Kubernetes API to retrieve the IP addresses of the
> pods. This method requires us to write some code, and we won’t discuss it
> in depth here. For details, watch Kelsey Hightower’s webinar, Bringing
> Kubernetes to the Edge with NGINX Plus
> ,
> in which he explores the APIs and creates an application that utilizes them.
>

Please let me know if you are considering this feature in the future.
Alternatively, perhaps you can guide me on proposing a plugin. Actually, Python
is the language I play with, but maybe that's not possible.

Regards,
Smana

2016-02-25 18:29 GMT+01:00 Aleksandar Lazic :

> Hi.
>
> Am 25-02-2016 16:15, schrieb Smain Kahlouch:
>
>> Hi !
>>
>> Sorry to bother you again with this question, but still i think it would
>> be a great feature to loadbalance directly to pods from haproxy :)
>> Is there any news on the roadmap about that ?
>>
>
> How about DNS as mentioned below?
>
>
> https://github.com/kubernetes/kubernetes/blob/v1.0.6/cluster/addons/dns/README.md
> http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#5.3
>
> ### oc rsh -c ng-socklog nginx-test-2-6em5w
> cat /etc/resolv.conf
> nameserver 172.30.0.1
> nameserver 172.31.31.227
> search nginx-test.svc.cluster.local svc.cluster.local cluster.local
> options ndots:5
>
> ping docker-registry.default.svc.cluster.local
> 
>
> 
> oc describe svc docker-registry -n default
> Name:   docker-registry
> Namespace:  default
> Labels: docker-registry=default
> Selector:   docker-registry=default
> Type:   ClusterIP
> IP: 172.30.38.182
> Port:   5000-tcp5000/TCP
> Endpoints:  10.1.5.52:5000
> Session Affinity:   None
> No events.
> 
>
> Another option is that you startup script adds the A record into skydns
>
> https://github.com/skynetservices/skydns
>
> But I don't see benefit to conncect directly to the endpoint, due to the
> fact that this will be changed when the pod is recreated!
>
> BR Aleks
>
> Regards,
>> Smana
>>
>> 2015-09-22 20:21 GMT+02:00 Joseph Lynch :
>>
>> Disclaimer: I help maintain SmartStack and this is a shameless plug
>>>
>>> You can also achieve a fast and reliable dynamic backend system by
>>> using something off the shelf like airbnb/Yelp SmartStack
>>> (http://nerds.airbnb.com/smartstack-service-discovery-cloud/).
>>>
>>> Basically there is nerve that runs on every machine healthchecking
>>> services, and once they pass healthchecks they get registered in a
>>> centralized registration system which is pluggable (zookeeper is the
>>> default but DNS is another option, and we're working on DNS SRV
>>> support). Then there is synapse which runs on every client machine and
>>> handles re-configuring HAProxy for you automatically, handling details
>>> like doing socket updates vs reloading HAProxy correctly. To make this
>>> truly reliable on some systems you have to do some tricks to
>>> gracefully reload HAProxy for picking up new backends; search for zero
>>> downtime haproxy reloads to see how we solved it, but there are lots
>>> of solutions.
>>>
>>> We use this stack at Yelp to achieve the same kind of dynamic load
>>> balancing you're talking about except instead of kubernetes we use
>>> mesos and marathon. The one real trick here is to use a link local IP
>>> address and run the HAProxy/Synapse instances on the machines
>>> themselves but have containers talk over the link local IP address. I
>>> haven't tried it with kubernetes but given my understanding you'd end
>>> up with the same problem.
>>>
>>> We plan to automatically support whichever DNS or stats socket based
>>> solution the HAProxy devs go with for dynamic backend changes.
>>>
>>> -Joey
>>>
>>> On Fri, Sep 18, 2015 at 8:34 AM, Eduard Martinescu
>>>  wrote:
>>>
 I have implemented something similar to allow use to dynamically
 load-balance between multiple backends that are all joined to each

>>> other as
>>>
 part of a Hazelcast cluster.  All of which is running in an AWS VPC,

>>> wit

nbproc 1 vs >1 performance

2016-04-13 Thread Christian Ruppert

Hi,

I've prepared a simple testcase:

haproxy-moreperformant.cfg:
global
nbproc 40
user haproxy
group haproxy
maxconn 175000

defaults
timeout client 300s
timeout server 300s
timeout queue 60s
timeout connect 7s
timeout http-request 10s
maxconn 175000

bind-process 1

frontend haproxy_test
#bind-process 1-40
bind :12345 process 1

mode http

default_backend backend_test

backend backend_test
mode http

errorfile 503 /etc/haproxy/test.error

# vim: set syntax=haproxy:


haproxy-lessperformant.cfg:
global
nbproc 40
user haproxy
group haproxy
maxconn 175000

defaults
timeout client 300s
timeout server 300s
timeout queue 60s
timeout connect 7s
timeout http-request 10s
maxconn 175000

bind-process 1

frontend haproxy_test
bind-process 1-40
bind :12345 process 1

mode http

default_backend backend_test

backend backend_test
mode http

errorfile 503 /etc/haproxy/test.error

# vim: set syntax=haproxy:

/etc/haproxy/test.error:
HTTP/1.0 200
Cache-Control: no-cache
Content-Type: text/plain

Test123456


The test:
ab -n 5000 -c 250 http://xx.xx.xx.xx:12345/

With the first config I get around ~30-33k requests/s on my test system;
with the second config (only the bind-process in the frontend section has
been changed!) I just get around 26-28k requests per second.


I could get similar differences when playing with nbproc 1 and >1 as
well as the default "bind-process" and/or the "process 1" on the actual
bind.
Is it really just the multi-process overhead causing the performance
drop here, even though the bind uses the first / only process anyway?


--
Regards,
Christian Ruppert



Re: Multiple front ends listening to the same address/port -- want a config error

2016-04-13 Thread Lukas Tribus

Hi,


Am 12.04.2016 um 19:39 schrieb Shawn Heisey:

I copied a front end to set up a new service on my haproxy install.  I
changed the name of the front end, but forgot to change the port number
on the "bind" option.

Haproxy didn't complain about this configuration when I tested for
validity, so I didn't realize I'd made a mistake until the original
service whose frontend I had copied began to fail.

Is there a config option that would cause multiple front ends bound to
the same address/port to be an invalid config?  I am not running with
multiple processes.


I think an option to disable SO_REUSEPORT would be handy (defaulting
to SO_REUSEPORT only in daemon/systemd mode), so we can actually
restore the pre-linux-3.9 behavior of not allowing multiple binds to the
same port, if we have that requirement.


However, the config check will never try to bind a port; it just checks for
valid configuration syntax.


Trying to warn the user against unknown use-cases seems like a bad idea
to me; it's complex and prone to false positives.
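
In the meantime, a crude check after startup can at least spot accidental
duplicate binds (just a sketch; it simply looks for identical local addresses
in the netstat output):

sudo netstat -tlpn | awk '/haproxy/ { print $4 }' | sort | uniq -d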




lukas