Re: Haproxy & Kubernetes, dynamic backend configuration

2016-02-25 Thread Smain Kahlouch
Hi !

Sorry to bother you again with this question, but I still think it would be
a great feature to load-balance directly to pods from HAProxy :)
Is there any news on the roadmap about that?

Regards,
Smana

2015-09-22 20:21 GMT+02:00 Joseph Lynch :

> Disclaimer: I help maintain SmartStack and this is a shameless plug
>
> You can also achieve a fast and reliable dynamic backend system by
> using something off the shelf like airbnb/Yelp SmartStack
> (http://nerds.airbnb.com/smartstack-service-discovery-cloud/).
>
> Basically, nerve runs on every machine health-checking services; once
> they pass their health checks they get registered in a pluggable,
> centralized registration system (zookeeper is the default, but DNS is
> another option, and we're working on DNS SRV support). Then synapse runs
> on every client machine and handles re-configuring HAProxy for you
> automatically, handling details like doing socket updates vs. reloading
> HAProxy correctly. To make this truly reliable, on some systems you have
> to do some tricks to gracefully reload HAProxy so it picks up new
> backends; search for "zero downtime haproxy reloads" to see how we
> solved it, but there are lots of solutions.
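>
> For reference, the basic graceful reload looks something like this
> (paths are illustrative):
>
>   # Start a new process; -sf tells the old PIDs to finish serving
>   # their current connections and then exit.
>   haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)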
>
> We use this stack at Yelp to achieve the same kind of dynamic load
> balancing you're talking about except instead of kubernetes we use
> mesos and marathon. The one real trick here is to use a link local IP
> address and run the HAProxy/Synapse instances on the machines
> themselves but have containers talk over the link local IP address. I
> haven't tried it with kubernetes but given my understanding you'd end
> up with the same problem.
>
> We plan to automatically support whichever DNS or stats socket based
> solution the HAProxy devs go with for dynamic backend changes.
>
> -Joey
>
> On Fri, Sep 18, 2015 at 8:34 AM, Eduard Martinescu
>  wrote:
> > I have implemented something similar to allow us to dynamically
> > load-balance between multiple backends that are all joined to each other
> > as part of a Hazelcast cluster.  All of which is running in an AWS VPC,
> > with autoscaling groups to control spin-up and spin-down of new members
> > of the cluster based on load, etc.
> >
> > What we ended up doing is writing custom code that attached to the
> > Hazelcast cluster as a client, and periodically queried the cluster for
> > the current list of servers and their IP addresses.  The code would then
> > rewrite the HAProxy configuration, filling in the correct backend list.
> > Then via a shell call (sadly, Java can't do Unix domain sockets to write
> > directly to the server), it would tell HAProxy to restart gracefully.
> >
> > In our use case, this works great, as we don't have long-running TCP
> > connections (these servers typically serve REST API calls or static HTML
> > content with no keep-alive.)
> >
> > I'm also open to suggestions on how this could be improved too,
> > especially with 1.6 possibly.
> >
> > Ed
> >
> > 
> > ✉ Eduard Martinescu | ✆ (585) 708-9685 | ignite action. fuel change.
> >
> > On Fri, Sep 18, 2015 at 9:21 AM, Baptiste  wrote:
> >>
> >> On Fri, Sep 18, 2015 at 3:18 PM, Smain Kahlouch 
> >> wrote:
> >> >> If I may chime in here: Kubernetes supports service discovery through
> >> >> DNS
> >> >> SRV records for most use-cases, so the dynamic DNS support that
> >> >> Baptiste
> >> >> is
> >> >> currently working on would be a perfect fit. No special API support
> >> >> required.
> >> >
> >> >
> >> > Well DNS would be great but, as far as I know, Kubernetes uses DNS
> >> > only for service names, not for pods.
> >> > A pod can be seen as a server in a backend; the number of servers and
> >> > their IP addresses can change frequently.
> >> > I'll dig further...
> >> >
> >> > Thanks,
> >> > Smana
> >>
> >>
> >> That's usually the purpose of DNS SRV records ;)
> >>
> >> Baptiste
> >>
> >
>


Re: Haproxy & Kubernetes, dynamic backend configuration

2016-04-13 Thread Smain Kahlouch
Sorry to answer this thread so late :p

due to the fact that this will change when the pod is recreated!
>

Aleks, as I mentioned earlier, the idea is to detect these changes by
polling the API on a regular basis and changing the backend configuration
automatically.
Using the DNS (addon) is not what I would like to do because it still
uses the Kubernetes-internal load-balancing system.
Furthermore it seems easier to me to use the NodePort
<http://kubernetes.io/docs/user-guide/services/#type-nodeport>. This is
what I use today.

NGINX Plus now has such a feature:

> With APIs – This method uses the NGINX Plus on-the-fly reconfiguration API
> <https://www.nginx.com/resources/admin-guide/load-balancer/#upstream_conf>
> to add and remove entries for Kubernetes pods in the NGINX Plus
> configuration, and the Kubernetes API to retrieve the IP addresses of the
> pods. This method requires us to write some code, and we won’t discuss it
> in depth here. For details, watch Kelsey Hightower’s webinar, Bringing
> Kubernetes to the Edge with NGINX Plus
> <https://www.nginx.com/resources/webinars/bringing-kubernetes-to-the-edge-with-nginx-plus/>,
> in which he explores the APIs and creates an application that utilizes them.
>

Please let me know if you are considering this feature for the future.
Alternatively, perhaps you can guide me on proposing a plugin. Python
is the language I usually play with, but maybe that's not possible.
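
As a rough sketch of the polling idea (the namespace, service name and the
plain-HTTP API address below are assumptions, and the config-rewrite/reload
step is left out), something like this in Python:

    import time
    import requests  # third-party HTTP library

    API = "http://127.0.0.1:8080"  # hypothetical: apiserver reachable without auth

    def pod_addresses(namespace, service):
        # The Endpoints object lists the ready pod IPs/ports behind a service.
        url = "%s/api/v1/namespaces/%s/endpoints/%s" % (API, namespace, service)
        r = requests.get(url)
        r.raise_for_status()
        servers = []
        for subset in r.json().get("subsets", []):
            for addr in subset.get("addresses", []):
                for port in subset.get("ports", []):
                    servers.append((addr["ip"], port["port"]))
        return servers

    previous = None
    while True:
        current = sorted(pod_addresses("default", "myservice"))
        if current != previous:
            # Here one would rewrite the backend's server lines and
            # gracefully reload (or socket-update) HAProxy.
            print("backend changed:", current)
            previous = current
        time.sleep(5)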

Regards,
Smana

2016-02-25 18:29 GMT+01:00 Aleksandar Lazic :

> Hi.
>
> On 25-02-2016 16:15, Smain Kahlouch wrote:
>
>> Hi !
>>
>> Sorry to bother you again with this question, but I still think it would
>> be a great feature to load-balance directly to pods from HAProxy :)
>> Is there any news on the roadmap about that?
>>
>
> How about DNS as mentioned below?
>
>
> https://github.com/kubernetes/kubernetes/blob/v1.0.6/cluster/addons/dns/README.md
> http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#5.3
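>
> A minimal sketch of the server-level resolution that second link describes
> (names, addresses and timings are illustrative):
>
>   resolvers kubedns
>     nameserver dns1 172.30.0.1:53
>     resolve_retries 3
>     timeout retry 1s
>     hold valid 10s
>
>   backend bk_registry
>     server s1 docker-registry.default.svc.cluster.local:5000 check resolvers kubedns resolve-prefer ipv4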
>
> ### oc rsh -c ng-socklog nginx-test-2-6em5w
> cat /etc/resolv.conf
> nameserver 172.30.0.1
> nameserver 172.31.31.227
> search nginx-test.svc.cluster.local svc.cluster.local cluster.local
> options ndots:5
>
> ping docker-registry.default.svc.cluster.local
> 
>
> 
> oc describe svc docker-registry -n default
> Name:   docker-registry
> Namespace:  default
> Labels: docker-registry=default
> Selector:   docker-registry=default
> Type:   ClusterIP
> IP: 172.30.38.182
> Port:   5000-tcp   5000/TCP
> Endpoints:  10.1.5.52:5000
> Session Affinity:   None
> No events.
> 
>
> Another option is that your startup script adds the A record into skydns
>
> https://github.com/skynetservices/skydns
>
> But I don't see the benefit of connecting directly to the endpoint, due to
> the fact that this will change when the pod is recreated!
>
> BR Aleks
>
> Regards,
>> Smana
>>
>> 2015-09-22 20:21 GMT+02:00 Joseph Lynch :
>>
>> Disclaimer: I help maintain SmartStack and this is a shameless plug
>>>
>>> You can also achieve a fast and reliable dynamic backend system by
>>> using something off the shelf like airbnb/Yelp SmartStack
>>> (http://nerds.airbnb.com/smartstack-service-discovery-cloud/).
>>>
>>> Basically, nerve runs on every machine health-checking services; once
>>> they pass their health checks they get registered in a pluggable,
>>> centralized registration system (zookeeper is the default, but DNS is
>>> another option, and we're working on DNS SRV support). Then synapse runs
>>> on every client machine and handles re-configuring HAProxy for you
>>> automatically, handling details like doing socket updates vs. reloading
>>> HAProxy correctly. To make this truly reliable, on some systems you have
>>> to do some tricks to gracefully reload HAProxy so it picks up new
>>> backends; search for "zero downtime haproxy reloads" to see how we
>>> solved it, but there are lots of solutions.
>>>
>>> We use this stack at Yelp to achieve the same kind of dynamic load
>>> balancing you're talking about except instead of kubernetes we use
>>> mesos and marathon. The one real trick here is to use a link local IP
>>> address and run the HAProxy/Synapse instances on the machines
>>> themselves but have containers talk over the link local IP address. I
>>> haven't tried it with kubernetes but given my understanding you'd end

Re: Haproxy & Kubernetes, dynamic backend configuration

2016-04-13 Thread Smain Kahlouch
Ok thank you,
I'll have a look at SmartStack.

2016-04-13 16:03 GMT+02:00 B. Heath Robinson :

> SmartStack was mentioned earlier in the thread.  It does a VERY good job
> of doing this.  It rewrites the haproxy configuration and performs a reload
> based on changes in a source database by a polling service on each
> instance.  The canonical DB is zookeeper.
>
> We have been using this in production for over 2 years and have had very
> little trouble with it.  In the next year we will be moving to docker
> containers and plan to make some changes to our configuration and move
> forward with SmartStack.
>
> That said, your polling application could write instructions to the stats
> socket; however, it currently does not allow adding/removing servers but
> only enabling/disabling.  See
> https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2 for
> more info.  BTW, this section is missing from the 1.6 manual.  You might
> also see https://github.com/flores/haproxyctl.
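>
> For example (socket path, backend and server names are illustrative):
>
>   echo "disable server bk_app/srv1" | socat stdio /var/run/haproxy.sock
>   echo "enable server bk_app/srv1" | socat stdio /var/run/haproxy.sock
>   echo "set weight bk_app/srv1 50" | socat stdio /var/run/haproxy.sock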
>
> On Wed, Apr 13, 2016 at 7:42 AM Smain Kahlouch  wrote:
>
>> Sorry to answer this thread so late :p
>>
>>
>> due to the fact that this will change when the pod is recreated!
>>>
>>
>> Aleks, as I mentioned earlier, the idea is to detect these changes by
>> polling the API on a regular basis and changing the backend configuration
>> automatically.
>> Using the DNS (addon) is not what I would like to do because it still
>> uses the Kubernetes-internal load-balancing system.
>> Furthermore it seems easier to me to use the NodePort
>> <http://kubernetes.io/docs/user-guide/services/#type-nodeport>. This is
>> what I use today.
>>
>> NGINX Plus now has such a feature:
>>
>>> With APIs – This method uses the NGINX Plus on-the-fly reconfiguration
>>> API
>>> <https://www.nginx.com/resources/admin-guide/load-balancer/#upstream_conf>
>>> to add and remove entries for Kubernetes pods in the NGINX Plus
>>> configuration, and the Kubernetes API to retrieve the IP addresses of the
>>> pods. This method requires us to write some code, and we won’t discuss it
>>> in depth here. For details, watch Kelsey Hightower’s webinar, Bringing
>>> Kubernetes to the Edge with NGINX Plus
>>> <https://www.nginx.com/resources/webinars/bringing-kubernetes-to-the-edge-with-nginx-plus/>,
>>> in which he explores the APIs and creates an application that utilizes them.
>>>
>>
>> Please let me know if you are considering this feature for the future.
>> Alternatively, perhaps you can guide me on proposing a plugin. Python
>> is the language I usually play with, but maybe that's not possible.
>>
>> Regards,
>> Smana
>>
>> 2016-02-25 18:29 GMT+01:00 Aleksandar Lazic :
>>
>>> Hi.
>>>
>>> On 25-02-2016 16:15, Smain Kahlouch wrote:
>>>
>>>> Hi !
>>>>
>>>> Sorry to bother you again with this question, but I still think it would
>>>> be a great feature to load-balance directly to pods from HAProxy :)
>>>> Is there any news on the roadmap about that?
>>>>
>>>
>>> How about DNS as mentioned below?
>>>
>>>
>>> https://github.com/kubernetes/kubernetes/blob/v1.0.6/cluster/addons/dns/README.md
>>> http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#5.3
>>>
>>> ### oc rsh -c ng-socklog nginx-test-2-6em5w
>>> cat /etc/resolv.conf
>>> nameserver 172.30.0.1
>>> nameserver 172.31.31.227
>>> search nginx-test.svc.cluster.local svc.cluster.local cluster.local
>>> options ndots:5
>>>
>>> ping docker-registry.default.svc.cluster.local
>>> 
>>>
>>> 
>>> oc describe svc docker-registry -n default
>>> Name:   docker-registry
>>> Namespace:  default
>>> Labels: docker-registry=default
>>> Selector:   docker-registry=default
>>> Type:   ClusterIP
>>> IP: 172.30.38.182
>>> Port:   5000-tcp   5000/TCP
>>> Endpoints:  10.1.5.52:5000
>>> Session Affinity:   None
>>> No events.
>>> 
>>>
>>> Another option is that your startup script adds the A record into skydns
>>>
>>> https://github.com/skynetservices/skydns
>>>
>>> But I don't see the benefit of connecting directly to the endpoint, due to
>>> the fact that this will change when the pod is recreated!

Re: Haproxy & Kubernetes, dynamic backend configuration

2016-04-14 Thread Smain Kahlouch
Is it planned to support backend configuration via the stats socket?

2016-04-13 16:09 GMT+02:00 Smain Kahlouch :

> Ok thank you,
> I'll have a look at SmartStack.
>
> 2016-04-13 16:03 GMT+02:00 B. Heath Robinson :
>
>> SmartStack was mentioned earlier in the thread.  It does a VERY good job
>> of doing this.  It rewrites the haproxy configuration and performs a reload
>> based on changes in a source database by a polling service on each
>> instance.  The canonical DB is zookeeper.
>>
>> We have been using this in production for over 2 years and have had very
>> little trouble with it.  In the next year we will be moving to docker
>> containers and plan to make some changes to our configuration and move
>> forward with SmartStack.
>>
>> That said, your polling application could write instructions to the stats
>> socket; however, it currently does not allow adding/removing servers but
>> only enabling/disabling.  See
>> https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2 for
>> more info.  BTW, this section is missing from the 1.6 manual.  You might
>> also see https://github.com/flores/haproxyctl.
>>
>> On Wed, Apr 13, 2016 at 7:42 AM Smain Kahlouch 
>> wrote:
>>
>>> Sorry to answer this thread so late :p
>>>
>>>
>>> due to the fact that this will change when the pod is recreated!
>>>>
>>>
>>> Aleks, as I mentioned earlier, the idea is to detect these changes by
>>> polling the API on a regular basis and changing the backend configuration
>>> automatically.
>>> Using the DNS (addon) is not what I would like to do because it still
>>> uses the Kubernetes-internal load-balancing system.
>>> Furthermore it seems easier to me to use the NodePort
>>> <http://kubernetes.io/docs/user-guide/services/#type-nodeport>. This is
>>> what I use today.
>>>
>>> NGINX Plus now has such a feature:
>>>
>>>> With APIs – This method uses the NGINX Plus on-the-fly reconfiguration
>>>> API
>>>> <https://www.nginx.com/resources/admin-guide/load-balancer/#upstream_conf>
>>>> to add and remove entries for Kubernetes pods in the NGINX Plus
>>>> configuration, and the Kubernetes API to retrieve the IP addresses of the
>>>> pods. This method requires us to write some code, and we won’t discuss it
>>>> in depth here. For details, watch Kelsey Hightower’s webinar, Bringing
>>>> Kubernetes to the Edge with NGINX Plus
>>>> <https://www.nginx.com/resources/webinars/bringing-kubernetes-to-the-edge-with-nginx-plus/>,
>>>> in which he explores the APIs and creates an application that utilizes 
>>>> them.
>>>>
>>>
>>> Please let me know if you are considering this feature for the future.
>>> Alternatively, perhaps you can guide me on proposing a plugin. Python
>>> is the language I usually play with, but maybe that's not possible.
>>>
>>> Regards,
>>> Smana
>>>
>>> 2016-02-25 18:29 GMT+01:00 Aleksandar Lazic :
>>>
>>>> Hi.
>>>>
>>>> On 25-02-2016 16:15, Smain Kahlouch wrote:
>>>>
>>>>> Hi !
>>>>>
>>>>> Sorry to bother you again with this question, but I still think it would
>>>>> be a great feature to load-balance directly to pods from HAProxy :)
>>>>> Is there any news on the roadmap about that?
>>>>>
>>>>
>>>> How about DNS as mentioned below?
>>>>
>>>>
>>>> https://github.com/kubernetes/kubernetes/blob/v1.0.6/cluster/addons/dns/README.md
>>>> http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#5.3
>>>>
>>>> ### oc rsh -c ng-socklog nginx-test-2-6em5w
>>>> cat /etc/resolv.conf
>>>> nameserver 172.30.0.1
>>>> nameserver 172.31.31.227
>>>> search nginx-test.svc.cluster.local svc.cluster.local cluster.local
>>>> options ndots:5
>>>>
>>>> ping docker-registry.default.svc.cluster.local
>>>> 
>>>>
>>>> 
>>>> oc describe svc docker-registry -n default
>>>> Name:   docker-registry
>>>> Namespace:  default
>>>> Labels: docker-registry=default
>>>> Selector:   docker-registry=default
>>>> Type:   ClusterIP
>>>> IP:

track & log sessions

2013-04-26 Thread Smain Kahlouch
Hello all,

My question is pretty simple.
I just want to know if it's possible to track/log a session from the
connection to the disconnection.
I've seen that it is possible with the "capture cookie" statement, but I
don't want to change anything on the user side.
Is there another way, please?

Regards,
Smana


haproxy crashes with ddos mitigation config

2013-05-03 Thread Smain Kahlouch
Hello,

I currently have some trouble enabling the DDoS mitigation described here:
http://blog.exceliance.fr/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/

When i enable the following lines :

  stick-table type ip size 100k expire 30s store conn_cur
# Reject the new connection if the client already has 10 open
  tcp-request connection reject if { src_conn_cur ge 10 }
  tcp-request connection track-sc1 src
...

haproxy crashes with the following error :

kernel: [334012.858141] haproxy[6914] general protection ip:46832d
sp:7fffe5e219e8 error:0 in haproxy[40+89000]

Regards,
Smana


Re: haproxy crashes with ddos mitigation config

2013-05-03 Thread Smain Kahlouch
More information:
OS: Debian 6
version: 1.5-dev18


2013/5/3 Smain Kahlouch 

> Hello,
>
> I currently have some trouble enabling the DDoS mitigation described here:
>
> http://blog.exceliance.fr/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/
>
> When i enable the following lines :
> 
>   stick-table type ip size 100k expire 30s store conn_cur
> # Reject the new connection if the client already has 10 open
>   tcp-request connection reject if { src_conn_cur ge 10 }
>   tcp-request connection track-sc1 src
> ...
>
> haproxy crashes with the following error :
>
> kernel: [334012.858141] haproxy[6914] general protection ip:46832d
> sp:7fffe5e219e8 error:0 in haproxy[40+89000]
>
> Regards,
> Smana
>


Re: haproxy crashes with ddos mitigation config

2013-05-03 Thread Smain Kahlouch
One more piece of information:
this behavior appears only with SSL frontends.
I'm now trying with a non-SSL frontend.


2013/5/3 Smain Kahlouch 

> More information:
> OS: Debian 6
> version: 1.5-dev18
>
>
> 2013/5/3 Smain Kahlouch 
>
>> Hello,
>>
>> I currently have some trouble enabling the DDoS mitigation described here:
>>
>> http://blog.exceliance.fr/2012/02/27/use-a-load-balancer-as-a-first-row-of-defense-against-ddos/
>>
>> When i enable the following lines :
>> 
>>   stick-table type ip size 100k expire 30s store conn_cur
>> # Reject the new connection if the client already has 10 open
>>   tcp-request connection reject if { src_conn_cur ge 10 }
>>   tcp-request connection track-sc1 src
>> ...
>>
>> haproxy crashes with the following error :
>>
>> kernel: [334012.858141] haproxy[6914] general protection ip:46832d
>> sp:7fffe5e219e8 error:0 in haproxy[40+89000]
>>
>> Regards,
>> Smana
>>
>
>


uri algorithm & streamer overload

2013-05-06 Thread Smain Kahlouch
Hello all,

I was wondering what would happen if a lot of users request the same file
(URL).
Could it lead to an overload of one of our backends?
Is it possible to set limits and switch to another backend if they're
exceeded?

We've got the following configuration :
backend chkg_farm
  balance uri whole
  hash-type consistent
  mode http
  log global
  option httplog
  option forwardfor
  cookie SERVERID insert indirect nocache
  default-server inter 3s rise 2 fall 3
  option httpchk GET /status
  timeout server 25s
  server a1 10.111.9.20:80 maxconn 1000 check cookie a1
  server a2 10.111.9.22:80 maxconn 1000 check cookie a2
  server a3 10.111.9.24:80 maxconn 1000 check cookie a3

Thanks,
Smana


Re: uri algorithm & streamer overload

2013-05-07 Thread Smain Kahlouch
Hello Lukas,

Thanks for your answer.
Our backend is a webserver providing streaming services.

From my understanding, the user won't be directed to another backend. It
will wait till a connection is freed:

"Limits the sockets to this number of concurrent connections. Extraneous
connections will remain in the system's backlog until a connection is
released. "
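
If the goal were to spill over rather than queue, one option might be
switching backends on a connection-count ACL; a rough sketch (the spillover
backend and threshold are illustrative):

    frontend fe_main
        bind :80
        # ~2500 is close to the farm's aggregate maxconn (3 x 1000 above)
        acl farm_busy be_conn(chkg_farm) ge 2500
        use_backend chkg_spillover if farm_busy
        default_backend chkg_farm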

Regards,
Smana


2013/5/6 Lukas Tribus 

> Hi Smana,
>
>
> > I was wondering what would happen if a lot of users request the same
> > file (URL).
> > Could it lead to an overload of one of our backends ?
>
> We don't know your backend, so we can't tell.
>
>
>
> > Is it possible to set limits and switch to another backend if they're
> > exceeded?
>
> Doesn't the maxconn configuration already do what you are asking?
>
>
>
>
> Regards,
>
> Lukas


agent-check server in DRAIN state when the weight is not 100%

2014-08-22 Thread Smain Kahlouch
Hello all,

Maybe i misunderstood how the agent-check works.
Actually when i have a weight other than "100%" the server switches to
"DRAIN" state.

In my current setup i just have a unique server working.

echo 'show stat' | socat /var/run/haproxy/socket1 stdio | grep ^bk_global
bk_global,cache1,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,1,1,0,0,0,864,0,,1,1,1,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,00,0,-1,,,0,0,0,0,
bk_global,cache2,0,0,0,0,,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,1,1,859,859,,1,1,2,,0,,2,0,,0,L4CON,,2998,0,0,0,0,0,0,00,0,-1,No
route to host,No route to host,0,0,0,0,
bk_global,BACKEND,0,0,0,0,6557,0,0,0,0,0,,0,0,0,0,UP,1,1,0,,0,864,0,,1,1,0,,0,,1,0,,00,0,0,0,0,0,0,0,0,0,0,0,-1,,,0,0,0,0,

The agent check reports "100%":
telnet 10.104.9.81 4242
Trying 10.104.9.81...
Connected to 10.104.9.81.
Escape character is '^]'.
100%
Connection closed by foreign host.

When I force the value to "90%",

telnet 10.104.9.81 4242
Trying 10.104.9.81...
Connected to 10.104.9.81.
Escape character is '^]'.
90%
Connection closed by foreign host.

The status of the first server changes to "DRAIN":
echo 'show stat' | socat /var/run/haproxy/socket1 stdio | grep ^bk_global
bk_global,cache1,0,0,0,0,,0,0,0,,0,,0,0,0,0,DRAIN,0,1,0,0,0,1020,0,,1,1,1,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,00,0,-1,,,0,0,0,0,
bk_global,cache2,0,0,0,0,,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,1,1,1015,1015,,1,1,2,,0,,2,0,,0,L4CON,,2999,0,0,0,0,0,0,00,0,-1,No
route to host,No route to host,0,0,0,0,
bk_global,BACKEND,0,0,0,0,6557,0,0,0,0,0,,0,0,0,0,DOWN,0,0,0,,0,1020,1020,,1,1,0,,0,,1,0,,00,0,0,0,0,0,0,0,0,0,0,0,-1,,,0,0,0,0,

Is this the expected behaviour?
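
(For reference, the agent can be as simple as a TCP service that writes a
weight string and closes; a minimal Python sketch, with the port matching
the agent-port used here:)

    import socket

    # Minimal agent-check responder: HAProxy connects, we send a weight
    # percentage terminated by a newline, then close the connection.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 4242))
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        conn.sendall(b"100%\n")
        conn.close()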

Regards,
Smana


Re: agent-check server in DRAIN state when the weight is not 100%

2014-08-25 Thread Smain Kahlouch
Hi Malcolm,

The version tested comes from the debian package v 1.5.3-1~bpo70+1
haproxy -v
HA-Proxy version 1.5.3 2014/07/25
Copyright 2000-2014 Willy Tarreau 

Thanks for your help,
Smana



2014-08-22 18:47 GMT+02:00 Malcolm Turnbull :

> Smana,
>
> I don't get that result on my system, which build are you running?
>
> [root@lbmaster ~]# echo "show stat" | socat
> unix-connect:/var/run/haproxy.stat stdio
> #
> pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,
>
> L72,FRONTEND,,,0,0,4,0,0,0,0,0,0,OPEN,1,2,00,0,0,00,0,0,0,0,0,,0,0,0,,,0,0,0,0
> L72,backup,0,0,0,0,,0,0,0,,0,,0,0,0,0,no
> check,1,0,1,,1,2,1,,0,,2,0,,00,0,0,0,0,0,00,0,-1,,,0,0,0,0,
>
> L72,RIP_Name,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,98,1,0,1,0,6303,0,,1,2,2,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,00,0,-1,,,0,0,0,0,
>
> L72,BACKEND,0,0,0,0,4000,0,0,0,0,0,,0,0,0,0,UP,98,1,1,,0,6303,0,,1,2,0,,0,,1,0,,00,0,0,0,0,0,0,0,0,0,0,0,-1,,,0,0,0,0,
>
> stats,FRONTEND,,,0,2,2000,7,2797,102128,0,0,0,OPEN,1,3,00,0,0,20,6,0,1,0,0,,0,2,7,,,0,0,0,0
>
> stats,BACKEND,0,0,0,0,200,0,2797,102128,0,0,,0,0,0,0,UP,0,0,0,,0,6303,0,,1,3,0,,0,,1,0,,00,0,0,0,0,0,0,0,0,0,0,0,32,,,0,0,0,2,
>
>
>
>
> On 22 August 2014 14:38, Smain Kahlouch  wrote:
>
>> Hello all,
>>
>> Maybe I misunderstood how the agent-check works.
>> Actually, when the agent reports a weight other than "100%", the server
>> switches to the "DRAIN" state.
>>
>> In my current setup only one server is up.
>>
>> echo 'show stat' | socat /var/run/haproxy/socket1 stdio | grep ^bk_global
>>
>> bk_global,cache1,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,1,1,0,0,0,864,0,,1,1,1,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,00,0,-1,,,0,0,0,0,
>> bk_global,cache2,0,0,0,0,,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,1,1,859,859,,1,1,2,,0,,2,0,,0,L4CON,,2998,0,0,0,0,0,0,00,0,-1,No
>> route to host,No route to host,0,0,0,0,
>>
>> bk_global,BACKEND,0,0,0,0,6557,0,0,0,0,0,,0,0,0,0,UP,1,1,0,,0,864,0,,1,1,0,,0,,1,0,,00,0,0,0,0,0,0,0,0,0,0,0,-1,,,0,0,0,0,
>>
>> The agent check reports "100%":
>> telnet 10.104.9.81 4242
>> Trying 10.104.9.81...
>> Connected to 10.104.9.81.
>> Escape character is '^]'.
>> 100%
>> Connection closed by foreign host.
>>
>> When I force the value to "90%",
>>
>> telnet 10.104.9.81 4242
>> Trying 10.104.9.81...
>> Connected to 10.104.9.81.
>> Escape character is '^]'.
>> 90%
>> Connection closed by foreign host.
>>
>> The status of the first server changes to "DRAIN":
>> echo 'show stat' | socat /var/run/haproxy/socket1 stdio | grep ^bk_global
>>
>> bk_global,cache1,0,0,0,0,,0,0,0,,0,,0,0,0,0,DRAIN,0,1,0,0,0,1020,0,,1,1,1,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,00,0,-1,,,0,0,0,0,
>> bk_global,cache2,0,0,0,0,,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,1,1,1015,1015,,1,1,2,,0,,2,0,,0,L4CON,,2999,0,0,0,0,0,0,00,0,-1,No
>> route to host,No route to host,0,0,0,0,
>>
>> bk_global,BACKEND,0,0,0,0,6557,0,0,0,0,0,,0,0,0,0,DOWN,0,0,0,,0,1020,1020,,1,1,0,,0,,1,0,,00,0,0,0,0,0,0,0,0,0,0,0,-1,,,0,0,0,0,
>>
>> Is this the expected behaviour?
>>
>> Regards,
>> Smana
>>
>
>
>
> --
> Regards,
>
> Malcolm Turnbull.
>
> Loadbalancer.org Ltd.
> Phone: +44 (0)330 1604540
> http://www.loadbalancer.org/
>


Re: agent-check server in DRAIN state when the weight is not 100%

2014-08-25 Thread Smain Kahlouch
Hello,

I still have the same issue with 3 servers in my backend.

One of the servers shows less than 100%:
for i in {1..3};do echo "Server : 10.104.9.8$i"; telnet 10.104.9.8$i 4242 |
grep ^[0-9].* ;done
Server : 10.104.9.81

90%
Connection closed by foreign host.
Server : 10.104.9.82

100%
Connection closed by foreign host.
Server : 10.104.9.83

100%
Connection closed by foreign host.

It is in the "DRAIN" state in HAProxy:

echo 'show stat' | socat /var/run/haproxy/socket1 stdio | grep ^bk_global
bk_global,cache1,0,0,0,0,,0,0,0,,0,,0,0,0,0,DRAIN,0,1,0,0,0,1110,0,,1,1,1,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,00,0,-1,,,0,0,0,0,
bk_global,cache2,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,1,1,0,0,0,1110,0,,1,1,2,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,00,0,-1,,,0,0,0,0,
bk_global,cache3,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,1,1,0,0,0,1110,0,,1,1,3,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,00,0,-1,,,0,0,0,0,
bk_global,BACKEND,0,0,0,0,6557,0,0,0,0,0,,0,0,0,0,UP,2,2,0,,0,1110,0,,1,1,0,,0,,1,0,,00,0,0,0,0,0,0,0,0,0,0,0,-1,,,0,0,0,0,

I'll try another version tomorrow,

Regards,
Smana


2014-08-25 12:14 GMT+02:00 Smain Kahlouch :

> Hi Malcolm,
>
> The version tested comes from the debian package v 1.5.3-1~bpo70+1
> haproxy -v
> HA-Proxy version 1.5.3 2014/07/25
> Copyright 2000-2014 Willy Tarreau 
>
> Thanks for your help,
> Smana
>
>
>
> 2014-08-22 18:47 GMT+02:00 Malcolm Turnbull :
>
> Smana,
>>
>> I don't get that result on my system, which build are you running?
>>
>> [root@lbmaster ~]# echo "show stat" | socat
>> unix-connect:/var/run/haproxy.stat stdio
>> #
>> pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,
>>
>> L72,FRONTEND,,,0,0,4,0,0,0,0,0,0,OPEN,1,2,00,0,0,00,0,0,0,0,0,,0,0,0,,,0,0,0,0
>> L72,backup,0,0,0,0,,0,0,0,,0,,0,0,0,0,no
>> check,1,0,1,,1,2,1,,0,,2,0,,00,0,0,0,0,0,00,0,-1,,,0,0,0,0,
>>
>> L72,RIP_Name,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,98,1,0,1,0,6303,0,,1,2,2,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,00,0,-1,,,0,0,0,0,
>>
>> L72,BACKEND,0,0,0,0,4000,0,0,0,0,0,,0,0,0,0,UP,98,1,1,,0,6303,0,,1,2,0,,0,,1,0,,00,0,0,0,0,0,0,0,0,0,0,0,-1,,,0,0,0,0,
>>
>> stats,FRONTEND,,,0,2,2000,7,2797,102128,0,0,0,OPEN,1,3,00,0,0,20,6,0,1,0,0,,0,2,7,,,0,0,0,0,,,,
>>
>> stats,BACKEND,0,0,0,0,200,0,2797,102128,0,0,,0,0,0,0,UP,0,0,0,,0,6303,0,,1,3,0,,0,,1,0,,00,0,0,0,0,0,0,0,0,0,0,0,32,,,0,0,0,2,
>>
>>
>>
>>
>> On 22 August 2014 14:38, Smain Kahlouch  wrote:
>>
>>> Hello all,
>>>
>>> Maybe I misunderstood how the agent-check works.
>>> Actually, when the agent reports a weight other than "100%", the server
>>> switches to the "DRAIN" state.
>>>
>>> In my current setup only one server is up.
>>>
>>> echo 'show stat' | socat /var/run/haproxy/socket1 stdio | grep ^bk_global
>>>
>>> bk_global,cache1,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,1,1,0,0,0,864,0,,1,1,1,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,00,0,-1,,,0,0,0,0,
>>> bk_global,cache2,0,0,0,0,,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,1,1,859,859,,1,1,2,,0,,2,0,,0,L4CON,,2998,0,0,0,0,0,0,00,0,-1,No
>>> route to host,No route to host,0,0,0,0,
>>>
>>> bk_global,BACKEND,0,0,0,0,6557,0,0,0,0,0,,0,0,0,0,UP,1,1,0,,0,864,0,,1,1,0,,0,,1,0,,00,0,0,0,0,0,0,0,0,0,0,0,-1,,,0,0,0,0,
>>>
>>> The agent check reports "100%":
>>> telnet 10.104.9.81 4242
>>> Trying 10.104.9.81...
>>> Connected to 10.104.9.81.
>>> Escape character is '^]'.
>>> 100%
>>> Connection closed by foreign host.
>>>
>>> When I force the value to "90%",
>>>
>>> telnet 10.104.9.81 4242
>>> Trying 10.104.9.81...
>>> Connected to 10.104.9.81.
>>> Escape character is '^]'.
>>> 90%
>>> Connection closed by foreign host.
>>>
>>> The status of the first server changes to "DRAIN":
>>> echo 'show stat' | socat /var/run/haproxy/socket1 stdio | grep ^bk_global
>>>
>>> bk_global,cache1,0,0,0,0,,0,0,0,,0,,0,0,0,0,DRAIN,0,1,0,0,0,1020,0,,1,1,1,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,00,0,-1,,,0,0,0,0,
>>> bk_global,cache2,0,0,0,0,,0,0,0,,0,,0,0,0,0,DOWN,1,1,0,1,1,1015,1015,,1,1,2,,0,,2,0,,0,L4CON,,2999,0,0,0,0,0,0,00,0,-1,No
>>> route to host,No route to host,0,0,0,0,
>>>
>>> bk_global,BACKEND,0,0,0,0,6557,0,0,0,0,0,,0,0,0,0,DOWN,0,0,0,,0,1020,1020,,1,1,0,,0,,1,0,,00,0,0,0,0,0,0,0,0,0,0,0,-1,,,0,0,0,0,
>>>
>>> Is this the expected behaviour?
>>>
>>> Regards,
>>> Smana
>>>
>>
>>
>>
>> --
>> Regards,
>>
>> Malcolm Turnbull.
>>
>> Loadbalancer.org Ltd.
>> Phone: +44 (0)330 1604540
>> http://www.loadbalancer.org/
>>
>
>


Fwd: agent-check server in DRAIN state when the weight is not 100%

2014-08-26 Thread Smain Kahlouch
Hello Malcolm,

Indeed that was caused by a missing "weight" directive in my configuration.

<   server cache1 10.104.9.81:80 check agent-check agent-inter 10s
agent-port 4242
<   server cache2 10.104.9.82:80 check agent-check agent-inter 10s
agent-port 4242
<   server cache3 10.104.9.83:80 check agent-check agent-inter 10s
agent-port 4242
---
>   server cache1 10.104.9.81:80 weight 100 check agent-check agent-inter
10s agent-port 4242
>   server cache2 10.104.9.82:80 weight 100 check agent-check agent-inter
10s agent-port 4242
>   server cache3 10.104.9.83:80 weight 100 check agent-check agent-inter
10s agent-port 4242


for i in {1..3};do echo "Server : 10.104.9.8$i"; telnet 10.104.9.8$i 4242 |
grep ^[0-9].* ;done
Server : 10.104.9.81
90%
Server : 10.104.9.82
99%
Server : 10.104.9.83
50%


echo 'show stat' | socat /var/run/haproxy/socket1 stdio | grep ^bk_global
bk_global,cache1,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,90,1,0,0,0,266,0,,1,1,1,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,00,0,-1,,,0,0,0,0,
bk_global,cache2,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,99,1,0,0,0,266,0,,1,1,2,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,00,0,-1,,,0,0,0,0,
bk_global,cache3,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,50,1,0,0,0,266,0,,1,1,3,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,00,0,-1,,,0,0,0,0,
bk_global,BACKEND,0,0,0,0,6557,0,0,0,0,0,,0,0,0,0,UP,239,3,0,,0,266,0,,1,1,0,,0,,1,0,,00,0,0,0,0,0,0,0,0,0,0,0,-1,,,0,0,0,0,

Thanks for your help,
Smana



2014-08-25 19:58 GMT+02:00 Malcolm Turnbull :


>
> Smain,
>
> Just a quick thought: you are doing a normal health check as well as the
> agent check, aren't you?
> They are independent by design.
> i.e. your backend config is similar to:
> weight 100 check agent-check agent-port 
>
>
>
> On 25 August 2014 17:04, Smain Kahlouch  wrote:
>
>> Hello,
>>
>> I still have the same issue with 3 servers in my backend.
>>
>> One of the servers shows less than 100%:
>> for i in {1..3};do echo "Server : 10.104.9.8$i"; telnet 10.104.9.8$i 4242
>> | grep ^[0-9].* ;done
>> Server : 10.104.9.81
>>
>> 90%
>> Connection closed by foreign host.
>> Server : 10.104.9.82
>>
>> 100%
>> Connection closed by foreign host.
>> Server : 10.104.9.83
>>
>> 100%
>> Connection closed by foreign host.
>>
>> It is in the "DRAIN" state in HAProxy:
>>
>> echo 'show stat' | socat /var/run/haproxy/socket1 stdio | grep ^bk_global
>>
>> bk_global,cache1,0,0,0,0,,0,0,0,,0,,0,0,0,0,DRAIN,0,1,0,0,0,1110,0,,1,1,1,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,00,0,-1,,,0,0,0,0,
>>
>> bk_global,cache2,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,1,1,0,0,0,1110,0,,1,1,2,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,00,0,-1,,,0,0,0,0,
>>
>> bk_global,cache3,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,1,1,0,0,0,1110,0,,1,1,3,,0,,2,0,,0,L4OK,,0,0,0,0,0,0,0,00,0,-1,,,0,0,0,0,
>>
>> bk_global,BACKEND,0,0,0,0,6557,0,0,0,0,0,,0,0,0,0,UP,2,2,0,,0,1110,0,,1,1,0,,0,,1,0,,00,0,0,0,0,0,0,0,0,0,0,0,-1,,,0,0,0,0,
>>
>> I'll try another version tomorrow,
>>
>> Regards,
>> Smana
>>
>>
>
>
> --
> Regards,
>
> Malcolm Turnbull.
>
> Loadbalancer.org Ltd.
> Phone: +44 (0)330 1604540
> http://www.loadbalancer.org/
>


Haproxy & Kubernetes, dynamic backend configuration

2015-09-18 Thread Smain Kahlouch
Hello all,

I guess this question has been posted many times but maybe there is some
new way to achieve this.

I'm currently testing Kubernetes with Calico, and I configured fixed
load balancing using Kubernetes' "NodePort" mechanism.

But I wanted to bypass that second, Kubernetes-internal load-balancing layer.
The idea would be to load-balance directly to the pods instead of a
Kubernetes service address.
To do so I found the Vulcan load balancer, which seems to be well suited for
dynamic configuration. This documentation describes how it works.

Is there a way to achieve the same behaviour: listen to the API and change
backends dynamically?

Thanks for your help,

Regards,
Smana


Re: Haproxy & Kubernetes, dynamic backend configuration

2015-09-18 Thread Smain Kahlouch
Hi Baptiste,

How do you do ? :)

Anyway, my concern is more about dynamically changing parameters, by
fetching a source (an API) at regular intervals, than about Kubernetes itself.

Actually, I only have one need currently: the list of servers that are part
of a backend.
This is the only thing I want to change dynamically; the other parameters
can be set at startup.
When the server is configured for the first time, I use Ansible to fetch
the API and retrieve the necessary information.

I look forward to the new way to achieve that.

Regards,
Smana

2015-09-18 13:31 GMT+02:00 Baptiste :

> On Fri, Sep 18, 2015 at 10:49 AM, Smain Kahlouch 
> wrote:
> > Hello all,
> >
> > I guess this question has been posted many times but maybe there is some
> > new way to achieve this.
> >
> > I'm currently testing Kubernetes with Calico, and I configured fixed
> > load balancing using Kubernetes' "NodePort" mechanism.
> >
> > But I wanted to bypass that second, Kubernetes-internal load-balancing layer.
> > The idea would be to load-balance directly to the pods instead of a
> > Kubernetes service address.
> > To do so I found the Vulcan load balancer, which seems to be well suited
> > for dynamic configuration. This documentation describes how it works.
> >
> > Is there a way to achieve the same behaviour: listen to the API and change
> > backends dynamically?
> >
> > Thanks for your help,
> >
> > Regards,
> > Smana
> >
>
>
> Hey Smaine,
>
> I'm totally lost with all your buzz keywords!
>
> There is currently no way to achieve this purpose.
> That said, we're aware of this type of requirements and are thinking
> about different methods to achieve this goal.
>
> That said, could you please list here which HAProxy parameters you
> would like to see dynamically changeable at run time?
>
> Baptiste
>


Re: Haproxy & Kubernetes, dynamic backend configuration

2015-09-18 Thread Smain Kahlouch
>
> Euh, I'm lost here, could you detail who does what exactly???


I configure the backends from the underlying configured services (running
on Kubernetes).
So at startup, I already have a list of servers for each backend.
Then this list can change, as the number of instances can be scaled easily.



2015-09-18 14:03 GMT+02:00 Baptiste :

> > How do you do ? :)
>
> I do :)
>
>
> > Anyway, my concern is more about dynamically changing parameters, by
> > fetching a source (an API) at regular intervals, than about Kubernetes itself.
>
> Ok, this is important to know :)
>
> > Actually, I only have one need currently: the list of servers that are
> > part of a backend.
> > This is the only thing I want to change dynamically; the other parameters
> > can be set at startup.
>
> ok,
>
> > When the server is configured for the first time, I use Ansible to fetch
> > the API and retrieve the necessary information.
>
> Euh, I'm lost here, could you detail who does what exactly???
>
>
> > I look forward to the new way to achieve that.
>
> So the way we're thinking of for now is using DNS SRV query types.
> That said, it implies a partial re-architecture of the way the 'backend'
> currently works in HAProxy.
>
> We'll start the discussion about the points above later in October,
> when the load generated by 1.6 release will be lower.
>
> Baptiste
>
>
>
> >
> > Regards,
> > Smana
> >
> > 2015-09-18 13:31 GMT+02:00 Baptiste :
> >>
> >> On Fri, Sep 18, 2015 at 10:49 AM, Smain Kahlouch 
> >> wrote:
> >> > Hello all,
> >> >
> >> > I guess this question has been posted many times but maybe there is
> >> > some new way to achieve this.
> >> >
> >> > I'm currently testing Kubernetes with Calico, and I configured fixed
> >> > load balancing using Kubernetes' "NodePort" mechanism.
> >> >
> >> > But I wanted to bypass that second, Kubernetes-internal load-balancing layer.
> >> > The idea would be to load-balance directly to the pods instead of a
> >> > Kubernetes service address.
> >> > To do so I found the Vulcan load balancer, which seems to be well suited
> >> > for dynamic configuration. This documentation describes how it works.
> >> >
> >> > Is there a way to achieve the same behaviour: listen to the API and change
> >> > backends dynamically?
> >> > backends dynamically ?
> >> >
> >> > Thanks for your help,
> >> >
> >> > Regards,
> >> > Smana
> >> >
> >>
> >>
> >> Hey Smaine,
> >>
> >> I'm totally lost with all your buzz keywords!
> >>
> >> There is currently no way to achieve this purpose.
> >> That said, we're aware of this type of requirements and are thinking
> >> about different methods to achieve this goal.
> >>
> >> That said, could you please list here which HAProxy parameters you
> >> would like to see dynamically changeable at run time?
> >>
> >> Baptiste
> >
> >
>


Re: Haproxy & Kubernetes, dynamic backend configuration

2015-09-18 Thread Smain Kahlouch
>
> If I may chime in here: Kubernetes supports service discovery through DNS
> SRV records for most use-cases, so the dynamic DNS support that Baptiste is
> currently working on would be a perfect fit. No special API support
> required.


Well DNS would be great but, as far as I know, Kubernetes uses DNS only for
service names, not for pods.
A pod can be seen as a server in a backend; the number of servers and their
IP addresses can change frequently.
I'll dig further...
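
For what it's worth, kube-dns does publish SRV records per named service
port, which can be checked with something like (names illustrative):

    dig +short SRV _http._tcp.myservice.default.svc.cluster.local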

Thanks,
Smana



2015-09-18 13:58 GMT+02:00 Conrad Hoffmann :

> If I may chime in here: Kubernetes supports service discovery through DNS
> SRV records for most use-cases, so the dynamic DNS support that Baptiste is
> currently working on would be a perfect fit. No special API support
> required.
>
> Cheers,
> Conrad
>
> On 09/18/2015 01:53 PM, Smain Kahlouch wrote:
> > Hi Baptiste,
> >
> > How do you do ? :)
> >
> > Anyway, my concern is more about dynamically changing parameters, by
> > fetching a source (an API) at regular intervals, than about Kubernetes itself.
> >
> > Actually, I only have one need currently: the list of servers that are
> > part of a backend.
> > This is the only thing I want to change dynamically; the other parameters
> > can be set at startup.
> > When the server is configured for the first time, I use Ansible to fetch
> > the API and retrieve the necessary information.
> >
> > I look forward to the new way to achieve that.
> >
> > Regards,
> > Smana
> >
> > 2015-09-18 13:31 GMT+02:00 Baptiste :
> >
> >> On Fri, Sep 18, 2015 at 10:49 AM, Smain Kahlouch 
> >> wrote:
> >>> Hello all,
> >>>
> >>> I guess this question has been posted many times but maybe there is
> >>> some new way to achieve this.
> >>>
> >>> I'm currently testing Kubernetes with Calico, and I configured fixed
> >>> load balancing using Kubernetes' "NodePort" mechanism.
> >>>
> >>> But I wanted to bypass that second, Kubernetes-internal load-balancing layer.
> >>> The idea would be to load-balance directly to the pods instead of a
> >>> Kubernetes service address.
> >>> To do so I found the Vulcan load balancer, which seems to be well suited
> >>> for dynamic configuration. This documentation describes how it works.
> >>>
> >>> Is there a way to achieve the same behaviour: listen to the API and
> >>> change backends dynamically?
> >>>
> >>> Thanks for your help,
> >>>
> >>> Regards,
> >>> Smana
> >>>
> >>
> >>
> >> Hey Smaine,
> >>
> >> I'm totally lost with all your buzz keywords!
> >>
> >> There is currently no way to achieve this purpose.
> >> That said, we're aware of this type of requirements and are thinking
> >> about different methods to achieve this goal.
> >>
> >> That said, could you please list here which HAProxy parameters you
> >> would like to see dynamically changeable at run time?
> >>
> >> Baptiste
> >>
> >
>
> --
> Conrad Hoffmann
> Traffic Engineer
>
> SoundCloud Ltd. | Rheinsberger Str. 76/77, 10115 Berlin, Germany
>
> Managing Director: Alexander Ljung | Incorporated in England & Wales
> with Company No. 6343600 | Local Branch Office | AG Charlottenburg |
> HRB 110657B
>