Is it planned to support backend configuration through the stats socket?

2016-04-13 16:09 GMT+02:00 Smain Kahlouch <smain...@gmail.com>:

> Ok, thank you,
> I'll have a look at SmartStack.
>
> 2016-04-13 16:03 GMT+02:00 B. Heath Robinson <he...@midnighthour.org>:
>
>> SmartStack was mentioned earlier in the thread.  It does a VERY good job
>> of doing this.  It rewrites the haproxy configuration and performs a reload
>> whenever a polling service on each instance detects changes in a source
>> database.  The canonical DB is zookeeper.
>>
>> We have been using this in production for over 2 years and have had very
>> little trouble with it.  In the next year we will be moving to docker
>> containers and plan to make some changes to our configuration and move
>> forward with SmartStack.
>>
>> That said, your polling application could write instructions to the stats
>> socket; however, the socket currently does not allow adding/removing
>> servers, only enabling/disabling them.  See
>> https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.2 for
>> more info.  BTW, this section is missing from the 1.6 manual.  You might
>> also see https://github.com/flores/haproxyctl.
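>>
>> For example, here is a minimal Python sketch of driving the stats socket,
>> assuming a "stats socket /var/run/haproxy.sock level admin" line in
>> haproxy.cfg and a hypothetical backend "app" with a server "web1":
>>
>> import socket
>>
>> def haproxy_command(cmd, sock_path="/var/run/haproxy.sock"):
>>     # Send one command over the UNIX stats socket and return the reply.
>>     s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
>>     s.connect(sock_path)
>>     s.sendall((cmd + "\n").encode())
>>     reply = s.recv(4096).decode()
>>     s.close()
>>     return reply
>>
>> # Only enabling/disabling existing servers is possible here; adding or
>> # removing servers is not supported through the socket at this point.
>> print(haproxy_command("disable server app/web1"))
>> print(haproxy_command("enable server app/web1"))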
>>
>> On Wed, Apr 13, 2016 at 7:42 AM Smain Kahlouch <smain...@gmail.com>
>> wrote:
>>
>>> Sorry to answer this thread so late :p
>>>
>>>
>>> due to the fact that this will be changed when the pod is recreated!
>>>>
>>>
>>> Alexis, as I mentioned earlier, the idea is to detect these changes by
>>> polling the API on a regular basis and changing the backend configuration
>>> automatically.
>>> Using the DNS (addon) is not what I would like to achieve, because it
>>> still uses the Kubernetes internal load-balancing system.
>>> Furthermore, it seems easier to me to use the NodePort
>>> <http://kubernetes.io/docs/user-guide/services/#type-nodeport>. This is
>>> what I use today.
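>>>
>>> To illustrate the polling idea, here is a rough Python sketch, assuming a
>>> reachable API server at https://k8s-master:6443 (made up), a
>>> service-account token, and a hypothetical service "myapp" in the
>>> "default" namespace:
>>>
>>> import time
>>> import requests
>>>
>>> API = "https://k8s-master:6443"        # assumed API server address
>>> TOKEN = "REPLACE_WITH_TOKEN"           # assumed service-account token
>>> HEADERS = {"Authorization": "Bearer " + TOKEN}
>>>
>>> def pod_addresses(namespace="default", service="myapp"):
>>>     # The endpoints object lists the pod ip:port pairs behind a service.
>>>     url = "%s/api/v1/namespaces/%s/endpoints/%s" % (API, namespace, service)
>>>     ep = requests.get(url, headers=HEADERS, verify=False).json()
>>>     addrs = []
>>>     for subset in ep.get("subsets", []):
>>>         port = subset["ports"][0]["port"]
>>>         for addr in subset.get("addresses", []):
>>>             addrs.append((addr["ip"], port))
>>>     return sorted(addrs)
>>>
>>> previous = None
>>> while True:
>>>     current = pod_addresses()
>>>     if current != previous:
>>>         # Here the haproxy backend would be rewritten and reloaded.
>>>         print("endpoints changed:", current)
>>>         previous = current
>>>     time.sleep(10)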
>>>
>>> Nginx Plus now has such a feature:
>>>
>>>> With APIs – This method uses the NGINX Plus on-the-fly reconfiguration
>>>> API
>>>> <https://www.nginx.com/resources/admin-guide/load-balancer/#upstream_conf>
>>>> to add and remove entries for Kubernetes pods in the NGINX Plus
>>>> configuration, and the Kubernetes API to retrieve the IP addresses of the
>>>> pods. This method requires us to write some code, and we won’t discuss it
>>>> in depth here. For details, watch Kelsey Hightower’s webinar, Bringing
>>>> Kubernetes to the Edge with NGINX Plus
>>>> <https://www.nginx.com/resources/webinars/bringing-kubernetes-to-the-edge-with-nginx-plus/>,
>>>> in which he explores the APIs and creates an application that utilizes 
>>>> them.
>>>>
>>>
>>> Please let me know if you are considering this feature in the future.
>>> Alternatively, perhaps you can guide me so that I can propose a plugin.
>>> Python is actually the language I play with, but maybe that's not possible.
>>>
>>> Regards,
>>> Smana
>>>
>>> 2016-02-25 18:29 GMT+01:00 Aleksandar Lazic <al-hapr...@none.at>:
>>>
>>>> Hi.
>>>>
>>>> On 25-02-2016 16:15, Smain Kahlouch wrote:
>>>>
>>>>> Hi !
>>>>>
>>>>> Sorry to bother you again with this question, but I still think it
>>>>> would be a great feature to load balance directly to pods from haproxy :)
>>>>> Is there any news on the roadmap about that?
>>>>>
>>>>
>>>> How about DNS as mentioned below?
>>>>
>>>>
>>>> https://github.com/kubernetes/kubernetes/blob/v1.0.6/cluster/addons/dns/README.md
>>>> http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#5.3
>>>>
>>>> ### oc rsh -c ng-socklog nginx-test-2-6em5w
>>>> cat /etc/resolv.conf
>>>> nameserver 172.30.0.1
>>>> nameserver 172.31.31.227
>>>> search nginx-test.svc.cluster.local svc.cluster.local cluster.local
>>>> options ndots:5
>>>>
>>>> ping docker-registry.default.svc.cluster.local
>>>> ####
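>>>>
>>>> To see what that nameserver answers for a service name, here is a small
>>>> Python check, assuming the dnspython 2.x package and that 172.30.0.1
>>>> (from the resolv.conf above) is the cluster DNS:
>>>>
>>>> ####
>>>> import dns.resolver
>>>>
>>>> resolver = dns.resolver.Resolver(configure=False)
>>>> resolver.nameservers = ["172.30.0.1"]
>>>>
>>>> answer = resolver.resolve("docker-registry.default.svc.cluster.local", "A")
>>>> for record in answer:
>>>>     # For a ClusterIP service this returns the service IP, not the pod IPs.
>>>>     print(record.address)
>>>> ####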
>>>>
>>>> ####
>>>> oc describe svc docker-registry -n default
>>>> Name:                   docker-registry
>>>> Namespace:              default
>>>> Labels:                 docker-registry=default
>>>> Selector:               docker-registry=default
>>>> Type:                   ClusterIP
>>>> IP:                     172.30.38.182
>>>> Port:                   5000-tcp        5000/TCP
>>>> Endpoints:              10.1.5.52:5000
>>>> Session Affinity:       None
>>>> No events.
>>>> ####
>>>>
>>>> Another option is that your startup script adds the A record into skydns
>>>>
>>>> https://github.com/skynetservices/skydns
>>>>
>>>> But I don't see the benefit of connecting directly to the endpoint, due to
>>>> the fact that this will be changed when the pod is recreated!
>>>>
>>>> BR Aleks
>>>>
>>>> Regards,
>>>>> Smana
>>>>>
>>>>> 2015-09-22 20:21 GMT+02:00 Joseph Lynch <joe.e.ly...@gmail.com>:
>>>>>
>>>>> Disclaimer: I help maintain SmartStack and this is a shameless plug
>>>>>>
>>>>>> You can also achieve a fast and reliable dynamic backend system by
>>>>>> using something off the shelf like airbnb/Yelp SmartStack
>>>>>> (http://nerds.airbnb.com/smartstack-service-discovery-cloud/).
>>>>>>
>>>>>> Basically, there is nerve, which runs on every machine health-checking
>>>>>> services; once they pass health checks they get registered in a
>>>>>> centralized registration system, which is pluggable (zookeeper is the
>>>>>> default, but DNS is another option, and we're working on DNS SRV
>>>>>> support). Then there is synapse, which runs on every client machine and
>>>>>> handles re-configuring HAProxy for you automatically, handling details
>>>>>> like doing socket updates vs reloading HAProxy correctly. To make this
>>>>>> truly reliable on some systems you have to do some tricks to gracefully
>>>>>> reload HAProxy so it picks up new backends; search for zero-downtime
>>>>>> haproxy reloads to see how we solved it, but there are lots of
>>>>>> solutions.
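>>>>>>
>>>>>> A rough sketch of the nerve-style registration loop described above,
>>>>>> assuming the kazoo ZooKeeper client and a hypothetical local /health
>>>>>> endpoint on port 8080 (hosts and paths are made up):
>>>>>>
>>>>>> import json
>>>>>> import socket
>>>>>> import time
>>>>>>
>>>>>> import requests
>>>>>> from kazoo.client import KazooClient
>>>>>>
>>>>>> ZK_HOSTS = "zk1:2181,zk2:2181"                # assumed
>>>>>> SERVICE_PATH = "/services/myapp/instances"    # assumed
>>>>>> INSTANCE = {"host": socket.gethostname(), "port": 8080}
>>>>>>
>>>>>> zk = KazooClient(hosts=ZK_HOSTS)
>>>>>> zk.start()
>>>>>> zk.ensure_path(SERVICE_PATH)
>>>>>>
>>>>>> node = None
>>>>>> while True:
>>>>>>     try:
>>>>>>         healthy = requests.get("http://127.0.0.1:8080/health", timeout=1).ok
>>>>>>     except requests.RequestException:
>>>>>>         healthy = False
>>>>>>     if healthy and node is None:
>>>>>>         # Ephemeral node disappears automatically if this process dies.
>>>>>>         node = zk.create(SERVICE_PATH + "/instance_",
>>>>>>                          json.dumps(INSTANCE).encode(),
>>>>>>                          ephemeral=True, sequence=True)
>>>>>>     elif not healthy and node is not None:
>>>>>>         zk.delete(node)
>>>>>>         node = None
>>>>>>     time.sleep(5)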
>>>>>>
>>>>>> We use this stack at Yelp to achieve the same kind of dynamic load
>>>>>> balancing you're talking about, except that instead of kubernetes we
>>>>>> use mesos and marathon. The one real trick here is to use a link-local
>>>>>> IP address and run the HAProxy/Synapse instances on the machines
>>>>>> themselves, but have containers talk over the link-local IP address. I
>>>>>> haven't tried it with kubernetes, but given my understanding you'd end
>>>>>> up with the same problem.
>>>>>>
>>>>>> We plan to automatically support whichever DNS or stats socket based
>>>>>> solution the HAProxy devs go with for dynamic backend changes.
>>>>>>
>>>>>> -Joey
>>>>>>
>>>>>> On Fri, Sep 18, 2015 at 8:34 AM, Eduard Martinescu
>>>>>> <emartine...@salsalabs.com> wrote:
>>>>>>
>>>>>>> I have implemented something similar to allow us to dynamically
>>>>>>> load-balance between multiple backends that are all joined to each
>>>>>>> other as part of a Hazelcast cluster.  All of which is running in an
>>>>>>> AWS VPC, with autoscaling groups to control spin up and down of new
>>>>>>> members of the cluster based on load, etc.
>>>>>>>
>>>>>>> What we ended up doing is writing custom code that attached to the
>>>>>>> hazelcast cluster as a client, and periodically queried the cluster
>>>>>>> for the current list of servers and their IP addresses.  The code
>>>>>>> would then rewrite the HAProxy configuration, filling in the correct
>>>>>>> backend list.  Then, via a shell call (sadly, Java can't do Unix
>>>>>>> domain sockets to write directly to the server), it would tell
>>>>>>> HAProxy to restart gracefully.
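>>>>>>>
>>>>>>> For illustration, here is a rough Python version of that
>>>>>>> rewrite-and-reload step, assuming a hypothetical get_cluster_members()
>>>>>>> helper around the Hazelcast client and a static base configuration
>>>>>>> that the rendered backend is appended to:
>>>>>>>
>>>>>>> import subprocess
>>>>>>>
>>>>>>> BACKEND_TEMPLATE = """
>>>>>>> backend hazelcast_nodes
>>>>>>>     balance roundrobin
>>>>>>> {servers}
>>>>>>> """
>>>>>>>
>>>>>>> def render_backend(members):
>>>>>>>     # members is a list of (ip, port) tuples from the cluster client.
>>>>>>>     lines = ["    server node%d %s:%d check" % (i, ip, port)
>>>>>>>              for i, (ip, port) in enumerate(members)]
>>>>>>>     return BACKEND_TEMPLATE.format(servers="\n".join(lines))
>>>>>>>
>>>>>>> def reload_haproxy(cfg="/etc/haproxy/haproxy.cfg"):
>>>>>>>     try:
>>>>>>>         old_pids = subprocess.check_output(["pidof", "haproxy"]).split()
>>>>>>>     except subprocess.CalledProcessError:
>>>>>>>         old_pids = []
>>>>>>>     cmd = ["haproxy", "-f", cfg]
>>>>>>>     if old_pids:
>>>>>>>         # -sf asks the old processes to finish their connections and exit.
>>>>>>>         cmd += ["-sf"] + [p.decode() for p in old_pids]
>>>>>>>     subprocess.check_call(cmd)
>>>>>>>
>>>>>>> with open("/etc/haproxy/haproxy.base") as f:   # assumed static base config
>>>>>>>     base = f.read()
>>>>>>> members = get_cluster_members()                # hypothetical Hazelcast call
>>>>>>> with open("/etc/haproxy/haproxy.cfg", "w") as f:
>>>>>>>     f.write(base + render_backend(members))
>>>>>>> reload_haproxy()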
>>>>>>>
>>>>>>> In our use case, this works great, as we don't have long-running TCP
>>>>>>> connections (these servers typically serve REST API calls or static
>>>>>>> HTML content with no keep-alive.)
>>>>>>>
>>>>>>> I'm also open to suggestions on how this could be improved too,
>>>>>>> especially with 1.6 possibly.
>>>>>>>
>>>>>>> Ed
>>>>>>>
>>>>>>> ________________________________
>>>>>>> ✉ Eduard Martinescu | ✆ (585) 708-9685 [1] | - ignite action. fuel change.
>>>>>>
>>>>>>
>>>>>>> On Fri, Sep 18, 2015 at 9:21 AM, Baptiste <bed...@gmail.com> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Sep 18, 2015 at 3:18 PM, Smain Kahlouch <smain...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>>> If I may chime in here: Kubernetes supports service discovery
>>>>>>>>>> through DNS SRV records for most use-cases, so the dynamic DNS
>>>>>>>>>> support that Baptiste is currently working on would be a perfect
>>>>>>>>>> fit. No special API support required.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Well, DNS would be great but, as far as I know, Kubernetes uses DNS
>>>>>>>>> only for service names, not for pods.
>>>>>>>>> A pod can be seen as a server in a backend; the number of servers
>>>>>>>>> and their IP addresses can change frequently.
>>>>>>>>> I'll dig further...
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Smana
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> That's usually the purpose of DNS SRV records ;)
>>>>>>>>
>>>>>>>> Baptiste
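>>>>>>>>
>>>>>>>> For instance, with a headless service the SRV records carry one
>>>>>>>> port/target pair per pod.  A quick check, assuming the dnspython 2.x
>>>>>>>> package, a cluster DNS at 172.30.0.1 (made up here), and a
>>>>>>>> hypothetical port named "http" on a service "myapp" in the "default"
>>>>>>>> namespace:
>>>>>>>>
>>>>>>>> import dns.resolver
>>>>>>>>
>>>>>>>> resolver = dns.resolver.Resolver(configure=False)
>>>>>>>> resolver.nameservers = ["172.30.0.1"]   # assumed cluster DNS address
>>>>>>>>
>>>>>>>> srv = resolver.resolve("_http._tcp.myapp.default.svc.cluster.local", "SRV")
>>>>>>>> for record in srv:
>>>>>>>>     # Each answer carries a port and a target name resolving to one pod.
>>>>>>>>     print(record.port, record.target)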
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>
>>>>>
>>>>> Links:
>>>>> ------
>>>>> [1] tel:%28585%29%20708-9685
>>>>>
>>>>
>>>
>
