Disclaimer: I help maintain SmartStack and this is a shameless plug

You can also achieve a fast and reliable dynamic backend system by
using something off the shelf like Airbnb/Yelp's SmartStack
(http://nerds.airbnb.com/smartstack-service-discovery-cloud/).

Basically, nerve runs on every machine health-checking services; once
a service passes its health checks, it gets registered in a
centralized registration system, which is pluggable (ZooKeeper is the
default, DNS is another option, and we're working on DNS SRV
support). Then synapse runs on every client machine and handles
re-configuring HAProxy for you automatically, including details like
choosing correctly between stats-socket updates and full HAProxy
reloads. To make this truly reliable on some systems you have to do
some tricks to gracefully reload HAProxy when picking up new
backends; search for "zero downtime haproxy reloads" to see how we
solved it, but there are lots of solutions.
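
To give a feel for it, a minimal nerve service definition looks
roughly like this (the hosts, paths, and thresholds here are made up
for illustration; check the nerve README for the exact schema):

    {
      "instance_id": "host1",
      "services": {
        "my_service": {
          "host": "10.0.0.1",
          "port": 3000,
          "reporter_type": "zookeeper",
          "zk_hosts": ["zk1:2181"],
          "zk_path": "/nerve/services/my_service",
          "check_interval": 2,
          "checks": [
            {"type": "http", "uri": "/health", "timeout": 0.2,
             "rise": 3, "fall": 2}
          ]
        }
      }
    }

The graceful reload itself is just HAProxy's -sf flag, i.e. start a
new process that tells the old one to stop listening and finish its
existing connections:

    haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid \
        -sf $(cat /var/run/haproxy.pid)

The tricks are all about not dropping SYNs in the small window where
the two processes swap the listening sockets.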

We use this stack at Yelp to achieve the same kind of dynamic load
balancing you're talking about, except instead of Kubernetes we use
Mesos and Marathon. The one real trick here is to use a link-local IP
address: run the HAProxy/synapse instances on the machines
themselves, but have containers talk to them over the link-local IP
address. I haven't tried it with Kubernetes, but given my
understanding you'd end up with the same problem.

We plan to automatically support whichever DNS- or stats-socket-based
solution the HAProxy devs go with for dynamic backend changes.

-Joey

On Fri, Sep 18, 2015 at 8:34 AM, Eduard Martinescu
<emartine...@salsalabs.com> wrote:
> I have implemented something similar to allow us to dynamically
> load-balance between multiple backends that are all joined to each
> other as part of a Hazelcast cluster, all of which runs in an AWS
> VPC, with autoscaling groups to control spin-up and spin-down of new
> cluster members based on load, etc.
>
> What we ended up doing is writing custom code that attaches to the
> Hazelcast cluster as a client and periodically queries the cluster
> for the current list of servers and their IP addresses. The code
> then rewrites the HAProxy configuration, filling in the correct
> backend list, and tells HAProxy to restart gracefully via a shell
> call (sadly, Java can't do Unix domain sockets, so we can't write to
> HAProxy's socket directly).
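>
> Stripped down, the loop looks something like this (a sketch, not our
> actual code; renderConfig and the reload script are stand-ins, while
> the Hazelcast calls are the standard 3.x client API):
>
>     import com.hazelcast.client.HazelcastClient;
>     import com.hazelcast.client.config.ClientConfig;
>     import com.hazelcast.core.HazelcastInstance;
>     import com.hazelcast.core.Member;
>     import java.net.InetSocketAddress;
>     import java.nio.file.Files;
>     import java.nio.file.Paths;
>
>     public class HaproxySyncer {
>         public static void main(String[] args) throws Exception {
>             ClientConfig config = new ClientConfig();
>             config.getNetworkConfig().addAddress("10.0.0.1:5701");
>             HazelcastInstance client =
>                 HazelcastClient.newHazelcastClient(config);
>             while (true) {
>                 // Ask the cluster for its current membership.
>                 StringBuilder servers = new StringBuilder();
>                 for (Member m : client.getCluster().getMembers()) {
>                     InetSocketAddress a = m.getSocketAddress();
>                     servers.append(String.format(
>                         "    server %s %s:%d check%n",
>                         a.getHostString(), a.getHostString(),
>                         a.getPort()));
>                 }
>                 // Rewrite the HAProxy config with the fresh list...
>                 Files.write(Paths.get("/etc/haproxy/haproxy.cfg"),
>                     renderConfig(servers.toString()).getBytes());
>                 // ...then shell out for the graceful reload.
>                 new ProcessBuilder("/usr/local/bin/reload_haproxy.sh")
>                     .inheritIO().start().waitFor();
>                 Thread.sleep(10000);
>             }
>         }
>
>         // Stand-in: splice the server lines into a config template.
>         static String renderConfig(String serverLines) {
>             return "backend hz_cluster\n    mode http\n" + serverLines;
>         }
>     }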
>
> In our use case, this works great, as we don't have long-running TCP
> connections (these servers typically serve REST API calls or static
> HTML content with no keep-alive).
>
> I'm also open to suggestions on how this could be improved,
> especially with what 1.6 might make possible.
>
> Ed
>
> On Fri, Sep 18, 2015 at 9:21 AM, Baptiste <bed...@gmail.com> wrote:
>>
>> On Fri, Sep 18, 2015 at 3:18 PM, Smain Kahlouch <smain...@gmail.com> wrote:
>> >> If I may chime in here: Kubernetes supports service discovery
>> >> through DNS SRV records for most use-cases, so the dynamic DNS
>> >> support that Baptiste is currently working on would be a perfect
>> >> fit. No special API support required.
>> >
>> > Well, DNS would be great but, as far as I know, Kubernetes uses
>> > DNS only for service names, not for pods. A pod can be seen as a
>> > server in a backend, and the number of servers and their IP
>> > addresses can change frequently.
>> > I'll dig further...
>> >
>> > Thanks,
>> > Smana
>>
>>
>> That's usually the purpose of DNS SRV records ;)
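>>
>> For illustration (names and TTLs made up), each instance shows up
>> as one SRV answer carrying both a port and a target, which is
>> exactly the server list a backend needs:
>>
>>   _http._tcp.my-svc.example.com. 30 IN SRV 10 100 8080 pod-1.example.com.
>>   _http._tcp.my-svc.example.com. 30 IN SRV 10 100 8080 pod-2.example.com.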
>>
>> Baptiste
>>
>
