Hi.

On 25-02-2016 16:15, Smain Kahlouch wrote:
Hi !

Sorry to bother you again with this question, but I still think it would
be a great feature to load-balance directly to pods from HAProxy :)
Is there any news on the roadmap about that?

How about DNS as mentioned below?

https://github.com/kubernetes/kubernetes/blob/v1.0.6/cluster/addons/dns/README.md
http://cbonte.github.io/haproxy-dconv/configuration-1.6.html#5.3
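
A minimal sketch of what that could look like with the HAProxy 1.6 runtime DNS
resolution linked above, assuming the cluster DNS server shown in the pod's
resolv.conf below (the backend and server names are illustrative):

####
resolvers kubedns
    # cluster DNS, as listed in the pod's resolv.conf
    nameserver dns1 172.30.0.1:53
    resolve_retries 3
    timeout retry   1s
    hold valid      10s

backend registry
    # re-resolve the service name at runtime instead of pinning an IP
    server docker-registry docker-registry.default.svc.cluster.local:5000 resolvers kubedns check
####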

####
oc rsh -c ng-socklog nginx-test-2-6em5w
cat /etc/resolv.conf
nameserver 172.30.0.1
nameserver 172.31.31.227
search nginx-test.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

ping docker-registry.default.svc.cluster.local
####

####
oc describe svc docker-registry -n default
Name:                   docker-registry
Namespace:              default
Labels:                 docker-registry=default
Selector:               docker-registry=default
Type:                   ClusterIP
IP:                     172.30.38.182
Port:                   5000-tcp        5000/TCP
Endpoints:              10.1.5.52:5000
Session Affinity:       None
No events.
####

Another option is that your startup script adds the A record into SkyDNS:

https://github.com/skynetservices/skydns

But I don't see the benefit of connecting directly to the endpoint, because the endpoint address changes whenever the pod is recreated!

BR Aleks

Regards,
Smana

2015-09-22 20:21 GMT+02:00 Joseph Lynch <joe.e.ly...@gmail.com>:

Disclaimer: I help maintain SmartStack and this is a shameless plug

You can also achieve a fast and reliable dynamic backend system by
using something off the shelf like airbnb/Yelp SmartStack
(http://nerds.airbnb.com/smartstack-service-discovery-cloud/).

Basically, Nerve runs on every machine healthchecking services, and once
they pass their healthchecks they get registered in a centralized,
pluggable registration system (ZooKeeper is the default, but DNS is
another option, and we're working on DNS SRV support). Then Synapse runs
on every client machine and handles re-configuring HAProxy for you
automatically, taking care of details like doing socket updates vs.
reloading HAProxy correctly. To make this truly reliable on some systems
you have to do some tricks to gracefully reload HAProxy when picking up
new backends; search for "zero downtime HAProxy reloads" to see how we
solved it, but there are lots of solutions.
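
The core of the graceful-reload step is HAProxy's real `-sf` option: the new
process tells the old PIDs to finish their existing connections and exit. A
small sketch of building that command (the config and pid-file paths are
illustrative):

```python
# Sketch: build the argv for a graceful HAProxy reload. The -sf flag is a
# real HAProxy option; the paths here are illustrative placeholders.
def reload_command(cfg_path, pid_file_contents):
    """Return the argv that reloads HAProxy, handing over from old PIDs."""
    old_pids = pid_file_contents.split()
    cmd = ["haproxy", "-f", cfg_path, "-p", "/var/run/haproxy.pid"]
    if old_pids:
        # -sf: the new process signals the old ones to stop accepting new
        # connections and exit once existing ones are done.
        cmd += ["-sf"] + old_pids
    return cmd

print(reload_command("/etc/haproxy/haproxy.cfg", "1234 5678"))
```

The "tricks" mentioned above are about the brief window during the handover
where a SYN can land on a closing socket; the command itself is only half the
story.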

We use this stack at Yelp to achieve the same kind of dynamic load
balancing you're talking about, except instead of Kubernetes we use
Mesos and Marathon. The one real trick here is to use a link-local IP
address, run the HAProxy/Synapse instances on the machines themselves,
and have containers talk over the link-local IP address. I haven't
tried it with Kubernetes, but given my understanding you'd end up with
the same problem.

We plan to automatically support whichever DNS or stats socket based
solution the HAProxy devs go with for dynamic backend changes.

-Joey

On Fri, Sep 18, 2015 at 8:34 AM, Eduard Martinescu
<emartine...@salsalabs.com> wrote:
I have implemented something similar to allow us to dynamically
load-balance between multiple backends that are all joined to each other
as part of a Hazelcast cluster. All of this runs in an AWS VPC, with
autoscaling groups to spin members of the cluster up and down based on
load, etc.

What we ended up doing is writing custom code that attaches to the
Hazelcast cluster as a client and periodically queries the cluster for
the current list of servers and their IP addresses. The code would then
rewrite the HAProxy configuration, filling in the correct backend list.
Then, via a shell call (sadly, Java can't write directly to the server
over a Unix domain socket), it would tell HAProxy to restart gracefully.
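
A hedged sketch of that rewrite step (in Python rather than Ed's Java, with
made-up names): render the backend block from the discovered server list and
swap the config file in atomically, so HAProxy never reads a half-written file.

```python
import os
import tempfile

# Illustrative only: render an HAProxy backend block from a server list.
def render_backend(name, servers):
    """servers is a list of (host, port) tuples discovered from the cluster."""
    lines = ["backend %s" % name]
    lines += ["    server %s %s:%d check" % (h.replace(".", "-"), h, p)
              for h, p in servers]
    return "\n".join(lines) + "\n"

def write_atomically(path, contents):
    """Write to a temp file in the same directory, then rename over the target."""
    d = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=d)
    with os.fdopen(fd, "w") as f:
        f.write(contents)
    os.replace(tmp, path)  # atomic on POSIX

cfg = render_backend("hazelcast", [("10.0.0.5", 5701), ("10.0.0.6", 5701)])
print(cfg)
```

After the rename, the reload described above picks up the new backend list.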

In our use case, this works great, as we don't have long-running TCP
connections (these servers typically serve REST API calls or static HTML
content with no keep-alive).

I'm also open to suggestions on how this could be improved, especially
with what 1.6 may make possible.

Ed

________________________________
Eduard Martinescu | (585) 708-9685


On Fri, Sep 18, 2015 at 9:21 AM, Baptiste <bed...@gmail.com> wrote:

On Fri, Sep 18, 2015 at 3:18 PM, Smain Kahlouch <smain...@gmail.com> wrote:

If I may chime in here: Kubernetes supports service discovery through
DNS SRV records for most use-cases, so the dynamic DNS support that
Baptiste is currently working on would be a perfect fit. No special API
support required.


Well, DNS would be great but, as far as I know, Kubernetes uses DNS only
for service names, not for pods.
A pod can be seen as a server in a backend; the number of servers and
their IP addresses can change frequently.
I'll dig further...

Thanks,
Smana


That's usually the purpose of DNS SRV records ;)

Baptiste
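
For illustration, this is the shape of the SRV records kube-dns publishes per
service port; the name follows the `_port-name._protocol.service.namespace.svc.cluster.local`
scheme, and each answer carries a priority, weight, port, and target, which is
exactly what a load balancer needs to build a server list. The TTL and target
shown here are illustrative, modeled on the docker-registry service above:

####
_5000-tcp._tcp.docker-registry.default.svc.cluster.local. 30 IN SRV 10 100 5000 docker-registry.default.svc.cluster.local.
####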





