(Sorry for the new thread, was not subscribed before.)

The initial examples mention the classic lb-->web-tier use case. Is the intended common way to satisfy this to "stick the DNS record in the LB config" or to "write a template script that fetches all of the services and updates the LB"? (My reading is that the answer is both.)
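To make the second option concrete, here's a rough sketch of what such a template script could look like: resolve the service's DNS name and render an nginx-style upstream block from the answers. The service name, upstream name, and output format are all invented for illustration; a real script would also write the file and reload the LB.

```python
# Sketch of the "template script" approach: resolve whatever A records
# the naming service advertises, then render an LB config fragment.
import socket

def resolve_backends(service_name, port=80):
    """Return the sorted set of IPv4 addresses DNS currently advertises."""
    infos = socket.getaddrinfo(service_name, port,
                               socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

def render_upstream(name, addresses, port=80):
    """Render an nginx 'upstream' block from a list of backend addresses."""
    lines = [f"upstream {name} {{"]
    lines += [f"    server {addr}:{port};" for addr in addresses]
    lines.append("}")
    return "\n".join(lines)

# Demo against localhost so the sketch runs anywhere; in practice this
# would be the service's name in the naming service.
backends = resolve_backends("localhost")
print(render_upstream("web", backends))
```

Run periodically (or on change notification), this is the moral equivalent of consul-template-style tooling.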

Is there a JSON-not-DNS way to get a list of all of the containers in a service? For example, to shove them all into a monitoring tool's dashboard. Sorry if this should be obvious, but if my service has a hundred containers, that won't fit conveniently in a single DNS TXT response, right? (This would fit naturally with whatever API is used to create/mutate services? Or is there no CRUD API because services are implicitly created the first time a container is tagged with one?)
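Back-of-the-envelope on the "hundred containers" point, using A records rather than TXT (a TXT-based listing would only be bigger). The per-record size assumes standard DNS wire format with name compression; the question-section size is an estimate that depends on the actual name.

```python
# Rough wire-size estimate for a DNS answer section with name compression:
# each A record costs ~2 (compressed name pointer) + 2 (type) + 2 (class)
# + 4 (TTL) + 2 (rdlength) + 4 (IPv4 rdata) = 16 bytes.
BYTES_PER_A_RECORD = 16
HEADER_AND_QUESTION = 12 + 30   # 12-byte header + question (~30 bytes, name-dependent)
UDP_LIMIT = 512                 # classic RFC 1035 UDP payload limit

def response_size(n_records):
    """Approximate total response size for n A records."""
    return HEADER_AND_QUESTION + n_records * BYTES_PER_A_RECORD

print(response_size(100))               # ~1642 bytes
print(response_size(100) > UDP_LIMIT)   # True: over the 512-byte UDP limit
```

So a hundred-container service blows past classic UDP responses and forces truncation + TCP retry (or EDNS0), which is part of why a JSON listing API seems like a natural companion.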

I totally get not having the naming service *be* a VRRP-equipped load balancer, and systems that go down that road feel terribly complex. But I'm not sure that means there isn't low-hanging fruit around health checks. For example, I'd like to be able to keep a container running but put it in a "maintenance mode" (no longer advertised as providing the service) while I debug an issue on the node. I'd also like to be able to "drain" a container from a service (leave it running but no longer advertised, so it can gracefully be removed). A load balancer is presumably already able to handle a node going down, but if the topology looks more like fan->out->to->a->bunch->of->internal->services, I'm less confident that all of them get their retry & reconnect code correct. (I'm conflating state transitions and health checks a bit, because I can fake a binary state by just having a health check that fails, but they could also be first-class concerns.) I think "are the right things in the service right now" can be a useful problem to solve without going all the way down the HA rabbit hole.
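To illustrate the "first-class states" version of this, here's a tiny sketch. The state names and the rules attached to them are entirely made up by me, not a description of any existing API; the point is just that "advertise in DNS?" and "healthy?" can be separate questions.

```python
# Hypothetical instance states, instead of faking state transitions
# with a deliberately failing health check.
from enum import Enum

class ServiceState(Enum):
    UP = "up"                    # advertise in DNS, serve traffic
    MAINTENANCE = "maintenance"  # keep running, stop advertising (debug on node)
    DRAINING = "draining"        # stop advertising, finish in-flight work, then remove

def should_advertise(state):
    """Only UP instances would appear in the naming service's answers."""
    return state is ServiceState.UP

def health_status(state):
    """HTTP status a health endpoint might return in each state."""
    return 200 if state is ServiceState.UP else 503

print(should_advertise(ServiceState.MAINTENANCE))  # False
print(health_status(ServiceState.DRAINING))        # 503
```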

I know we are talking about a "Naming Service", but I'm not sure services should have names instead of UUIDs. I think names will inevitably lead to people cramming in metadata, and over time all service names will look like "CLONE-CLONE-CLONE-myapp-TEST-v12345". This isn't just an aesthetic issue: I think it would be really useful if this service could be a building block for doing canary deploys (and for telling my monitoring/alerting systems about them). I really like (on paper) the Kubernetes label+selector approach, and I think that team has mentioned labels specifically as being born out of internal scars. I know that with DNS being the desired (and very useful!) protocol, there might not be a good way to do labels. My initial reaction to trying to embed some sort of selector queries in DNS queries is that it is probably a terrifying can of worms. But without metadata I think there will be a thousand paper cuts later.
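For anyone unfamiliar with the label+selector idea, here is the gist in a few lines: services are keyed by UUID, metadata lives in a labels map, and queries are equality matches over labels (the "app"/"track" label names below are just my invented example, not anything this project has proposed).

```python
# Minimal Kubernetes-style label+selector matching over UUID-keyed services.
import uuid

services = {
    str(uuid.uuid4()): {"app": "myapp", "track": "stable", "version": "v12344"},
    str(uuid.uuid4()): {"app": "myapp", "track": "canary", "version": "v12345"},
}

def select(services, selector):
    """Return service IDs whose labels satisfy every key=value in selector."""
    return [sid for sid, labels in services.items()
            if all(labels.get(k) == v for k, v in selector.items())]

# e.g. point alerting at only the canary:
print(select(services, {"app": "myapp", "track": "canary"}))
```

This is exactly the query shape ("give me the canary track of myapp") that is awkward to express if the metadata is mangled into the name, and probably impossible to express in a plain DNS question.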


-------------------------------------------
smartos-discuss
Archives: https://www.listbox.com/member/archive/184463/=now
RSS Feed: https://www.listbox.com/member/archive/rss/184463/25769125-55cfbc00
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=25769125&id_secret=25769125-7688e9fb
Powered by Listbox: http://www.listbox.com
