We do have some latency on etcd because of all reads going through
consensus. However, it never got too bad, and the kube-apiserver caches
mostly mask this from the rest of the cluster. Also, cluster operations
don't *really* need sub-second latency for us, so it's "subjectively okay".
We haven't yet followed the trend of splitting out a separate etcd ring
for events (but plan to eventually). etcdv3 is also supposed to make
everything even better, but I have yet to do that upgrade as well.
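For what it's worth, the events split doesn't require any etcd-side changes; it's done with a kube-apiserver flag that routes a resource prefix to a different etcd cluster. A sketch (the hostnames are placeholders, and the events ring here is hypothetical since we haven't built ours yet):

```shell
# Point /events at a dedicated etcd ring, everything else at the main one.
kube-apiserver \
  --etcd-servers=https://etcd-1:2379,https://etcd-2:2379,https://etcd-3:2379 \
  --etcd-servers-overrides=/events#https://etcd-events-1:2379,https://etcd-events-2:2379,https://etcd-events-3:2379
```

The `/events#` prefix syntax maps the events resource to the override servers, which keeps the high-churn event writes from competing with the rest of the cluster state.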

We do sometimes see kubelets fail to pick up updates to pods (mostly
relevant on termination), but we have no proof that this is related to the
setup. There was also a recent bugfix in this area, but I don't have
enough data yet to tell whether it resolved the issue.

One thing I did not realise initially is that it is absolutely vital to be
diligent about securing the etcd peer and client communication. In a
single-node setup you can get away with binding to localhost, but if you
put etcd on the network and do not require authentication, anyone who can
reach it can subvert any and all Kubernetes authorization. You probably
also don't want to reuse the Kubernetes CA here: only the kube-apiserver
needs etcd client access. For the same reason, you should never use this
etcd cluster for anything else. Run a separate cluster inside Kubernetes
instead.
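Concretely, that means running each etcd member with client and peer TLS plus client-certificate auth enforced. A sketch of the relevant flags (hostnames, IPs, and file paths are placeholders; the CA is deliberately a dedicated etcd CA, not the Kubernetes one):

```shell
# One etcd member with TLS on both client (2379) and peer (2380) traffic.
# --client-cert-auth / --peer-cert-auth reject any connection that doesn't
# present a certificate signed by the etcd CA.
etcd \
  --name infra0 \
  --listen-client-urls https://10.0.0.10:2379 \
  --advertise-client-urls https://10.0.0.10:2379 \
  --cert-file=/etc/etcd/pki/server.crt \
  --key-file=/etc/etcd/pki/server.key \
  --client-cert-auth \
  --trusted-ca-file=/etc/etcd/pki/etcd-ca.crt \
  --listen-peer-urls https://10.0.0.10:2380 \
  --initial-advertise-peer-urls https://10.0.0.10:2380 \
  --peer-cert-file=/etc/etcd/pki/peer.crt \
  --peer-key-file=/etc/etcd/pki/peer.key \
  --peer-client-cert-auth \
  --peer-trusted-ca-file=/etc/etcd/pki/etcd-ca.crt
```

The kube-apiserver then gets the only client certificate signed by that etcd CA (via its `--etcd-cafile`, `--etcd-certfile`, and `--etcd-keyfile` flags), so nothing else in or outside the cluster can talk to this etcd directly.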

/MR


On Mon, Aug 28, 2017 at 12:18 PM <m...@maglana.com> wrote:

> Thanks for sharing, Matthias! Did you encounter any pain points or
> surprises during implementation of this setup?
>
> I'm curious now about the observed performance/stability differences
> between consistent reads on/off. If anyone else has some insights on that
> matter, please do share. Thanks!
>
> Regards,
>
> Mark
>
> On Monday, August 28, 2017 at 1:44:02 AM UTC-7, Matthias Rampke wrote:
> > We have this setup, it works well. We've turned on consistent reads from
> etcd, not sure if that's strictly necessary.
> >
> >
> > /MR
> >
> >
> > On Sun, Aug 27, 2017 at 2:39 PM <ma...@maglana.com> wrote:
> > Sharing my initial thoughts on HA k8s outside the cloud:
> >
> >
> >
> > https://www.relaxdiego.com/2017/08/hakube.html
> >
> >
> >
> > --
> >
> > You received this message because you are subscribed to the Google
> Groups "Kubernetes user discussion and Q&A" group.
> >
> > To unsubscribe from this group and stop receiving emails from it, send
> an email to kubernetes-use...@googlegroups.com.
> >
> > To post to this group, send email to kubernet...@googlegroups.com.
> >
> > Visit this group at https://groups.google.com/group/kubernetes-users.
> >
> > For more options, visit https://groups.google.com/d/optout.
>

