We at Oath (Yahoo) manage a single large k8s cluster per region/colo. Overall
we have six clusters across six regions/colos around the world for high
availability and uptime.

The shared cluster runs critical applications such as Yahoo Sports and Yahoo
Finance, and supports a Media organization with 500+ dev engineers. Every
team within the Media org -- say, the Sports team -- has its own namespace,
and a resource quota is assigned to that namespace. The Sports team deploys
all of its applications into its namespace, and all Sports applications share
the resources (CPU, memory) assigned to it. They have an HPA defined, which
can scale their pods up and down with event traffic while still staying
within the namespace quota. Similarly, the Finance team within the Media org
has its own namespace. With this shared-cluster model we provide soft
multi-tenancy, and importantly the nature of all Media applications is the
same: serving content and ads.
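As a rough sketch of that pattern (the namespace name, quota figures, and
deployment name below are illustrative examples, not our actual values):

```yaml
# Namespace-level quota: all Sports workloads share this CPU/memory budget.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: sports-quota
  namespace: sports          # hypothetical namespace name
spec:
  hard:
    requests.cpu: "200"
    requests.memory: 400Gi
    limits.cpu: "400"
    limits.memory: 800Gi
---
# HPA: scales pods with event load, but can never push the namespace
# past the quota above -- pods that would exceed it simply don't schedule.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: sports-web
  namespace: sports
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sports-web         # hypothetical deployment name
  minReplicas: 4
  maxReplicas: 40
  targetCPUUtilizationPercentage: 70
```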



Our monitoring solution (Prometheus and Yamas) provides a detailed breakdown
of resource usage per namespace, so we have the per-namespace usage handy to
bill each team if needed.
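For the per-namespace breakdown, queries along these lines (assuming the
standard cAdvisor container metrics that Prometheus scrapes from the
kubelets) give usage by namespace:

```promql
# CPU cores consumed per namespace, averaged over 5 minutes
sum by (namespace) (rate(container_cpu_usage_seconds_total[5m]))

# Working-set memory per namespace
sum by (namespace) (container_memory_working_set_bytes)
```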

The cluster admin has the ability to adjust a namespace quota (if needed)
based on actual usage, and has clear visibility into overall cluster usage.



https://www.youtube.com/watch?v=GxH1-sFGMJ8&t=845s has a good amount of
detail about our k8s deployment.


On Mon, Jan 8, 2018 at 7:50 AM, <k1.hedayat...@gmail.com> wrote:

> On Sunday, January 7, 2018 at 3:29:37 AM UTC+3:30, dax....@gmail.com
> wrote:
> > My manager is starting to look into moving us off Azure Web App into
> some kind of container management system, either k8s or service fabric
> (we're *mostly* a MS shop but not entirely).  I was talking with him
> yesterday and he mentioned his plan is that each of the teams (~5-10 devs
> each, generally one main web app and a few background jobs) in our billing
> group (~50 devs total) would run their own cluster.
> >
> > My naive understanding is that somewhat defeats the primary purpose of
> > k8s.  I was imagining that the entire billing group would have a single
> > cluster, and the various teams would then not have to think about how to
> > manage it; things would "just work".  My manager's perspective is that with
> > a big shared cluster everyone would be stepping on each other's toes and it
> > would become *more* difficult to manage rather than *less*.  Plus org
> > structure is always fluid and teams get reorganized into other departments
> > etc. every so often, so that could be messy.  But neither of us really knows.
> >
> > Anyone have experience or advice on things like this?
>
> I prefer having one big cluster, separated and managed with namespaces, RBAC,
> and QoS, over having multiple clusters. Managing one cluster is faster than
> managing many; it reduces complexity and duplication.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q&A" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kubernetes-users+unsubscr...@googlegroups.com.
> To post to this group, send email to kubernetes-users@googlegroups.com.
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
>
