Hi All,

We have 20 worker nodes all with the same labels (they all have the same
specs). Our pods don't have any node selectors so all nodes are available
to all pods.

What we are seeing is the scheduler constantly placing pods on nodes that
are already heavily utilised (in terms of memory and/or CPU) while other
nodes have plenty of capacity.

We have placed resource requests on some pods, and they continue to be
placed on the busy nodes.
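For reference, this is the shape of the resource requests we are setting
(a minimal sketch; the pod/container names and values here are
placeholders, not our actual workload):

```yaml
# Hypothetical example pod spec -- names, image, and request values
# are placeholders illustrating how we declare requests.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: example/app:latest
    resources:
      requests:
        memory: "512Mi"   # scheduler counts this against node allocatable memory
        cpu: "250m"       # quarter of a CPU core
```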

How can we help the scheduler make better decisions?

-bash-4.2$ oc version
oc v3.6.0+c4dd4cf
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO

openshift v3.6.173.0.21
kubernetes v1.6.1+5115d708d7



Thanks
_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users