The scheduler is configurable. Most likely, your scheduler is configured to
use "MostRequestedPriority" (see #1), which tries to pack all the pods
onto the smallest number of nodes (so you can shut down the excess nodes).
This priority function is used when running on a cloud provider, where you
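For reference, a minimal sketch of a legacy scheduler policy file that enables this priority function (the file path and weight are illustrative; this uses the old Policy API, which your cluster may or may not be running):

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "priorities": [
    {"name": "MostRequestedPriority", "weight": 1}
  ]
}
```

The scheduler would be pointed at a file like this via its `--policy-config-file` flag; if you instead want pods spread out, "LeastRequestedPriority" favours the least-utilised nodes.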
Hi,
We had the same problem. I am not sure, but it looks like the pod count is not
that important to the scheduler, and we sometimes ended up with nodes running 40
pods and others running only 1-2. Setting requests and limits for CPU and
memory in our deployment configs generally fixed that for us.
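As an illustration, here is a minimal sketch of the kind of `resources` stanza we mean (the deployment name, image, and values are hypothetical; tune the numbers to your workload):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app        # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: app
        image: example/app:1.0   # placeholder image
        resources:
          requests:              # what the scheduler uses for placement
            cpu: "250m"
            memory: "256Mi"
          limits:                # hard cap enforced at runtime
            cpu: "500m"
            memory: "512Mi"
```

The scheduler places pods based on *requests*, not actual usage, so without requests set it has little information to balance load across nodes.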
Regards
Hi All,
We have 20 worker nodes all with the same labels (they all have the same
specs). Our pods don't have any node selectors so all nodes are available
to all pods.
What we are seeing is the scheduler constantly placing pods on nodes that
are already heavily utilised (in terms of memory