Hi Everyone
This issue is now resolved.  I added CPU and memory requests and limits to 
the pod.yaml file.  I also added inter-pod anti-affinity constraints 
<https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity>
 to 
the pods.
I noticed that the pods didn't have enough CPU and memory allocated, which 
was causing some issues.
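For anyone hitting the same problem, here is a rough sketch of what those two additions look like in the agent pod spec. The label key/value, image tag, and resource numbers are placeholders to adjust for your own setup; the anti-affinity rule just asks the scheduler to prefer nodes that aren't already running an agent pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: gocd-agent            # placeholder label, referenced by the rule below
spec:
  containers:
    - name: gocd-agent
      image: gocd/gocd-agent-alpine:example-tag   # placeholder image
      resources:
        requests:              # guaranteed share; lets the scheduler pack nodes sensibly
          cpu: "500m"
          memory: "1Gi"
        limits:                # hard cap so one agent can't starve the node
          cpu: "1"
          memory: "2Gi"
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: gocd-agent
            topologyKey: kubernetes.io/hostname   # spread agents across nodes
```

Using `preferredDuringSchedulingIgnoredDuringExecution` (rather than `required...`) keeps pods schedulable even when every node already runs an agent; the scheduler just prefers emptier nodes.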

Thanks,
On Wednesday, March 2, 2022 at 3:55:45 PM UTC-5 Sifu Tian wrote:

> Hi,
>
> Is there a configuration setting where, when using the Kubernetes elastic 
> agent plugin, the GoCD server requests agent pods but the pods only get 
> spun up on one worker node?  This happens to the point where that worker 
> node is at 100% CPU and memory usage, which slows the pipelines to a crawl.
>
> I have a 3-node Kubernetes cluster, and there are 2 worker nodes sitting 
> idle while the pipelines die due to one Kubernetes worker node running all 
> the pods.
>
> Is there something that I need to configure that allows the pods to be 
> evenly distributed across the other 2 worker nodes? 
>
