pewpewp3w opened a new issue, #25226:
URL: https://github.com/apache/airflow/issues/25226

   ### Official Helm Chart version
   
   1.6.0 (latest released)
   
   ### Apache Airflow version
   
   2.3.3 (latest released)
   
   ### Kubernetes Version
   
   does not matter
   
   ### Helm Chart configuration
   
   The important part is:
   `executor: CeleryKubernetesExecutor`
   
   ### Docker Image customisations
   
   _No response_
   
   ### What happened
   
   
   [This](https://github.com/apache/airflow/blob/helm-chart/1.6.0/chart/templates/workers/worker-service.yaml#L37) worker service selector [matches](https://github.com/apache/airflow/blob/helm-chart/1.6.0/chart/files/pod-template-file.kubernetes-helm-yaml#L27) the pods launched from the default pod template, so pods created by the KubernetesExecutor end up as endpoints of the worker service.
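   
   For context, the relevant pieces look roughly like this (paraphrased from the linked templates, not an exact copy):
   
   ```yaml
   # chart/templates/workers/worker-service.yaml (paraphrased)
   spec:
     selector:
       tier: airflow
       component: worker
       release: {{ .Release.Name }}
   
   # chart/files/pod-template-file.kubernetes-helm-yaml (paraphrased)
   metadata:
     labels:
       tier: airflow
       component: worker
       release: {{ .Release.Name }}
   ```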
   
   With a high number of KubernetesExecutor tasks, the worker service's endpoint list churns continuously as task pods are created and terminated. In some cluster configurations this causes unexpected load on other workloads; for example, Istio with its default configuration will constantly push a high volume of xDS updates to every Istio-enabled workload.
   
   ### What you think should happen instead
   
   Since I don't see the point of adding KubernetesExecutor pod endpoints to the worker service, I propose changing the default pod labels in pod-template-file.kubernetes-helm-yaml, for example as sketched below.
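   
   One possible shape of the change (illustrative sketch only; `worker-kubernetes` is a hypothetical label value, not something the chart currently uses):
   
   ```yaml
   # chart/files/pod-template-file.kubernetes-helm-yaml (sketch)
   metadata:
     labels:
       tier: airflow
       component: worker-kubernetes  # hypothetical value not matched by the worker Service selector
       release: {{ .Release.Name }}
   ```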
   
   ### How to reproduce
   
   _No response_
   
   ### Anything else
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [X] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
   

