[ 
https://issues.apache.org/jira/browse/SPARK-43657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated SPARK-43657:
-----------------------------------
    Labels: pull-request-available  (was: )

> reuse SPARK_CONF_DIR config maps between driver and executor
> ------------------------------------------------------------
>
>                 Key: SPARK-43657
>                 URL: https://issues.apache.org/jira/browse/SPARK-43657
>             Project: Spark
>          Issue Type: Improvement
>          Components: Kubernetes
>    Affects Versions: 3.2.4, 3.3.2, 3.4.0
>            Reporter: YE
>            Priority: Major
>              Labels: pull-request-available
>
> Currently, Spark on K8s in cluster mode creates two ConfigMaps per 
> application: one for the driver and another for the executors. However, the 
> executor ConfigMap is almost identical to the driver ConfigMap, so there is 
> no need to create two duplicate ConfigMaps. Because ConfigMaps are objects 
> on K8s, duplicating them has drawbacks:
>  # more ConfigMaps mean more objects in etcd, which adds overhead to the API 
> server
>  # the Spark driver pod might run under limited permissions: it may only be 
> allowed to exec into pods rather than create arbitrary resources, in which 
> case the driver is not allowed to create ConfigMaps.
> I will submit a PR to reuse the SPARK_CONF_DIR ConfigMap when running Spark 
> in K8s cluster mode.
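
The permission constraint in point 2 can be illustrated with a minimal RBAC sketch (a hypothetical Role; the namespace and names are assumptions, not taken from the issue):

```yaml
# Hypothetical Role for a locked-down Spark driver service account:
# it may manage and exec into pods, but has no "create" verb on
# configmaps, so the driver itself cannot create the executor ConfigMap.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: spark-apps        # assumed namespace
  name: spark-driver-minimal   # hypothetical name
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
# note: no rule granting "create" on "configmaps"
```

Under such a Role, reusing the ConfigMap created at submission time avoids the driver needing any configmaps create permission at all.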



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
