Andrew de Quincey created SPARK-37358:
-----------------------------------------

             Summary: Spark-on-K8S: Allow disabling of resources.limits.memory 
in executor pod spec
                 Key: SPARK-37358
                 URL: https://issues.apache.org/jira/browse/SPARK-37358
             Project: Spark
          Issue Type: Improvement
          Components: Kubernetes
    Affects Versions: 3.2.0
            Reporter: Andrew de Quincey


When Spark creates an executor pod on my Kubernetes cluster, it adds the 
following resources definition:

      resources:
        limits:
          memory: 896Mi
        requests:
          cpu: '4'
          memory: 896Mi
Note that resources.limits.cpu is not set. This is controlled by the 
spark.kubernetes.executor.limit.cores setting (which we intentionally do not set).
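For example, a CPU limit would only appear if we submitted with something like:

    --conf spark.kubernetes.executor.limit.cores=4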

We'd like to be able to omit resources.limits.memory as well, so that the 
executor can use additional memory when needed rather than being capped.
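Illustratively, the resources section we are after would look like this (same 
values as above; with the memory limit gone and the CPU limit already unset, the 
limits block disappears entirely):

      resources:
        requests:
          cpu: '4'
          memory: 896Mi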

However, this isn't currently possible. The Scala code in BasicExecutorFeatureStep.scala 
is as follows:
.editOrNewResources()
  .addToRequests("memory", executorMemoryQuantity)
  .addToLimits("memory", executorMemoryQuantity)
  .addToRequests("cpu", executorCpuQuantity)
  .addToLimits(executorResourceQuantities.asJava)
.endResources()
i.e. it always adds the memory limit, and there's no way to stop it.
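One possible shape for the improvement, sketched here as a fragment against the 
chain quoted above, would be to split the fluent chain so the memory addToLimits 
call becomes conditional, mirroring how the CPU limit is already optional. The 
addMemoryLimit flag and whatever new configuration entry would drive it are 
hypothetical; nothing like that exists in Spark today.

// Sketch only: split the builder chain so the memory limit can be skipped.
// `addMemoryLimit` would come from a new, hypothetical executor config entry;
// the builder calls and quantities are the existing ones from the chain above.
val resourcesBuilder = new ContainerBuilder(pod.container)
  .editOrNewResources()
  .addToRequests("memory", executorMemoryQuantity)
  .addToRequests("cpu", executorCpuQuantity)
  .addToLimits(executorResourceQuantities.asJava)

val withMemoryLimit =
  if (addMemoryLimit) resourcesBuilder.addToLimits("memory", executorMemoryQuantity)
  else resourcesBuilder

val containerBuilder = withMemoryLimit.endResources()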

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
