Bago Amirbekian created SPARK-29692:
---------------------------------------

             Summary: SparkContext.defaultParallelism should reflect resource 
limits when resource limits are set
                 Key: SPARK-29692
                 URL: https://issues.apache.org/jira/browse/SPARK-29692
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 3.0.0
            Reporter: Bago Amirbekian


With the new gpu/fpga resource support in Spark, defaultParallelism may not be 
computed correctly. Specifically, defaultParallelism may be much higher than the 
total number of tasks that can run concurrently, for example when workers have 
many more cores than gpus.

Steps to reproduce:
1. Start a cluster with spark.executor.resource.gpu.amount < cores per executor.
2. Set spark.task.resource.gpu.amount = 1 and keep spark.task.cpus (cores per 
task) at 1.
3. Observe that sc.defaultParallelism is derived from total cores rather than 
the gpu-limited number of concurrently runnable tasks (see the sketch below).
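
A minimal repro sketch in spark-shell (this assumes a standalone cluster whose 
workers expose gpus via a discovery script; the master URL, script path, and 
core/gpu counts are illustrative, not from an actual run):

    spark-shell \
      --master spark://master:7077 \
      --conf spark.executor.cores=8 \
      --conf spark.task.cpus=1 \
      --conf spark.executor.resource.gpu.amount=2 \
      --conf spark.task.resource.gpu.amount=1 \
      --conf spark.executor.resource.gpu.discoveryScript=/path/to/getGpus.sh

    // With a single executor, at most 2 tasks can run at once (gpu-bound),
    // but defaultParallelism is computed from the 8 cores:
    scala> sc.defaultParallelism
    res0: Int = 8   // expected: 2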


