[ 
https://issues.apache.org/jira/browse/SPARK-19090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15813962#comment-15813962
 ] 

nirav patel commented on SPARK-19090:
-------------------------------------

Oh right, I have that set exclusively. I corrected my comment. I verified that 
dynamic allocation was enabled by checking following in driver logs:
2017-01-04 12:04:11,362 INFO [spark-dynamic-executor-allocation] 
org.apache.spark.ExecutorAllocationManager: Requesting 4 new executors because 
tasks are backlogged (new desired total will be 6)

If it were not enabled, then it should actually have created 6 executors with 5 
cores. 

Here's the snippet of code I have:

      if (sparkConfig.dynamicAllocation) {
        sparkConf.set("spark.dynamicAllocation.enabled", "true")
        sparkConf.set("spark.dynamicAllocation.executorIdleTimeout", "600s")
        sparkConf.set("spark.dynamicAllocation.initialExecutors",
          sparkConfig.executorInstances)
        sparkConf.set("spark.dynamicAllocation.minExecutors",
          String.valueOf(Integer.valueOf(sparkConfig.executorInstances) - 3))
        sparkConf.set("spark.dynamicAllocation.sustainedSchedulerBacklogTimeout",
          "300s")
        sparkConf.set("spark.dynamicAllocation.schedulerBacklogTimeout", "120")
      } else {
        sparkConf.set("spark.executor.instances", sparkConfig.executorInstances)
      }

      sparkConf.set("spark.executor.cores", sparkConfig.executorCores)
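For comparison, the same configuration can be supplied at submit time instead of in code. A minimal sketch (the application jar name and the concrete values are illustrative only; note that dynamic allocation on YARN also requires the external shuffle service to be enabled):

```shell
# Hypothetical spark-submit invocation mirroring the settings above;
# values are examples, not taken from the actual job.
spark-submit \
  --master yarn \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.executorIdleTimeout=600s \
  --conf spark.dynamicAllocation.initialExecutors=6 \
  --conf spark.dynamicAllocation.minExecutors=3 \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.executor.cores=6 \
  myapp.jar
```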

> Dynamic Resource Allocation not respecting spark.executor.cores
> ---------------------------------------------------------------
>
>                 Key: SPARK-19090
>                 URL: https://issues.apache.org/jira/browse/SPARK-19090
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.5.2, 1.6.1, 2.0.1
>            Reporter: nirav patel
>
> When enabling dynamic scheduling with YARN, I see that all executors use 
> only 1 core even if I set "spark.executor.cores" to 6. If dynamic 
> scheduling is disabled, then each executor has 6 cores, i.e. it 
> respects "spark.executor.cores". I have tested this against Spark 1.5. I 
> think the behavior will be the same with 2.x as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
