[jira] [Updated] (SPARK-12554) Standalone mode may hang if max cores is not a multiple of executor cores
     [ https://issues.apache.org/jira/browse/SPARK-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-12554:
---------------------------------
    Labels: bulk-closed  (was: )

> Standalone mode may hang if max cores is not a multiple of executor cores
> -------------------------------------------------------------------------
>
>                 Key: SPARK-12554
>                 URL: https://issues.apache.org/jira/browse/SPARK-12554
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy, Scheduler
>    Affects Versions: 1.5.2
>            Reporter: Lijie Xu
>            Priority: Minor
>              Labels: bulk-closed
>
> In scheduleExecutorsOnWorker() in Master.scala, {{val keepScheduling = coresToAssign >= minCoresPerExecutor}} should be changed to {{val keepScheduling = coresToAssign > 0}}.
>
> Case 1: Suppose an app requests 10 cores (i.e., {{spark.cores.max = 10}}) and its {{app.coresPerExecutor}} is 4 (i.e., {{spark.executor.cores = 4}}). After two executors (4 cores each) have been allocated to this app, {{app.coresToAssign = 2}} and {{minCoresPerExecutor = coresPerExecutor = 4}}, so {{keepScheduling = false}} and no further executor will be allocated. If {{spark.scheduler.minRegisteredResourcesRatio}} is set to a large value (e.g., 0.8 in this case), the app will hang and never finish.
>
> Case 2: If a small app's {{coresPerExecutor}} is larger than its requested cores (e.g., {{spark.cores.max = 10}}, {{spark.executor.cores = 16}}), {{val keepScheduling = coresToAssign >= minCoresPerExecutor}} is always false. As a result, this app will never get an executor to run.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
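The stalled loop described in the two cases above can be sketched as follows. This is a simplified standalone model of the {{keepScheduling}} check, not the actual scheduleExecutorsOnWorker() code: worker-side free-core limits and the one-executor-per-worker path are omitted, and {{assignCores}} is a hypothetical helper name.

```scala
// Simplified model of the core-assignment loop in Master.scala.
// With keepScheduling = coresToAssign >= coresPerExecutor, any remainder
// smaller than one full executor (Case 1) or a request smaller than one
// executor (Case 2) is never assigned, so the app can stall.
object KeepSchedulingDemo {
  def assignCores(maxCores: Int, coresPerExecutor: Int): Int = {
    var coresToAssign = maxCores
    var assigned = 0
    // Current condition: stop once fewer than coresPerExecutor cores remain.
    while (coresToAssign >= coresPerExecutor) {
      coresToAssign -= coresPerExecutor
      assigned += coresPerExecutor
    }
    assigned
  }

  def main(args: Array[String]): Unit = {
    // Case 1: spark.cores.max = 10, spark.executor.cores = 4
    // -> only 8 of 10 requested cores are ever assigned.
    println(assignCores(10, 4))
    // Case 2: spark.cores.max = 10, spark.executor.cores = 16
    // -> no executor is ever launched.
    println(assignCores(10, 16))
  }
}
```

With a registration ratio above 8/10, Case 1 then waits forever for cores that will never be assigned, which is the hang the issue reports.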
[jira] [Updated] (SPARK-12554) Standalone mode may hang if max cores is not a multiple of executor cores
     [ https://issues.apache.org/jira/browse/SPARK-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or updated SPARK-12554:
------------------------------
    Priority: Minor  (was: Major)
[jira] [Updated] (SPARK-12554) Standalone mode may hang if max cores is not a multiple of executor cores
     [ https://issues.apache.org/jira/browse/SPARK-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or updated SPARK-12554:
------------------------------
    Summary: Standalone mode may hang if max cores is not a multiple of executor cores  (was: Standalone app scheduler will hang when app.coreToAssign < minCoresPerExecutor)