[ https://issues.apache.org/jira/browse/SPARK-26340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720239#comment-16720239 ]

ASF GitHub Bot commented on SPARK-26340:
----------------------------------------

srowen closed pull request #23290: [SPARK-26340][Core] Ensure cores per executor is greater than cpu per task
URL: https://github.com/apache/spark/pull/23290
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:


diff --git a/core/src/main/scala/org/apache/spark/SparkConf.scala b/core/src/main/scala/org/apache/spark/SparkConf.scala
index 21c5cbc04d813..8d135d3e083d7 100644
--- a/core/src/main/scala/org/apache/spark/SparkConf.scala
+++ b/core/src/main/scala/org/apache/spark/SparkConf.scala
@@ -605,6 +605,15 @@ class SparkConf(loadDefaults: Boolean) extends Cloneable with Logging with Serializable
       }
     }
 
+    if (contains("spark.executor.cores") && contains("spark.task.cpus")) {
+      val executorCores = getInt("spark.executor.cores", 1)
+      val taskCpus = getInt("spark.task.cpus", 1)
+
+      if (executorCores < taskCpus) {
+        throw new SparkException("spark.executor.cores must not be less than spark.task.cpus.")
+      }
+    }
+
     val encryptionEnabled = get(NETWORK_ENCRYPTION_ENABLED) || get(SASL_ENCRYPTION_ENABLED)
     require(!encryptionEnabled || get(NETWORK_AUTH_ENABLED),
       s"${NETWORK_AUTH_ENABLED.key} must be enabled when enabling encryption.")
diff --git a/core/src/test/scala/org/apache/spark/SparkConfSuite.scala b/core/src/test/scala/org/apache/spark/SparkConfSuite.scala
index df274d949bae3..7cb03deae1391 100644
--- a/core/src/test/scala/org/apache/spark/SparkConfSuite.scala
+++ b/core/src/test/scala/org/apache/spark/SparkConfSuite.scala
@@ -138,6 +138,13 @@ class SparkConfSuite extends SparkFunSuite with LocalSparkContext with ResetSystemProperties
     assert(sc.appName === "My other app")
   }
 
+  test("creating SparkContext with cpus per tasks bigger than cores per executors") {
+    val conf = new SparkConf(false)
+      .set("spark.executor.cores", "1")
+      .set("spark.task.cpus", "2")
+    intercept[SparkException] { sc = new SparkContext(conf) }
+  }
+
   test("nested property names") {
     // This wasn't supported by some external conf parsing libraries
     System.setProperty("spark.test.a", "a")
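
Outside of the patch itself, here is a minimal sketch (an assumption, not Spark API) of how an application could run the same sanity check up front on builds that do not yet include this validation. The helper name assertTaskFitsOnExecutor and the defaults of 1 simply mirror the guard added to SparkConf.validateSettings in the diff above:

import org.apache.spark.{SparkConf, SparkException}

// Hypothetical helper mirroring the new guard: a task that needs more CPUs than any
// single executor offers can never be scheduled, so fail fast instead of hanging.
def assertTaskFitsOnExecutor(conf: SparkConf): Unit = {
  if (conf.contains("spark.executor.cores") && conf.contains("spark.task.cpus")) {
    val executorCores = conf.getInt("spark.executor.cores", 1)
    val taskCpus = conf.getInt("spark.task.cpus", 1)
    if (executorCores < taskCpus) {
      throw new SparkException(
        s"spark.executor.cores ($executorCores) must not be less than spark.task.cpus ($taskCpus).")
    }
  }
}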


 


> Ensure cores per executor is greater than cpu per task
> ------------------------------------------------------
>
>                 Key: SPARK-26340
>                 URL: https://issues.apache.org/jira/browse/SPARK-26340
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.2.2, 2.3.2
>            Reporter: Nicolas Fraison
>            Assignee: Nicolas Fraison
>            Priority: Minor
>             Fix For: 3.0.0
>
>
> No check is performed to ensure that spark.task.cpus does not exceed spark.executor.cores, which can
> leave jobs unable to schedule any task, with no understandable error reported.
> The check is currently only performed in ExecutorAllocationManager, i.e. when dynamic allocation is used.
> Adding the check in TaskSchedulerImpl ensures that an error is raised on the driver.
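
As an illustration of the failure mode described above (the master URL and application name below are made-up example values, not taken from the issue), a configuration like the following used to start up and then never schedule a task; with the new check in place, SparkContext creation fails immediately with a SparkException:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("local-cluster[1, 1, 1024]")  // assumed test master: one worker with a single core
  .setAppName("task-cpus-check-demo")      // hypothetical application name
  .set("spark.executor.cores", "1")
  .set("spark.task.cpus", "2")             // each task asks for more cores than any executor has

// Before this change: the job hangs because no executor can satisfy a task's CPU requirement.
// After this change: the misconfiguration is rejected at startup with a clear error message.
val sc = new SparkContext(conf)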


