This is an automated email from the ASF dual-hosted git repository.

tgraves pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new c46c067  [SPARK-30942] Fix the warning for requiring cores to be limiting resources
c46c067 is described below

commit c46c067f39213df9b3ee5a51e7d7803b867a0d54
Author: Thomas Graves <tgra...@nvidia.com>
AuthorDate: Tue Feb 25 10:55:56 2020 -0600

    [SPARK-30942] Fix the warning for requiring cores to be limiting resources
    
    ### What changes were proposed in this pull request?
    
    Fix the warning about requiring cores to be the limiting resource, which
    fired even when we don't know the number of executor cores. The issue is
    that there are places in the Spark code that use cores/task cpus to
    calculate slots, and until the entire stage-level scheduling feature is
    in, we have to rely on cores being the limiting resource.
    
    Change the check to only warn when custom resources are specified.
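
    As an illustration (a sketch, not code from this patch; the config
    values are hypothetical), the slot math behind the check looks like
    this:

        // Sketch of the slot calculation the scheduler relies on.
        val executorCores = 8  // spark.executor.cores
        val taskCpus      = 1  // spark.task.cpus
        val executorGpus  = 2  // spark.executor.resource.gpu.amount
        val taskGpus      = 1  // spark.task.resource.gpu.amount

        val cpuSlots = executorCores / taskCpus  // scheduler assumes 8 slots
        val gpuSlots = executorGpus / taskGpus   // but GPUs allow only 2 tasks

        // Here cores are not the limiting resource: 6 of the 8 cpu slots
        // would sit idle waiting on GPUs, which is what the check flags.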
    
    ### Why are the changes needed?
    
    Fix the check so that we warn only when we should.
    
    ### Does this PR introduce any user-facing change?
    
    The warning text changes, and the warning is now printed only when
    custom resources are specified.
    
    ### How was this patch tested?
    
    Manually tested spark-shell in standalone mode, YARN, and local mode.
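
    As a hypothetical illustration of how to drive this code path (the
    resource values and discovery script path are made up), a SparkConf
    where a custom resource rather than cores would be the limiting
    resource:

        import org.apache.spark.SparkConf

        val conf = new SparkConf()
          .setMaster("local-cluster[1,8,1024]")
          .setAppName("SPARK-30942-check")
          .set("spark.executor.resource.gpu.amount", "2")
          .set("spark.task.resource.gpu.amount", "1")
          .set("spark.executor.resource.gpu.discoveryScript", "/path/to/getGpus.sh")
        // Creating a SparkContext with this conf exercises the resource
        // check touched by this patch.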
    
    Closes #27686 from tgravescs/SPARK-30942.
    
    Authored-by: Thomas Graves <tgra...@nvidia.com>
    Signed-off-by: Thomas Graves <tgra...@apache.org>
---
 core/src/main/scala/org/apache/spark/SparkContext.scala            | 2 +-
 .../src/main/scala/org/apache/spark/resource/ResourceProfile.scala | 7 +++----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/core/src/main/scala/org/apache/spark/SparkContext.scala b/core/src/main/scala/org/apache/spark/SparkContext.scala
index a47136e..f377f13 100644
--- a/core/src/main/scala/org/apache/spark/SparkContext.scala
+++ b/core/src/main/scala/org/apache/spark/SparkContext.scala
@@ -2798,7 +2798,7 @@ object SparkContext extends Logging {
         defaultProf.maxTasksPerExecutor(sc.conf) < cpuSlots) {
         throw new IllegalArgumentException("The number of slots on an executor has to be " +
           "limited by the number of cores, otherwise you waste resources and " +
-          "dynamic allocation doesn't work properly. Your configuration has " +
+          "some scheduling doesn't work properly. Your configuration has " +
           s"core/task cpu slots = ${cpuSlots} and " +
           s"${limitingResource} = " +
           s"${defaultProf.maxTasksPerExecutor(sc.conf)}. Please adjust your 
configuration " +
diff --git a/core/src/main/scala/org/apache/spark/resource/ResourceProfile.scala b/core/src/main/scala/org/apache/spark/resource/ResourceProfile.scala
index 2608ab9..5b2476c 100644
--- a/core/src/main/scala/org/apache/spark/resource/ResourceProfile.scala
+++ b/core/src/main/scala/org/apache/spark/resource/ResourceProfile.scala
@@ -168,7 +168,7 @@ class ResourceProfile(
             // limiting resource because the scheduler code uses that for slots
             throw new IllegalArgumentException("The number of slots on an executor has to be " +
               "limited by the number of cores, otherwise you waste resources and " +
-              "dynamic allocation doesn't work properly. Your configuration has " +
+              "some scheduling doesn't work properly. Your configuration has " +
               s"core/task cpu slots = ${taskLimit} and " +
               s"${execReq.resourceName} = ${numTasks}. " +
               "Please adjust your configuration so that all resources require 
same number " +
@@ -183,12 +183,11 @@ class ResourceProfile(
           "no corresponding task resource request was specified.")
       }
     }
-    if(!shouldCheckExecCores && Utils.isDynamicAllocationEnabled(sparkConf)) {
+    if(!shouldCheckExecCores && execResourceToCheck.nonEmpty) {
       // if we can't rely on the executor cores config throw a warning for user
       logWarning("Please ensure that the number of slots available on your " +
         "executors is limited by the number of cores to task cpus and not 
another " +
-        "custom resource. If cores is not the limiting resource then dynamic " 
+
-        "allocation will not work properly!")
+        "custom resource.")
     }
     if (taskResourcesToCheck.nonEmpty) {
       throw new SparkException("No executor resource configs were not specified for the " +


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
