[ https://issues.apache.org/jira/browse/SPARK-30446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17009830#comment-17009830 ]

Thomas Graves commented on SPARK-30446:
---------------------------------------

Yeah, so running on standalone: if you set spark.task.cpus=2 (or anything > 1) 
and you don't set executor cores, it fails even though it shouldn't, because 
executor cores default to all the cores of the worker:


20/01/07 09:34:02 ERROR Main: Failed to initialize Spark session.
org.apache.spark.SparkException: The number of cores per executor (=1) has to be >= the task config: spark.task.cpus = 2 when run on spark://tomg-x299:7077.
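For reference, a minimal reproduction sketch in Scala. The master URL is taken 
from the log above; the object and app names are just illustrative. 
spark.executor.cores is deliberately left unset, which on standalone should 
give the executor all of the worker's cores, yet startup still fails with the 
exception shown:

{code:scala}
import org.apache.spark.sql.SparkSession

object Spark30446Repro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("spark://tomg-x299:7077")    // standalone master from the log
      .appName("SPARK-30446-repro")
      .config("spark.task.cpus", "2")      // anything > 1 triggers the error
      // note: spark.executor.cores is intentionally NOT set
      .getOrCreate()
    spark.stop()
  }
}
{code}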

> Accelerator aware scheduling checkResourcesPerTask doesn't cover all cases
> --------------------------------------------------------------------------
>
>                 Key: SPARK-30446
>                 URL: https://issues.apache.org/jira/browse/SPARK-30446
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 3.0.0
>            Reporter: Thomas Graves
>            Priority: Major
>
> With accelerator-aware scheduling, SparkContext.checkResourcesPerTask
> tries to make sure that users have configured things properly, and warns or
> errors if not.
> It doesn't properly handle all cases, such as warning when CPU resources are
> being wasted. We should test this better and handle those cases.
> I fixed these in the stage-level scheduling work, but I'm not sure of the
> timeline on getting that in, so we may want to fix this separately as well.


