[jira] [Commented] (SPARK-41585) The Spark exclude node functionality for YARN should work independently of dynamic allocation

[ https://issues.apache.org/jira/browse/SPARK-41585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17681088#comment-17681088 ]

Apache Spark commented on SPARK-41585:
--------------------------------------

User 'LucaCanali' has created a pull request for this issue:
https://github.com/apache/spark/pull/39757

> The Spark exclude node functionality for YARN should work independently of
> dynamic allocation
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-41585
>                 URL: https://issues.apache.org/jira/browse/SPARK-41585
>             Project: Spark
>          Issue Type: Improvement
>          Components: YARN
>    Affects Versions: 3.0.3, 3.1.3, 3.2.2, 3.3.1
>            Reporter: Luca Canali
>            Priority: Minor
>
> The Spark exclude node functionality for Spark on YARN, introduced in
> SPARK-26688, allows users to specify a list of node names that are excluded
> from resource allocation. This is done using the configuration parameter
> {{spark.yarn.exclude.nodes}}.
> The feature currently works only for executors allocated via dynamic
> allocation. To use the feature on Spark 3.3.1, for example, one may set the
> configurations {{spark.dynamicAllocation.enabled}}=true,
> {{spark.dynamicAllocation.minExecutors}}=0 and {{spark.executor.instances}}=0,
> thus making Spark spawn executors only via dynamic allocation.
> This issue proposes to document this behavior for the current Spark release
> and also proposes an improvement of this feature by extending the scope of
> the Spark exclude node functionality for YARN beyond dynamic allocation,
> which I believe makes it more generally useful.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
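The workaround described in the issue can be sketched as a spark-submit invocation. This is only an illustration of the configuration combination the reporter describes; the hostnames and the application file are hypothetical placeholders, and on recent Spark versions dynamic allocation typically also requires shuffle tracking or an external shuffle service to be enabled.

```shell
# Sketch of the workaround: since spark.yarn.exclude.nodes only takes
# effect for dynamically allocated executors, disable static executors
# entirely so that every executor goes through dynamic allocation.
# Hostnames and app.py are hypothetical placeholders.
spark-submit \
  --master yarn \
  --conf spark.yarn.exclude.nodes=node1.example.com,node2.example.com \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=0 \
  --conf spark.executor.instances=0 \
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
  app.py
```

With this combination, no executors are requested up front, so every executor request is subject to the exclude-node list; the proposed change would remove the need for this indirection.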
[jira] [Commented] (SPARK-41585) The Spark exclude node functionality for YARN should work independently of dynamic allocation

[ https://issues.apache.org/jira/browse/SPARK-41585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17649235#comment-17649235 ]

Apache Spark commented on SPARK-41585:
--------------------------------------

User 'LucaCanali' has created a pull request for this issue:
https://github.com/apache/spark/pull/39127

> The Spark exclude node functionality for YARN should work independently of
> dynamic allocation
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-41585
>                 URL: https://issues.apache.org/jira/browse/SPARK-41585
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 3.3.1
>            Reporter: Luca Canali
>            Priority: Minor
>
> The Spark exclude node functionality for Spark on YARN, introduced in
> SPARK-26688, allows users to specify a list of node names that are excluded
> from resource allocation. This is done using the configuration parameter
> {{spark.yarn.exclude.nodes}}.
> The feature currently works only for executors allocated via dynamic
> allocation. To use the feature on Spark 3.3.1, for example, one may also need
> to configure {{spark.dynamicAllocation.minExecutors}}=0 and
> {{spark.executor.instances}}=0, thereby relying on dynamic allocation alone
> for executor resource allocation.
> This issue proposes to extend the Spark exclude node functionality for YARN
> beyond dynamic allocation, which I believe also makes it more consistent with
> what the documentation reports for this feature/configuration parameter.