[ https://issues.apache.org/jira/browse/SPARK-3375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14120361#comment-14120361 ]
Thomas Graves commented on SPARK-3375:
--------------------------------------

Ok, so the real issue here is that we send the number of containers as 0 after we send the original request of X, and on the YARN side this clears out the original request. I would expect this to affect 2.x also. I think the original 0.23 code kept just sending max-running, and now we are sending max-pending-running. For a ping we should just send empty asks.

> spark on yarn alpha container allocation issues
> -----------------------------------------------
>
>                 Key: SPARK-3375
>                 URL: https://issues.apache.org/jira/browse/SPARK-3375
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.2.0
>            Reporter: Thomas Graves
>            Priority: Blocker
>
> It looks like if YARN doesn't get the containers immediately, it stops asking
> for them and the YARN application hangs without ever getting any executors.
> This was introduced by https://github.com/apache/spark/pull/2169

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
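The behavior described above comes from YARN's resource-request semantics: each ask sent in an allocate() call *replaces* the outstanding request at that priority rather than adding to it, so an ask of 0 wipes out the earlier ask of X, while an empty ask list leaves it untouched. A minimal sketch modeling those semantics (the `MockScheduler` class and `Ask` case class are hypothetical stand-ins, not YARN API):

```scala
// Hypothetical model of YARN's "last ask wins" allocation semantics.
case class Ask(numContainers: Int)

class MockScheduler {
  private var pending = 0
  // Each ask replaces the outstanding request; an empty ask list is a no-op.
  def allocate(asks: Seq[Ask]): Unit =
    asks.foreach(a => pending = a.numContainers)
  def pendingContainers: Int = pending
}

val buggy = new MockScheduler
buggy.allocate(Seq(Ask(5)))   // original request for 5 containers
buggy.allocate(Seq(Ask(0)))   // heartbeat with numContainers=0 clears it
assert(buggy.pendingContainers == 0)

val fixed = new MockScheduler
fixed.allocate(Seq(Ask(5)))
fixed.allocate(Seq.empty)     // heartbeat with empty asks keeps the request
assert(fixed.pendingContainers == 5)
```

This is why sending empty asks on a ping is the right fix: the original request for X containers stays pending on the ResourceManager.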