Github user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18651#discussion_r128829289

    --- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
    @@ -525,9 +534,11 @@ private[yarn] class YarnAllocator(
           } catch {
             case NonFatal(e) =>
               logError(s"Failed to launch executor $executorId on container $containerId", e)
    -          // Assigned container should be released immediately to avoid unnecessary resource
    -          // occupation.
    +          // Assigned container should be released immediately
    +          // to avoid unnecessary resource occupation.
               amClient.releaseAssignedContainer(containerId)
    +        } finally {
    +          numExecutorsStarting.decrementAndGet()
    --- End diff --

    I'm not sure I follow your reference between starting and running. I was just saying that in the failure case it doesn't matter, because you aren't going to overcount. If we don't decrement the starting count inside of updateInternalState, we have the possibility of overcounting, because running and starting wouldn't be incremented/decremented within the same synchronized block. We don't want to do that.
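    To make the overcount window concrete, here is a minimal sketch of the race being described. This is not the actual YarnAllocator code: only `numExecutorsStarting`, `numExecutorsRunning`, and the idea of `updateInternalState` come from the PR; the class and method names (`AllocatorSketch`, `launchRacy`, `launchSafe`, `totalStartingOrRunning`) are hypothetical scaffolding.

    ```scala
    import java.util.concurrent.atomic.AtomicInteger

    class AllocatorSketch {
      private val numExecutorsStarting = new AtomicInteger(0)
      private var numExecutorsRunning = 0

      // What the allocator would consult when deciding whether to request
      // more containers. Reads under the lock so it sees a consistent pair.
      def totalStartingOrRunning: Int = synchronized {
        numExecutorsStarting.get() + numExecutorsRunning
      }

      // Racy variant: decrementing `starting` in a `finally` outside the
      // synchronized block leaves a window where the same executor is
      // counted both as running and as starting, so totalStartingOrRunning
      // transiently overcounts.
      def launchRacy(startExecutor: () => Unit): Unit = {
        numExecutorsStarting.incrementAndGet()
        try {
          startExecutor()
          synchronized { numExecutorsRunning += 1 } // now counted as running...
        } finally {
          numExecutorsStarting.decrementAndGet()    // ...while still counted as starting
        }
      }

      // Safe variant: the success-path decrement happens in the same
      // synchronized block that marks the executor as running (analogous to
      // doing it inside updateInternalState), so a reader never sees the
      // executor in both counters. The failure path can decrement outside
      // the lock, because the executor never reached running and no
      // overcount is possible there.
      def launchSafe(startExecutor: () => Unit): Unit = {
        numExecutorsStarting.incrementAndGet()
        try {
          startExecutor()
          synchronized {
            numExecutorsRunning += 1
            numExecutorsStarting.decrementAndGet()
          }
        } catch {
          case e: Throwable =>
            numExecutorsStarting.decrementAndGet()
            throw e
        }
      }
    }
    ```

    With `launchRacy`, a thread calling `totalStartingOrRunning` between the synchronized increment and the finally-block decrement sees the executor twice; `launchSafe` closes that window by pairing the two updates under the same lock, which matches the argument for decrementing inside updateInternalState.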