Github user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18950#discussion_r134501736

    --- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
    @@ -724,6 +777,62 @@ private[spark] class ExecutorAllocationManager(
       }

      /**
    +   * Calculate the maximum no. of concurrent tasks that can run currently.
    +   */
    +  def getMaxConTasks(): Int = {
    +    val stagesByJobGroup = stageIdToNumTasks.groupBy(x => jobIdToJobGroup(stageIdToJobId(x._1)))
    --- End diff --

    I think this needs a comment explaining why you need to look at stages at all -- it's not obvious why it's necessary. (At first I was going to suggest that the number of tasks left in a stage shouldn't matter, but then I realized that it could in some scenarios.)
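    To make the point concrete, here is a minimal, self-contained sketch of the reasoning such a comment would capture: if concurrency caps are enforced per job group, a stage with fewer remaining tasks than its group's cap can never saturate that cap, so the per-stage task counts have to enter the calculation. All names below (`maxConcurrentTasks`, `pendingTasksByStage`, `stageToGroup`, `concurrencyLimitByGroup`) are hypothetical stand-ins for the PR's `stageIdToNumTasks` / `stageIdToJobId` / `jobIdToJobGroup` structures, not the actual implementation in #18950.

    ```scala
    // Illustrative sketch only -- not the PR's code. Shows why the number of
    // tasks left in a stage can matter when computing max concurrent tasks.
    object MaxConcurrentTasksSketch {

      def maxConcurrentTasks(
          pendingTasksByStage: Map[Int, Int],        // stageId -> tasks still to run
          stageToGroup: Map[Int, String],            // stageId -> job group
          concurrencyLimitByGroup: Map[String, Int]  // job group -> configured cap
      ): Int = {
        // Group stages by job group, because caps apply per job group
        // (this mirrors the groupBy in the quoted diff).
        val stagesByGroup =
          pendingTasksByStage.groupBy { case (stageId, _) => stageToGroup(stageId) }

        stagesByGroup.map { case (group, stages) =>
          // A group's stages may have fewer tasks remaining than the cap,
          // in which case the cap is unreachable -- hence the min.
          math.min(stages.values.sum, concurrencyLimitByGroup(group))
        }.sum
      }

      def main(args: Array[String]): Unit = {
        val pending = Map(1 -> 3, 2 -> 100)          // stage 1 has only 3 tasks left
        val groups  = Map(1 -> "etl", 2 -> "adhoc")
        val limits  = Map("etl" -> 16, "adhoc" -> 8)
        // "etl" contributes min(3, 16) = 3, "adhoc" contributes min(100, 8) = 8.
        println(maxConcurrentTasks(pending, groups, limits)) // prints 11
      }
    }
    ```

    The `main` example exercises exactly the scenario squito alludes to: stage 1 has only 3 tasks left, so its group contributes 3 rather than its cap of 16, and ignoring per-stage counts would over-estimate the achievable parallelism.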