Github user vanzin commented on the issue:

    https://github.com/apache/spark/pull/16819

    I agree there's room for improvement in the current code; I even asked SPARK-18769 to be filed to track that work. But I don't think setting the max to a fixed value at startup is the right approach. Queue configs change, node managers go up and down, new ones are added, old ones are removed. If this value ends up being calculated at the wrong time, the application will suffer.

    If you want to investigate a more dynamic approach here, I'm all for that, but I'm not a big fan of the current solution.
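To illustrate the distinction being argued, here is a minimal hedged sketch of the "more dynamic" approach: rather than freezing a max-executor value at application startup, the cap is recomputed from whatever set of node managers is alive at the moment it is needed. All names here (`NodeManagerInfo`, `maxExecutors`, the per-executor resource figures) are invented for illustration and are not Spark or YARN APIs.

```java
import java.util.List;

public class DynamicMaxExecutors {
    // Illustrative per-executor demand; real values would come from spark.executor.* configs.
    static final int EXECUTOR_MEMORY_MB = 4096;
    static final int EXECUTOR_CORES = 2;

    // Hypothetical stand-in for whatever the resource manager reports per node manager.
    record NodeManagerInfo(int memoryMb, int vcores) {}

    /** Executors that fit on the node managers currently alive. */
    static int maxExecutors(List<NodeManagerInfo> liveNodes) {
        return liveNodes.stream()
                .mapToInt(nm -> Math.min(nm.memoryMb() / EXECUTOR_MEMORY_MB,
                                         nm.vcores() / EXECUTOR_CORES))
                .sum();
    }

    public static void main(String[] args) {
        // Cluster state at time t0: three healthy node managers.
        List<NodeManagerInfo> t0 = List.of(
                new NodeManagerInfo(16384, 8),
                new NodeManagerInfo(16384, 8),
                new NodeManagerInfo(16384, 8));
        System.out.println(maxExecutors(t0)); // 12

        // One node manager goes down: a cap computed once at startup is now stale,
        // which is the failure mode the comment warns about.
        List<NodeManagerInfo> t1 = t0.subList(0, 2);
        System.out.println(maxExecutors(t1)); // 8
    }
}
```

The point of recomputing on demand is that the cap tracks the cluster: a node loss or a queue reconfiguration simply changes the next computed value, instead of leaving the application pinned to a number that was only correct at launch time.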