GitHub user Ngone51 opened a pull request: https://github.com/apache/spark/pull/23223
YarnAllocator should have the same blacklist behaviour as YARN to maximize use of cluster resources

## What changes were proposed in this pull request?

As mentioned in JIRA [SPARK-26269](https://issues.apache.org/jira/browse/SPARK-26269), in order to maximize the use of cluster resources, this PR makes `YarnAllocator` follow the same blacklist behaviour as YARN.

## How was this patch tested?

Added unit tests.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/Ngone51/spark dev-YarnAllocator-should-have-same-blacklist-behaviour-with-YARN

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/23223.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #23223

----

commit 9f88e1c22876e4cdb1a0a6e952930e76f3206e96
Author: wuyi <ngone_5451@...>
Date: 2018-12-04T16:17:35Z

    YarnAllocator should have same blacklist behaviour with YARN

commit 65a70dcbb7993731104deab2592a5b969a31414e
Author: Ngone51 <ngone_5451@...>
Date: 2018-12-05T06:11:06Z

    fix ut

----
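For context on the behaviour the PR aims to match: YARN's application master blacklisting excludes a node once it accumulates enough container failures, so the scheduler stops wasting allocations on it. Below is a minimal, hypothetical sketch of that failure-threshold idea; the class and method names are illustrative and are not Spark's actual `YarnAllocator` or blacklist-tracker API.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of per-node failure tracking: after a host
// accumulates maxFailuresPerHost container failures, it is added to
// the blacklist, mirroring the threshold-based policy YARN applies.
public class NodeBlacklistSketch {
    private final int maxFailuresPerHost;
    private final Map<String, Integer> failuresByHost = new HashMap<>();
    private final Set<String> blacklisted = new HashSet<>();

    public NodeBlacklistSketch(int maxFailuresPerHost) {
        this.maxFailuresPerHost = maxFailuresPerHost;
    }

    // Record a container failure on the given host; blacklist the host
    // once its failure count reaches the threshold.
    public void recordFailure(String host) {
        int failures = failuresByHost.merge(host, 1, Integer::sum);
        if (failures >= maxFailuresPerHost) {
            blacklisted.add(host);
        }
    }

    public boolean isBlacklisted(String host) {
        return blacklisted.contains(host);
    }

    public static void main(String[] args) {
        NodeBlacklistSketch tracker = new NodeBlacklistSketch(2);
        tracker.recordFailure("node1");
        System.out.println(tracker.isBlacklisted("node1")); // prints false: below threshold
        tracker.recordFailure("node1");
        System.out.println(tracker.isBlacklisted("node1")); // prints true: threshold reached
    }
}
```

Keeping the allocator's view of bad nodes consistent with YARN's own exclusion list is what lets the cluster hand out containers only on hosts that are actually usable, which is the resource-utilization gain the PR title refers to.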