GitHub user redsanket opened a pull request: https://github.com/apache/spark/pull/18940
YSPARK-734 Change CacheLoader to limit entries based on memory footprint

The Spark shuffle service currently has a cache for index files that is bounded by a number of entries (spark.shuffle.service.index.cache.entries). This can cause issues for jobs with many reducers, because the size of each entry fluctuates with the number of reducers. We saw an issue where a job with 170,000 reducers caused the NodeManager running the Spark shuffle service to use 700-800 MB of memory by itself. We should change this cache to be memory based and only allow a certain memory size to be used; that is, the cache should have a limit of, say, 100 MB.

https://issues.apache.org/jira/browse/SPARK-21501

Manual testing with 170,000 reducers has been performed, with the cache loaded up to the default 100 MB limit and each shuffle index file 1.3 MB in size. Eviction takes place as soon as the total cache size reaches the 100 MB limit, and the evicted objects become eligible for garbage collection, thereby preventing the NM from crashing. No notable difference in runtime has been observed.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/redsanket/spark SPARK-21501

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/18940.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #18940

----

commit f23a4c79b69fd1f8a77162da34b8821cb0cc1352
Author: Sanket Chintapalli <schin...@yahoo-inc.com>
Date: 2017-07-27T14:59:40Z

    YSPARK-734 Change CacheLoader to limit entries based on memory footprint

----
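Judging from the title, the change replaces the entry-count bound with a weight bound (in Guava terms, presumably CacheBuilder#maximumWeight with a Weigher keyed on index-file size, though the exact code is in the PR itself). As a self-contained, JDK-only sketch of that eviction behavior — the class name, the weigher function, and the limits below are hypothetical illustrations, not the PR's actual code:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.ToLongFunction;

// Hypothetical sketch of a weight-bounded LRU cache, illustrating the
// memory-based eviction the PR describes. The real change uses Guava's
// cache machinery, not this class.
class WeightedLruCache<K, V> {
    private final long maxWeight;              // e.g. 100 MB for the index cache
    private final ToLongFunction<V> weigher;   // bytes per entry (index file size)
    private long currentWeight = 0;
    // Access-order LinkedHashMap: iteration starts at the least recently used entry.
    private final LinkedHashMap<K, V> map = new LinkedHashMap<>(16, 0.75f, true);

    WeightedLruCache(long maxWeight, ToLongFunction<V> weigher) {
        this.maxWeight = maxWeight;
        this.weigher = weigher;
    }

    synchronized V get(K key) {
        return map.get(key);
    }

    synchronized void put(K key, V value) {
        V old = map.put(key, value);
        if (old != null) {
            currentWeight -= weigher.applyAsLong(old);
        }
        currentWeight += weigher.applyAsLong(value);
        // Evict least-recently-used entries until back under the weight limit;
        // evicted values become eligible for garbage collection.
        Iterator<Map.Entry<K, V>> it = map.entrySet().iterator();
        while (currentWeight > maxWeight && it.hasNext()) {
            Map.Entry<K, V> eldest = it.next();
            if (eldest.getKey().equals(key)) {
                continue; // never evict the entry we just inserted
            }
            currentWeight -= weigher.applyAsLong(eldest.getValue());
            it.remove();
        }
    }

    synchronized int size() {
        return map.size();
    }
}
```

The key design point mirrored here is that eviction is triggered by total cached bytes rather than entry count, so 170,000 small entries and a handful of 1.3 MB entries are bounded by the same memory budget.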