[ https://issues.apache.org/jira/browse/SPARK-21501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17218285#comment-17218285 ]

Lars Francke commented on SPARK-21501:
--------------------------------------

Just FYI for others stumbling across this: the cache size calculation here has
a bug, so the cache may use far more memory than the 100MB it intends to.

See SPARK-33206 for details.
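To make the failure mode concrete, here is a minimal sketch of a weight-bounded Guava cache whose weigher under-reports entry sizes. This is my own illustration, not Spark's actual code; IndexInfo and naiveSize() are hypothetical stand-ins for the shuffle index bookkeeping:

{code:java}
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.Weigher;

public class IndexCacheSketch {

  // Hypothetical stand-in for the per-shuffle index data held in the cache.
  static class IndexInfo {
    final long[] offsets; // one offset per reducer partition, plus one

    IndexInfo(int partitions) {
      this.offsets = new long[partitions + 1];
    }

    // Under-reports the true heap footprint: it counts only the raw longs and
    // ignores the object header, the array header and the reference, so the
    // cache can retain noticeably more heap than maximumWeight suggests.
    int naiveSize() {
      return offsets.length * 8;
    }
  }

  public static void main(String[] args) throws Exception {
    Weigher<Integer, IndexInfo> weigher = (partitions, info) -> info.naiveSize();

    LoadingCache<Integer, IndexInfo> cache = CacheBuilder.newBuilder()
        .maximumWeight(100L * 1024 * 1024) // the intended "100MB" cap
        .weigher(weigher)
        .build(new CacheLoader<Integer, IndexInfo>() {
          @Override
          public IndexInfo load(Integer partitions) {
            return new IndexInfo(partitions);
          }
        });

    // An entry for a 170000-reducer shuffle weighs ~1.3MB by the naive
    // measure; the real footprint is larger, and across many cached entries
    // the gap adds up, so the process can exceed the configured cap.
    cache.get(170_000);
  }
}
{code}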

> Spark shuffle index cache size should be memory based
> -----------------------------------------------------
>
>                 Key: SPARK-21501
>                 URL: https://issues.apache.org/jira/browse/SPARK-21501
>             Project: Spark
>          Issue Type: Bug
>          Components: Shuffle, Spark Core
>    Affects Versions: 2.1.0
>            Reporter: Thomas Graves
>            Assignee: Sanket Reddy
>            Priority: Major
>             Fix For: 2.3.0
>
>
> Right now the spark shuffle service has a cache for index files. It is
> bounded by the number of files cached (spark.shuffle.service.index.cache.entries).
> This can cause issues if people have a lot of reducers, because the size of
> each entry grows with the number of reducers.
> We saw an issue with a job that had 170000 reducers: the Spark shuffle
> service caused the NM to use 700-800MB of memory by itself.
> We should change this cache to be memory based and only allow a certain
> amount of memory to be used. When I say memory based, I mean the cache
> should have a limit of, say, 100MB.
