[ https://issues.apache.org/jira/browse/SPARK-22149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16183575#comment-16183575 ]
Takeshi Yamamuro commented on SPARK-22149:
------------------------------------------

I think you should first ask on the Spark mailing list. Then, if it turns out something should be done, you can open a JIRA. Thanks!

> spark.shuffle.memoryFraction (deprecated) in spark 2
> ----------------------------------------------------
>
>                 Key: SPARK-22149
>                 URL: https://issues.apache.org/jira/browse/SPARK-22149
>             Project: Spark
>          Issue Type: Documentation
>          Components: Documentation
>    Affects Versions: 2.1.1
>            Reporter: regis le bretonnic
>            Priority: Minor
>
> Hi,
> This is not a bug, but perhaps a gap in the documentation.
> I have a job that produces a lot of blockmgr files. I do not understand why
> the shuffle writes so much to disk rather than keeping data in the NodeManager heap.
> I wanted to increase spark.shuffle.memoryFraction to reduce the amount of data
> spilled to disk, but this parameter is deprecated in the version we use
> (https://spark.apache.org/docs/2.1.1/configuration.html).
> How can I increase the memory allocated to shuffle in Spark 2? Is there an
> undocumented parameter?
> I do not use an external shuffle service, and I'd prefer to avoid one for now.
> Thanks in advance.

--
This message was sent by Atlassian JIRA (v6.4.14#64029)
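For context on the question itself: in Spark 2.x the separate spark.shuffle.memoryFraction and spark.storage.memoryFraction settings were replaced by a unified memory pool, so the closest knobs are spark.memory.fraction and spark.memory.storageFraction. A minimal sketch of setting them via spark-submit follows; the job class and jar name are hypothetical placeholders, and the 0.8/0.3 values are illustrative assumptions, not tuning recommendations:

```shell
# Spark 2.x shares one unified memory pool between execution (shuffle) and storage.
#   spark.memory.fraction        - fraction of (heap - 300MB) for that pool (default 0.6)
#   spark.memory.storageFraction - portion of the pool protected for storage (default 0.5);
#                                  lowering it leaves more room for shuffle before spilling
# com.example.MyJob and my-job.jar below are hypothetical placeholders.
spark-submit \
  --class com.example.MyJob \
  --conf spark.memory.fraction=0.8 \
  --conf spark.memory.storageFraction=0.3 \
  my-job.jar
```

Note also that Spark 2.1 still honors the old fractions if spark.memory.useLegacyMode=true is set, though the unified manager is generally the better path.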