[ https://issues.apache.org/jira/browse/SPARK-24297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16497219#comment-16497219 ]
Apache Spark commented on SPARK-24297:
--------------------------------------

User 'squito' has created a pull request for this issue:
https://github.com/apache/spark/pull/21474

> Change default value for spark.maxRemoteBlockSizeFetchToMem to be < 2GB
> -----------------------------------------------------------------------
>
>                 Key: SPARK-24297
>                 URL: https://issues.apache.org/jira/browse/SPARK-24297
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Block Manager, Shuffle, Spark Core
>    Affects Versions: 2.3.0
>            Reporter: Imran Rashid
>            Priority: Major
>
> Any network request that does not use stream-to-disk and sends more than 2GB
> is doomed to fail, so we might as well set the default value of
> spark.maxRemoteBlockSizeFetchToMem to something below 2GB.
> It probably makes sense to set it even lower, but that might require more
> careful testing; capping it just under 2GB is a totally safe first step.
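For context, a minimal sketch of how a user could set this property explicitly today, without waiting for the default to change. The property name comes from this issue; the specific value (just under 2GB) and the application name are only illustrative, not the default the pull request proposes:

    import org.apache.spark.sql.SparkSession

    // Blocks larger than this threshold are streamed to disk instead of being
    // buffered in memory, avoiding the ~2GB limit on in-memory network transfers.
    // The value here is illustrative: roughly Int.MaxValue - 512 bytes.
    val spark = SparkSession.builder()
      .appName("fetch-to-mem-example")   // hypothetical app name
      .master("local[*]")
      .config("spark.maxRemoteBlockSizeFetchToMem", (Int.MaxValue - 512).toString)
      .getOrCreate()

The same setting can also be passed on the command line via --conf spark.maxRemoteBlockSizeFetchToMem=<bytes> to spark-submit.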