[ https://issues.apache.org/jira/browse/SPARK-24296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Apache Spark reassigned SPARK-24296:
------------------------------------

    Assignee: Apache Spark

> Support replicating blocks larger than 2 GB
> -------------------------------------------
>
>                 Key: SPARK-24296
>                 URL: https://issues.apache.org/jira/browse/SPARK-24296
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Block Manager, Spark Core
>    Affects Versions: 2.3.0
>            Reporter: Imran Rashid
>            Assignee: Apache Spark
>            Priority: Major
>
> Block replication currently sends the entire block's data in a single frame. This causes a failure on the receiving end for blocks larger than 2 GB.
> We should change block replication to send the block data as a stream when the block is large (building on the network changes in SPARK-6237). This can use the conf spark.maxRemoteBlockSizeFetchToMem to decide when to replicate as a stream, the same as we do for fetching shuffle blocks and fetching remote RDD blocks.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
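The threshold check described in the issue could be sketched as follows. This is a minimal illustration, not Spark's actual internal API: the class and method names are hypothetical, and only the decision logic (compare the block size against a spark.maxRemoteBlockSizeFetchToMem-style threshold, capped at the 2 GB frame limit) follows the description above.

```java
// Hedged sketch of the replication-mode decision. Names are illustrative;
// Spark's real block manager code is structured differently.
public class ReplicationModeChooser {

    // A single network frame carries a signed 32-bit length, so anything
    // at or above ~2 GB cannot be sent in one frame.
    static final long MAX_FRAME_BYTES = Integer.MAX_VALUE;

    /**
     * Returns true when the block should be replicated as a stream rather
     * than as a single frame, mirroring how the fetch-to-memory threshold
     * is used for shuffle and remote RDD block fetches.
     */
    static boolean replicateAsStream(long blockSizeBytes,
                                     long maxRemoteBlockSizeFetchToMem) {
        // The effective threshold can never exceed the frame limit.
        long threshold = Math.min(maxRemoteBlockSizeFetchToMem, MAX_FRAME_BYTES);
        return blockSizeBytes >= threshold;
    }

    public static void main(String[] args) {
        // A 3 GB block with a 200 MB threshold: stream it.
        System.out.println(replicateAsStream(3L << 30, 200L << 20)); // true
        // A 1 MB block with the same threshold: one frame is fine.
        System.out.println(replicateAsStream(1L << 20, 200L << 20)); // false
    }
}
```

Capping the configured value at Integer.MAX_VALUE reflects the constraint in the issue: even if the user sets a larger fetch-to-memory limit, a single frame still cannot exceed 2 GB.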