[ https://issues.apache.org/jira/browse/SPARK-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15259101#comment-15259101 ]
Robert Ormandi edited comment on SPARK-5997 at 4/26/16 10:58 PM:
-----------------------------------------------------------------

Would it simply solve the problem if the method split each partition into N new ones uniformly (N should be sufficiently small, e.g. 2 or 3)? That way we would end up with N x originalNumberOfPartitions partitions, each containing approximately originalNumberOfObjectsPerPartition / N objects. N could be a parameter of the method.


was (Author: rormandi):
Would it simply solve the problem if the method split each partition into N new ones uniformly? That way we would end up with N x originalNumberOfPartitions partitions, each containing approximately originalNumberOfObjectsPerPartition / N objects. N could be a parameter of the method.

> Increase partition count without performing a shuffle
> -----------------------------------------------------
>
>                 Key: SPARK-5997
>                 URL: https://issues.apache.org/jira/browse/SPARK-5997
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: Andrew Ash
>
> When decreasing partition count with rdd.repartition() or rdd.coalesce(), the user can choose whether or not to perform a shuffle. However, when increasing partition count there is no such option: a shuffle always occurs.
> This Jira is to create a {{rdd.repartition(largeNum, shuffle=false)}} call that repartitions to a higher partition count without a shuffle.
> The motivating use case is to decrease the size of an individual partition enough that .toLocalIterator puts significantly less memory pressure on the driver, since it loads one partition at a time into the driver.
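To make the idea concrete, below is a rough sketch of how such a split could look as a custom RDD. This is only an illustration, not existing Spark API: the names {{SplitPartitionsRDD}} and {{SlicePartition}} and the parameter {{numSlices}} are made up for this example. Each parent partition is fanned out into {{numSlices}} child partitions; each child re-reads its parent partition and keeps every {{numSlices}}-th element, so no shuffle is involved.

{code:scala}
import scala.reflect.ClassTag

import org.apache.spark.{NarrowDependency, Partition, TaskContext}
import org.apache.spark.rdd.RDD

// Hypothetical sketch only: not part of Spark. A child partition is the
// `slice`-th of `numSlices` slices of a single parent partition.
case class SlicePartition(index: Int, parentIndex: Int, slice: Int, numSlices: Int)
  extends Partition

// Fans each parent partition out into `numSlices` child partitions without a
// shuffle. Each child depends on exactly one parent partition (a narrow
// dependency), so no data moves between executors.
class SplitPartitionsRDD[T: ClassTag](parent: RDD[T], numSlices: Int)
  extends RDD[T](parent.sparkContext, Seq(new NarrowDependency[T](parent) {
    override def getParents(partitionId: Int): Seq[Int] = Seq(partitionId / numSlices)
  })) {

  override protected def getPartitions: Array[Partition] =
    firstParent[T].partitions.flatMap { p =>
      (0 until numSlices).map { s =>
        SlicePartition(p.index * numSlices + s, p.index, s, numSlices): Partition
      }
    }

  override def compute(split: Partition, context: TaskContext): Iterator[T] = {
    val sp = split.asInstanceOf[SlicePartition]
    // Re-read the parent partition and keep every numSlices-th element, so the
    // parent's data is spread uniformly over its numSlices children.
    firstParent[T].iterator(firstParent[T].partitions(sp.parentIndex), context)
      .zipWithIndex
      .collect { case (elem, i) if i % sp.numSlices == sp.slice => elem }
  }
}
{code}

Usage would be something like {{new SplitPartitionsRDD(rdd, 3)}}. The obvious trade-off of this sketch is that each parent partition is evaluated {{numSlices}} times unless it is cached, so a built-in {{rdd.repartition(largeNum, shuffle=false)}} would probably need a cheaper mechanism inside Spark itself.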