[ https://issues.apache.org/jira/browse/SPARK-6695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14396174#comment-14396174 ]
Sean Owen commented on SPARK-6695:
----------------------------------

See SPARK-6713 for a solution to this particular problem, and for the type of solution I think we would want for issues like this: it lets Spark itself do the spilling.

> Add an external iterator: a hadoop-like output collector
> ---------------------------------------------------------
>
>                 Key: SPARK-6695
>                 URL: https://issues.apache.org/jira/browse/SPARK-6695
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>            Reporter: uncleGen
>
> In practice we often need to build a very large iterator, one that is too big in memory usage or too long in element count. On the one hand, it consumes too much memory; on the other hand, a single `Array` may not be able to hold all the elements, since Java array indices are of type `int` (4 bytes, 32 bits). So, IMHO, we could provide a `collector` that has a buffer (100MB or some other size) and can spill data to disk. The use case might look like:
> {code:borderStyle=solid}
> rdd.mapPartitions { it =>
>   ...
>   val collector = new ExternalCollector()
>   collector.collect(a)
>   ...
>   collector.iterator
> }
> {code}
> I have done some related work, and I would like your opinions. Thanks!
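For reference, a minimal sketch of what the proposed collector could look like. The name `ExternalCollector` and the `collect`/`iterator` API come from the issue description; the `maxInMemory` parameter, the use of temporary files, and plain Java serialization are illustrative assumptions, not how Spark's own spilling (e.g. the approach referenced in SPARK-6713) works.

{code:borderStyle=solid}
import java.io.{File, FileInputStream, FileOutputStream, ObjectInputStream, ObjectOutputStream}
import scala.collection.mutable.ArrayBuffer

// Sketch only: keeps up to `maxInMemory` elements in a heap buffer and
// spills full buffers to temporary files on disk. Not an actual Spark API.
class ExternalCollector[T <: Serializable](maxInMemory: Int = 100000) {
  private val buffer = new ArrayBuffer[T]()
  private val spillFiles = new ArrayBuffer[File]()

  def collect(elem: T): Unit = {
    buffer += elem
    if (buffer.size >= maxInMemory) spill()
  }

  // Write the current buffer to a temp file and clear it.
  private def spill(): Unit = {
    val file = File.createTempFile("external-collector", ".spill")
    file.deleteOnExit()
    val out = new ObjectOutputStream(new FileOutputStream(file))
    try {
      out.writeInt(buffer.size)
      buffer.foreach(out.writeObject)
    } finally {
      out.close()
    }
    spillFiles += file
    buffer.clear()
  }

  // Iterate over spilled files first (one file is read at a time),
  // then over whatever is still in memory.
  def iterator: Iterator[T] = {
    val spilled = spillFiles.iterator.flatMap { file =>
      val in = new ObjectInputStream(new FileInputStream(file))
      try {
        val n = in.readInt()
        (0 until n).map(_ => in.readObject().asInstanceOf[T])
      } finally {
        in.close()
      }
    }
    spilled ++ buffer.iterator
  }
}
{code}

Under these assumptions it would be used exactly as in the description's snippet, inside `rdd.mapPartitions`, returning `collector.iterator` as the partition's output.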