[ https://issues.apache.org/jira/browse/SPARK-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14999130#comment-14999130 ]
Daniel Lemire commented on SPARK-11583:
---------------------------------------
> So, this is about compressing the in-memory representation in some way,
> whereas roaringbitmap compressed the external representation?
Probably not. Roaring bitmaps use about as much RAM as they use serialized
bytes.
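A minimal way to check this claim, sketched in Scala against the
org.roaringbitmap Java library (the data below is made up for illustration):

    import org.roaringbitmap.RoaringBitmap

    // Build a bitmap with a mixed membership pattern and compare its
    // estimated heap footprint against its serialized footprint.
    val rb = RoaringBitmap.bitmapOf((0 until 200000 by 3).toArray: _*)
    println(rb.getSizeInBytes)          // estimated in-memory bytes
    println(rb.serializedSizeInBytes)   // bytes when written to a stream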
Possibly Spark would need either to use a version of Roaring like Lucene's
(which eagerly handles the flips) or to make use of our "runOptimize" method.
See my answer below to [~irashid].
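For illustration, calling runOptimize on a bitmap holding one long run of
consecutive bits (the shape a mostly non-empty MapStatus produces) looks like
this; a sketch, again assuming a recent org.roaringbitmap release:

    import org.roaringbitmap.RoaringBitmap

    // runOptimize() converts containers to run-length encoding when smaller.
    val rb = new RoaringBitmap()
    rb.add(0L, 200000L)                  // set bits [0, 200000)
    val before = rb.getSizeInBytes
    val hasRuns = rb.runOptimize()       // true if any container was converted
    println(s"$before -> ${rb.getSizeInBytes} bytes, runs: $hasRuns")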
> Make MapStatus use less memory
> ------------------------------
>
> Key: SPARK-11583
> URL: https://issues.apache.org/jira/browse/SPARK-11583
> Project: Spark
> Issue Type: Improvement
> Components: Scheduler, Spark Core
> Reporter: Kent Yao
>
> In the resolved issue https://issues.apache.org/jira/browse/SPARK-11271, as I
> said, using BitSet can save ≈20% memory compared to RoaringBitmap.
> For a Spark job that contains quite a lot of tasks, though, 20% seems like a
> drop in the ocean.
> Essentially, BitSet is backed by a long[]: a BitSet over 200k bits needs
> long[3125] (200,000 bits / 64 bits per long = 3,125 words, i.e. 25,000 bytes).
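> A quick way to see the long[] backing (a sketch using java.util.BitSet;
> Spark's own BitSet wrapper is backed the same way):
>
>     import java.util.BitSet
>
>     val bs = new BitSet(200000)
>     bs.set(199999)                   // force allocation up to the last word
>     println(bs.toLongArray.length)   // 3125 longs = 25,000 bytes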
> So instead we can use a HashSet[Int] to store reduceIds: when the non-empty
> blocks are dense, store the reduceIds of the empty blocks; when they are
> sparse, store the non-empty ones (see the sketch below).
> For dense cases: if a HashSet[Int] holding the numEmptyBlocks empty
> reduceIds is smaller than a BitSet over totalBlockNum bits, I use
> MapStatusTrackingNoEmptyBlocks.
> For sparse cases: if a HashSet[Int] holding the numNonEmptyBlocks non-empty
> reduceIds is smaller than a BitSet over totalBlockNum bits, I use
> MapStatusTrackingEmptyBlocks.
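> A minimal sketch of that size comparison (chooseEncoding is a hypothetical
> helper, and ~48 bytes per HashSet[Int] entry is a rough JVM estimate, not a
> measured number):
>
>     // Pick the cheapest encoding for tracking block emptiness in a MapStatus.
>     def chooseEncoding(numNonEmptyBlocks: Int, totalBlockNum: Int): String = {
>       val numEmptyBlocks   = totalBlockNum - numNonEmptyBlocks
>       val perEntry         = 48L                             // assumed bytes per stored id
>       val bitSetBytes      = (totalBlockNum + 63) / 64 * 8L  // long[] backing
>       val emptyIdsBytes    = numEmptyBlocks * perEntry
>       val nonEmptyIdsBytes = numNonEmptyBlocks * perEntry
>       if (emptyIdsBytes < bitSetBytes && emptyIdsBytes <= nonEmptyIdsBytes)
>         "MapStatusTrackingNoEmptyBlocks"  // dense: store the few empty reduceIds
>       else if (nonEmptyIdsBytes < bitSetBytes)
>         "MapStatusTrackingEmptyBlocks"    // sparse: store the few non-empty reduceIds
>       else
>         "BitSet"                          // otherwise the plain bitmap wins
>     }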
> Sparse case (299/300 blocks are empty):
> sc.makeRDD(1 to 30000, 3000).groupBy(x=>x).top(5)
> Dense case (no block is empty):
> sc.makeRDD(1 to 9000000, 3000).groupBy(x=>x).top(5)