[ https://issues.apache.org/jira/browse/SPARK-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14999882#comment-14999882 ]

Daniel Lemire commented on SPARK-11583:
---------------------------------------

[~irashid]

Calling a function so many times to set consecutive values would be wasteful.

With Java's {{BitSet}}, you could call:

{code:java}
BitSet bs = new BitSet();
bs.set(0, 200000);  // set the half-open range [0, 200000)
{code}

The memory usage would be the same (about 25 kB), but the construction time would be
much shorter.

Similarly, on a {{RoaringBitmap}} object you could do:

{code:java}
RoaringBitmap r = new RoaringBitmap();
r.add(0L, 200000L);  // add the half-open range [0, 200000)
{code}

The object would be automatically compressed down to a few bytes (far less than a 
BitSet). This is an instance where a {{RoaringBitmap}} would probably use three 
orders of magnitude less memory.
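
For concreteness, here is a minimal sketch of the comparison (the harness and class name are mine, not from the issue; exact byte counts vary with the RoaringBitmap version):

{code:java}
import java.util.BitSet;

import org.roaringbitmap.RoaringBitmap;

public class RangeSizeDemo {
    public static void main(String[] args) {
        BitSet bs = new BitSet();
        bs.set(0, 200000);                  // backed by a long[3125], ~25 kB
        System.out.println("BitSet:  ~" + bs.size() / 8 + " bytes");

        RoaringBitmap r = new RoaringBitmap();
        r.add(0L, 200000L);                 // a single dense range compresses to a few runs
        System.out.println("Roaring: ~" + r.getSizeInBytes() + " bytes");
    }
}
{code}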

If you instead add the values one by one, you can also call {{runOptimize}} on the 
result (at the very end, or a few times along the way). Again, a {{RoaringBitmap}} 
would use thousands of times less memory.
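
A hedged sketch of that pattern (the scaffolding is again mine):

{code:java}
import org.roaringbitmap.RoaringBitmap;

public class RunOptimizeDemo {
    public static void main(String[] args) {
        RoaringBitmap r = new RoaringBitmap();
        // Build the same set one value at a time.
        for (int i = 0; i < 200000; i++) {
            r.add(i);
        }
        System.out.println("before runOptimize: " + r.getSizeInBytes() + " bytes");
        // Recompress: dense containers become run-length-encoded containers.
        r.runOptimize();
        System.out.println("after runOptimize:  " + r.getSizeInBytes() + " bytes");
    }
}
{code}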



This is, by the way, a perfect example of a case where you want neither a hash set 
nor an uncompressed bitset.
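
A back-of-envelope comparison for the 200,000-value case; the per-entry HashSet costs below are rough, JVM-dependent assumptions of mine, not measured figures:

{code:java}
public class BackOfEnvelope {
    public static void main(String[] args) {
        int n = 200_000;
        // Assumed per-entry overhead of a HashSet<Integer> on a 64-bit JVM:
        // boxed Integer (~16 B) + internal map node (~32 B) + table slot (~8 B).
        long hashSetBytes = n * 56L;
        long bitSetBytes = n / 8L;  // uncompressed bitset: one bit per value in [0, n)
        System.out.println("HashSet ~" + hashSetBytes / 1024 + " kB");  // ~10,900 kB
        System.out.println("BitSet  ~" + bitSetBytes / 1024 + " kB");   // ~24 kB
        // A run-length-compressed RoaringBitmap stores the same range in tens of bytes.
    }
}
{code}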

> Make MapStatus use less memory
> -----------------------------------
>
>                 Key: SPARK-11583
>                 URL: https://issues.apache.org/jira/browse/SPARK-11583
>             Project: Spark
>          Issue Type: Improvement
>          Components: Scheduler, Spark Core
>            Reporter: Kent Yao
>
> In the resolved issue https://issues.apache.org/jira/browse/SPARK-11271, as I 
> said, using BitSet can save ≈20% memory usage compared to RoaringBitmap. 
> For a Spark job that contains quite a lot of tasks, 20% seems a drop in the ocean. 
> Essentially, BitSet uses a long[]; for example, a BitSet of 200k bits is a long[3125].
> So we can use a HashSet[Int] to store the reduceIds (when non-empty blocks are 
> dense, use the reduceIds of empty blocks; when sparse, use the non-empty ones).
> For dense cases: if HashSet[Int](numNonEmptyBlocks).size < BitSet[totalBlockNum], 
> I use MapStatusTrackingNoEmptyBlocks.
> For sparse cases: if HashSet[Int](numEmptyBlocks).size < BitSet[totalBlockNum], 
> I use MapStatusTrackingEmptyBlocks.
> Sparse case (299/300 blocks are empty):
> sc.makeRDD(1 to 30000, 3000).groupBy(x=>x).top(5)
> Dense case (no block is empty):
> sc.makeRDD(1 to 9000000, 3000).groupBy(x=>x).top(5)


