[ https://issues.apache.org/jira/browse/SPARK-11583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000187#comment-15000187 ]

Kent Yao edited comment on SPARK-11583 at 11/11/15 10:20 AM:
-------------------------------------------------------------

@lemire [~imranr]

1. test cases
    1.1 sparse case: for each task, 10 blocks contain data and the others don't
         
          sc.makeRDD(1 to 40950, 4095).groupBy(x=>x).top(5)

    1.2 dense case: for each task, most blocks contain data and few don't
 
         1.2.1 full
         sc.makeRDD(1 to 16769025, 4095).groupBy(x=>x).top(5)
          
         1.2.2 very dense: about 95 empty blocks
         sc.makeRDD(1 to 16380000, 4095).groupBy(x=>x).top(5)
     
   1.3 test tool
        
         jmap -dump:format=b,file=heap.bin <pid>  
   
   1.4 test branches: branch-1.5, master
 

2. memory usage

     2.1 RoaringBitmap--sparse

Class Name                                           | Objects | Shallow Heap | Retained Heap
---------------------------------------------------------------------------------------------
org.apache.spark.scheduler.HighlyCompressedMapStatus |   4,095 |      131,040 | >= 34,135,920
---------------------------------------------------------------------------------------------
     my explanation: 4095 statuses, each tracking 4095 - 10 = 4085 empty blocks as 16-bit shorts: 4095 * 4085 * 2 bytes = 33,456,150 bytes, plus other overhead ≈ 34,135,920 (a Scala sketch of this arithmetic for all four cases follows section 2.4)

    2.2.1 RoaringBitmap--full

Class Name                                           | Objects | Shallow Heap | Retained Heap
---------------------------------------------------------------------------------------------
org.apache.spark.scheduler.HighlyCompressedMapStatus |   4,095 |      131,040 |    >= 360,360
---------------------------------------------------------------------------------------------
    my explanation: the RoaringBitmap holds 0 entries (no empty blocks to record), so only fixed per-status overhead remains

   2.2.2 RoaringBitmap--very dense

Class Name                                           | Objects | Shallow Heap | Retained Heap
---------------------------------------------------------------------------------------------
org.apache.spark.scheduler.HighlyCompressedMapStatus |   4,095 |      131,040 |  >= 1,441,440
---------------------------------------------------------------------------------------------
     my explanation: 4095 statuses, each tracking 95 empty blocks as 16-bit shorts: 4095 * 95 * 2 bytes = 778,050 bytes (+ other overhead = 1,441,440)
    
     
    2.3 BitSet--sparse

Class Name                                           | Objects | Shallow Heap | Retained Heap
---------------------------------------------------------------------------------------------
org.apache.spark.scheduler.HighlyCompressedMapStatus |   4,095 |      131,040 |  >= 2,391,480
---------------------------------------------------------------------------------------------
my explanation: 4095 statuses * 4095 bits each = 16,769,025 bits ≈ 2,096,128 bytes (+ other overhead = 2,391,480)

    2.4 BitSet--full

Class Name                                           | Objects | Shallow Heap | Retained Heap
---------------------------------------------------------------------------------------------
org.apache.spark.scheduler.HighlyCompressedMapStatus |   4,095 |      131,040 |  >= 2,391,480
---------------------------------------------------------------------------------------------
my explanation: same as above; a fixed-size BitSet allocates one bit per block regardless of how many blocks are empty
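
For reference, a minimal Scala sketch of the back-of-envelope arithmetic used in the explanations above. It only models the id/bit payload (2 bytes per tracked id for RoaringBitmap's short-based containers at these cardinalities; one bit per block, rounded up to whole longs, for BitSet), so the measured retained heaps are larger by the remaining per-object overhead.

object MapStatusSizeEstimate {
  val numStatuses = 4095   // one HighlyCompressedMapStatus per map task
  val totalBlocks = 4095   // reduce blocks tracked by each status

  // RoaringBitmap at these cardinalities keeps one 16-bit short per tracked block id
  def roaringPayloadBytes(trackedBlocks: Int): Long =
    numStatuses.toLong * trackedBlocks * 2

  // a fixed-size BitSet allocates one bit per block, rounded up to whole longs
  def bitSetPayloadBytes: Long =
    numStatuses.toLong * (((totalBlocks + 63) / 64) * 8)

  def main(args: Array[String]): Unit = {
    println(f"Roaring, sparse (4085 tracked):   ${roaringPayloadBytes(4085)}%,d")  // 33,456,150 (measured >= 34,135,920)
    println(f"Roaring, full (0 tracked):        ${roaringPayloadBytes(0)}%,d")     //          0 (measured >=    360,360)
    println(f"Roaring, very dense (95 tracked): ${roaringPayloadBytes(95)}%,d")    //    778,050 (measured >=  1,441,440)
    println(f"BitSet, any density:              $bitSetPayloadBytes%,d")           //  2,096,640 (measured >=  2,391,480)
  }
}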

3. conclusion

memory usage:

RoaringBitmap--full < RoaringBitmap--very dense < BitSet--full = BitSet--sparse < RoaringBitmap--sparse
  

    





> Make MapStatus use less memory usage
> ------------------------------------
>
>                 Key: SPARK-11583
>                 URL: https://issues.apache.org/jira/browse/SPARK-11583
>             Project: Spark
>          Issue Type: Improvement
>          Components: Scheduler, Spark Core
>            Reporter: Kent Yao
>
> In the resolved issue https://issues.apache.org/jira/browse/SPARK-11271, as I said, using a BitSet can save ≈20% memory compared to RoaringBitmap.
> For a Spark job that contains quite a lot of tasks, 20% seems a drop in the ocean.
> Essentially, BitSet uses a long[]; for example, a BitSet of 200k bits is a long[3125].
> So we can use a HashSet[Int] to store reduceIds instead (when non-empty blocks are dense, store the reduceIds of the empty blocks; when sparse, store the non-empty ones).
> For dense cases: if HashSet[Int](numNonEmptyBlocks).size < BitSet[totalBlockNum], I use MapStatusTrackingNoEmptyBlocks.
> For sparse cases: if HashSet[Int](numEmptyBlocks).size < BitSet[totalBlockNum], I use MapStatusTrackingEmptyBlocks.
> sparse case (299/300 blocks are empty):
> sc.makeRDD(1 to 30000, 3000).groupBy(x=>x).top(5)
> dense case (no block is empty):
> sc.makeRDD(1 to 9000000, 3000).groupBy(x=>x).top(5)
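
Below is a rough Scala sketch (illustrative only, not the actual patch) of the selection idea described in the quoted issue: keep whichever set of reduce ids is smaller, so dense map outputs record their few empty blocks and sparse outputs record their few non-empty ones. The helper name and return shape are hypothetical.

import scala.collection.mutable

// Hypothetical helper sketching the proposed choice; the classes named in the
// description (MapStatusTrackingEmptyBlocks / MapStatusTrackingNoEmptyBlocks)
// are not reproduced here.
def chooseTrackedIds(totalBlocks: Int, nonEmptyIds: Seq[Int]): (String, mutable.HashSet[Int]) = {
  val numEmpty = totalBlocks - nonEmptyIds.size
  if (nonEmptyIds.size <= numEmpty) {
    // sparse output: remember the few non-empty reduce ids
    ("trackNonEmptyBlocks", mutable.HashSet(nonEmptyIds: _*))
  } else {
    // dense output: remember the few empty reduce ids instead
    val emptyIds = mutable.HashSet((0 until totalBlocks): _*)
    emptyIds --= nonEmptyIds
    ("trackEmptyBlocks", emptyIds)
  }
}

With 4095 blocks, the sparse case measured above would keep only ~10 non-empty ids per status instead of 4085 shorts or a 4095-bit BitSet.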


