[ 
https://issues.apache.org/jira/browse/HDFS-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14345482#comment-14345482
 ] 

Colin Patrick McCabe commented on HDFS-7836:
--------------------------------------------

bq. Generally we have a few ways to use off-heap memory: 1) DirectByteBuffer, 
which doesn't fit our case; 2) sun.misc.Unsafe; 3) writing native code and 
allocating memory ourselves. It seems we are going to use #3?

I think #2 is the best.  As we talked about earlier, writing more JNI (#3) just 
adds more platform dependencies that we don't want.  With regard to #1, 
allocating DirectByteBuffers is very slow.
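For illustration, a minimal sketch of option #2: obtaining `sun.misc.Unsafe` via reflection (its constructor is private) and doing a raw off-heap allocate/write/read/free cycle. This is just a sketch of the general technique, not the HDFS-7836 code; the class and method names are made up for the example.

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

// Sketch of option #2: raw off-heap access through sun.misc.Unsafe.
public class UnsafeExample {
    static Unsafe unsafe() throws Exception {
        // theUnsafe is a private static field; reflection is the usual way in.
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        return (Unsafe) f.get(null);
    }

    // Allocate 8 bytes off the Java heap, write a long, read it back, free it.
    public static long roundTrip(long v) throws Exception {
        Unsafe u = unsafe();
        long addr = u.allocateMemory(8); // native memory, invisible to the GC
        try {
            u.putLong(addr, v);          // write directly to native memory
            return u.getLong(addr);      // read it back from the same address
        } finally {
            u.freeMemory(addr);          // manual lifetime management, no GC
        }
    }
}
```

Unlike `ByteBuffer.allocateDirect`, which goes through the NIO machinery and tracks the allocation for eventual cleanup, `allocateMemory` is essentially a malloc, which is why #2 avoids the allocation cost mentioned above.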

bq. If so, we still need an initial capacity for the hash table; otherwise 
there will be a lot of rehashing as the load factor reaches the threshold. 
How about we do the same thing we currently do in the Java block map and use 
2% of total memory? Then we don't need the load factor and can assume the 
number of table rows is enough.

That's an interesting idea.  Keep in mind, though, that this hash table will 
be off-heap, so sizing it based on the JVM heap size would be odd.  Honestly, 
I think having a configuration setting for this is easiest.  I also suspect 
that growing the off-heap hash table will be quicker than growing an on-heap 
one, since the resize won't trigger full GCs.
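To make the resize point concrete, here is a hypothetical sketch of the off-heap storage for a table of fixed-size (key, value) slots: the initial capacity comes from a setting, and growing is just an off-heap allocate/copy/free, with no on-heap allocation for the resize itself. All names are invented for the example; a real hash table would also re-bucket its entries on resize, which this sketch deliberately omits.

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

// Hypothetical off-heap slot array: fixed-size 16-byte slots
// (8-byte key + 8-byte value), capacity taken from a setting.
public class OffHeapSlots {
    private static final Unsafe UNSAFE = getUnsafe();
    private static final int SLOT_BYTES = 16;

    private long base;     // address of the native region
    private int capacity;  // number of slots

    public OffHeapSlots(int initialCapacity) { // e.g. from a config setting
        this.capacity = initialCapacity;
        long bytes = (long) initialCapacity * SLOT_BYTES;
        this.base = UNSAFE.allocateMemory(bytes);
        UNSAFE.setMemory(base, bytes, (byte) 0); // zero-fill the region
    }

    // Doubling is malloc + memcpy + free: nothing is allocated on the
    // Java heap, so the resize itself cannot trigger a full GC.
    // (A real hash table would re-bucket entries here, since slot
    // indices depend on capacity; this only shows the memory mechanics.)
    public void grow() {
        int newCapacity = capacity * 2;
        long newBytes = (long) newCapacity * SLOT_BYTES;
        long newBase = UNSAFE.allocateMemory(newBytes);
        UNSAFE.setMemory(newBase, newBytes, (byte) 0);
        UNSAFE.copyMemory(base, newBase, (long) capacity * SLOT_BYTES);
        UNSAFE.freeMemory(base);
        base = newBase;
        capacity = newCapacity;
    }

    public void put(int slot, long key, long value) {
        long addr = base + (long) slot * SLOT_BYTES;
        UNSAFE.putLong(addr, key);
        UNSAFE.putLong(addr + 8, value);
    }

    public long valueAt(int slot) {
        return UNSAFE.getLong(base + (long) slot * SLOT_BYTES + 8);
    }

    public int capacity() { return capacity; }

    public void free() { UNSAFE.freeMemory(base); }

    private static Unsafe getUnsafe() {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            return (Unsafe) f.get(null);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Contrast with an on-heap `HashMap` resize, which allocates a new bucket array on the heap and can push a large heap toward a full GC.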

> BlockManager Scalability Improvements
> -------------------------------------
>
>                 Key: HDFS-7836
>                 URL: https://issues.apache.org/jira/browse/HDFS-7836
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Charles Lamb
>            Assignee: Charles Lamb
>         Attachments: BlockManagerScalabilityImprovementsDesign.pdf
>
>
> Improvements to BlockManager scalability.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
