[ https://issues.apache.org/jira/browse/HBASE-21874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16768931#comment-16768931 ]

Anoop Sam John commented on HBASE-21874:
----------------------------------------

bq.Why add the configurable buffer size? That seems to be what most of the 
patch is about which is distracting.
That is not directly related to this new IOEngine patch. We create 4 MB 
sized BB chunks for the offheap/file-mmap IOEngines. In the mmap case this 
puts a restriction on the total cache size that can be used, since there 
seems to be a cap on the number of mmap'ed regions: when we tried a cache 
size above 250 GB (do not remember the number exactly), the mmap calls were 
failing. For Pmem mmap, the BB chunks can be bigger than 4 MB. Maybe we can 
remove that part from this patch and make it smaller.
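
For context on why fewer, larger chunks help: with 4 MB chunks, a 250+ GB 
cache needs more than 64,000 map() calls, which is right around the default 
per-process mapping limit on Linux (vm.max_map_count = 65530). Below is a 
rough sketch of mapping a pmem-backed file in configurable chunk sizes; the 
class and method names are illustrative only, not the patch code.

{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class PmemMmapSketch {
  // Maps 'totalSize' bytes of the given file as an array of chunks.
  // chunkSize must be <= Integer.MAX_VALUE (FileChannel.map limit);
  // larger chunks mean fewer mmap regions for the same cache size.
  public static MappedByteBuffer[] mapChunks(String path, long totalSize,
      long chunkSize) throws IOException {
    int numChunks = (int) ((totalSize + chunkSize - 1) / chunkSize);
    MappedByteBuffer[] buffers = new MappedByteBuffer[numChunks];
    try (RandomAccessFile raf = new RandomAccessFile(path, "rw");
         FileChannel channel = raf.getChannel()) {
      raf.setLength(totalSize);
      long offset = 0;
      for (int i = 0; i < numChunks; i++) {
        long len = Math.min(chunkSize, totalSize - offset);
        // Each map() call consumes one mapping; the mappings stay valid
        // after the channel is closed.
        buffers[i] = channel.map(FileChannel.MapMode.READ_WRITE, offset, len);
        offset += len;
      }
    }
    return buffers;
  }
}
{code}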

> Bucket cache on Persistent memory
> ---------------------------------
>
>                 Key: HBASE-21874
>                 URL: https://issues.apache.org/jira/browse/HBASE-21874
>             Project: HBase
>          Issue Type: New Feature
>          Components: BucketCache
>    Affects Versions: 3.0.0
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>            Priority: Major
>             Fix For: 3.0.0
>
>         Attachments: HBASE-21874.patch, HBASE-21874.patch, Pmem_BC.png
>
>
> Non-volatile persistent memory devices are byte addressable like DRAM 
> (e.g. Intel DCPMM). The bucket cache implementation can take advantage of 
> this new memory type and use the existing offheap data structures to serve 
> data directly from this memory area without having to bring the data 
> onheap.
> The patch is a new IOEngine implementation that works with persistent 
> memory.
> Note: here we don't make use of the persistence nature of the device; we 
> only make use of the big memory it provides.
> Performance numbers to follow. 
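
A minimal sketch of the "serve data directly from this memory area" idea 
from the description above: hand out a zero-copy view over a mapped region 
instead of copying bytes onheap. 'mappedChunk' and the method below are 
hypothetical stand-ins, not the actual IOEngine API in the patch.

{code:java}
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;

public class PmemReadSketch {
  // Returns a view of [offsetInChunk, offsetInChunk + length) over the
  // mapped (off-heap) region; no bytes are copied onto the Java heap.
  public static ByteBuffer readBlock(MappedByteBuffer mappedChunk,
      int offsetInChunk, int length) {
    ByteBuffer view = mappedChunk.duplicate(); // independent position/limit
    view.position(offsetInChunk);
    view.limit(offsetInChunk + length);
    return view.slice();
  }
}
{code}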



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
