[ https://issues.apache.org/jira/browse/HBASE-13259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138356#comment-15138356 ]

ramkrishna.s.vasudevan commented on HBASE-13259:
------------------------------------------------

Completed the testing. Here are the findings.
Scans and gets were performed with YCSB using 75 threads, and the throughput was 
measured.
I used a server with 50GB of RAM and measured the throughput difference with the 
mmap() file mode configured with a cache of 100G. In one setup only 10G of data 
was loaded and all of it was cached; in the other, around 75G of data was loaded 
and the whole 75G was cached in the file mode BC. 
With 10G of cache
||Scans||Gets||
|13697.46 ops/sec|69085.88 ops/sec|

With 75G of cache
||Scans||Gets||
|8745.08 ops/sec|66221.93 ops/sec|

The same 75G cache setup was run with the current File Mode impl of the BC
||Scans||Gets||
|12107.92 ops/sec|42725.07 ops/sec|

Also, my File Mode BC impl is backed by a *PCIe SSD*.

So the test clearly shows that the mmap-based file mode is better suited for gets 
than for scans: when the data does not fit in RAM there can be a lot of page 
faults, since we do many read operations (such as compares) on the BBs retrieved 
from the mmap buffers. In the current File Mode BC, by contrast, the BB is copied 
onheap, so those reads cause no page faults. 
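
To make the difference between the two read paths concrete, here is a minimal 
plain-NIO sketch. This is not the actual BucketCache/ByteBufferArray code; the 
file path and block size are made up for illustration. The mmap path reads the 
mapped pages directly, so when the cached data exceeds RAM those reads can fault 
pages in from the backing file, while the pread-style path pays one explicit copy 
into an on-heap buffer and all later compares hit heap memory.

{code:java}
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmapVsPreadSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical cache file; assumed to exist and to hold at least one block.
        try (RandomAccessFile raf = new RandomAccessFile("/tmp/bucketcache.0", "r");
             FileChannel ch = raf.getChannel()) {

            int blockSize = 64 * 1024; // illustrative block size

            // mmap path: get() touches the mapped pages directly; if the cached
            // data does not fit in RAM, these reads can page-fault against the file.
            MappedByteBuffer mapped = ch.map(FileChannel.MapMode.READ_ONLY, 0, blockSize);
            byte viaMmap = mapped.get(0);

            // pread-style path (the copy the current FileIOEngine approach relies on):
            // one explicit read into an on-heap buffer, after which compares and
            // further reads hit heap memory and cause no page faults on the file.
            ByteBuffer onHeap = ByteBuffer.allocate(blockSize);
            ch.read(onHeap, 0L);
            onHeap.flip();
            byte viaCopy = onHeap.get(0);

            System.out.println(viaMmap == viaCopy); // both paths return the same bytes
        }
    }
}
{code}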


> mmap() based BucketCache IOEngine
> ---------------------------------
>
>                 Key: HBASE-13259
>                 URL: https://issues.apache.org/jira/browse/HBASE-13259
>             Project: HBase
>          Issue Type: New Feature
>          Components: BlockCache
>    Affects Versions: 0.98.10
>            Reporter: Zee Chen
>            Assignee: Zee Chen
>            Priority: Critical
>             Fix For: 2.0.0, 1.3.0
>
>         Attachments: HBASE-13259-v2.patch, HBASE-13259.patch, 
> HBASE-13259_v3.patch, ioread-1.svg, mmap-0.98-v1.patch, mmap-1.svg, 
> mmap-trunk-v1.patch
>
>
> Of the existing BucketCache IOEngines, FileIOEngine uses pread() to copy data 
> from kernel space to user space. This is a good choice when the total working 
> set size is much bigger than the available RAM and the latency is dominated 
> by IO access. However, when the entire working set is small enough to fit in 
> the RAM, using mmap() (and subsequent memcpy()) to move data from kernel 
> space to user space is faster. I have run some short keyval gets tests and 
> the results indicate a reduction of 2%-7% of kernel CPU on my system, 
> depending on the load. On the gets, the latency histograms from mmap() are 
> identical to those from pread(), but peak throughput is close to 40% higher.
> This patch modifies ByteBufferArray to allow it to specify a backing file.
> Example for using this feature: set  hbase.bucketcache.ioengine to 
> mmap:/dev/shm/bucketcache.0 in hbase-site.xml.
> Attached is the perf-measured CPU usage breakdown as a flame graph.
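
As a reference for the configuration mentioned in the description above, a minimal 
hbase-site.xml sketch could look like the following. The ioengine value is the 
example path from the description; hbase.bucketcache.size is the standard property 
for the cache size, shown here with 102400 MB, i.e. the 100G used in the test above.

{code:xml}
<!-- hbase-site.xml sketch: enable the mmap-backed file-mode BucketCache -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>mmap:/dev/shm/bucketcache.0</value>
</property>
<property>
  <!-- bucket cache size in MB; 102400 MB = 100G as in the test setup above -->
  <name>hbase.bucketcache.size</name>
  <value>102400</value>
</property>
{code}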



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
