[ 
https://issues.apache.org/jira/browse/HBASE-7404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanjie Gu updated HBASE-7404:
-----------------------------

Thanks for the response. I have asked about the bucket writer as well, and the answer is the same.



Sent from my Mi phone. On June 9, 2017, 10:01 AM, "Anoop Sam John (JIRA)" <j...@apache.org> wrote:





    [ 
https://issues.apache.org/jira/browse/HBASE-7404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16043811#comment-16043811
 ]

Anoop Sam John commented on HBASE-7404:
---------------------------------------

Because the block size is not a hard limit. While writing HFiles, it is always 
possible that the current cell takes us past the block size: the check that 
detects the size has been crossed runs only after the cell is written, and 
only then do we move on to the next block. To accommodate this overshoot, we 
have 1 KB extra.
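
To make the overshoot concrete, here is a deliberately simplified sketch of 
that write-then-check pattern; it is not the actual HFile writer code, and 
all names in it are illustrative:

    import java.util.ArrayList;
    import java.util.List;

    class IllustrativeBlockWriter {
        static final int CONFIGURED_BLOCK_SIZE = 64 * 1024; // e.g. 64 KB

        private final List<byte[]> currentBlock = new ArrayList<>();
        private int currentBlockSize = 0;

        void append(byte[] cell) {
            // The cell is always written first ...
            currentBlock.add(cell);
            currentBlockSize += cell.length;
            // ... and only afterwards is the size checked, so a block can
            // overshoot CONFIGURED_BLOCK_SIZE by up to one cell. That is why
            // the bucket cache keeps some slack (the "1 KB extra" above).
            if (currentBlockSize >= CONFIGURED_BLOCK_SIZE) {
                finishBlock();
            }
        }

        private void finishBlock() {
            // Flush currentBlock to the file here, then start a new block.
            currentBlock.clear();
            currentBlockSize = 0;
        }
    }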







> Bucket Cache: a solution for CMS, heap fragmentation, and big cache on HBase
> ----------------------------------------------------------------------------
>
>                 Key: HBASE-7404
>                 URL: https://issues.apache.org/jira/browse/HBASE-7404
>             Project: HBase
>          Issue Type: New Feature
>    Affects Versions: 0.94.3
>            Reporter: chunhui shen
>            Assignee: chunhui shen
>             Fix For: 0.95.0
>
>         Attachments: 7404-0.94-fixed-lines.txt, 7404-trunk-v10.patch, 
> 7404-trunk-v11.patch, 7404-trunk-v12.patch, 7404-trunk-v13.patch, 
> 7404-trunk-v13.txt, 7404-trunk-v14.patch, BucketCache.pdf, 
> hbase-7404-94v2.patch, HBASE-7404-backport-0.94.patch, 
> hbase-7404-trunkv2.patch, hbase-7404-trunkv9.patch, Introduction of Bucket 
> Cache.pdf
>
>
> First, thanks to @neil from Fusion-io for sharing the source code.
> Usage:
> 1. Use the bucket cache as the main memory cache, configured as follows:
> –"hbase.bucketcache.ioengine" "heap" (or "offheap" to cache blocks in 
> off-heap memory)
> –"hbase.bucketcache.size" 0.4 (the bucket cache size, as a fraction of the 
> max heap size)
> 2. Use the bucket cache as a secondary cache, configured as follows:
> –"hbase.bucketcache.ioengine" "file:/disk1/hbase/cache.data" (the file path 
> where the block data is stored)
> –"hbase.bucketcache.size" 1024 (the bucket cache size in MB, so 1024 means 
> 1 GB)
> –"hbase.bucketcache.combinedcache.enabled" false (the default is true)
> See more configurations in org.apache.hadoop.hbase.io.hfile.CacheConfig and 
> org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.
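> A hedged sketch of setting these properties with the standard Hadoop 
> Configuration API (normally they would go into hbase-site.xml on the region 
> servers; the property names are taken from this description, but check 
> CacheConfig for the authoritative list):
>
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.hbase.HBaseConfiguration;
>
>     public class BucketCacheConfigSketch {
>         public static void main(String[] args) {
>             Configuration conf = HBaseConfiguration.create();
>             // 1. Bucket cache as the main memory cache:
>             conf.set("hbase.bucketcache.ioengine", "offheap"); // or "heap"
>             conf.setFloat("hbase.bucketcache.size", 0.4f);     // 40% of max heap
>             // 2. Or as a secondary cache backed by a high-speed disk:
>             // conf.set("hbase.bucketcache.ioengine", "file:/disk1/hbase/cache.data");
>             // conf.setFloat("hbase.bucketcache.size", 1024f); // unit is MB
>             // conf.setBoolean("hbase.bucketcache.combinedcache.enabled", false);
>         }
>     }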
> What's Bucket Cache?
> It can greatly decrease CMS pauses and heap fragmentation caused by GC, and 
> it supports a large cache space for high read performance by using a 
> high-speed disk like Fusion-io.
> 1. An implementation of block cache, like LruBlockCache.
> 2. It manages the blocks' storage positions itself through the Bucket 
> Allocator (see the sketch after this list).
> 3. The cached blocks can be stored in memory or in the file system.
> 4. Bucket Cache can be used as the main block cache (see 
> CombinedBlockCache), combined with LruBlockCache, to decrease CMS pauses 
> and fragmentation caused by GC.
> 5. BucketCache can also be used as a secondary cache (e.g. using Fusion-io 
> to store blocks) to enlarge the cache space.
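> To illustrate point 2, here is a deliberately simplified sketch of the 
> bucket-allocation idea: pick the smallest fixed bucket size that fits a 
> block, and reuse freed buckets of that size, so the managed area never 
> fragments. This is illustrative only; the real allocator in 
> org.apache.hadoop.hbase.io.hfile.bucket is more sophisticated.
>
>     import java.util.ArrayDeque;
>     import java.util.Deque;
>     import java.util.HashMap;
>     import java.util.Map;
>
>     class TinyBucketAllocator {
>         // A few fixed size classes, in the spirit of the real allocator.
>         static final int[] BUCKET_SIZES = {4096, 8192, 16384, 65536};
>
>         private final Map<Integer, Deque<Long>> freeOffsets = new HashMap<>();
>         private long nextOffset = 0;
>
>         /** Returns the offset at which a block of blockSize bytes is stored. */
>         long allocate(int blockSize) {
>             int bucketSize = smallestFit(blockSize);
>             Deque<Long> free =
>                 freeOffsets.computeIfAbsent(bucketSize, k -> new ArrayDeque<>());
>             if (!free.isEmpty()) {
>                 return free.pop();    // reuse a freed bucket: no fragmentation
>             }
>             long offset = nextOffset; // carve a new bucket off the end
>             nextOffset += bucketSize;
>             return offset;
>         }
>
>         void free(long offset, int blockSize) {
>             int bucketSize = smallestFit(blockSize);
>             freeOffsets.computeIfAbsent(bucketSize, k -> new ArrayDeque<>())
>                        .push(offset);
>         }
>
>         private static int smallestFit(int blockSize) {
>             for (int s : BUCKET_SIZES) {
>                 if (s >= blockSize) return s;
>             }
>             throw new IllegalArgumentException("block too large: " + blockSize);
>         }
>     }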
> How about SlabCache?
> We studied and tested SlabCache first, but the results were bad, because:
> 1. SlabCache uses SingleSizeCache, whose memory utilization is low because 
> block sizes vary, especially with DataBlockEncoding (for example, a 17 KB 
> encoded block stored in a 64 KB slab wastes almost three quarters of the 
> slab).
> 2. SlabCache is used in DoubleBlockCache: a block is cached both in 
> SlabCache and in LruBlockCache, and is put into LruBlockCache again on a 
> SlabCache hit, so CMS pauses and heap fragmentation don't get any better.
> 3. Direct (off-heap) memory performance is not as good as heap and may 
> cause OOM, so we recommend using the "heap" engine.
> See more in the attachment and in the patch.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
