[ https://issues.apache.org/jira/browse/HBASE-23887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17127551#comment-17127551 ]
Danil Lipovoy commented on HBASE-23887:
---------------------------------------

One more test - I wanted to see how the auto-scaling behaves under a changing load. So I ran this scenario:
{noformat}
nohup bin/ycsb run hbase2 -cp ~/hbase_conf -P workloads/select_u -p table=tbl4 -p columnfamily=cf -threads 5 -p fieldcount=1 -p operationcount=40000 -s -t &
sleep 100
nohup bin/ycsb run hbase2 -cp ~/hbase_conf -P workloads/select_u -p table=tbl4 -p columnfamily=cf -threads 15 -p fieldcount=1 -p operationcount=60000 -s -t &
sleep 100
nohup bin/ycsb run hbase2 -cp ~/hbase_conf -P workloads/select_u -p table=tbl4 -p columnfamily=cf -threads 5 -p fieldcount=1 -p operationcount=40000 -s -t &
sleep 100
nohup bin/ycsb run hbase2 -cp ~/hbase_conf -P workloads/select_u -p table=tbl4 -p columnfamily=cf -threads 5 -p fieldcount=1 -p operationcount=40000 -s -t &
sleep 100
nohup bin/ycsb run hbase2 -cp ~/hbase_conf -P workloads/select_u -p table=tbl4 -p columnfamily=cf -threads 20 -p fieldcount=1 -p operationcount=50000 -s -t &
sleep 100
nohup bin/ycsb run hbase2 -cp ~/hbase_conf -P workloads/select_u -p table=tbl4 -p columnfamily=cf -threads 5 -p fieldcount=1 -p operationcount=20000 -s -t &
sleep 100
nohup bin/ycsb run hbase2 -cp ~/hbase_conf -P workloads/select_u -p table=tbl4 -p columnfamily=cf -threads 5 -p fieldcount=1 -p operationcount=10000 -s -t &
sleep 100
nohup bin/ycsb run hbase2 -cp ~/hbase_conf -P workloads/select_u -p table=tbl4 -p columnfamily=cf -threads 10 -p fieldcount=1 -p operationcount=60000 -s -t &
{noformat}
with the parameter hbase.lru.cache.heavy.eviction.count.limit = 100000 (= the feature disabled).

Then I set hbase.lru.cache.heavy.eviction.count.limit = 0 and ran almost the same scenario, just with "sleep 50" because it works faster.

The results:

!wave.png!

!image-2020-06-07-12-07-30-307.png!
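To make the log below a bit easier to read: the "overhead (%)" column looks like the evicted megabytes per period compared against hbase.lru.cache.heavy.eviction.bytes.size.limit (around 200 MB in this run, judging by the numbers), and the caching percentage is tuned down while the overhead stays positive. Here is a toy model of that feedback loop - the class, the step sizes and the 200 MB limit are my own guesses for illustration, not the patch code:
{code:java}
// Toy model only: how I read the "overhead (%)" and "current caching DataBlock (%)"
// columns in the log below. All names and step sizes are assumptions.
public class EvictionOverheadModel {
  // Assumed hbase.lru.cache.heavy.eviction.bytes.size.limit for this run (~200 MB).
  static final long BYTES_SIZE_LIMIT_MB = 200;

  static long overheadPercent(long evictedMb) {
    // e.g. evicted 228 MB -> 228 * 100 / 200 - 100 = 14 (%)
    return evictedMb * 100 / BYTES_SIZE_LIMIT_MB - 100;
  }

  public static void main(String[] args) {
    int cachingPercent = 100;
    int heavyEvictionCounter = 0;
    long[] evictedPerPeriodMb = {0, 5472, 6498, 456, 228, 114};
    for (long evicted : evictedPerPeriodMb) {
      long overhead = overheadPercent(evicted);
      if (overhead > 0) {
        // Eviction is "heavy": count it and cache fewer data blocks.
        heavyEvictionCounter++;
        cachingPercent = Math.max(1, cachingPercent - 15);  // step size is a guess
      } else {
        // Eviction is below the limit: cache more data blocks again.
        cachingPercent = Math.min(100, cachingPercent + 5); // step size is a guess
      }
      System.out.printf("evicted %d MB, overhead %d%%, counter %d, caching %d%%%n",
          evicted, overhead, heavyEvictionCounter, cachingPercent);
    }
  }
}
{code}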
How it looks in the log:
{noformat}
BlockCache evicted (MB): 0, overhead (%): -100, heavy eviction counter: 0, current caching DataBlock (%): 100
BlockCache evicted (MB): 0, overhead (%): -100, heavy eviction counter: 0, current caching DataBlock (%): 100
BlockCache evicted (MB): 0, overhead (%): -100, heavy eviction counter: 0, current caching DataBlock (%): 100
BlockCache evicted (MB): 5472, overhead (%): 2636, heavy eviction counter: 1, current caching DataBlock (%): 85   < test begin
BlockCache evicted (MB): 6498, overhead (%): 3149, heavy eviction counter: 2, current caching DataBlock (%): 70
BlockCache evicted (MB): 5017, overhead (%): 2408, heavy eviction counter: 3, current caching DataBlock (%): 55
BlockCache evicted (MB): 3990, overhead (%): 1895, heavy eviction counter: 4, current caching DataBlock (%): 40
BlockCache evicted (MB): 2623, overhead (%): 1211, heavy eviction counter: 5, current caching DataBlock (%): 28
BlockCache evicted (MB): 2166, overhead (%): 983, heavy eviction counter: 6, current caching DataBlock (%): 19
BlockCache evicted (MB): 1254, overhead (%): 527, heavy eviction counter: 7, current caching DataBlock (%): 14
BlockCache evicted (MB): 456, overhead (%): 128, heavy eviction counter: 8, current caching DataBlock (%): 13
BlockCache evicted (MB): 228, overhead (%): 14, heavy eviction counter: 9, current caching DataBlock (%): 13
BlockCache evicted (MB): 114, overhead (%): -43, heavy eviction counter: 9, current caching DataBlock (%): 18
BlockCache evicted (MB): 456, overhead (%): 128, heavy eviction counter: 10, current caching DataBlock (%): 17
BlockCache evicted (MB): 342, overhead (%): 71, heavy eviction counter: 11, current caching DataBlock (%): 17
BlockCache evicted (MB): 342, overhead (%): 71, heavy eviction counter: 12, current caching DataBlock (%): 17
BlockCache evicted (MB): 228, overhead (%): 14, heavy eviction counter: 13, current caching DataBlock (%): 17
BlockCache evicted (MB): 114, overhead (%): -43, heavy eviction counter: 13, current caching DataBlock (%): 22
BlockCache evicted (MB): 798, overhead (%): 299, heavy eviction counter: 14, current caching DataBlock (%): 20
BlockCache evicted (MB): 684, overhead (%): 242, heavy eviction counter: 15, current caching DataBlock (%): 18
BlockCache evicted (MB): 570, overhead (%): 185, heavy eviction counter: 16, current caching DataBlock (%): 17
BlockCache evicted (MB): 456, overhead (%): 128, heavy eviction counter: 17, current caching DataBlock (%): 16
BlockCache evicted (MB): 228, overhead (%): 14, heavy eviction counter: 18, current caching DataBlock (%): 16
BlockCache evicted (MB): 228, overhead (%): 14, heavy eviction counter: 19, current caching DataBlock (%): 16
BlockCache evicted (MB): 114, overhead (%): -43, heavy eviction counter: 19, current caching DataBlock (%): 21
BlockCache evicted (MB): 684, overhead (%): 242, heavy eviction counter: 20, current caching DataBlock (%): 19
BlockCache evicted (MB): 456, overhead (%): 128, heavy eviction counter: 21, current caching DataBlock (%): 18
BlockCache evicted (MB): 456, overhead (%): 128, heavy eviction counter: 22, current caching DataBlock (%): 17
BlockCache evicted (MB): 342, overhead (%): 71, heavy eviction counter: 23, current caching DataBlock (%): 17
BlockCache evicted (MB): 228, overhead (%): 14, heavy eviction counter: 24, current caching DataBlock (%): 17
BlockCache evicted (MB): 228, overhead (%): 14, heavy eviction counter: 25, current caching DataBlock (%): 17
BlockCache evicted (MB): 228, overhead (%): 14, heavy eviction counter: 26, current caching DataBlock (%): 17
BlockCache evicted (MB): 114, overhead (%): -43, heavy eviction counter: 26, current caching DataBlock (%): 22
BlockCache evicted (MB): 684, overhead (%): 242, heavy eviction counter: 27, current caching DataBlock (%): 20
BlockCache evicted (MB): 570, overhead (%): 185, heavy eviction counter: 28, current caching DataBlock (%): 19
BlockCache evicted (MB): 570, overhead (%): 185, heavy eviction counter: 29, current caching DataBlock (%): 18
BlockCache evicted (MB): 456, overhead (%): 128, heavy eviction counter: 30, current caching DataBlock (%): 17
BlockCache evicted (MB): 456, overhead (%): 128, heavy eviction counter: 31, current caching DataBlock (%): 16
BlockCache evicted (MB): 228, overhead (%): 14, heavy eviction counter: 32, current caching DataBlock (%): 16
BlockCache evicted (MB): 228, overhead (%): 14, heavy eviction counter: 33, current caching DataBlock (%): 16
BlockCache evicted (MB): 114, overhead (%): -43, heavy eviction counter: 33, current caching DataBlock (%): 21
BlockCache evicted (MB): 684, overhead (%): 242, heavy eviction counter: 34, current caching DataBlock (%): 19
BlockCache evicted (MB): 684, overhead (%): 242, heavy eviction counter: 35, current caching DataBlock (%): 17
BlockCache evicted (MB): 456, overhead (%): 128, heavy eviction counter: 36, current caching DataBlock (%): 16
BlockCache evicted (MB): 342, overhead (%): 71, heavy eviction counter: 37, current caching DataBlock (%): 16
BlockCache evicted (MB): 228, overhead (%): 14, heavy eviction counter: 38, current caching DataBlock (%): 16
BlockCache evicted (MB): 228, overhead (%): 14, heavy eviction counter: 39, current caching DataBlock (%): 16
BlockCache evicted (MB): 114, overhead (%): -43, heavy eviction counter: 39, current caching DataBlock (%): 21
BlockCache evicted (MB): 684, overhead (%): 242, heavy eviction counter: 40, current caching DataBlock (%): 19
BlockCache evicted (MB): 228, overhead (%): 14, heavy eviction counter: 41, current caching DataBlock (%): 19
BlockCache evicted (MB): 0, overhead (%): -100, heavy eviction counter: 0, current caching DataBlock (%): 100   < finish
BlockCache evicted (MB): 0, overhead (%): -100, heavy eviction counter: 0, current caching DataBlock (%): 100
{noformat}
It looks like everything works fine. Can we merge the PR?

> BlockCache performance improve by reduce eviction rate
> ------------------------------------------------------
>
>                 Key: HBASE-23887
>                 URL: https://issues.apache.org/jira/browse/HBASE-23887
>             Project: HBase
>          Issue Type: Improvement
>          Components: BlockCache, Performance
>            Reporter: Danil Lipovoy
>            Priority: Minor
>         Attachments: 1582787018434_rs_metrics.jpg, 1582801838065_rs_metrics_new.png, BC_LongRun.png, BlockCacheEvictionProcess.gif, cmp.png, evict_BC100_vs_BC23.png, eviction_100p.png, eviction_100p.png, eviction_100p.png, gc_100p.png, graph.png, image-2020-06-07-08-11-11-929.png, image-2020-06-07-08-19-00-922.png, image-2020-06-07-12-07-24-903.png, image-2020-06-07-12-07-30-307.png, read_requests_100pBC_vs_23pBC.png, requests_100p.png, requests_100p.png, requests_new2_100p.png, requests_new_100p.png, scan.png, wave.png
>
>
> Hi!
> I am here for the first time, so please correct me if something is wrong.
> I want to propose a way to improve performance when there is much more data in the HFiles than fits into the BlockCache (the usual story in BigData). The idea is to cache only part of the DATA blocks. This is good because LruBlockCache keeps working and we save a huge amount of GC.
> Sometimes we have more data than can fit into the BlockCache, which causes a high rate of evictions.
> In this case we can skip caching block N and instead cache block N+1. We would evict block N quite soon anyway, which is why skipping it is good for performance.
> Example:
> Imagine we have a small cache that can fit only one block, and we are trying to read 3 blocks with offsets:
> 124
> 198
> 223
> The current way: we put block 124, then put 198, evict 124, put 223, evict 198. A lot of work (5 actions).
> With the feature: the last few digits of the offsets are evenly distributed from 0 to 99. Taking each offset modulo 100 we get:
> 124 -> 24
> 198 -> 98
> 223 -> 23
> This lets us partition the blocks. Some of them, for example those below 50 (if we set *hbase.lru.cache.data.block.percent* = 50), go into the cache, and we skip the others. It means we will not try to handle block 198 at all and save the CPU for other work. As a result we put block 124, then put 223, evict 124 (3 actions).
> See the picture in the attachments with the test below. Requests per second are higher, GC is lower.
>
> The key point of the code:
> Added the parameter *hbase.lru.cache.data.block.percent*, which is 100 by default.
>
> If we set it to 1-99, the following logic kicks in:
>
> {code:java}
> public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean inMemory) {
>   // Skip caching a data block whose offset (mod 100) falls outside the configured percentage.
>   if (cacheDataBlockPercent != 100 && buf.getBlockType().isData()) {
>     if (cacheKey.getOffset() % 100 >= cacheDataBlockPercent) {
>       return;
>     }
>   }
>   ...
>   // the same code as usual
> }
> {code}
>
> Other parameters control when this logic is enabled, so that it only works while heavy reading is going on:
> hbase.lru.cache.heavy.eviction.count.limit - how many times the eviction process has to run before we start skipping data blocks on their way into the BlockCache
> hbase.lru.cache.heavy.eviction.bytes.size.limit - how many bytes have to be evicted each time before we start skipping data blocks on their way into the BlockCache
> By default: if the eviction process evicts more than 10 MB ten times in a row (100 seconds), then we start skipping 50% of the data blocks.
> When the heavy eviction process ends, the new logic switches off and all blocks go into the BlockCache again.
>
> Description of the test:
> 4 nodes E5-2698 v4 @ 2.20GHz, 700 GB memory
> 4 RegionServers
> 4 tables by 64 regions by 1.88 GB of data each = 600 GB total (only FAST_DIFF)
> Total BlockCache size = 48 GB (8% of the data in the HFiles)
> Random reads in 20 threads
>
> I am going to make a Pull Request; I hope this is the right way to make a contribution to this cool product.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
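For reference, if the PR keeps the property names used in this issue, setting the parameters from the description above might look roughly like this. The property names come from the description; the values are only the defaults it mentions, and whether the bytes limit is expressed in bytes is my assumption. On a real cluster these would go into hbase-site.xml on the RegionServers; the programmatic form below is mostly useful in tests:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BlockCacheSkipConfigExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Cache only half of the DATA blocks (offset % 100 < 50).
    conf.setInt("hbase.lru.cache.data.block.percent", 50);
    // Start skipping after 10 heavy eviction runs...
    conf.setInt("hbase.lru.cache.heavy.eviction.count.limit", 10);
    // ...where "heavy" means more than 10 MB evicted per run (assuming the value is in bytes).
    conf.setLong("hbase.lru.cache.heavy.eviction.bytes.size.limit", 10L * 1024 * 1024);
  }
}
{code}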