[ 
https://issues.apache.org/jira/browse/HBASE-23887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danil Lipovoy updated HBASE-23887:
----------------------------------
    Description: 
Hi!

This is my first time here, so please correct me if something is wrong.

I want to propose a way to improve performance when the data in HFiles is much larger than the BlockCache (a usual story in Big Data). The idea is to cache only a part of the DATA blocks. This is good because the LruBlockCache starts to work properly and a huge amount of GC is saved.

Sometimes we have more data than can fit into the BlockCache, and that causes a high rate of evictions.

With this feature we choose not to cache a block that we would otherwise have cached, on the assumption that caching it would only produce churn: to cache the (N+1)th block, we have to evict the Nth block.

See the pictures from the test in the attachments below: requests per second are higher and GC is lower.

 

The key point of the code:

Added the parameter *hbase.lru.cache.data.block.percent*, which defaults to 100.
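Presumably the value is read from the normal HBase configuration like other BlockCache settings. A minimal sketch of setting and reading it (the class and constant names are made up for illustration; only the property string comes from this proposal):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CacheDataBlockPercentConfig {
  // Property name from this proposal; the default of 100 (cache all DATA blocks) keeps today's behaviour.
  static final String CACHE_DATA_BLOCK_PERCENT_KEY = "hbase.lru.cache.data.block.percent";

  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt(CACHE_DATA_BLOCK_PERCENT_KEY, 23); // cache roughly 23% of DATA blocks
    int cacheDataBlockPercent = conf.getInt(CACHE_DATA_BLOCK_PERCENT_KEY, 100);
    System.out.println("cacheDataBlockPercent = " + cacheDataBlockPercent);
  }
}
{code}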

 

But if we set it to a value between 0 and 99, the following logic kicks in:

 

 
{code:java}
public void cacheBlock(BlockCacheKey cacheKey, Cacheable buf, boolean inMemory) {
  if (cacheDataBlockPercent != 100 && buf.getBlockType().isData()) {
    if (cacheKey.getOffset() % 100 >= cacheDataBlockPercent) {
      return;
    }
  }
  // ... the same code as usual
}
{code}
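To make the admission rule concrete, here is a small self-contained sketch (not part of the patch): a DATA block is admitted only when its file offset modulo 100 is below cacheDataBlockPercent, so on average roughly that percentage of DATA blocks ends up in the cache. The block sizes and the 23% value are illustrative (23% is the setting used in the attached graphs).

{code:java}
import java.util.Random;

public class CacheDataBlockPercentDemo {

  /** The same check as in cacheBlock() above: admit a DATA block only for "lucky" offsets. */
  static boolean shouldCacheDataBlock(long offset, int cacheDataBlockPercent) {
    if (cacheDataBlockPercent == 100) {
      return true; // default: cache every DATA block, as today
    }
    return offset % 100 < cacheDataBlockPercent;
  }

  public static void main(String[] args) {
    int percent = 23;            // illustrative value, as in the attached test graphs
    Random rnd = new Random(42);
    long offset = 0;
    int cached = 0, total = 10000;
    for (int i = 0; i < total; i++) {
      if (shouldCacheDataBlock(offset, percent)) {
        cached++;
      }
      // Block sizes vary in practice (compression, encoding), so offsets spread out modulo 100.
      offset += 60_000 + rnd.nextInt(10_000);
    }
    System.out.println("cached " + cached + " of " + total + " blocks"); // roughly 23%
  }
}
{code}

Because the check is based on the block's offset, the decision for a given block is stable across reads: the same subset of DATA blocks stays cacheable, so a hot working set can settle in the LRU instead of constantly churning.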
 

 

Description of the test:

4 nodes, E5-2698 v4 @ 2.20 GHz, 700 GB RAM

4 RegionServers

4 tables × 64 regions × 1.88 GB of data each = 600 GB total (FAST_DIFF only)

Total BlockCache size = 48 GB (8% of the data in HFiles)

Random reads in 20 threads

 

I am going to open a Pull Request; I hope this is the right way to make a contribution to this cool product.

 


> BlockCache performance improve by reduce eviction rate
> ------------------------------------------------------
>
>                 Key: HBASE-23887
>                 URL: https://issues.apache.org/jira/browse/HBASE-23887
>             Project: HBase
>          Issue Type: Improvement
>          Components: BlockCache, Performance
>            Reporter: Danil Lipovoy
>            Priority: Minor
>         Attachments: 1582787018434_rs_metrics.jpg, 
> 1582801838065_rs_metrics_new.png, BC_LongRun.png, cmp.png, 
> evict_BC100_vs_BC23.png, read_requests_100pBC_vs_23pBC.png
>
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
