[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13865002#comment-13865002 ]
Feng Honghua commented on HBASE-10263:
--------------------------------------

Performance result:

1. Create 2 tables, TA and TM, each with a single CF; TM's CF is in-memory. Disable split to guarantee only 1 region per table, and move both tables to the same regionserver.
2. Write about 20G of data (20M rows * 1K row size) to each table, then major compact after the writes are done. The block cache size is 21G.

Then read: 2 clients with 20 threads each issue 20G random reads against these 2 tables.

1. Old regionserver (fixed 25%:50%:25% single/multi/memory ratio): TA latency = 5.5ms; TM latency = 11.4ms (the in-memory table is about 2 times slower than the ordinary table).
2. New regionserver (preemptive mode on): TA latency = 19.4ms; TM latency = 3.8ms (now the in-memory table performs much better than the ordinary table).

Tests on different underlying hardware can yield different numbers, but the figures above do demonstrate the improvement. Additional performance comparison tests are encouraged to verify the effect on other hardware configurations :-)

> make LruBlockCache single/multi/in-memory ratio user-configurable and provide
> preemptive mode for in-memory type block
> ----------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-10263
>                 URL: https://issues.apache.org/jira/browse/HBASE-10263
>             Project: HBase
>          Issue Type: Improvement
>          Components: io
>            Reporter: Feng Honghua
>            Assignee: Feng Honghua
>     Attachments: HBASE-10263-trunk_v0.patch, HBASE-10263-trunk_v1.patch, HBASE-10263-trunk_v2.patch
>
> Currently the single/multi/in-memory ratio in LruBlockCache is hardcoded to
> 1:2:1, which can lead to somewhat counter-intuitive behavior in scenarios
> where an in-memory table's read performance is much worse than an ordinary
> table's, even though the two tables' data sizes are almost equal and larger
> than the regionserver's cache size (we ran such an experiment and
> verified that the in-memory table's random read performance was two times
> worse than the ordinary table's).
> This patch fixes the above issue and provides:
> 1. Make the single/multi/in-memory ratio user-configurable.
> 2. Provide a configurable switch which makes in-memory blocks preemptive.
> Preemptive means that when this switch is on, an in-memory block can kick out
> any ordinary block to make room, until no ordinary block remains; when it is
> off (the default), the behavior is the same as before, using the
> single/multi/in-memory ratio to decide eviction.
> By default, both changes are off and the behavior is the same as before
> applying this patch. It is the client/user's choice whether to use either
> behavior, by enabling one of these two enhancements.

-- 
This message was sent by Atlassian JIRA (v6.1.5#6160)
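For readers wanting to try this out: a minimal hbase-site.xml sketch of the settings the patch adds. The property names below are the ones introduced for LruBlockCache by this change; treat them as assumptions and verify against the patch version you actually apply.

```xml
<!-- Sketch only: ratio and preemptive-mode settings from HBASE-10263.
     Verify property names against the applied patch version. -->
<property>
  <!-- fraction of the block cache reserved for single-access blocks -->
  <name>hbase.lru.blockcache.single.percentage</name>
  <value>0.25</value>
</property>
<property>
  <!-- fraction reserved for multi-access blocks -->
  <name>hbase.lru.blockcache.multi.percentage</name>
  <value>0.50</value>
</property>
<property>
  <!-- fraction reserved for blocks of in-memory column families -->
  <name>hbase.lru.blockcache.memory.percentage</name>
  <value>0.25</value>
</property>
<property>
  <!-- the preemptive switch: in-memory blocks evict ordinary blocks first -->
  <name>hbase.lru.blockcache.inmemoryforcemode</name>
  <value>true</value>
</property>
```

The in-memory flag itself remains a column-family attribute (e.g. IN_MEMORY => 'true' when creating the table in the HBase shell, as with table TM in the test above).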
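To make the behavioral difference concrete, here is a toy model (not HBase code; all names are hypothetical) of the two eviction modes described above. With the preemptive switch on, an incoming in-memory block evicts ordinary (single/multi) blocks until none remain; with it off, the ratio-based decision is crudely approximated by evicting from the larger pool.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Toy model of LruBlockCache eviction, illustrating (not reproducing) the
 * preemptive in-memory mode from HBASE-10263.
 */
public class PreemptiveEvictionSketch {
    final Deque<String> ordinary = new ArrayDeque<>(); // single + multi blocks
    final Deque<String> inMemory = new ArrayDeque<>(); // blocks of in-memory CFs
    final int capacity;                                // total blocks the cache holds
    final boolean preemptive;                          // the new configurable switch

    PreemptiveEvictionSketch(int capacity, boolean preemptive) {
        this.capacity = capacity;
        this.preemptive = preemptive;
    }

    void cache(String block, boolean blockIsInMemory) {
        while (ordinary.size() + inMemory.size() >= capacity) {
            evictOne(blockIsInMemory);
        }
        (blockIsInMemory ? inMemory : ordinary).addLast(block);
    }

    private void evictOne(boolean incomingIsInMemory) {
        if (preemptive && incomingIsInMemory && !ordinary.isEmpty()) {
            ordinary.removeFirst();  // preemptive: kick out ordinary blocks first
        } else if (inMemory.size() > ordinary.size()) {
            inMemory.removeFirst();  // crude stand-in for the ratio-based choice
        } else {
            ordinary.removeFirst();  // evict the oldest ordinary block
        }
    }

    public static void main(String[] args) {
        PreemptiveEvictionSketch on = new PreemptiveEvictionSketch(4, true);
        for (int i = 0; i < 4; i++) on.cache("ord" + i, false);
        for (int i = 0; i < 4; i++) on.cache("mem" + i, true);
        // preemptive on: the in-memory blocks displace every ordinary block
        System.out.println("on:  ordinary=" + on.ordinary.size()
                + " inMemory=" + on.inMemory.size());

        PreemptiveEvictionSketch off = new PreemptiveEvictionSketch(4, false);
        for (int i = 0; i < 4; i++) off.cache("ord" + i, false);
        for (int i = 0; i < 4; i++) off.cache("mem" + i, true);
        // preemptive off: eviction keeps a mix, so in-memory blocks also get evicted
        System.out.println("off: ordinary=" + off.ordinary.size()
                + " inMemory=" + off.inMemory.size());
    }
}
```

This mirrors the latency result above: in the non-preemptive mode the in-memory table's blocks compete with, and are evicted alongside, ordinary blocks; in preemptive mode they stay resident as long as any ordinary block can be sacrificed.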