[jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13869242#comment-13869242 ] Feng Honghua commented on HBASE-10263: -- [~vrodionov]:
bq. I have always thought that artificial LruBlockCache divide on regular and in-memory zones was not a good idea.
To some extent, I agree :-)
bq. The good cache implementation must sort all these things out itself
Not quite... Some applications need better latency for a particular table regardless of that table's access pattern relative to other tables. Treating all tables the same way and just letting a 'good' cache take care of everything is not desirable for such applications; think of the META table (the internal in-memory table), for example.
> make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block
>
> Key: HBASE-10263
> URL: https://issues.apache.org/jira/browse/HBASE-10263
> Project: HBase
> Issue Type: Improvement
> Components: io
> Reporter: Feng Honghua
> Assignee: Feng Honghua
> Fix For: 0.98.0, 0.99.0
> Attachments: HBASE-10263-trunk_v0.patch, HBASE-10263-trunk_v1.patch, HBASE-10263-trunk_v2.patch
>
> Currently the single/multi/in-memory ratio in LruBlockCache is hardcoded to 1:2:1, which can lead to counter-intuitive behavior in scenarios where an in-memory table's read performance is much worse than an ordinary table's, when the two tables' data sizes are roughly equal and both larger than the regionserver's cache size (we ran such an experiment and verified that the in-memory table's random read performance was two times worse than the ordinary table's).
> This patch fixes the above issue and provides:
> 1. make the single/multi/in-memory ratio user-configurable
> 2. provide a configurable switch which makes in-memory blocks preemptive; 'preemptive' means that when the switch is on, an in-memory block can evict any ordinary block to make room, until no ordinary blocks remain; when it is off (the default), the behavior is the same as before, using the single/multi/in-memory ratio to determine eviction.
> By default both changes are off and the behavior is unchanged from before this patch. It is the client/user's choice whether to enable either of these two enhancements.
-- This message was sent by Atlassian JIRA (v6.1.5#6160)
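To make the description above concrete, here is a sketch of what tuning these knobs in hbase-site.xml might look like once the patch is applied. The property names are assumptions recalled from LruBlockCache's configuration constants, not taken from this thread; confirm them against the LruBlockCache.java in your build before relying on them.

```xml
<!-- hbase-site.xml fragment: a sketch, not a verified configuration.
     Property names are assumptions; check LruBlockCache.java for the
     constants actually introduced by HBASE-10263. -->
<property>
  <!-- fraction of the block cache reserved for single-access blocks
       (the hardcoded default corresponds to 0.25) -->
  <name>hbase.lru.blockcache.single.percentage</name>
  <value>0.20</value>
</property>
<property>
  <!-- fraction for multi-access blocks (default corresponds to 0.50) -->
  <name>hbase.lru.blockcache.multi.percentage</name>
  <value>0.30</value>
</property>
<property>
  <!-- fraction for in-memory blocks (default corresponds to 0.25);
       the three fractions should sum to 1.0 -->
  <name>hbase.lru.blockcache.memory.percentage</name>
  <value>0.50</value>
</property>
<property>
  <!-- the preemptive switch described in item 2: when true, in-memory
       blocks may evict ordinary blocks until none remain -->
  <name>hbase.lru.rs.inmemoryforcemode</name>
  <value>true</value>
</property>
```

The first three properties skew the cache toward in-memory tables without changing eviction semantics; the last one changes semantics outright, so (as the description notes) it is off by default.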
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13868668#comment-13868668 ] Nick Dimiduk commented on HBASE-10263: -- No harm, no foul. Thanks [~apurtell]!
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13868666#comment-13868666 ] Andrew Purtell commented on HBASE-10263: It's fine to keep in 0.98, btw.
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13868664#comment-13868664 ] Andrew Purtell commented on HBASE-10263: I am sorry also that I missed this.
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13868657#comment-13868657 ] Hudson commented on HBASE-10263: FAILURE: Integrated in HBase-0.98 #70 (See [https://builds.apache.org/job/HBase-0.98/70/]) HBASE-10263 make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block (Feng Honghua) (liangxie: rev 1557298)
* /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13868655#comment-13868655 ] Hudson commented on HBASE-10263: FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #65 (See [https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/65/]) HBASE-10263 make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block (Feng Honghua) (liangxie: rev 1557298)
* /hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* /hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13868653#comment-13868653 ] Nick Dimiduk commented on HBASE-10263: -- I didn't mean to cause consternation or disrupt your release process, [~apurtell]. You were pinged by Stack, Ted, and myself on this thread over the last week, so I thought you had ample time to speak your mind. You're the boss of 0.98 -- say the word and I'll revert, no trouble. My apologies.
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13868641#comment-13868641 ] Liang Xie commented on HBASE-10263: --- got it, andy!
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13868617#comment-13868617 ] Vladimir Rodionov commented on HBASE-10263: --- I have always thought that artificial LruBlockCache divide on regular and in-memory zones was not a good idea. The good cache implementation must sort all these things out itself ... naturally .. My 2c.
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13868616#comment-13868616 ] Andrew Purtell commented on HBASE-10263: I don't think it was an unreasonable request to be pinged first as RM for 0.98, [~ndimiduk], [~xieliang007]. Next one I start reverting, thanks.
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13868589#comment-13868589 ] Liang Xie commented on HBASE-10263: --- [~ndimiduk], just committed into the 0.98 branch, thanks for the reminder!
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13868581#comment-13868581 ] Liang Xie commented on HBASE-10263: --- np :) sorry for missing your request about merging into 0.98.
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13868394#comment-13868394 ] Nick Dimiduk commented on HBASE-10263: -- Actually, [~xieliang007], do you mind committing also to 0.98? I don't want to steal your thunder on commit ;)
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13867149#comment-13867149 ] Hudson commented on HBASE-10263: SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-1.1 #47 (See [https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/47/]) HBASE-10263 make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block (Feng Honghua) (liangxie: rev 1556703)
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13866836#comment-13866836 ] stack commented on HBASE-10263: --- bq. hmmm. what about we firstly make it conservative by keeping it OFF by default, and turn it on if we eventually found most of our users tweak it on for their real use OK boss. > make LruBlockCache single/multi/in-memory ratio user-configurable and provide > preemptive mode for in-memory type block > -- > > Key: HBASE-10263 > URL: https://issues.apache.org/jira/browse/HBASE-10263 > Project: HBase > Issue Type: Improvement > Components: io >Reporter: Feng Honghua >Assignee: Feng Honghua > Fix For: 0.99.0 > > Attachments: HBASE-10263-trunk_v0.patch, HBASE-10263-trunk_v1.patch, > HBASE-10263-trunk_v2.patch > > > currently the single/multi/in-memory ratio in LruBlockCache is hardcoded > 1:2:1, which can lead to somewhat counter-intuition behavior for some user > scenario where in-memory table's read performance is much worse than ordinary > table when two tables' data size is almost equal and larger than > regionserver's cache size (we ever did some such experiment and verified that > in-memory table random read performance is two times worse than ordinary > table). > this patch fixes above issue and provides: > 1. make single/multi/in-memory ratio user-configurable > 2. provide a configurable switch which can make in-memory block preemptive, > by preemptive means when this switch is on in-memory block can kick out any > ordinary block to make room until no ordinary block, when this switch is off > (by default) the behavior is the same as previous, using > single/multi/in-memory ratio to determine evicting. > by default, above two changes are both off and the behavior keeps the same as > before applying this patch. it's client/user's choice to determine whether or > which behavior to use by enabling one of these two enhancements. 
-- This message was sent by Atlassian JIRA (v6.1.5#6160)
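The preemptive in-memory mode described in the issue can be sketched as a toy model. This is a hypothetical illustration, not the actual LruBlockCache code: the block representation, method names, and the fallback-to-plain-LRU behavior are all assumptions.

```python
from collections import OrderedDict

class ToyLruCache:
    """Toy LRU cache illustrating the preemptive in-memory mode.

    Hypothetical sketch, not the real LruBlockCache: each block is
    modeled as (size, in_memory) and insertion order stands in for
    LRU order.
    """

    def __init__(self, capacity, force_in_memory=False):
        self.capacity = capacity
        self.force_in_memory = force_in_memory
        self.blocks = OrderedDict()  # key -> (size, in_memory)
        self.used = 0

    def cache_block(self, key, size, in_memory=False):
        self.blocks[key] = (size, in_memory)
        self.used += size
        self._evict(incoming_in_memory=in_memory)

    def _evict(self, incoming_in_memory):
        while self.used > self.capacity:
            victim = self._pick_victim(incoming_in_memory)
            if victim is None:
                break  # nothing evictable
            size, _ = self.blocks.pop(victim)
            self.used -= size

    def _pick_victim(self, incoming_in_memory):
        # Preemptive mode: an incoming in-memory block may kick out any
        # ordinary block; only when no ordinary block is left would an
        # in-memory block be evicted.
        if self.force_in_memory and incoming_in_memory:
            for key, (_, in_mem) in self.blocks.items():  # LRU-first
                if not in_mem:
                    return key
        # Fallback: plain LRU (the real cache uses the ratio instead).
        return next(iter(self.blocks), None)

cache = ToyLruCache(capacity=3, force_in_memory=True)
cache.cache_block("a", 1)
cache.cache_block("b", 1)
cache.cache_block("m1", 1, in_memory=True)
cache.cache_block("m2", 1, in_memory=True)  # evicts ordinary "a", not "m1"
```

With the switch off, the same insertions would fall back to plain victim selection and the in-memory blocks would enjoy no special protection.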
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13866796#comment-13866796 ] Nick Dimiduk commented on HBASE-10263: -- I'd like this in 0.98. Will commit there as well unless someone strongly objects. (cc [~apurtell])
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13866379#comment-13866379 ] Feng Honghua commented on HBASE-10263: -- thanks [~xieliang007] :-)
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13866331#comment-13866331 ] Hudson commented on HBASE-10263: SUCCESS: Integrated in HBase-TRUNK #4799 (See [https://builds.apache.org/job/HBase-TRUNK/4799/]) HBASE-10263 make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block (Feng Honghua) (liangxie: rev 1556703)
* /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13866285#comment-13866285 ] Feng Honghua commented on HBASE-10263: -- [~yuzhih...@gmail.com]: done, thanks very much for the kind reminder and instructions :-)
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13866277#comment-13866277 ] Ted Yu commented on HBASE-10263: When you click the Edit button, you will see an input box for Release Notes. There are several new config params for this feature, such as hbase.lru.blockcache.single.percentage. Please document them. Thanks
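For illustration, a hedged sketch of what such documentation might cover in hbase-site.xml. Only hbase.lru.blockcache.single.percentage is named in the comment above; the sibling key names and all example values are assumptions following the same naming pattern, not verified against the patch. The three percentages are fractions of the total block cache and should sum to 1.0 (the hardcoded defaults correspond to a 0.25/0.50/0.25 split).

```xml
<!-- Sketch only: single.percentage is quoted in the comment above; the
     other key names and all values are assumptions. -->
<property>
  <name>hbase.lru.blockcache.single.percentage</name>
  <value>0.20</value>
</property>
<property>
  <name>hbase.lru.blockcache.multi.percentage</name>
  <value>0.40</value>
</property>
<property>
  <name>hbase.lru.blockcache.memory.percentage</name>
  <value>0.40</value>
</property>
```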
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13866270#comment-13866270 ] Feng Honghua commented on HBASE-10263: -- [~yuzhih...@gmail.com]: you mean to fix the release audit warnings? Sure, but the link is not accessible (404), so I can't tell what the warnings are about :-(
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13866239#comment-13866239 ] Liang Xie commented on HBASE-10263: --- Integrated into trunk. Thanks all for the review, and thanks for making the patch, [~fenghh] :) P.S. The release audit failure is not related to the current jira; just checked, the new jira should be HBASE-10302 [~eclark]
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13865267#comment-13865267 ] Liang Xie commented on HBASE-10263: --- There are two +1s already. If there are no new comments or objections, I'd like to commit trunk_v2 into trunk tomorrow.
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13865119#comment-13865119 ] Feng Honghua commented on HBASE-10263: -- Before this jira, a rough performance comparison estimate looks like this (within a single regionserver): suppose the total size of in-memory data served by this regionserver is M, the total size of non-in-memory data is N, and the block cache size is C. Then C/4 is for in-memory data and 3*C/4 is for non-in-memory data, so the random-read cache hit ratio for in-memory data is C/(4*M) and for non-in-memory data is 3*C/(4*N). The random read performance for the two kinds of data is equal when C/(4*M) == 3*C/(4*N), so:
1. when M > N/3, in-memory table random read performance is worse than the ordinary table's;
2. when M == N/3, in-memory table random read performance is equal to the ordinary table's;
3. when M < N/3, in-memory table random read performance is better than the ordinary table's.
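The estimate above can be checked numerically. A small sketch (the helper name is made up; the 1/4 and 3/4 cache shares follow the fixed 1:2:1 single/multi/in-memory ratio, with single + multi = 3/4 serving non-in-memory data):

```python
def hit_ratios(cache_size, in_memory_data, ordinary_data):
    """Estimated random-read cache hit ratios under the fixed 1:2:1
    ratio: C/4 of the cache serves in-memory data, 3C/4 serves
    ordinary (single + multi) data. Ratios are capped at 1.0."""
    in_mem_hit = min(1.0, (cache_size / 4) / in_memory_data)
    ordinary_hit = min(1.0, (3 * cache_size / 4) / ordinary_data)
    return in_mem_hit, ordinary_hit

C, N = 100.0, 90.0
# Break-even at M == N/3: both tables see the same hit ratio.
assert hit_ratios(C, N / 3, N)[0] == hit_ratios(C, N / 3, N)[1]
# M > N/3 (e.g. M == N): the in-memory table gets the *lower* hit ratio,
# i.e. worse random reads, exactly as case 1 above states.
in_mem, ordinary = hit_ratios(C, N, N)
assert in_mem < ordinary
```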
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13865117#comment-13865117 ] Liang Xie commented on HBASE-10263: --- +1 from me. It was integrated into our internal branch a long time ago :)
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13865002#comment-13865002 ] Feng Honghua commented on HBASE-10263: -- Performance result:
1. Create 2 tables, TA and TM, each with a single CF; TM's CF is in-memory. Disable split to guarantee only 1 region per table, and move both tables to the same regionserver.
2. Write about 20G of data (20M rows * 1K row-size) to each table, then major compact after the writes are done. The block cache size is 21G.
Then read: 2 clients with 20 threads each issue random reads 20G times against these 2 tables...
1. Old-version regionserver (fixed 25%:50%:25% single/multi/in-memory ratio): TA latency = 5.5ms; TM latency = 11.4ms (in-memory table performance is about 2 times worse than the ordinary table's).
2. New-version regionserver (preemptive mode on): TA latency = 19.4ms; TM latency = 3.8ms (now in-memory table performance is much better than the ordinary table's).
Tests against different underlying hardware can yield different results, but the above numbers demonstrate the improvement. Additional performance comparison tests on different hardware configurations are encouraged to verify the effect :-)
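Plugging this setup into the earlier rough 1:2:1 estimate predicts the direction of the old-version numbers. A sketch only: it considers cache hit ratios alone and ignores every other latency component.

```python
# Cache size, in-memory data, ordinary data, all in GB (from the test setup).
C, M, N = 21.0, 20.0, 20.0

# Old behavior, fixed 1:2:1 ratio: C/4 of the cache serves in-memory
# blocks, 3C/4 serves the rest (single + multi).
in_mem_hit = min(1.0, (C / 4) / M)        # 5.25/20  = 0.2625
ordinary_hit = min(1.0, (3 * C / 4) / N)  # 15.75/20 = 0.7875

# The in-memory table is predicted to hit the cache far less often,
# consistent with the ~2x latency gap measured above (11.4ms vs 5.5ms).
assert in_mem_hit < ordinary_hit
```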
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13864982#comment-13864982 ] Feng Honghua commented on HBASE-10263: -- bq. Any chance you can post some more detailed performance numbers around this new preemptive mode? Any commentary from your profiling session as to why the existing logic caused such unexpected poor performance? If it's broken always, for everyone, we should follow Good stack's advice and tear out the broken logic, enabling forceInMemory by default.
Sure, I did a series of performance comparison tests for this patch several months ago (based on our internal 0.94.3 branch); let me copy the test results from our wiki page here, wait...
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13864972#comment-13864972 ] Nick Dimiduk commented on HBASE-10263: -- Thanks [~fenghh] for the careful correction of my careless review :) I do indeed see the evictionThread boolean and your appropriate use of it in the test method. Let's get this out in front of people to experiment with, +1 from me! Any chance you can post some more detailed performance numbers around this new preemptive mode? Any commentary from your profiling session as to why the existing logic caused such unexpected poor performance? If it's broken always, for everyone, we should follow Good [~stack]'s advice and tear out the broken logic, enabling forceInMemory by default.
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863885#comment-13863885 ] Hadoop QA commented on HBASE-10263: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12621740/HBASE-10263-trunk_v2.patch against trunk revision . ATTACHMENT ID: 12621740 {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop1.1{color}. The patch compiles against the hadoop 1.1 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:red}-1 release audit{color}. The applied patch generated 4 release audit warnings (more than the trunk's current 0 warnings). {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:red}-1 site{color}. The patch appears to cause mvn site goal to fail. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.regionserver.TestSplitLogWorker Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8353//testReport/ Release audit warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8353//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8353//console This message is automatically generated. 
[jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863838#comment-13863838 ] Feng Honghua commented on HBASE-10263: -- Thanks [~ndimiduk] for the careful review :-)
bq.Evictions happen on a background thread. Filling the cache and then immediately checking the eviction count results in a race between the current thread and the eviction thread; thus this is very likely a flaky test on our over-extended build machines. In the above block, the call to cacheBlock() will only notify the eviction thread, not force eviction.
What you said is correct for the real usage scenario of LruBlockCache, where the evictionThread flag is implicitly true when constructing the LruBlockCache object, so a background eviction thread is created to do the eviction job. But that is *not* the case for this newly added unit test: to verify the eviction effect of the new configuration/preemptive mode as quickly as possible, without worrying about how long to sleep or introducing other synchronization overhead, I disabled the background eviction thread when constructing the LruBlockCache object for this unit test. This way, eviction is triggered immediately and synchronously within the cache.cacheBlock call when the cache size exceeds the acceptable cache size.
{code}LruBlockCache cache = new LruBlockCache(maxSize, blockSize, false...){code}
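The effect described above can be modeled with a toy cache in plain Java (illustrative only, not HBase code): when there is no background eviction thread, the put itself performs the eviction, so eviction counts are deterministic and can be asserted immediately.

```java
import java.util.LinkedHashMap;

// Toy model of a synchronously-evicting LRU cache: with no background
// eviction thread, cacheBlock() evicts inline once capacity is exceeded,
// so getEvictionCount() is immediately consistent.
public class SyncEvictingCache {
    private final int maxEntries;
    private int evictionCount = 0;
    private final LinkedHashMap<String, byte[]> map;

    SyncEvictingCache(int maxEntries) {
        this.maxEntries = maxEntries;
        // accessOrder=true gives LRU iteration order (eldest first)
        this.map = new LinkedHashMap<>(16, 0.75f, true);
    }

    void cacheBlock(String key, byte[] block) {
        map.put(key, block);
        // Evict synchronously within the caller's thread, no race to test around.
        while (map.size() > maxEntries) {
            String eldest = map.keySet().iterator().next();
            map.remove(eldest);
            evictionCount++;
        }
    }

    int getEvictionCount() { return evictionCount; }
}
```

With capacity 2, caching a third block evicts exactly one entry before cacheBlock returns, which is why the unit test needs no sleep or synchronization.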
[jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863192#comment-13863192 ] Nick Dimiduk commented on HBASE-10263: -- One other stray comment. Your config names should all be modified to reflect that these options are specific to the LRU cache: hbase.rs.inmemoryforcemode, hbase.blockcache.single.percentage, hbase.blockcache.multi.percentage, and hbase.blockcache.memory.percentage should all share the common prefix hbase.lru.blockcache. This follows the precedent established by the existing factor configs.
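As a sketch of that renaming suggestion, an hbase-site.xml fragment with the proposed prefix might look like the following. The exact property names here are hypothetical; the final names depend on what the committed patch uses.

```xml
<!-- Hypothetical names following the suggested hbase.lru.blockcache prefix -->
<property>
  <name>hbase.lru.blockcache.single.percentage</name>
  <value>0.25</value>
</property>
<property>
  <name>hbase.lru.blockcache.multi.percentage</name>
  <value>0.50</value>
</property>
<property>
  <name>hbase.lru.blockcache.memory.percentage</name>
  <value>0.25</value>
</property>
```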
[jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863181#comment-13863181 ] Nick Dimiduk commented on HBASE-10263: -- Evictions happen on a background thread. Filling the cache and then immediately checking the eviction count results in a race between the current thread and the eviction thread; thus this is very likely a flaky test on our over-extended build machines.
{noformat}
+// 5th single block
+cache.cacheBlock(singleBlocks[4].cacheKey, singleBlocks[4]);
+expectedCacheSize += singleBlocks[4].cacheBlockHeapSize();
+// Do not expect any evictions yet
+assertEquals(0, cache.getEvictionCount());
+// Verify cache size
+assertEquals(expectedCacheSize, cache.heapSize());
{noformat}
In the above block, the call to cacheBlock() will only notify the eviction thread, not force eviction. A yield or short sleep should be inserted before the call to getEvictionCount() to help reduce the chance of exercising the race condition. Repeat for all the following stanzas.
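The suggested mitigation (a yield or short sleep before asserting) can be sketched generically in plain Java. The helper below is hypothetical, not part of the actual TestLruBlockCache code: it polls a counter briefly so a background eviction thread gets a chance to run before the assertion fires.

```java
import java.util.function.LongSupplier;

// Illustrative helper: instead of asserting on the eviction count immediately
// after cacheBlock(), poll the counter until it reaches the expected value or
// a timeout elapses, then let the caller assert on the returned value.
public class EvictionTestWait {
    static long awaitAtLeast(LongSupplier counter, long expected, long timeoutMillis) {
        long deadline = System.nanoTime() + timeoutMillis * 1_000_000L;
        long seen = counter.getAsLong();
        while (seen < expected && System.nanoTime() < deadline) {
            try {
                Thread.sleep(10);  // short sleep rather than a bare immediate assert
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                break;
            }
            seen = counter.getAsLong();
        }
        return seen;
    }
}
```

A test would then write something like `assertEquals(1, awaitAtLeast(cache::getEvictionCount, 1, 1000))`, which tolerates scheduling delay without changing what is asserted.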
[jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862198#comment-13862198 ] Hadoop QA commented on HBASE-10263: ---
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12621447/HBASE-10263-trunk_v1.patch
against trunk revision .
ATTACHMENT ID: 12621447
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests.
{color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.
{color:green}+1 hadoop1.1{color}. The patch compiles against the hadoop 1.1 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:red}-1 release audit{color}. The applied patch generated 4 release audit warnings (more than the trunk's current 0 warnings).
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.
{color:red}-1 site{color}. The patch appears to cause mvn site goal to fail.
{color:green}+1 core tests{color}. The patch passed unit tests in .
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8338//testReport/
Release audit warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8338//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8338//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8338//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8338//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8338//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8338//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8338//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8338//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8338//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8338//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8338//console
This message is automatically generated.
[jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862186#comment-13862186 ] Ted Yu commented on HBASE-10263: [~apurtell]: Do you want this in 0.98?
[jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862163#comment-13862163 ] Feng Honghua commented on HBASE-10263: -- [~yuzhih...@gmail.com], [HBASE-10280|https://issues.apache.org/jira/browse/HBASE-10280] has been created per your suggestion; please check its description to see if the per-column-family behavior matches your expectation. Thanks again :-)
[jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862157#comment-13862157 ] Feng Honghua commented on HBASE-10263: -- [~yuzhih...@gmail.com]
bq.Is there plan to make inMemoryForceMode column-family config ?
==> Hmm... it sounds reasonable and feasible, but I'm not sure providing such fine-grained control for this flag is desirable for users. Let me create a new JIRA for it, and I will implement it if someone requests or wants it. Thanks for the suggestion :-)
[jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862153#comment-13862153 ] Feng Honghua commented on HBASE-10263: -- [~stack]
bq.I would suggest that this behavior be ON by default in a major release of hbase (0.98 if @apurtell is amenable or 1.0.0 if not); to me, the behavior with this patch is more the 'expected' behavior.
==> By default the single/multi/in-memory ratio is the same as before (without any tweak): 25%:50%:25%, but users can change it by setting the new configurations. The 'inMemoryForceMode' (preemptive mode for in-memory blocks) is OFF by default. You want to turn 'inMemoryForceMode' ON? Hmm, what about being conservative at first by keeping it OFF by default, and turning it ON if we eventually find that most of our users enable it for their real use :-) At least we now provide users a new option to control what 'in-memory' cached blocks mean and how they behave, and when it's off, users can configure the single/multi/in-memory ratios. Opinion?
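For concreteness, the default 25%:50%:25% split discussed above divides the cache capacity as follows. This is a standalone arithmetic sketch; the class and method names are illustrative, not HBase API.

```java
// Illustrative arithmetic only: how a single/multi/in-memory percentage
// split partitions a given LruBlockCache capacity into three zones.
public class ZoneSizes {
    // Returns {singleSize, multiSize, memorySize} in bytes.
    static long[] partition(long maxSize, float single, float multi, float memory) {
        return new long[] {
            (long) (maxSize * single),   // single-access zone
            (long) (maxSize * multi),    // multi-access zone
            (long) (maxSize * memory)    // in-memory zone
        };
    }

    public static void main(String[] args) {
        // A 1 MB cache with the default 0.25/0.50/0.25 factors.
        long[] z = partition(1024L * 1024L, 0.25f, 0.50f, 0.25f);
        System.out.println("single=" + z[0] + " multi=" + z[1] + " memory=" + z[2]);
    }
}
```

Making the three factors configurable simply replaces the hardcoded 0.25/0.50/0.25 constants with values read from the configuration.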
[jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862146#comment-13862146 ] Feng Honghua commented on HBASE-10263: -- They are not used in CacheConfig.java; they are read from the conf in the constructor and are indeed used in LruBlockCache.java.
[jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862144#comment-13862144 ] Feng Honghua commented on HBASE-10263: -- [~yuzhih...@gmail.com]:
bq.Is the above variable used(inMemoryForceMode) ?
==> No, they (together with the single/multi/memory factors) are not used. There is a historical reason for these variables: this flag (and the other 3 factors) is read from the *conf* passed as a parameter to the LruBlockCache constructor. In 0.94.3 (our internal branch) there is an INFO log for max-size before constructing the LruBlockCache, and I added the 'forceMode/single/multi/memory' info to that INFO log as well; they were used just for informational purposes. But that INFO log in CacheConfig.java doesn't exist in the trunk code (it was removed), and I forgot to remove these four just-for-info variables accordingly. *It won't affect correctness*. Thanks for pointing this out :-)
[jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13861941#comment-13861941 ] Ted Yu commented on HBASE-10263: In CacheConfig.java:
{code}
+boolean inMemoryForceMode = conf.getBoolean("hbase.rs.inmemoryforcemode",
+false);
{code}
Is the above variable used?
{code}
+ * configuration, inMemoryForceMode is a cluster-wide configuration
{code}
Is there a plan to make inMemoryForceMode a column-family config?
[jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13861648#comment-13861648 ] stack commented on HBASE-10263:
---
The test failure is likely unrelated. Let me repost the patch to get another hadoopqa run. I skimmed the patch. LGTM. Nice test, [~fenghh]. I would suggest that this behavior be ON by default in a major release of hbase (0.98 if @apurtell is amenable, or 1.0.0 if not); to me, the way this patch works is more the 'expected' behavior.
> make LruBlockCache single/multi/in-memory ratio user-configurable and provide
> preemptive mode for in-memory type block
> --
>
> Key: HBASE-10263
> URL: https://issues.apache.org/jira/browse/HBASE-10263
> Project: HBase
> Issue Type: Improvement
> Reporter: Feng Honghua
> Assignee: Feng Honghua
> Attachments: HBASE-10263-trunk_v0.patch
>
> Currently the single/multi/in-memory ratio in LruBlockCache is hardcoded at
> 1:2:1, which can lead to counter-intuitive behavior in scenarios where an
> in-memory table's read performance is much worse than an ordinary table's,
> even though the two tables' data sizes are roughly equal and both larger than
> the regionserver's cache size (we ran such an experiment and verified that
> the in-memory table's random-read performance was two times worse than the
> ordinary table's).
> This patch fixes the above issue and provides:
> 1. A user-configurable single/multi/in-memory ratio.
> 2. A configurable switch that makes in-memory blocks preemptive. "Preemptive"
> means that when the switch is on, an in-memory block can evict any ordinary
> block to make room, until no ordinary blocks remain; when the switch is off
> (the default), the behavior is the same as before, with the
> single/multi/in-memory ratio determining eviction.
> By default both changes are off, so the behavior is identical to that before
> this patch; it is the client's/user's choice to enable either of these two
> enhancements.
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
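The eviction policy the quoted description proposes can be sketched in a few dozen lines. The following is an illustrative, self-contained Java model — not the actual HBase `LruBlockCache` code or API; all class, method, and tier names here are hypothetical — showing three LRU buckets with user-configurable target ratios, plus the preemptive switch under which an incoming in-memory block displaces ordinary (single/multi) blocks until none remain:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of the policy described in the issue: three LRU buckets
// (single-access, multi-access, in-memory) with configurable target ratios
// instead of the hardcoded 1:2:1, plus a "preemptive" switch. Names are
// illustrative, not the HBase API.
public class TieredLruSketch {
    public enum Tier { SINGLE, MULTI, IN_MEMORY }

    private final long capacity;
    private final double[] ratio;
    private final boolean preemptive;
    // One LRU queue of block sizes per tier; head = least recently used.
    private final List<ArrayDeque<Long>> buckets = new ArrayList<>();
    private final long[] used = new long[3];

    public TieredLruSketch(long capacity, double single, double multi,
                           double inMemory, boolean preemptive) {
        if (Math.abs(single + multi + inMemory - 1.0) > 1e-9) {
            throw new IllegalArgumentException("ratios must sum to 1.0");
        }
        this.capacity = capacity;
        this.ratio = new double[] { single, multi, inMemory };
        this.preemptive = preemptive;
        for (int i = 0; i < 3; i++) buckets.add(new ArrayDeque<>());
    }

    public void cache(Tier tier, long size) {
        buckets.get(tier.ordinal()).addLast(size);
        used[tier.ordinal()] += size;
        while (totalUsed() > capacity) evictOne(tier);
    }

    private void evictOne(Tier incoming) {
        int victim;
        // Preemptive mode: an incoming in-memory block kicks out ordinary
        // blocks first, for as long as any ordinary blocks remain.
        if (preemptive && incoming == Tier.IN_MEMORY && used[0] + used[1] > 0) {
            victim = used[0] >= used[1] ? 0 : 1;
        } else {
            victim = mostOverQuota();
        }
        Long evicted = buckets.get(victim).pollFirst();
        if (evicted != null) used[victim] -= evicted;
    }

    // Default mode: evict from the non-empty bucket furthest over its quota,
    // i.e. the configured ratio decides which tier gives up space.
    private int mostOverQuota() {
        int best = -1;
        double worst = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < 3; i++) {
            if (buckets.get(i).isEmpty()) continue;
            double over = used[i] - ratio[i] * capacity;
            if (over > worst) { worst = over; best = i; }
        }
        return best;
    }

    public long totalUsed() { return used[0] + used[1] + used[2]; }
    public long used(Tier t) { return used[t.ordinal()]; }

    public static void main(String[] args) {
        // With preemption on, the in-memory block displaces the ordinary
        // single-access block even though IN_MEMORY exceeds its 25% quota.
        TieredLruSketch cache = new TieredLruSketch(100, 0.25, 0.5, 0.25, true);
        cache.cache(Tier.SINGLE, 60);
        cache.cache(Tier.MULTI, 40);
        cache.cache(Tier.IN_MEMORY, 50);
        System.out.println("single=" + cache.used(Tier.SINGLE)
                + " multi=" + cache.used(Tier.MULTI)
                + " inMemory=" + cache.used(Tier.IN_MEMORY));
        // prints: single=0 multi=40 inMemory=50
    }
}
```

In the actual patch, the ratios and the preemptive switch are read from regionserver configuration (hbase-site.xml properties) rather than constructor arguments; check the attached patch for the exact property names.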
[jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13861358#comment-13861358 ] Feng Honghua commented on HBASE-10263:
---
I re-ran the unit tests several times locally and they all passed.
[jira] [Commented] (HBASE-10263) make LruBlockCache single/multi/in-memory ratio user-configurable and provide preemptive mode for in-memory type block
[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13859527#comment-13859527 ] Hadoop QA commented on HBASE-10263:
---
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12620929/HBASE-10263-trunk_v0.patch
against trunk revision .
ATTACHMENT ID: 12620929

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests.
{color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.
{color:green}+1 hadoop1.1{color}. The patch compiles against the hadoop 1.1 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:red}-1 findbugs{color}. The patch appears to introduce 1 new Findbugs (version 1.3.9) warning.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.
{color:red}-1 site{color}. The patch appears to cause the mvn site goal to fail.
{color:red}-1 core tests{color}. The patch failed these unit tests:
org.apache.hadoop.hbase.master.TestSplitLogManager

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/8310//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8310//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8310//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8310//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8310//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8310//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8310//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8310//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8310//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/8310//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/8310//console

This message is automatically generated.