[ https://issues.apache.org/jira/browse/HBASE-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13862153#comment-13862153 ]

Feng Honghua commented on HBASE-10263:
--------------------------------------

[~stack]
bq.I would suggest that this behavior be ON by default in a major release of 
hbase (0.98 if @apurtell is amenable or 1.0.0 if not); to me, the way this 
patch is more the 'expected' behavior.
==> The single/multi/in-memory ratio by default is the same as before (without 
any tweak): 25%:50%:25%, but users can change it by setting the new 
configurations. The 'inMemoryForceMode' (preemptive mode for in-memory blocks) 
is OFF by default. You want 'inMemoryForceMode' ON by default? Hmm, what about 
we first stay conservative by keeping it OFF by default, and turn it on if we 
eventually find that most of our users enable it for their real use :-)
    At least we now provide users a new option to control what 'in-memory' 
cached blocks mean and how they behave, and when it is off users can still 
configure the single/multi/in-memory ratios.
    Opinion?
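
For illustration, here is a rough sketch of what tuning these knobs could look like in hbase-site.xml. The property names below (hbase.lru.blockcache.single.percentage, hbase.lru.blockcache.multi.percentage, hbase.lru.blockcache.memory.percentage, hbase.lru.rs.inmemoryforcemode) are assumptions inferred from the patch's description, not confirmed in this thread; the values shown are just the stated defaults.

```xml
<!-- hbase-site.xml sketch: assumed property names, shown with the
     default 25%:50%:25% split described above -->
<property>
  <name>hbase.lru.blockcache.single.percentage</name>
  <value>0.25</value>
</property>
<property>
  <name>hbase.lru.blockcache.multi.percentage</name>
  <value>0.50</value>
</property>
<property>
  <name>hbase.lru.blockcache.memory.percentage</name>
  <value>0.25</value>
</property>
<property>
  <!-- preemptive mode for in-memory blocks; OFF by default -->
  <name>hbase.lru.rs.inmemoryforcemode</name>
  <value>false</value>
</property>
```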

> make LruBlockCache single/multi/in-memory ratio user-configurable and provide 
> preemptive mode for in-memory type block
> ----------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-10263
>                 URL: https://issues.apache.org/jira/browse/HBASE-10263
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Feng Honghua
>            Assignee: Feng Honghua
>         Attachments: HBASE-10263-trunk_v0.patch, HBASE-10263-trunk_v1.patch
>
>
> Currently the single/multi/in-memory ratio in LruBlockCache is hardcoded to 
> 1:2:1, which can lead to counter-intuitive behavior in scenarios where an 
> in-memory table's read performance is much worse than an ordinary table's, 
> even though the two tables' data sizes are roughly equal and larger than the 
> regionserver's cache size (we ran such an experiment and verified that the 
> in-memory table's random-read performance was two times worse than the 
> ordinary table's).
> This patch fixes the above issue and provides:
> 1. The single/multi/in-memory ratio is made user-configurable.
> 2. A configurable switch that makes in-memory blocks preemptive: when the 
> switch is on, an in-memory block can evict any ordinary block to make room 
> until no ordinary blocks remain; when the switch is off (the default), the 
> behavior is the same as before, using the single/multi/in-memory ratio to 
> determine eviction.
> By default both changes are off, so the behavior is the same as before 
> applying this patch. It is the client/user's choice whether and which 
> behavior to use by enabling one of these two enhancements.
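
The eviction order implied by the preemptive switch can be sketched as follows. This is a hypothetical illustration, not the actual LruBlockCache code: the class, method, and the LRU-deque model of the cache are all invented for the example; it only shows that, with force mode on, an incoming in-memory block picks a single/multi victim first and falls back to the plain LRU tail otherwise.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class EvictionSketch {
    // Block priorities as in LruBlockCache's single/multi/in-memory split.
    public enum Priority { SINGLE, MULTI, IN_MEMORY }

    // Choose which cached block to evict for an incoming block.
    // cacheLru is ordered head = most recently used, tail = least recently used.
    public static Priority chooseVictim(boolean inMemoryForceMode,
                                        Priority incoming,
                                        Deque<Priority> cacheLru) {
        if (inMemoryForceMode && incoming == Priority.IN_MEMORY) {
            // Preemptive mode: an in-memory block may kick out any ordinary
            // (single/multi) block; only when none remain does it fall through.
            for (Priority p : cacheLru) {
                if (p != Priority.IN_MEMORY) {
                    return p;
                }
            }
        }
        // Default behavior: evict the least-recently-used block (the
        // ratio-based per-priority selection is omitted for brevity).
        return cacheLru.peekLast();
    }

    public static void main(String[] args) {
        Deque<Priority> lru = new ArrayDeque<>(List.of(
            Priority.IN_MEMORY, Priority.SINGLE, Priority.MULTI));
        // Force mode ON: the in-memory block preempts an ordinary block.
        System.out.println(chooseVictim(true, Priority.IN_MEMORY, lru));
        // Force mode OFF: plain LRU tail is evicted.
        System.out.println(chooseVictim(false, Priority.IN_MEMORY, lru));
    }
}
```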



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
