[ 
https://issues.apache.org/jira/browse/HBASE-14383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14746597#comment-14746597
 ] 

Enis Soztutar edited comment on HBASE-14383 at 9/16/15 12:48 AM:
-----------------------------------------------------------------

bq. Can we retire hbase.regionserver.maxlogs?
I am in favor of that, or of keeping it as a safety net but with a much higher 
default (128?). 

With the default settings:
{code}
hbase.regionserver.maxlogs=32
hbase.regionserver.hlog.blocksize=128MB
hbase.regionserver.logroll.multiplier=0.95
{code}
We can only have 32 * 128MB * 0.95 ≈ 3.8GB of WAL entries before log rolling 
forces flushes. So, if you are running with a 32GB heap and the default 0.4 
memstore fraction (12.8GB of memstore space), most of that space is simply 
left unused. 
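
For clarity, the arithmetic behind that claim (plain numbers, no HBase API 
involved):
{code}
// Back-of-the-envelope math for the defaults quoted above.
long maxLogs = 32;                       // hbase.regionserver.maxlogs
long blockSizeMb = 128;                  // hbase.regionserver.hlog.blocksize
double rollMultiplier = 0.95;            // hbase.regionserver.logroll.multiplier

double walCapMb = maxLogs * blockSizeMb * rollMultiplier; // 3891.2 MB ~= 3.8 GB

double heapMb = 32 * 1024;               // 32 GB heap
double memstoreCapMb = heapMb * 0.4;     // 13107.2 MB ~= 12.8 GB memstore space
// The ~3.8 GB WAL cap forces flushes long before ~12.8 GB of memstore fills.
{code}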

Also, not related to compactions, but I have seen cases where there are not 
enough regions per region server to fill the whole memstore space with the 
128MB flush size (a few active regions on a big heap). We do not allow a 
memstore to grow beyond the flush limit, to guard against long flushes and 
long MTTR. But my feeling is that we could have a dynamically adjustable flush 
size that takes a min and a max flush size into account and delays triggering 
the flush while there is still free memstore space. 
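
A rough sketch of what I mean (every name below is hypothetical; nothing like 
this exists in HBase today):
{code}
// Hypothetical: scale the effective flush size between a configured min and
// max, based on how much of the global memstore budget is still free.
long dynamicFlushSize(long minFlushSize, long maxFlushSize,
                      long globalMemstoreLimit, long globalMemstoreUsed) {
  // Fraction of the region server's memstore budget that is still free.
  double free = 1.0 - (double) globalMemstoreUsed / globalMemstoreLimit;
  // Plenty of free space: let memstores grow toward the max before flushing.
  // Under pressure: fall back toward the conservative min flush size.
  long size = minFlushSize + (long) (free * (maxFlushSize - minFlushSize));
  return Math.max(minFlushSize, Math.min(maxFlushSize, size));
}
{code}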



> Compaction improvements
> -----------------------
>
>                 Key: HBASE-14383
>                 URL: https://issues.apache.org/jira/browse/HBASE-14383
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Vladimir Rodionov
>            Assignee: Vladimir Rodionov
>             Fix For: 2.0.0
>
>
> Still a major issue in many production environments. The general recommendation 
> is to disable region splitting and major compactions to reduce unpredictable 
> IO/CPU spikes, especially during peak times, and to run them manually during 
> off-peak times. Even this does not resolve the issues completely.
> h3. Flush storms
> * WAL rolling events across the cluster can be highly correlated, hence flushing 
> memstores, hence triggering minor compactions that can be promoted to major 
> ones. These events cluster in time when there is a balanced write load on the 
> regions of a table.
> * the same is true for memstore flushes triggered by the periodic memstore 
> flusher. 
> Both of the above may produce *flush storms*, which are as bad as *compaction 
> storms*. 
> What can be done here? We can spread these events over time by randomizing 
> (adding jitter to) several config options (see the sketch after this list):
> # hbase.regionserver.optionalcacheflushinterval
> # hbase.regionserver.flush.per.changes
> # hbase.regionserver.maxlogs   
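> A minimal sketch of the jitter idea (the helper below is hypothetical, not an 
> existing HBase method; the property names above are the real ones):
> {code}
> // Spread correlated events by perturbing a configured value by +/- jitterFraction.
> // E.g. a 3600000 ms flush interval with 0.25 jitter lands in [2700000, 4500000].
> long jittered(long configuredValue, double jitterFraction) {
>   double delta = (2 * Math.random() - 1) * jitterFraction; // uniform in [-f, +f]
>   return (long) (configuredValue * (1.0 + delta));
> }
> {code}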
> h3. ExploringCompactionPolicy max compaction size
> One more optimization can be added to ExploringCompactionPolicy. To limit the 
> size of a compaction, there is a config parameter one could use: 
> hbase.hstore.compaction.max.size. It would be nice to have two separate 
> limits: one for peak and one for off-peak hours, as sketched below.
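> A hedged sketch of how the two limits could be wired up 
> (hbase.hstore.compaction.max.size is the real property; the ".offpeak" 
> variant is only a suggestion here):
> {code}
> // Uses org.apache.hadoop.conf.Configuration.
> long maxCompactionSize(boolean isOffPeak, Configuration conf) {
>   long peakLimit = conf.getLong("hbase.hstore.compaction.max.size", Long.MAX_VALUE);
>   // Hypothetical companion setting; falls back to the peak limit if unset.
>   long offPeakLimit = conf.getLong("hbase.hstore.compaction.max.size.offpeak", peakLimit);
>   return isOffPeak ? offPeakLimit : peakLimit;
> }
> {code}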
> h3. ExploringCompactionPolicy selection evaluation algorithm
> Too simple? A selection with more files always wins; if the number of files is 
> the same, the selection with the smaller total size wins. 
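> For reference, that rule paraphrased as code (a paraphrase of the text above, 
> not the exact HBase source):
> {code}
> // "Better selection" tie-break as described: more files wins; on equal file
> // counts, the smaller total size wins.
> boolean isBetterSelection(int bestCount, long bestSize, int count, long size) {
>   if (count != bestCount) {
>     return count > bestCount;
>   }
>   return size < bestSize;
> }
> {code}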


