[ https://issues.apache.org/jira/browse/HADOOP-2636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jim Kellerman updated HADOOP-2636:
----------------------------------

    Summary: [hbase] Make cache flush triggering less simplistic  (was: [hbase] Make flusher less dumb)

> [hbase] Make cache flush triggering less simplistic
> ---------------------------------------------------
>
>                 Key: HADOOP-2636
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2636
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: contrib/hbase
>            Reporter: stack
>            Assignee: Jim Kellerman
>            Priority: Minor
>
> When the flusher runs -- it's triggered when the sum of all Stores in a 
> Region exceeds a configurable max size -- we flush all Stores, even though a 
> Store memcache might hold but a few bytes.
> I would think Stores should only dump their memcache to disk if they have 
> some substance.
> The problem becomes more acute, the more families you have in a Region.
> Possible behaviors would be to dump only the biggest Store, or only those 
> Stores whose memcache is > 50% of the max memcache size (a sketch of this 
> selection follows below).  Behavior would vary depending on the prompt that 
> provoked the flush.  We would also log why the flush is running: optional 
> or > max size.
> This issue comes out of HADOOP-2621.
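> Below is a minimal sketch of that selection policy, assuming a hypothetical 
> Store handle that can report its memcache size; the names Store, 
> getMemcacheSize and flushMemcache are illustrative only, not the actual 
> contrib/hbase API:
>
>   import java.util.ArrayList;
>   import java.util.List;
>
>   // Illustrative only: choose which Stores to flush instead of always flushing all.
>   class SelectiveFlusher {
>     interface Store {
>       long getMemcacheSize();   // bytes currently buffered in the memcache
>       void flushMemcache();     // write the memcache contents out to disk
>     }
>
>     // Flush only Stores whose memcache exceeds half the configured maximum;
>     // if none qualify, flush the single largest Store so the Region still
>     // makes progress toward the size threshold.
>     static void flushSelectively(List<Store> stores, long maxMemcacheSize) {
>       List<Store> candidates = new ArrayList<Store>();
>       Store biggest = null;
>       for (Store s : stores) {
>         if (biggest == null || s.getMemcacheSize() > biggest.getMemcacheSize()) {
>           biggest = s;
>         }
>         if (s.getMemcacheSize() > maxMemcacheSize / 2) {
>           candidates.add(s);
>         }
>       }
>       if (candidates.isEmpty() && biggest != null) {
>         candidates.add(biggest);
>       }
>       for (Store s : candidates) {
>         s.flushMemcache();
>       }
>     }
>   }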

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
