[
https://issues.apache.org/jira/browse/HBASE-69?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12566384#action_12566384
]
Billy Pearson commented on HBASE-69:
------------------------------------
I think what is happening is that while a compaction is working on a column, any
request to add a compaction to the queue is rejected; but after the compaction
starts, new mapfiles may exceed the threshold of 3 map files. This leaves extra
map files waiting for the next memcache flush to add a new compaction to the
queue after the compaction completes on the region.
I do not think this is much of a problem: there will be a memcache flush some
time down the road that starts a new compaction.
> [hbase] Make cache flush triggering less simplistic
> ---------------------------------------------------
>
> Key: HBASE-69
> URL: https://issues.apache.org/jira/browse/HBASE-69
> Project: Hadoop HBase
> Issue Type: Improvement
> Components: regionserver
> Reporter: stack
> Assignee: Jim Kellerman
> Fix For: 0.2.0
>
> Attachments: patch.txt, patch.txt, patch.txt, patch.txt, patch.txt,
> patch.txt, patch.txt, patch.txt, patch.txt
>
>
> When flusher runs -- it's triggered when the sum of all Stores in a Region > a
> configurable max size -- we flush all Stores even though a Store memcache might
> have but a few bytes.
> I would think Stores should only dump their memcache to disk if they have some
> substance.
> The problem becomes more acute, the more families you have in a Region.
> Possible behaviors would be to dump the biggest Store only, or only those
> Stores > 50% of max memcache size. Behavior would vary depending on the
> prompt that provoked the flush. Would also log why the flush is running:
> optional or > max size.
> This issue comes out of HADOOP-2621.
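The selective-flush policies proposed in the description could be sketched as follows (a hypothetical helper, not HBase's actual API): flush only the Stores holding more than 50% of the max memcache size, falling back to the single biggest Store so a triggered flush always frees something.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class SelectiveFlush {
    // Returns the Stores to flush: those whose memcache exceeds half the
    // per-region max; if none qualify, the biggest Store alone, so the
    // flush still reclaims memory instead of dumping near-empty memcaches.
    static List<String> storesToFlush(Map<String, Long> storeSizes,
                                      long maxMemcacheSize) {
        List<String> picks = new ArrayList<>();
        String biggest = null;
        long biggestSize = -1;
        for (Map.Entry<String, Long> e : storeSizes.entrySet()) {
            if (e.getValue() > maxMemcacheSize / 2) {
                picks.add(e.getKey());
            }
            if (e.getValue() > biggestSize) {
                biggestSize = e.getValue();
                biggest = e.getKey();
            }
        }
        if (picks.isEmpty() && biggest != null) {
            picks.add(biggest); // dump the biggest Store only
        }
        return picks;
    }
}
```

The more column families a Region has, the more this matters: flushing every Store on each trigger produces many tiny map files for the small families, which in turn inflates compaction work.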
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.