[
https://issues.apache.org/jira/browse/HBASE-69?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12568440#action_12568440
]
Billy Pearson commented on HBASE-69:
------------------------------------
I get these
{code}
INFO org.apache.hadoop.hbase.HRegion: Blocking updates for 'IPC Server handler
21 on 60020': cache flushes requested 8 is >= max flush request count 8
{code}
then everything stops not long after a job starts and no more processing happens;
everything is blocked. I waited 3 hours to see if the optional flush would unblock
it, but that does not seem to be happening.
Might take a look at the memcache blocker again.
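The log message above suggests a simple request-count gate: updates block once the number of outstanding flush requests reaches a maximum, and stay blocked until flushes complete. A minimal sketch of that idea follows; the class and method names here are illustrative guesses, not the actual HRegion code.

```java
// Hypothetical sketch of the update-blocking gate implied by the log message
// above; names and structure are illustrative, not the real HRegion internals.
public class FlushGate {
    private final int maxFlushRequestCount;
    private int flushesRequested = 0;

    public FlushGate(int maxFlushRequestCount) {
        this.maxFlushRequestCount = maxFlushRequestCount;
    }

    // Called on each cache-flush request; returns true when updates must block
    // (requested count has reached the configured maximum).
    public synchronized boolean requestFlushAndCheckBlocked() {
        flushesRequested++;
        return flushesRequested >= maxFlushRequestCount;
    }

    // Must be called when a flush actually completes; if completions never
    // arrive, updates stay blocked forever -- the symptom described above.
    public synchronized void flushCompleted() {
        if (flushesRequested > 0) {
            flushesRequested--;
        }
    }

    public synchronized boolean isBlocked() {
        return flushesRequested >= maxFlushRequestCount;
    }
}
```

If the flusher never runs (or never decrements the counter), `isBlocked()` stays true indefinitely, which would match the hang reported here.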
> [hbase] Make cache flush triggering less simplistic
> ---------------------------------------------------
>
> Key: HBASE-69
> URL: https://issues.apache.org/jira/browse/HBASE-69
> Project: Hadoop HBase
> Issue Type: Improvement
> Components: regionserver
> Reporter: stack
> Assignee: Jim Kellerman
> Fix For: 0.2.0
>
> Attachments: patch.txt, patch.txt, patch.txt, patch.txt, patch.txt,
> patch.txt, patch.txt, patch.txt, patch.txt, patch.txt, patch.txt, patch.txt
>
>
> When the flusher runs -- it is triggered when the sum of all Stores in a Region exceeds a
> configurable max size -- we flush all Stores, even though a Store's memcache might
> hold only a few bytes.
> I would think Stores should only dump their memcache to disk if they have some
> substance.
> The problem becomes more acute, the more families you have in a Region.
> Possible behaviors would be to dump the biggest Store only, or only those
> Stores > 50% of max memcache size. Behavior would vary dependent on the
> prompt that provoked the flush. Would also log why the flush is running:
> optional or > max size.
> This issue comes out of HADOOP-2621.
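The selective behaviors proposed above (flush only the biggest Store, or only Stores over 50% of the max memcache size) can be sketched as follows. This is a hypothetical illustration of the selection policy, not the patched HBase code; the method name and the size-list representation are assumptions.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative sketch of selective memcache flushing: instead of flushing
// every Store in the Region, pick only Stores with real substance.
// Names and thresholds are assumptions, not the actual HBase API.
public class SelectiveFlush {

    // Given per-Store memcache sizes (bytes), return the sizes of the Stores
    // that should be flushed: those over half the per-Store memcache budget.
    static List<Long> storesToFlush(List<Long> storeSizes, long maxMemcacheSize) {
        List<Long> toFlush = new ArrayList<>();
        for (long size : storeSizes) {
            if (size > maxMemcacheSize / 2) {
                toFlush.add(size);
            }
        }
        // Fallback: if no single Store crossed the threshold but a flush was
        // still requested, flush only the biggest Store rather than all of them.
        if (toFlush.isEmpty() && !storeSizes.isEmpty()) {
            toFlush.add(Collections.max(storeSizes));
        }
        return toFlush;
    }
}
```

With many families in a Region, this avoids writing out a pile of tiny flush files for Stores holding only a few bytes, which is the acute case the description calls out.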
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.