>>I still (highly) suspect that there is something wrong with the flush
>>queue (some entry pushed into it can't be polled out).
Yeah, I suspect the same thing. Maybe some new logs will help uncover the
issue.
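
To make that suspicion concrete: the flush queue is a DelayQueue (per the jstack
later in this thread), and an element whose getDelay() never reaches zero stays
in the queue but is never handed out, so poll(timeout) keeps returning null even
though the queue is non-empty. A standalone illustration of that failure mode
(plain java.util.concurrent, made-up names, not HBase code):

import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class StuckEntryDemo {
  // Stand-in for a queue entry whose "flush by" time is somehow miscomputed.
  static class StuckEntry implements Delayed {
    public long getDelay(TimeUnit unit) {
      return unit.convert(Long.MAX_VALUE, TimeUnit.MILLISECONDS); // never becomes ready
    }
    public int compareTo(Delayed other) {
      long diff = getDelay(TimeUnit.NANOSECONDS) - other.getDelay(TimeUnit.NANOSECONDS);
      return (diff < 0) ? -1 : ((diff > 0) ? 1 : 0);
    }
  }

  public static void main(String[] args) throws InterruptedException {
    DelayQueue<StuckEntry> flushQueue = new DelayQueue<StuckEntry>();
    flushQueue.add(new StuckEntry());
    // The entry is in the queue, but poll never hands it out.
    StuckEntry polled = flushQueue.poll(100, TimeUnit.MILLISECONDS);
    System.out.println("size=" + flushQueue.size() + ", polled=" + polled); // size=1, polled=null
  }
}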


On Thu, Jun 5, 2014 at 11:06 AM, Stack <st...@duboce.net> wrote:

> Always the same two regions that get stuck or does it vary?  Another set of
> example logs may help uncover the sequence of trouble-causing events.
>
> Thanks,
> St.Ack
>
>
> On Wed, Jun 4, 2014 at 7:31 PM, sunweiwei <su...@asiainfo-linkage.com>
> wrote:
>
> > my log is similar as HBASE-10499.
> >
> > Thanks
> >
> > -----Original Message-----
> > From: saint....@gmail.com [mailto:saint....@gmail.com] on behalf of Stack
> > Sent: June 3, 2014 23:10
> > To: Hbase-User
> > Subject: Re: Re: forcing flush not works
> >
> > Mind posting a link to your log?  Sounds like HBASE-10499, as Honghua says.
> > St.Ack
> >
> >
> > On Tue, Jun 3, 2014 at 2:34 AM, sunweiwei <su...@asiainfo-linkage.com>
> > wrote:
> >
> > > Thanks. Maybe it is the same as HBASE-10499.
> > > I stopped the regionserver and then started it again, and HBase went back to normal.
> > > This is the jstack output from when the 2 regions could not flush:
> > >
> > > "Thread-17" prio=10 tid=0x00007f6210383800 nid=0x6540 waiting on
> > condition
> > > [0x00007f61e0a26000]
> > >    java.lang.Thread.State: TIMED_WAITING (parking)
> > >         at sun.misc.Unsafe.park(Native Method)
> > >         - parking to wait for  <0x000000041ae0e6b8> (a
> > > java.util.concurrent.
> > > locks.AbstractQueuedSynchronizer$ConditionObject)
> > >         at
> > > java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:196)
> > >         at
> > >
> > >
> >
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitN
> > > anos(AbstractQueuedSynchronizer.java:2025)
> > >         at java.util.concurrent.DelayQueue.poll(DelayQueue.java:201)
> > >         at java.util.concurrent.DelayQueue.poll(DelayQueue.java:39)
> > >         at
> > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemSto
> > > reFlusher.java:228)
> > >         at java.lang.Thread.run(Thread.java:662)
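> > >
> > > (For what it's worth, the parked frames above are just the flush handler
> > > waiting in DelayQueue.poll(timeout) for the next queue entry. A trivial
> > > standalone loop -- made-up names, not HBase code -- parks in exactly the
> > > same way, so the trace itself only shows the handler waiting for work,
> > > which fits the theory that the entries for the 2 regions never become pollable:)
> > >
> > > import java.util.concurrent.DelayQueue;
> > > import java.util.concurrent.Delayed;
> > > import java.util.concurrent.TimeUnit;
> > >
> > > // Toy loop that idles on a DelayQueue; jstack it and you get the same
> > > // Unsafe.park / awaitNanos / DelayQueue.poll frames as above.
> > > public class IdlePollLoop {
> > >   public static void main(String[] args) throws InterruptedException {
> > >     DelayQueue<Delayed> queue = new DelayQueue<Delayed>();
> > >     while (true) {
> > >       // Parks here (TIMED_WAITING) while nothing in the queue is ready.
> > >       Delayed entry = queue.poll(10, TimeUnit.SECONDS);
> > >       if (entry == null) {
> > >         continue; // no entry became pollable, so nothing would get flushed
> > >       }
> > >       System.out.println("got " + entry);
> > >     }
> > >   }
> > > }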
> > >
> > > -----Original Message-----
> > > From: Honghua Feng [mailto:fenghong...@xiaomi.com]
> > > Sent: June 3, 2014 16:34
> > > To: user@hbase.apache.org
> > > Subject: Re: forcing flush not works
> > >
> > > The same symptom as HBASE-10499?
> > >
> > > I still (highly) suspect that there is something wrong with the flush
> > > queue (some entry pushed into it can't be polled out).
> > > ________________________________________
> > > From: sunweiwei [su...@asiainfo-linkage.com]
> > > Sent: June 3, 2014 15:43
> > > To: user@hbase.apache.org
> > > Subject: forcing flush not works
> > >
> > > Hi
> > >
> > >
> > >
> > > I'm running a write-heavy HBase 0.96 cluster. I found this in the regionserver log:
> > >
> > > 2014-06-03 15:13:19,445 INFO  [regionserver60020.logRoller] wal.FSHLog: Too many hlogs: logs=33, maxlogs=32; forcing flush of 3 regions(s): 1a7dda3c3815c19970ace39fd99abfe8, aff81bc46aa7d3ed51a01f11f23c8320, d5666e003f598147b4dda509f173a779
> > >
> > > 2014-06-03 15:13:23,869 INFO  [regionserver60020.logRoller] wal.FSHLog: Too many hlogs: logs=34, maxlogs=32; forcing flush of 2 regions(s): aff81bc46aa7d3ed51a01f11f23c8320, d5666e003f598147b4dda509f173a779
> > >
> > > ┇
> > >
> > > ┇
> > >
> > > 2014-06-03 15:18:14,778 INFO  [regionserver60020.logRoller] wal.FSHLog: Too many hlogs: logs=93, maxlogs=32; forcing flush of 2 regions(s): aff81bc46aa7d3ed51a01f11f23c8320, d5666e003f598147b4dda509f173a779
> > >
> > >
> > >
> > >
> > >
> > > It seems like the 2 regions can't be flushed and the WALs directory keeps
> > > growing. Then I found this in the client log:
> > >
> > > INFO | AsyncProcess-waitForMaximumCurrentTasks [2014-06-03 15:30:53] - : Waiting for the global number of running tasks to be equals or less than 0, tasksSent=15819, tasksDone=15818, currentTasksDone=15818, tableName=BT_D_BF001_201406
> > >
> > >
> > >
> > > Then the write speed becomes very slow.
> > >
> > > After I flush the 2 regions manually, write speed goes back to normal, but
> > > only for a short while.
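> > >
> > > (For reference, roughly the same manual flush can also be triggered from the
> > > Java client; a minimal sketch, using the table name from the client log above --
> > > it should also accept an individual region name:)
> > >
> > > import org.apache.hadoop.conf.Configuration;
> > > import org.apache.hadoop.hbase.HBaseConfiguration;
> > > import org.apache.hadoop.hbase.client.HBaseAdmin;
> > >
> > > // Minimal sketch: force a flush of the table from the client API.
> > > public class ManualFlush {
> > >   public static void main(String[] args) throws Exception {
> > >     Configuration conf = HBaseConfiguration.create();
> > >     HBaseAdmin admin = new HBaseAdmin(conf);
> > >     try {
> > >       // Flushes every region of the table named in the client log above.
> > >       admin.flush("BT_D_BF001_201406");
> > >     } finally {
> > >       admin.close();
> > >     }
> > >   }
> > > }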
> > >
> > >
> > >
> > > Any suggestions would be appreciated. Thanks.
> > >
> > >
> > >
> >
> >
>
