I have no more good ideas on it; maybe you should throttle the client write
requests? I guess the cluster is under heavy write load. When you see
"Too many hlogs" continually, you'll probably see that numOfStoreFiles is
high as well, due to the forced region flushes (beginning from the oldest
one), and then the compaction queue size will increase too...
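
For illustration, a minimal client-side throttling sketch, assuming the
0.94-era HTable API and Guava's RateLimiter on the classpath; the class name,
table name, column names, and the 5000 ops/sec rate are all placeholders,
not values from this thread:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;
import com.google.common.util.concurrent.RateLimiter;

public class ThrottledWriter {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "my_table");  // hypothetical table
        // Cap client writes (e.g. 5000 puts/sec) so the region server's
        // memstores can flush faster than WAL files accumulate.
        RateLimiter limiter = RateLimiter.create(5000.0);
        for (long i = 0; i < 1000000; i++) {
            limiter.acquire();  // blocks as needed to honor the rate
            Put put = new Put(Bytes.toBytes("row-" + i));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(i));
            table.put(put);
        }
        table.close();
    }
}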

Best,
Liang
________________________________________
From: Viral Bajaria [viral.baja...@gmail.com]
Sent: June 27, 2013 16:29
To: user@hbase.apache.org
Subject: Re: Re: flushing + compactions after config change

Thanks Liang!

Found the logs. I had gone overboard with my greps and missed the "Too
many hlogs" line for the regions that I was trying to debug.

A few sample log lines:

2013-06-27 07:42:49,602 INFO org.apache.hadoop.hbase.regionserver.wal.HLog:
Too many hlogs: logs=33, maxlogs=32; forcing flush of 2 regions(s):
0e940167482d42f1999b29a023c7c18a, 3f486a879418257f053aa75ba5b69b14
2013-06-27 08:10:29,996 INFO org.apache.hadoop.hbase.regionserver.wal.HLog:
Too many hlogs: logs=33, maxlogs=32; forcing flush of 1 regions(s):
0e940167482d42f1999b29a023c7c18a
2013-06-27 08:17:44,719 INFO org.apache.hadoop.hbase.regionserver.wal.HLog:
Too many hlogs: logs=33, maxlogs=32; forcing flush of 2 regions(s):
0e940167482d42f1999b29a023c7c18a, e380fd8a7174d34feb903baa97564e08
2013-06-27 08:23:45,357 INFO org.apache.hadoop.hbase.regionserver.wal.HLog:
Too many hlogs: logs=33, maxlogs=32; forcing flush of 3 regions(s):
0e940167482d42f1999b29a023c7c18a, 3f486a879418257f053aa75ba5b69b14,
e380fd8a7174d34feb903baa97564e08

Any pointers on the best practice for avoiding this scenario?

Thanks,
Viral

On Thu, Jun 27, 2013 at 1:21 AM, 谢良 <xieli...@xiaomi.com> wrote:

> If the global memstore upper limit is reached, you'll find "Blocking
> updates on" in your log files (see MemStoreFlusher.reclaimMemStoreMemory);
> if it's caused by too many log files, you'll find "Too many hlogs:
> logs=" (see HLog.cleanOldLogs).
> Hope it's helpful for you :)
>
> Best,
> Liang
>
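
To make the mechanism Liang describes concrete, here is a rough Java sketch
of the "too many hlogs" check: a paraphrase of the idea behind
HLog.cleanOldLogs, not the actual HBase source; the per-region bookkeeping
and the oldestLogId parameter are simplified assumptions:

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Simplified model: the WAL tracks, per region, the id of the oldest
// log file that still holds unflushed edits for that region.
public class WalFlushCheck {
    private final int maxLogs = 32;  // cf. hbase.regionserver.maxlogs
    // insertion order approximates log age, oldest first
    private final LinkedHashMap<String, Long> oldestLogPerRegion =
        new LinkedHashMap<String, Long>();

    // Called after a log roll: if we now carry more logs than maxLogs,
    // pick the regions pinning the oldest log, so that flushing them
    // lets that log be archived.
    List<String> findRegionsToFlush(int currentLogCount, long oldestLogId) {
        List<String> regionsToFlush = new ArrayList<String>();
        if (currentLogCount > maxLogs) {
            for (Map.Entry<String, Long> e : oldestLogPerRegion.entrySet()) {
                if (e.getValue() <= oldestLogId) {
                    regionsToFlush.add(e.getKey());
                }
            }
            System.out.println("Too many hlogs: logs=" + currentLogCount
                + ", maxlogs=" + maxLogs + "; forcing flush of "
                + regionsToFlush.size() + " regions(s): " + regionsToFlush);
        }
        return regionsToFlush;
    }
}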
