It seems you might have a write hotspot. 
Are your writes evenly distributed across the cluster? Do you have more than 
15-20 regions for that table?
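If the hotspot comes from sequential row keys (timestamps, counters), salting the key prefix is one common way to spread writes across regions. A minimal sketch; the bucket count and key format here are illustrative, not taken from your setup:

```java
// Minimal row-key salting sketch: prefix each key with a hash-derived
// bucket so sequential keys land on different regions. SALT_BUCKETS and
// the "NN-key" format are illustrative choices, not from your cluster.
public class SaltedKey {
    static final int SALT_BUCKETS = 16; // roughly match your region count

    static String salt(String rowKey) {
        // floorMod keeps the bucket non-negative even for negative hashCodes
        int bucket = Math.floorMod(rowKey.hashCode(), SALT_BUCKETS);
        return String.format("%02d-%s", bucket, rowKey);
    }

    public static void main(String[] args) {
        // The same logical key always maps to the same bucket,
        // so point reads can recompute the prefix.
        System.out.println(salt("event-2018-05-23T07:24:31"));
    }
}
```

The trade-off is that scans must fan out over all buckets, so this buys write spread at the cost of scan simplicity.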

> On May 22, 2018, at 9:52 PM, Kang Minwoo <minwoo.k...@outlook.com> wrote:
> 
> I think the HBase flush is too slow, so the memstore reached its upper limit.
> 
> The flush took about 30 minutes; I don't know why it is taking so long.
> 
> Best regards,
> Minwoo Kang
> 
> ________________________________________
> From: 张铎(Duo Zhang) <palomino...@gmail.com>
> Sent: Wednesday, May 23, 2018 11:37
> To: hbase-user
> Subject: Re: can not write to HBase
> 
> org.apache.hadoop.hbase.RegionTooBusyException:
> org.apache.hadoop.hbase.RegionTooBusyException:
> Above memstore limit, regionName={region}, server={server}, memstoreSize=2600502128, blockingMemStoreSize=2600468480
> 
> This means that you are writing too fast and the memstore has reached its
> upper limit. Are flushes and compactions healthy on the RS side?
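> The blocking threshold is hbase.hregion.memstore.flush.size multiplied by hbase.hregion.memstore.block.multiplier. As a sanity check, blockingMemStoreSize=2600468480 is exactly what a 620 MiB flush size with the default multiplier of 4 would give; the 620 MiB figure is inferred from the log, not read from your actual config:

```java
// Sanity check of the memstore blocking threshold in the exception.
// The 620 MiB flush size is a guess inferred from the logged number,
// not something read from the actual hbase-site.xml.
public class MemstoreLimit {
    public static void main(String[] args) {
        long flushSize = 620L * 1024 * 1024; // hbase.hregion.memstore.flush.size (guess)
        int blockMultiplier = 4;             // hbase.hregion.memstore.block.multiplier (1.x default)
        long blockingLimit = flushSize * blockMultiplier;
        System.out.println(blockingLimit);   // 2600468480, matching blockingMemStoreSize
    }
}
```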
> 
> 2018-05-23 10:20 GMT+08:00 Kang Minwoo <minwoo.k...@outlook.com>:
> 
>> attach client exception and stacktrace.
>> 
>> I've looked into it more.
>> The cause seems to be that a flush on the Region Server takes 1290 seconds.
>> 
>> 2018-05-23T07:24:31.202 [INFO] Call exception, tries=34, retries=35, started=513393 ms ago, cancelled=false, msg=row '{row}' on table '{table}' at region={region}, hostname={host}, seqNum=155455658
>> 2018-05-23T07:24:31.208 [ERROR]
>> java.lang.RuntimeException: com.google.protobuf.ServiceException: Error calling method MultiRowMutationService.MutateRows
>>        at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[stormjar.jar:?]
>>        at ...
>>        at org.apache.storm.daemon.executor$fn__8058$tuple_action_fn__8060.invoke(executor.clj:731) [storm-core-1.0.2.jar:1.0.2]
>>        at org.apache.storm.daemon.executor$mk_task_receiver$fn__7979.invoke(executor.clj:464) [storm-core-1.0.2.jar:1.0.2]
>>        at org.apache.storm.disruptor$clojure_handler$reify__7492.onEvent(disruptor.clj:40) [storm-core-1.0.2.jar:1.0.2]
>>        at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) [storm-core-1.0.2.jar:1.0.2]
>>        at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) [storm-core-1.0.2.jar:1.0.2]
>>        at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) [storm-core-1.0.2.jar:1.0.2]
>>        at org.apache.storm.daemon.executor$fn__8058$fn__8071$fn__8124.invoke(executor.clj:850) [storm-core-1.0.2.jar:1.0.2]
>>        at org.apache.storm.util$async_loop$fn__624.invoke(util.clj:484) [storm-core-1.0.2.jar:1.0.2]
>>        at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
>>        at java.lang.Thread.run(Thread.java:745) [?:1.7.0_80]
>> Caused by: com.google.protobuf.ServiceException: Error calling method MultiRowMutationService.MutateRows
>>        at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:75) ~[stormjar.jar:?]
>>        at org.apache.hadoop.hbase.protobuf.generated.MultiRowMutationProtos$MultiRowMutationService$BlockingStub.mutateRows(MultiRowMutationProtos.java:2149) ~[stormjar.jar:?]
>>        at ...
>>        ... 13 more
>> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=35, exceptions:
>> Wed May 23 07:15:57 KST 2018, RpcRetryingCaller{globalStartTime=1527027357808, pause=100, retries=35}, org.apache.hadoop.hbase.RegionTooBusyException: org.apache.hadoop.hbase.RegionTooBusyException: Above memstore limit, regionName={region}, server={server}, memstoreSize=2600502128, blockingMemStoreSize=2600468480
>>        at org.apache.hadoop.hbase.regionserver.HRegion.checkResources(HRegion.java:3649)
>>        at org.apache.hadoop.hbase.regionserver.HRegion.processRowsWithLocks(HRegion.java:6935)
>>        at org.apache.hadoop.hbase.regionserver.HRegion.mutateRowsWithLocks(HRegion.java:6885)
>>        at org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint.mutateRows(MultiRowMutationEndpoint.java:116)
>>        at org.apache.hadoop.hbase.protobuf.generated.MultiRowMutationProtos$MultiRowMutationService.callMethod(MultiRowMutationProtos.java:2053)
>>        at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7875)
>>        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2008)
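>> For what it's worth, the "tries=34 ... started=513393 ms ago" line is consistent with the client's retry backoff table (HConstants.RETRY_BACKOFF in HBase 1.x, assuming that is the version in play). A rough sketch of the pure sleep time for 34 tries at pause=100 ms:

```java
// Sketch of HBase 1.x client retry backoff (HConstants.RETRY_BACKOFF),
// estimating total sleep for a given pause and number of tries.
// Assumes the 1.x backoff table; jitter and per-call RPC time are ignored.
public class RetryBudget {
    static final int[] RETRY_BACKOFF = {1, 2, 3, 5, 10, 20, 40, 100, 100, 100, 100, 200, 200};

    static long totalSleepMs(long pauseMs, int tries) {
        long total = 0;
        for (int i = 0; i < tries; i++) {
            // Beyond the table's end, the last multiplier (200) keeps applying
            int mult = RETRY_BACKOFF[Math.min(i, RETRY_BACKOFF.length - 1)];
            total += pauseMs * mult;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(totalSleepMs(100, 34)); // 508100
    }
}
```

>> So roughly 508 of the 513 seconds is backoff sleep; the remainder is the failed calls themselves.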
>> 
>> 
>> Best regards,
>> Minwoo Kang
>> 
>> ________________________________________
>> 보낸 사람: 张铎(Duo Zhang) <palomino...@gmail.com>
>> 보낸 날짜: 2018년 5월 23일 수요일 09:22
>> 받는 사람: hbase-user
>> 제목: Re: can not write to HBase
>> 
>> What is the exception? And the stacktrace?
>> 
>> 2018-05-23 8:17 GMT+08:00 Kang Minwoo <minwoo.k...@outlook.com>:
>> 
>>> Hello, Users
>>> 
>>> My HBase client stops working after printing the following logs.
>>> 
>>> Call exception, tries=23, retries=35, started=291277 ms ago, cancelled=false, msg=row '{row}' on table '{table}' at region={region}, hostname={hostname}, seqNum=100353531
>>> 
>>> There are no special logs in the Master and Region Servers.
>>> Is something wrong on the client side?
>>> 
>>> Best regards,
>>> Minwoo Kang
>>> 
>> 
