Hi Duo, 

To be honest, we only run a single master, so there is no backup master we 
could have connected to by mistake.
After the attempt that threw the NPE we ran the HBCK2 tool again; this time 
no NPE was thrown, but nothing else happened either - the log looked just 
like the one I sent you, only without the exception stack trace. The problem 
also remains the same.

I think we will just go back to our snapshot and try the migration again 
from the start.
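For the record, here is a small helper I put together (my own sketch, not part of any HBase tooling) to pull the readable server name out of a raw dump of the /hbase/meta-region-server znode. It assumes the framing HBase's ZKUtil writes: a 0xFF magic byte, a 4-byte big-endian id length, the id bytes, then a "PBUF"-prefixed protobuf payload; the ServerName is visible as ASCII inside that payload.

```python
def extract_server_hint(raw: bytes) -> str:
    """Best-effort decode of an HBase znode payload.

    Strips the assumed ZKUtil metadata framing (0xFF magic byte,
    4-byte big-endian id length, id bytes) and the 'PBUF' protobuf
    magic, then maps non-printable bytes to '.' so the encoded
    ServerName (host,port,startcode) stands out.
    """
    data = raw
    if data and data[0] == 0xFF:
        id_len = int.from_bytes(data[1:5], "big")
        data = data[5 + id_len:]
    if data.startswith(b"PBUF"):
        data = data[4:]
    return "".join(chr(b) if 32 <= b < 127 else "." for b in data)
```

Run against the bytes fetched with a ZooKeeper client, the host,port,startcode triple should be visible in the output even without decoding the protobuf properly.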



> Am 23.04.2024 um 09:36 schrieb 张铎(Duo Zhang) <palomino...@gmail.com>:
> 
> Strange, I checked the code, it seems we get NPE on this line
> 
> https://github.com/apache/hbase/blob/4d7ce1aac724fbf09e526fc422b5a11e530c32f0/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java#L2872
> 
> Could you please confirm that you are connecting to the correct active
> master, the one which is hanging? It seems that you are connecting to
> the backup master...
> 
> Thanks.
> 
> 张铎(Duo Zhang) <palomino...@gmail.com> 于2024年4月23日周二 15:31写道:
>> 
>> Ah, an NPE usually means a code bug, so there is no simple way to fix
>> it; we need to take a deep look at the code :(
>> 
>> Sorry.
>> 
>> Udo Offermann <udo.offerm...@zfabrik.de> 于2024年4月22日周一 15:32写道:
>>> 
>>> Unfortunately not.
>>> I’ve found the node hosting the meta region and was able to run hbck2 
>>> scheduleRecoveries using hbase-operator-tools-1.2.0.
>>> The tool however stops with an NPE:
>>> 
>>> 09:22:00.532 [main] WARN  org.apache.hadoop.util.NativeCodeLoader - Unable 
>>> to load native-hadoop library for your platform... using builtin-java 
>>> classes where applicable
>>> 09:22:00.703 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation 
>>> - hbase.client.pause.cqtbe is deprecated. Instead, use 
>>> hbase.client.pause.server.overloaded
>>> 09:22:00.765 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56,
>>>  built on 2023-10-05 10:34 UTC
>>> 09:22:00.765 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:host.name=HBaseMaster.gmd9.intern
>>> 09:22:00.765 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:java.version=1.8.0_402
>>> 09:22:00.766 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:java.vendor=Red Hat, Inc.
>>> 09:22:00.766 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.402.b06-2.el8.x86_64/jre
>>> 09:22:00.766 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:java.class.path=hbase-operator-tools-1.2.0/hbase-hbck2/hbase-hbck2-1.2.0.jar:hbase/conf:/opt/seritrack/tt/jdk/lib/tools.jar:/opt/seritrack/tt/nosql/hbase:/opt/seritrack/tt/nosql/hbase/lib/shaded-clients/hbase-shaded-mapreduce-2.5.7.jar:/opt/seritrack/tt/nosql/hbase/lib/client-facing-thirdparty/audience-annotations-0.13.0.jar:/opt/seritrack/tt/nosql/hbase/lib/client-facing-thirdparty/commons-logging-1.2.jar:/opt/seritrack/tt/nosql/hbase/lib/client-facing-thirdparty/htrace-core4-4.1.0-incubating.jar:/opt/seritrack/tt/nosql/hbase/lib/client-facing-thirdparty/jcl-over-slf4j-1.7.33.jar:/opt/seritrack/tt/nosql/hbase/lib/client-facing-thirdparty/jul-to-slf4j-1.7.33.jar:/opt/seritrack/tt/nosql/hbase/lib/client-facing-thirdparty/opentelemetry-api-1.15.0.jar:/opt/seritrack/tt/nosql/hbase/lib/client-facing-thirdparty/opentelemetry-context-1.15.0.jar:/opt/seritrack/tt/nosql/hbase/lib/client-facing-thirdparty/opentelemetry-semconv-1.15.0-alpha.jar:/opt/seritrack/tt/nosql/hbase/lib/client-facing-thirdparty/slf4j-api-1.7.33.jar:/opt/seritrack/tt/nosql/hbase/lib/shaded-clients/hbase-shaded-client-2.5.7.jar:/opt/seritrack/tt/nosql/pl_nosql_ext/libs/pl_nosql_ext-3.0.jar:/opt/seritrack/tt/nosql/hbase/lib/client-facing-thirdparty/log4j-1.2-api-2.17.2.jar:/opt/seritrack/tt/nosql/hbase/lib/client-facing-thirdparty/log4j-api-2.17.2.jar:/opt/seritrack/tt/nosql/hbase/lib/client-facing-thirdparty/log4j-core-2.17.2.jar:/opt/seritrack/tt/nosql/hbase/lib/client-facing-thirdparty/log4j-slf4j-impl-2.17.2.jar:/opt/seritrack/tt/prometheus_exporters/jmx_exporter/jmx_prometheus_javaagent.jar
>>> 09:22:00.766 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:java.library.path=/opt/seritrack/tt/nosql/hadoop/lib/native
>>> 09:22:00.766 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:java.io.tmpdir=/tmp
>>> 09:22:00.766 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:java.compiler=<NA>
>>> 09:22:00.766 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:os.name=Linux
>>> 09:22:00.766 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:os.arch=amd64
>>> 09:22:00.766 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:os.version=4.18.0-513.18.1.el8_9.x86_64
>>> 09:22:00.766 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:user.name=seritrack
>>> 09:22:00.766 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:user.home=/opt/seritrack
>>> 09:22:00.766 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:user.dir=/opt/seritrack/tt/nosql_3.0
>>> 09:22:00.766 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:os.memory.free=275MB
>>> 09:22:00.766 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:os.memory.max=2966MB
>>> 09:22:00.766 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Client 
>>> environment:os.memory.total=361MB
>>> 09:22:00.771 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ZooKeeper - Initiating 
>>> client connection, connectString=HBaseMaster:2181 sessionTimeout=90000 
>>> watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$45/1091799416@aed32c5
>>> 09:22:00.774 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.common.X509Util - 
>>> Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable 
>>> client-initiated TLS renegotiation
>>> 09:22:00.777 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxnSocket - 
>>> jute.maxbuffer value is 1048575 Bytes
>>> 09:22:00.785 [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f] INFO  
>>> org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - 
>>> zookeeper.request.timeout value is 0. feature enabled=false
>>> 09:22:00.793 
>>> [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f-SendThread(HBaseMaster:2181)] 
>>> INFO  org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - 
>>> Opening socket connection to server HBaseMaster/10.21.204.230:2181.
>>> 09:22:00.793 
>>> [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f-SendThread(HBaseMaster:2181)] 
>>> INFO  org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - SASL 
>>> config status: Will not attempt to authenticate using SASL (unknown error)
>>> 09:22:00.797 
>>> [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f-SendThread(HBaseMaster:2181)] 
>>> INFO  org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - 
>>> Socket connection established, initiating session, client: 
>>> /10.21.204.230:41072, server: HBaseMaster/10.21.204.230:2181
>>> 09:22:00.801 
>>> [ReadOnlyZKClient-HBaseMaster:2181@0x7d9f158f-SendThread(HBaseMaster:2181)] 
>>> INFO  org.apache.hadoop.hbase.shaded.org.apache.zookeeper.ClientCnxn - 
>>> Session establishment complete on server HBaseMaster/10.21.204.230:2181, 
>>> session id = 0x10009a4f379001e, negotiated timeout = 90000
>>> -1
>>> Exception in thread "main" java.io.IOException: 
>>> org.apache.hbase.thirdparty.com.google.protobuf.ServiceException: 
>>> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): 
>>> java.io.IOException
>>>        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:479)
>>>        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>>>        at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
>>>        at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>> Caused by: java.lang.NullPointerException
>>>        at 
>>> org.apache.hadoop.hbase.master.MasterRpcServices.shouldSubmitSCP(MasterRpcServices.java:2872)
>>>        at 
>>> org.apache.hadoop.hbase.master.MasterRpcServices.scheduleServerCrashProcedure(MasterRpcServices.java:2600)
>>>        at 
>>> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$HbckService$2.callBlockingMethod(MasterProtos.java)
>>>        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:415)
>>>        ... 3 more
>>> 
>>>        at 
>>> org.apache.hadoop.hbase.client.HBaseHbck.scheduleServerCrashProcedures(HBaseHbck.java:198)
>>>        at 
>>> org.apache.hadoop.hbase.client.Hbck.scheduleServerCrashProcedure(Hbck.java:128)
>>>        at org.apache.hbase.HBCK2.scheduleRecoveries(HBCK2.java:418)
>>>        at org.apache.hbase.HBCK2.doCommandLine(HBCK2.java:960)
>>>        at org.apache.hbase.HBCK2.run(HBCK2.java:830)
>>>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>>>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>>>        at org.apache.hbase.HBCK2.main(HBCK2.java:1145)
>>> Caused by: 
>>> org.apache.hbase.thirdparty.com.google.protobuf.ServiceException: 
>>> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): 
>>> java.io.IOException
>>>        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:479)
>>>        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>>>        at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
>>>        at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>> Caused by: java.lang.NullPointerException
>>>        at 
>>> org.apache.hadoop.hbase.master.MasterRpcServices.shouldSubmitSCP(MasterRpcServices.java:2872)
>>>        at 
>>> org.apache.hadoop.hbase.master.MasterRpcServices.scheduleServerCrashProcedure(MasterRpcServices.java:2600)
>>>        at 
>>> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$HbckService$2.callBlockingMethod(MasterProtos.java)
>>>        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:415)
>>>        ... 3 more
>>> 
>>>        at 
>>> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:340)
>>>        at 
>>> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$200(AbstractRpcClient.java:92)
>>>        at 
>>> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:595)
>>>        at 
>>> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$HbckService$BlockingStub.scheduleServerCrashProcedure(MasterProtos.java)
>>>        at 
>>> org.apache.hadoop.hbase.client.HBaseHbck.scheduleServerCrashProcedures(HBaseHbck.java:190)
>>>        ... 7 more
>>> Caused by: 
>>> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(java.io.IOException): 
>>> java.io.IOException
>>>        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:479)
>>>        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>>>        at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
>>>        at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>> Caused by: java.lang.NullPointerException
>>>        at 
>>> org.apache.hadoop.hbase.master.MasterRpcServices.shouldSubmitSCP(MasterRpcServices.java:2872)
>>>        at 
>>> org.apache.hadoop.hbase.master.MasterRpcServices.scheduleServerCrashProcedure(MasterRpcServices.java:2600)
>>>        at 
>>> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$HbckService$2.callBlockingMethod(MasterProtos.java)
>>>        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:415)
>>>        ... 3 more
>>> 
>>>        at 
>>> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:388)
>>>        at 
>>> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:92)
>>>        at 
>>> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:425)
>>>        at 
>>> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:420)
>>>        at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:114)
>>>        at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:129)
>>>        at 
>>> org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:199)
>>>        at 
>>> org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:220)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>>>        at 
>>> org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>>>        at java.lang.Thread.run(Thread.java:750)
>>> 
>>> 
>>> 
>>> 
>>>> Am 20.04.2024 um 15:53 schrieb 张铎(Duo Zhang) <palomino...@gmail.com>:
>>>> 
>>>> OK, it was waitForMetaOnline.
>>>> 
>>>> Maybe the problem is that you did have some procedures before
>>>> upgrading, like a ServerCrashProcedure, but then you deleted all the
>>>> procedure WALs, so the ServerCrashProcedure is also gone and meta can
>>>> never come online.
>>>> 
>>>> Please check the /hbase/meta-region-server znode on ZooKeeper and dump
>>>> its content; it is protobuf based, but you should still be able to see
>>>> the encoded name of the server which hosts the meta region.
>>>> 
>>>> Then use HBCK2 to schedule an SCP for that region server and see if it
>>>> fixes the problem.
>>>> 
>>>> https://github.com/apache/hbase-operator-tools/blob/master/hbase-hbck2/README.md
>>>> 
>>>> This is the documentation for HBCK2; you should use the
>>>> scheduleRecoveries command.
>>>> 
>>>> Hope this could fix your problem.
>>>> 
>>>> Thread 92 (master/masterserver:16000:becomeActiveMaster):
>>>> State: TIMED_WAITING
>>>> Blocked count: 165
>>>> Waited count: 404
>>>> Stack:
>>>>   java.lang.Thread.sleep(Native Method)
>>>>   org.apache.hadoop.hbase.util.Threads.sleep(Threads.java:125)
>>>>   org.apache.hadoop.hbase.master.HMaster.isRegionOnline(HMaster.java:1358)
>>>> 
>>>> org.apache.hadoop.hbase.master.HMaster.waitForMetaOnline(HMaster.java:1328)
>>>> 
>>>> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1069)
>>>> 
>>>> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2405)
>>>>   org.apache.hadoop.hbase.master.HMaster.lambda$null$0(HMaster.java:565)
>>>> 
>>>> org.apache.hadoop.hbase.master.HMaster$$Lambda$265/1598878738.run(Unknown
>>>> Source)
>>>>   org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:187)
>>>>   org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:177)
>>>>   org.apache.hadoop.hbase.master.HMaster.lambda$run$1(HMaster.java:562)
>>>> 
>>>> org.apache.hadoop.hbase.master.HMaster$$Lambda$264/1129144214.run(Unknown
>>>> Source)
>>>>   java.lang.Thread.run(Thread.java:750)
>>>> 
>>>> Udo Offermann <udo.offerm...@zfabrik.de <mailto:udo.offerm...@zfabrik.de>> 
>>>> 于2024年4月20日周六 21:13写道:
>>>>> 
>>>>> Master status for masterserver.gmd9.intern,16000,1713515965162 as of Fri
>>>>> Apr 19 10:55:22 CEST 2024
>>>>> 
>>>>> 
>>>>> Version Info:
>>>>> ===========================================================
>>>>> HBase 2.5.7
>>>>> Source code repository
>>>>> git://buildbox.localdomain/home/apurtell/tmp/RM/hbase
>>>>> revision=6788f98356dd70b4a7ff766ea7a8298e022e7b95
>>>>> Compiled by apurtell on Thu Dec 14 15:59:16 PST 2023
>>>>> From source with checksum
>>>>> 1501d7fdf72398791ee335a229d099fc972cea7c2a952da7622eb087ddf975361f107cbbbee5d0ad6f603466e9afa1f4fd242ffccbd4371eb0b56059bb3b5402
>>>>> Hadoop 2.10.2
>>>>> Source code repository Unknown
>>>>> revision=965fd380006fa78b2315668fbc7eb432e1d8200f
>>>>> Compiled by ubuntu on 2022-05-25T00:12Z
>>>>> 
>>>>> 
>>>>> Tasks:
>>>>> ===========================================================
>>>>> Task: Master startup
>>>>> Status: RUNNING:Starting assignment manager
>>>>> Running for 954s
>>>>> 
>>>>> Task: Flushing master:store,,1.1595e783b53d99cd5eef43b6debb2682.
>>>>> Status: COMPLETE:Flush successful flush 
>>>>> result:CANNOT_FLUSH_MEMSTORE_EMPTY,
>>>>> failureReason:Nothing to flush,flush seq id14
>>>>> Completed 49s ago
>>>>> Ran for 0s
>>>>> 
>>>>> Task: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=16000
>>>>> Status: WAITING:Waiting for a call
>>>>> Running for 951s
>>>>> 
>>>>> Task: RpcServer.priority.RWQ.Fifo.write.handler=1,queue=0,port=16000
>>>>> Status: WAITING:Waiting for a call
>>>>> Running for 951s
>>>>> 
>>>>> 
>>>>> 
>>>>> Servers:
>>>>> ===========================================================
>>>>> servername1ct.gmd9.intern,16020,1713514863737: requestsPerSecond=0.0,
>>>>> numberOfOnlineRegions=0, usedHeapMB=37.0MB, maxHeapMB=2966.0MB,
>>>>> numberOfStores=0, numberOfStorefiles=0, storeRefCount=0,
>>>>> maxCompactedStoreFileRefCount=0, storefileUncompressedSizeMB=0,
>>>>> storefileSizeMB=0, memstoreSizeMB=0, readRequestsCount=0,
>>>>> filteredReadRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0,
>>>>> totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0,
>>>>> currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[]
>>>>> servername2ct.gmd9.intern,16020,1713514925960: requestsPerSecond=0.0,
>>>>> numberOfOnlineRegions=0, usedHeapMB=20.0MB, maxHeapMB=2966.0MB,
>>>>> numberOfStores=0, numberOfStorefiles=0, storeRefCount=0,
>>>>> maxCompactedStoreFileRefCount=0, storefileUncompressedSizeMB=0,
>>>>> storefileSizeMB=0, memstoreSizeMB=0, readRequestsCount=0,
>>>>> filteredReadRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0,
>>>>> totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0,
>>>>> currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[]
>>>>> servername3ct.gmd9.intern,16020,1713514937151: requestsPerSecond=0.0,
>>>>> numberOfOnlineRegions=0, usedHeapMB=67.0MB, maxHeapMB=2966.0MB,
>>>>> numberOfStores=0, numberOfStorefiles=0, storeRefCount=0,
>>>>> maxCompactedStoreFileRefCount=0, storefileUncompressedSizeMB=0,
>>>>> storefileSizeMB=0, memstoreSizeMB=0, readRequestsCount=0,
>>>>> filteredReadRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0,
>>>>> totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0,
>>>>> currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[]
>>>>> servername4ct.gmd9.intern,16020,1713514968019: requestsPerSecond=0.0,
>>>>> numberOfOnlineRegions=0, usedHeapMB=24.0MB, maxHeapMB=2966.0MB,
>>>>> numberOfStores=0, numberOfStorefiles=0, storeRefCount=0,
>>>>> maxCompactedStoreFileRefCount=0, storefileUncompressedSizeMB=0,
>>>>> storefileSizeMB=0, memstoreSizeMB=0, readRequestsCount=0,
>>>>> filteredReadRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0,
>>>>> totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0,
>>>>> currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[]
>>>>> servername5ct.gmd9.intern,16020,1713514979294: requestsPerSecond=0.0,
>>>>> numberOfOnlineRegions=0, usedHeapMB=58.0MB, maxHeapMB=2966.0MB,
>>>>> numberOfStores=0, numberOfStorefiles=0, storeRefCount=0,
>>>>> maxCompactedStoreFileRefCount=0, storefileUncompressedSizeMB=0,
>>>>> storefileSizeMB=0, memstoreSizeMB=0, readRequestsCount=0,
>>>>> filteredReadRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0,
>>>>> totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0,
>>>>> currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[]
>>>>> servername6ct.gmd9.intern,16020,1713514994770: requestsPerSecond=0.0,
>>>>> numberOfOnlineRegions=0, usedHeapMB=31.0MB, maxHeapMB=2966.0MB,
>>>>> numberOfStores=0, numberOfStorefiles=0, storeRefCount=0,
>>>>> maxCompactedStoreFileRefCount=0, storefileUncompressedSizeMB=0,
>>>>> storefileSizeMB=0, memstoreSizeMB=0, readRequestsCount=0,
>>>>> filteredReadRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0,
>>>>> totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0,
>>>>> currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[]
>>>>> 
>>>>> 
>>>>> Regions-in-transition:
>>>>> ===========================================================
>>>>> 
>>>>> 
>>>>> Executors:
>>>>> ===========================================================
>>>>> Status for executor:
>>>>> Executor-4-MASTER_META_SERVER_OPERATIONS-master/masterserver:16000
>>>>> =======================================
>>>>> 0 events queued, 0 running
>>>>> Status for executor:
>>>>> Executor-6-MASTER_SNAPSHOT_OPERATIONS-master/masterserver:16000
>>>>> =======================================
>>>>> 0 events queued, 0 running
>>>>> Status for executor:
>>>>> Executor-3-MASTER_SERVER_OPERATIONS-master/masterserver:16000
>>>>> =======================================
>>>>> 0 events queued, 0 running
>>>>> Status for executor: Executor-5-M_LOG_REPLAY_OPS-master/masterserver:16000
>>>>> =======================================
>>>>> 0 events queued, 0 running
>>>>> Status for executor:
>>>>> Executor-2-MASTER_CLOSE_REGION-master/masterserver:16000
>>>>> =======================================
>>>>> 0 events queued, 0 running
>>>>> Status for executor:
>>>>> Executor-7-MASTER_MERGE_OPERATIONS-master/masterserver:16000
>>>>> =======================================
>>>>> 0 events queued, 0 running
>>>>> Status for executor:
>>>>> Executor-8-MASTER_TABLE_OPERATIONS-master/masterserver:16000
>>>>> =======================================
>>>>> 0 events queued, 0 running
>>>>> Status for executor:
>>>>> Executor-1-MASTER_OPEN_REGION-master/masterserver:16000
>>>>> =======================================
>>>>> 0 events queued, 0 running
>>>>> 
>>>>> 
>>>>> Stacks:
>>>>> ===========================================================
>>>>> Process Thread Dump:
>>>>> 131 active threads
>>>>> Thread 186 (WAL-Archive-0):
>>>>> State: WAITING
>>>>> Blocked count: 5
>>>>> Waited count: 11
>>>>> Waiting on
>>>>> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@42f44d41
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>> 
>>>>> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>> 
>>>>> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>> 
>>>>> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
>>>>> 
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>>>>> 
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 185 (Close-WAL-Writer-0):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 2
>>>>> Waited count: 6
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>> 
>>>>> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
>>>>> 
>>>>> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
>>>>>   java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
>>>>> 
>>>>> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
>>>>> 
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>>>>> 
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 152 (Session-Scheduler-3bc4ef12-1):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>> 
>>>>> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>> 
>>>>> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
>>>>> 
>>>>> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
>>>>> 
>>>>> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
>>>>> 
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>>>>> 
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 151
>>>>> (master/masterserver:16000:becomeActiveMaster-HFileCleaner.small.0-1713515973400):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on
>>>>> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@58626ec5
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>> 
>>>>> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>> 
>>>>> java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549)
>>>>> 
>>>>> org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:285)
>>>>> 
>>>>> org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:269)
>>>>> Thread 150
>>>>> (master/masterserver:16000:becomeActiveMaster-HFileCleaner.large.0-1713515973400):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on
>>>>> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@18916420
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>> 
>>>>> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101)
>>>>> 
>>>>> org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:285)
>>>>> 
>>>>> org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:254)
>>>>> Thread 149 (snapshot-hfile-cleaner-cache-refresher):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 4
>>>>> Waited count: 11
>>>>> Stack:
>>>>>   java.lang.Object.wait(Native Method)
>>>>>   java.util.TimerThread.mainLoop(Timer.java:552)
>>>>>   java.util.TimerThread.run(Timer.java:505)
>>>>> Thread 148 (master/masterserver:16000.Chore.1):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 2
>>>>> Waited count: 10
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>> 
>>>>> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>> 
>>>>> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
>>>>> 
>>>>> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
>>>>> 
>>>>> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
>>>>> 
>>>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>>>>> 
>>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 147 (OldWALsCleaner-1):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@7a6a3b7e
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.master.cleaner.LogCleaner.deleteFile(LogCleaner.java:172)
>>>>>   org.apache.hadoop.hbase.master.cleaner.LogCleaner.lambda$createOldWalsCleaner$1(LogCleaner.java:152)
>>>>>   org.apache.hadoop.hbase.master.cleaner.LogCleaner$$Lambda$494/556458560.run(Unknown Source)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 146 (OldWALsCleaner-0):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@7a6a3b7e
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.master.cleaner.LogCleaner.deleteFile(LogCleaner.java:172)
>>>>>   org.apache.hadoop.hbase.master.cleaner.LogCleaner.lambda$createOldWalsCleaner$1(LogCleaner.java:152)
>>>>>   org.apache.hadoop.hbase.master.cleaner.LogCleaner$$Lambda$494/556458560.run(Unknown Source)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 139 (PEWorker-16):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 16
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:165)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:147)
>>>>>   org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2113)
>>>>> Thread 138 (PEWorker-15):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 16
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:165)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:147)
>>>>>   org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2113)
>>>>> Thread 137 (PEWorker-14):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 16
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:165)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:147)
>>>>>   org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2113)
>>>>> Thread 136 (PEWorker-13):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 16
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:165)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:147)
>>>>>   org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2113)
>>>>> Thread 135 (PEWorker-12):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 16
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:165)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:147)
>>>>>   org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2113)
>>>>> Thread 134 (PEWorker-11):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 17
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:165)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:147)
>>>>>   org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2113)
>>>>> Thread 133 (PEWorker-10):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 16
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:165)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:147)
>>>>>   org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2113)
>>>>> Thread 132 (PEWorker-9):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 17
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:165)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:147)
>>>>>   org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2113)
>>>>> Thread 131 (PEWorker-8):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 18
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:165)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:147)
>>>>>   org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2113)
>>>>> Thread 130 (PEWorker-7):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 16
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:165)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:147)
>>>>>   org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2113)
>>>>> Thread 129 (PEWorker-6):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 18
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:165)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:147)
>>>>>   org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2113)
>>>>> Thread 128 (PEWorker-5):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 18
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:165)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:147)
>>>>>   org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2113)
>>>>> Thread 127 (PEWorker-4):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 17
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:165)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:147)
>>>>>   org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2113)
>>>>> Thread 126 (PEWorker-3):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 18
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:165)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:147)
>>>>>   org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2113)
>>>>> Thread 125 (PEWorker-2):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 21
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:165)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:147)
>>>>>   org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2113)
>>>>> Thread 124 (PEWorker-1):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 16
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:165)
>>>>>   org.apache.hadoop.hbase.procedure2.AbstractProcedureScheduler.poll(AbstractProcedureScheduler.java:147)
>>>>>   org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:2113)
>>>>> Thread 123 (WorkerMonitor):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 191
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   java.util.concurrent.DelayQueue.poll(DelayQueue.java:273)
>>>>>   org.apache.hadoop.hbase.procedure2.util.DelayedUtil.takeWithoutInterrupt(DelayedUtil.java:81)
>>>>>   org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:56)
>>>>> Thread 122 (ProcExecTimeout):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 64
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   java.util.concurrent.DelayQueue.poll(DelayQueue.java:268)
>>>>>   org.apache.hadoop.hbase.procedure2.util.DelayedUtil.takeWithoutInterrupt(DelayedUtil.java:81)
>>>>>   org.apache.hadoop.hbase.procedure2.TimeoutExecutorThread.run(TimeoutExecutorThread.java:56)
>>>>> Thread 145 (ActiveMasterInitializationMonitor-1713515973319):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 2
>>>>> Stack:
>>>>>   java.lang.Thread.sleep(Native Method)
>>>>>   org.apache.hadoop.hbase.master.MasterInitializationMonitor.run(MasterInitializationMonitor.java:63)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 143 (SnapshotHandlerChoreCleaner):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 95
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
>>>>>   java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
>>>>>   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
>>>>>   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>>>>>   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 142 (normalizer-worker-0):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@1ac17b12
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   org.apache.hadoop.hbase.master.normalizer.RegionNormalizerWorkQueue.take(RegionNormalizerWorkQueue.java:146)
>>>>>   org.apache.hadoop.hbase.master.normalizer.RegionNormalizerWorker.run(RegionNormalizerWorker.java:191)
>>>>>   java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>>>>   java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>>>   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>>>>   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 141 (masterserver:16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@20c47452
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2195)
>>>>>   org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2217)
>>>>>   org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:109)
>>>>>   org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2157)
>>>>> Thread 140 (ProcedureDispatcherTimeoutThread):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 48
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   java.util.concurrent.DelayQueue.poll(DelayQueue.java:259)
>>>>>   org.apache.hadoop.hbase.procedure2.util.DelayedUtil.takeWithoutInterrupt(DelayedUtil.java:81)
>>>>>   org.apache.hadoop.hbase.procedure2.RemoteProcedureDispatcher$TimeoutExecutorThread.run(RemoteProcedureDispatcher.java:320)
>>>>> Thread 121 (Idle-Rpc-Conn-Sweeper-pool-0):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 8
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
>>>>>   java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
>>>>>   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
>>>>>   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>>>>>   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 119 (master:store-Flusher):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 3
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
>>>>>   org.apache.hadoop.hbase.master.region.MasterRegionFlusherAndCompactor.flushLoop(MasterRegionFlusherAndCompactor.java:193)
>>>>>   org.apache.hadoop.hbase.master.region.MasterRegionFlusherAndCompactor$$Lambda$433/1001275970.run(Unknown Source)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 114 (AsyncFSWAL-0-hdfs://masterserver:9000/hbase/MasterData):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 3
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@4a093bfa
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
>>>>>   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>>>>>   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 113 (Connector-Scheduler-5ec5ea63-1):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 36
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
>>>>>   java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
>>>>>   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
>>>>>   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>>>>>   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 112 (prometheus-http-1-1):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 64
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@6f7dcfc6
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
>>>>>   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>>>>>   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 111 (RS-EventLoopGroup-1-7):
>>>>> State: RUNNABLE
>>>>> Blocked count: 1
>>>>> Waited count: 26
>>>>> Stack:
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:312)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:376)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 110 (RS-EventLoopGroup-1-6):
>>>>> State: RUNNABLE
>>>>> Blocked count: 0
>>>>> Waited count: 29
>>>>> Stack:
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:316)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:373)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 109 (RS-EventLoopGroup-1-5):
>>>>> State: RUNNABLE
>>>>> Blocked count: 0
>>>>> Waited count: 54
>>>>> Stack:
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:312)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:376)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 108 (RS-EventLoopGroup-1-4):
>>>>> State: RUNNABLE
>>>>> Blocked count: 98
>>>>> Waited count: 38
>>>>> Stack:
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:316)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:373)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 107 (RS-EventLoopGroup-1-3):
>>>>> State: RUNNABLE
>>>>> Blocked count: 130
>>>>> Waited count: 39
>>>>> Stack:
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:316)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:373)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 106 (RS-EventLoopGroup-1-2):
>>>>> State: RUNNABLE
>>>>> Blocked count: 51
>>>>> Waited count: 7
>>>>> Stack:
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:316)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:373)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 105 (master:store-WAL-Roller):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 3
>>>>> Waited count: 104
>>>>> Stack:
>>>>>   java.lang.Object.wait(Native Method)
>>>>>   org.apache.hadoop.hbase.wal.AbstractWALRoller.run(AbstractWALRoller.java:179)
>>>>> Thread 103 (LeaseRenewer:seritrack@masterserver:9000):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 31
>>>>> Waited count: 1016
>>>>> Stack:
>>>>>   java.lang.Thread.sleep(Native Method)
>>>>>   org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:412)
>>>>>   org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76)
>>>>>   org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:308)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 100 (org.apache.hadoop.hdfs.PeerCache@2af0ac32):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 319
>>>>> Stack:
>>>>>   java.lang.Thread.sleep(Native Method)
>>>>>   org.apache.hadoop.hdfs.PeerCache.run(PeerCache.java:253)
>>>>>   org.apache.hadoop.hdfs.PeerCache.access$000(PeerCache.java:46)
>>>>>   org.apache.hadoop.hdfs.PeerCache$1.run(PeerCache.java:124)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 99 (IPC Parameter Sending Thread #0):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 1
>>>>> Waited count: 204
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
>>>>>   java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
>>>>>   java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
>>>>>   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
>>>>>   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>>>>>   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 97 (master/masterserver:16000:becomeActiveMaster-MemStoreChunkPool Statistics):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 4
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
>>>>>   java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
>>>>>   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
>>>>>   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>>>>>   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 95 (master/masterserver:16000:becomeActiveMaster-MemStoreChunkPool Statistics):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 4
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
>>>>>   java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
>>>>>   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
>>>>>   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>>>>>   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 93 (Monitor thread for TaskMonitor):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 96
>>>>> Stack:
>>>>>   java.lang.Thread.sleep(Native Method)
>>>>>   org.apache.hadoop.hbase.monitoring.TaskMonitor$MonitorRunnable.run(TaskMonitor.java:325)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 92 (master/masterserver:16000:becomeActiveMaster):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 165
>>>>> Waited count: 404
>>>>> Stack:
>>>>>   java.lang.Thread.sleep(Native Method)
>>>>>   org.apache.hadoop.hbase.util.Threads.sleep(Threads.java:125)
>>>>>   org.apache.hadoop.hbase.master.HMaster.isRegionOnline(HMaster.java:1358)
>>>>>   org.apache.hadoop.hbase.master.HMaster.waitForMetaOnline(HMaster.java:1328)
>>>>>   org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1069)
>>>>>   org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2405)
>>>>>   org.apache.hadoop.hbase.master.HMaster.lambda$null$0(HMaster.java:565)
>>>>>   org.apache.hadoop.hbase.master.HMaster$$Lambda$265/1598878738.run(Unknown Source)
>>>>>   org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:187)
>>>>>   org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:177)
>>>>>   org.apache.hadoop.hbase.master.HMaster.lambda$run$1(HMaster.java:562)
>>>>>   org.apache.hadoop.hbase.master.HMaster$$Lambda$264/1129144214.run(Unknown Source)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 19 (master/masterserver:16000):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 5
>>>>> Waited count: 322
>>>>> Stack:
>>>>>   java.lang.Object.wait(Native Method)
>>>>>   org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81)
>>>>>   org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64)
>>>>>   org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:677)
>>>>>   org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:999)
>>>>>   org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:942)
>>>>>   org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1048)
>>>>>   org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:578)
>>>>> Thread 91 (Session-HouseKeeper-513b52af-1):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 2
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
>>>>>   java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
>>>>>   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
>>>>>   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>>>>>   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 90 (qtp85435056-90):
>>>>> State: RUNNABLE
>>>>> Blocked count: 0
>>>>> Waited count: 24
>>>>> Stack:
>>>>>   sun.management.ThreadImpl.getThreadInfo1(Native Method)
>>>>>   sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:185)
>>>>>   sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:144)
>>>>>   org.apache.hadoop.hbase.util.ReflectionUtils.printThreadInfo(ReflectionUtils.java:181)
>>>>>   org.apache.hadoop.hbase.util.Threads.printThreadInfo(Threads.java:186)
>>>>>   org.apache.hadoop.hbase.master.http.MasterDumpServlet.doGet(MasterDumpServlet.java:86)
>>>>>   javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
>>>>>   javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1656)
>>>>>   org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:117)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626)
>>>>>   org.apache.hadoop.hbase.http.SecurityHeadersFilter.doFilter(SecurityHeadersFilter.java:65)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626)
>>>>>   org.apache.hadoop.hbase.http.ClickjackingPreventionFilter.doFilter(ClickjackingPreventionFilter.java:49)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1626)
>>>>>   org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1521)
>>>>> Thread 89 (qtp85435056-89):
>>>>> State: RUNNABLE
>>>>> Blocked count: 2
>>>>> Waited count: 26
>>>>> Stack:
>>>>>   sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
>>>>>   sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
>>>>>   sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
>>>>>   sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
>>>>>   sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
>>>>>   sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 88 (qtp85435056-88):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 2
>>>>> Waited count: 24
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 87 (qtp85435056-87):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 8
>>>>> Waited count: 24
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
>>>>>   java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
>>>>>   java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:324)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:399)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 86 (qtp85435056-86):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 3
>>>>> Waited count: 27
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 85 (qtp85435056-85):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 3
>>>>> Waited count: 26
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 84 (qtp85435056-84-acceptor-0@acc4c8d-ServerConnector@5ec5ea63{HTTP/1.1, (http/1.1)}{0.0.0.0:16010}):
>>>>> State: RUNNABLE
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Stack:
>>>>>   sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
>>>>>   sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421)
>>>>>   sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 83 (qtp85435056-83):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 6
>>>>> Waited count: 21
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974)
>>>>>   org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 81 (RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@52fbca80
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 80 (RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@2a42337c
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 79 (RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@550383c4
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 78 (RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@cdb4b1f
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 77 (RpcServer.priority.RWQ.Fifo.read.handler=19,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 76 (RpcServer.priority.RWQ.Fifo.read.handler=18,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 75 (RpcServer.priority.RWQ.Fifo.read.handler=17,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 74 (RpcServer.priority.RWQ.Fifo.read.handler=16,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 73 (RpcServer.priority.RWQ.Fifo.read.handler=15,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 72 (RpcServer.priority.RWQ.Fifo.read.handler=14,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 71 (RpcServer.priority.RWQ.Fifo.read.handler=13,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 70 (RpcServer.priority.RWQ.Fifo.read.handler=12,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 69 (RpcServer.priority.RWQ.Fifo.read.handler=11,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 68 (RpcServer.priority.RWQ.Fifo.read.handler=10,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 67 (RpcServer.priority.RWQ.Fifo.read.handler=9,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 66 (RpcServer.priority.RWQ.Fifo.read.handler=8,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 65 (RpcServer.priority.RWQ.Fifo.read.handler=7,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 64 (RpcServer.priority.RWQ.Fifo.read.handler=6,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 63 (RpcServer.priority.RWQ.Fifo.read.handler=5,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 62 (RpcServer.priority.RWQ.Fifo.read.handler=4,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 61 (RpcServer.priority.RWQ.Fifo.read.handler=3,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 60 (RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@783fce
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 59 (RpcServer.priority.RWQ.Fifo.write.handler=1,queue=0,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 35
>>>>> Waited count: 837
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@3b9a256e
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 58 
>>>>> (RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 25
>>>>> Waited count: 819
>>>>> Waiting on
>>>>> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@3b9a256e
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>> 
>>>>> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>> 
>>>>> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.getCallRunner(RpcHandler.java:68)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 57 (RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@724e8c7a
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 56 (RpcServer.default.FPBQ.Fifo.handler=28,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@249bb29f
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 55 (RpcServer.default.FPBQ.Fifo.handler=27,queue=0,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@655c7f8d
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 54 (RpcServer.default.FPBQ.Fifo.handler=26,queue=2,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@f87971d
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 53 (RpcServer.default.FPBQ.Fifo.handler=25,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@9ace7cb
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 52 (RpcServer.default.FPBQ.Fifo.handler=24,queue=0,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@705aa32b
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 51 (RpcServer.default.FPBQ.Fifo.handler=23,queue=2,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@767c0aba
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 50 (RpcServer.default.FPBQ.Fifo.handler=22,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@45704417
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 49 (RpcServer.default.FPBQ.Fifo.handler=21,queue=0,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@2b8c61cd
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 48 (RpcServer.default.FPBQ.Fifo.handler=20,queue=2,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@eeadc6c
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 47 (RpcServer.default.FPBQ.Fifo.handler=19,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@18552ed1
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 46 (RpcServer.default.FPBQ.Fifo.handler=18,queue=0,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@5fc29130
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 45 (RpcServer.default.FPBQ.Fifo.handler=17,queue=2,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@1708110c
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 44 (RpcServer.default.FPBQ.Fifo.handler=16,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@4f59a63d
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 43 (RpcServer.default.FPBQ.Fifo.handler=15,queue=0,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@7e4b2aa1
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 42 (RpcServer.default.FPBQ.Fifo.handler=14,queue=2,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@354b7f49
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 41 (RpcServer.default.FPBQ.Fifo.handler=13,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@629c889d
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 40 (RpcServer.default.FPBQ.Fifo.handler=12,queue=0,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@6076a2bd
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 39 (RpcServer.default.FPBQ.Fifo.handler=11,queue=2,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@474f8230
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 38 (RpcServer.default.FPBQ.Fifo.handler=10,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@29901ca7
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 37 (RpcServer.default.FPBQ.Fifo.handler=9,queue=0,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@2fceb168
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 36 (RpcServer.default.FPBQ.Fifo.handler=8,queue=2,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@7af89d66
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 35 (RpcServer.default.FPBQ.Fifo.handler=7,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@411dce0b
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 34 (RpcServer.default.FPBQ.Fifo.handler=6,queue=0,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@684187ab
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 33 (RpcServer.default.FPBQ.Fifo.handler=5,queue=2,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@325e2e3a
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 32 (RpcServer.default.FPBQ.Fifo.handler=4,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@cf386ba
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 31 (RpcServer.default.FPBQ.Fifo.handler=3,queue=0,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@23b17cb9
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 30 (RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@4729dbbb
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 29 (RpcServer.default.FPBQ.Fifo.handler=1,queue=1,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@445129a
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 28 (RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=16000):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 1
>>>>> Waiting on java.util.concurrent.Semaphore$NonfairSync@141392fe
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>>>>   java.util.concurrent.Semaphore.acquire(Semaphore.java:312)
>>>>>   org.apache.hadoop.hbase.ipc.FastPathRpcHandler.getCallRunner(FastPathRpcHandler.java:55)
>>>>>   org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
>>>>> Thread 27 (zk-event-processor-pool-0):
>>>>> State: WAITING
>>>>> Blocked count: 10
>>>>> Waited count: 16
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@3cef7071
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
>>>>>   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>>>>>   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 26 (main-EventThread):
>>>>> State: WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 8
>>>>> Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@35929054
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>>>>>   java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>>>>>   org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:549)
>>>>> Thread 25 (main-SendThread(masterserver:2181)):
>>>>> State: RUNNABLE
>>>>> Blocked count: 3
>>>>> Waited count: 0
>>>>> Stack:
>>>>>   sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
>>>>>   sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
>>>>>   sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
>>>>>   sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
>>>>>   sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
>>>>>   org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:332)
>>>>>   org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1289)
>>>>> Thread 23 (org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner):
>>>>> State: WAITING
>>>>> Blocked count: 3
>>>>> Waited count: 4
>>>>> Waiting on java.lang.ref.ReferenceQueue$Lock@19ef85be
>>>>> Stack:
>>>>>   java.lang.Object.wait(Native Method)
>>>>>   java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144)
>>>>>   java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165)
>>>>>   org.apache.hadoop.fs.FileSystem$Statistics$StatisticsDataReferenceCleaner.run(FileSystem.java:3712)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 22 (RS-EventLoopGroup-1-1):
>>>>> State: RUNNABLE
>>>>> Blocked count: 21
>>>>> Waited count: 0
>>>>> Stack:
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:316)
>>>>>   org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:373)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>>>>>   org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 21 (HBase-Metrics2-1):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 96
>>>>> Stack:
>>>>>   sun.misc.Unsafe.park(Native Method)
>>>>>   java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>>>>>   java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>>>>>   java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
>>>>>   java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
>>>>>   java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
>>>>>   java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
>>>>>   java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 20 (Timer for 'HBase' metrics system):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 96
>>>>> Stack:
>>>>>   java.lang.Object.wait(Native Method)
>>>>>   java.util.TimerThread.mainLoop(Timer.java:552)
>>>>>   java.util.TimerThread.run(Timer.java:505)
>>>>> Thread 16 (RMI TCP Accept-0):
>>>>> State: RUNNABLE
>>>>> Blocked count: 0
>>>>> Waited count: 0
>>>>> Stack:
>>>>>   java.net.PlainSocketImpl.socketAccept(Native Method)
>>>>>   java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
>>>>>   java.net.ServerSocket.implAccept(ServerSocket.java:560)
>>>>>   java.net.ServerSocket.accept(ServerSocket.java:528)
>>>>>   sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:52)
>>>>>   sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:405)
>>>>>   sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:377)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 15 (RMI TCP Accept-10101):
>>>>> State: RUNNABLE
>>>>> Blocked count: 0
>>>>> Waited count: 0
>>>>> Stack:
>>>>>   java.net.PlainSocketImpl.socketAccept(Native Method)
>>>>>   java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409)
>>>>>   java.net.ServerSocket.implAccept(ServerSocket.java:560)
>>>>>   java.net.ServerSocket.accept(ServerSocket.java:528)
>>>>>   sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:405)
>>>>>   sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:377)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 10 (Thread-3):
>>>>> State: RUNNABLE
>>>>> Blocked count: 2
>>>>> Waited count: 0
>>>>> Stack:
>>>>>   sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
>>>>>   sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
>>>>>   sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
>>>>>   sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
>>>>>   sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
>>>>>   sun.net.httpserver.ServerImpl$Dispatcher.run(ServerImpl.java:453)
>>>>>   java.lang.Thread.run(Thread.java:750)
>>>>> Thread 8 (req-rsp-timeout-task):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 1
>>>>> Waited count: 960
>>>>> Stack:
>>>>>   java.lang.Object.wait(Native Method)
>>>>>   java.util.TimerThread.mainLoop(Timer.java:552)
>>>>>   java.util.TimerThread.run(Timer.java:505)
>>>>> Thread 7 (idle-timeout-task):
>>>>> State: TIMED_WAITING
>>>>> Blocked count: 0
>>>>> Waited count: 96
>>>>> Stack:
>>>>>   java.lang.Object.wait(Native Method)
>>>>>   java.util.TimerThread.mainLoop(Timer.java:552)
>>>>>   java.util.TimerThread.run(Timer.java:505)
>>>>> Thread 5 (Signal Dispatcher):
>>>>> State: RUNNABLE
>>>>> Blocked count: 0
>>>>> Waited count: 0
>>>>> Stack:
>>>>> Thread 3 (Finalizer):
>>>>> State: WAITING
>>>>> Blocked count: 92
>>>>> Waited count: 24
>>>>> Waiting on java.lang.ref.ReferenceQueue$Lock@27c20538
>>>>> Stack:
>>>>>   java.lang.Object.wait(Native Method)
>>>>>   java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:144)
>>>>>   java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:165)
>>>>>   java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:188)
>>>>> Thread 2 (Reference Handler):
>>>>> State: WAITING
>>>>> Blocked count: 52
>>>>> Waited count: 23
>>>>> Waiting on java.lang.ref.Reference$Lock@72d818d1
>>>>> Stack:
>>>>>   java.lang.Object.wait(Native Method)
>>>>>   java.lang.Object.wait(Object.java:502)
>>>>>   java.lang.ref.Reference.tryHandlePending(Reference.java:191)
>>>>>   java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153)
>>>>> Thread 1 (main):
>>>>> State: WAITING
>>>>> Blocked count: 20
>>>>> Waited count: 19
>>>>> Waiting on org.apache.hadoop.hbase.master.HMaster@4772c3a0
>>>>> Stack:
>>>>>   java.lang.Object.wait(Native Method)
>>>>>   java.lang.Thread.join(Thread.java:1257)
>>>>>   java.lang.Thread.join(Thread.java:1331)
>>>>>   org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:254)
>>>>>   org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:147)
>>>>>   org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>>>>>   org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:140)
>>>>>   org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3311)
>>>>> 
>>>>> 
>>>>> 
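[Editor's note: a dump like the one above can be skimmed quickly from the command line. A minimal sketch, assuming the dump has been saved to a file (the `dump.txt` name and the sample lines in the heredoc are hypothetical stand-ins for the real dump):]

```shell
# Write a tiny stand-in dump file; in practice this would be the full
# thread dump saved from the master's web UI or jstack.
cat > dump.txt <<'EOF'
Thread 27 (zk-event-processor-pool-0):
State: WAITING
Thread 25 (main-SendThread(masterserver:2181)):
State: RUNNABLE
Thread 21 (HBase-Metrics2-1):
State: TIMED_WAITING
EOF

# One line per thread state with its count, most frequent first --
# a quick way to see whether most threads are parked or actually running.
grep '^State:' dump.txt | sort | uniq -c | sort -rn
```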
>>>>> Am Sa., 20. Apr. 2024 um 15:11 Uhr schrieb 张铎(Duo Zhang) <
>>>>> palomino...@gmail.com>:
>>>>> 
>>>>>> Just post it somewhere so we can check it.
>>>>>> 
>>>>>> Udo Offermann <udo.offerm...@zfabrik.de> 于2024年4月20日周六 20:25写道:
>>>>>>> 
>>>>>>> I do have the dump file from the web UI. I can send it all, or you
>>>>>>> tell me which threads you are interested in. Fortunately, they all
>>>>>>> have meaningful names.
>>>>>>> 
>>>>>>> 张铎(Duo Zhang) <palomino...@gmail.com> schrieb am Sa., 20. Apr. 2024,
>>>>>> 14:13:
>>>>>>> 
>>>>>>>> What is the jstack result for the HMaster while it hangs? Is it
>>>>>>>> waiting for the namespace table or the meta table to come online?
>>>>>>>> 
>>>>>>>> Udo Offermann <udo.offerm...@zfabrik.de> 于2024年4月20日周六 19:43写道:
>>>>>>>>> 
>>>>>>>>> Hello everyone,
>>>>>>>>> 
>>>>>>>>> We are upgrading our Hadoop/HBase cluster from Hadoop 2.8.5 & HBase
>>>>>>>>> 2.2.5 to Hadoop 3.3.6 & HBase 2.5.7.
>>>>>>>>> 
>>>>>>>>> The Hadoop upgrade went well, but unfortunately we are having
>>>>>>>>> problems with the HBase upgrade, because the master hangs on startup
>>>>>>>>> inside the „Starting assignment manager“ task.
>>>>>>>>> 
>>>>>>>>> After 15 minutes the following message appears in the log file:
>>>>>>>>> 
>>>>>>>>> Master failed to complete initialization after 900000ms. Please
>>>>>>>>> consider submitting a bug report including a thread dump of this
>>>>>>>>> process.
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> We faced the same problem as Adam did a couple of weeks ago ("Rolling
>>>>>>>>> upgrade from HBase 2.2.2 to 2.5.8 [typo corrected]: There are 2336
>>>>>>>>> corrupted procedures") and we fixed it in the same way, by deleting
>>>>>>>>> the MasterProcWALs folder in HDFS.
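[Editor's note: the sidelining step can be sketched roughly as below. This is shown as a dry run that only prints the command; the `/hbase` root directory is an assumption (check `hbase.rootdir` in hbase-site.xml), the `.sidelined` suffix is an arbitrary choice, and the master should be stopped before anything is moved. Drop the `echo` to actually execute it:]

```shell
# ASSUMPTION: HBase root dir is /hbase -- verify against hbase.rootdir.
HBASE_ROOT=/hbase

# Move the procedure-WAL folder aside rather than deleting it outright,
# so it can be restored if the restart goes wrong. Dry run: echo only.
echo hdfs dfs -mv "$HBASE_ROOT/MasterProcWALs" "$HBASE_ROOT/MasterProcWALs.sidelined"
```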
>>>>>>>>> 
>>>>>>>>> I can provide an HMaster dump and a dump of one of the data nodes!
>>>>>>>>> 
>>>>>>>>> How can we proceed with the upgrade?
>>>>>>>>> 
>>>>>>>>> Thanks and best regards
>>>>>>>>> Udo
>>>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> Udo Offermann
>>>>> 
>>>>> udo.offerm...@zfabrik.de
>>>>> 
>>>>> ZFabrik <http://www.zfabrik.de/>
>>>>> Blog <http://www.z2-environment.net/blog>
>>>>> Z2-Environment <http://www.z2-environment.eu/>
>>>>> Z2 Wiki <http://redmine.z2-environment.net/>
>>>>> T: +49 6227 3984255
>>>>> F: +49 6227 3984254
>>>>> M: +49 1781891820
>>>>> 
>>>>> *ZFabrik Software GmbH & Co. KG*
>>>>> Lammstrasse 2, 69190 Walldorf
>>>>> Handelsregister: Amtsgericht Mannheim HRA 702598
>>>>> Persönlich haftende Gesellschafterin: ZFabrik Verwaltungs GmbH, Sitz
>>>>> Walldorf
>>>>> Geschäftsführer: Dr. H. Blohm u. Udo Offermann
>>>>> Handelsregister: Amtsgericht Mannheim HRB 723699
>>> 
> 

