Re: "current leaseholder is trying to recreate file" error with ProcedureV2

2016-04-25 Thread Ted Yu
Can you pastebin more of the master log?

Which version of Hadoop are you using?

A log snippet from the namenode covering state-0073.log may also provide
more clues.
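
For background on the message itself: it comes from the namenode's lease
check in FSNamesystem#recoverLeaseInternal (the top frame of your trace). If
the client asking to create a file is the same client that still holds the
lease on it (the file is still under construction from an earlier create
that was never closed), the namenode refuses with
AlreadyBeingCreatedException. Below is a minimal sketch that reproduces the
same message (illustrative only; the class and path names are made up, and
this is not the WALProcedureStore code path):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LeaseRecreateDemo {
      public static void main(String[] args) throws Exception {
        // Assumes fs.defaultFS points at a running HDFS cluster.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/tmp/lease-recreate-demo.log");

        // First create: this DFSClient becomes the leaseholder for the file.
        FSDataOutputStream out = fs.create(path, true);
        out.writeBytes("first writer\n");
        // The stream is intentionally left open, so the file stays under
        // construction and this client still holds the lease.

        // Second create from the same client (FileSystem.get returns the
        // cached instance, hence the same DFSClient). The namenode sees the
        // current leaseholder trying to recreate the file and throws
        // AlreadyBeingCreatedException with the message you quoted.
        fs.create(path, false);
      }
    }

If that is what is happening here, something (for example a log roll that
failed partway) may have left a writer for state-0073.log open under the
master's client name, so the master trips over its own lease when it
retries.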

Thanks

On Mon, Apr 25, 2016 at 12:56 PM, donmai wrote:

> Hi all,
>
> I'm getting a strange error during table creation / disable in HBase 1.1.2:
>
>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
> failed to create file /hbase/MasterProcWALs/state-0073.log
> for DFSClient_NONMAPREDUCE_87753856_1 for client 10.66.102.192 because
> current leaseholder is trying to recreate file.
>
> Looks somewhat related to HBASE-14234 - what's the root cause behind this
> error message?
>
> Full stack trace below:
>
> [stack trace snipped; see the original message below]

"current leaseholder is trying to recreate file" error with ProcedureV2

2016-04-25 Thread donmai
Hi all,

I'm getting a strange error during table creation / disable in HBase 1.1.2:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
failed to create file /hbase/MasterProcWALs/state-0073.log
for DFSClient_NONMAPREDUCE_87753856_1 for client 10.66.102.192 because
current leaseholder is trying to recreate file.

Looks somewhat related to HBASE-14234 - what's the root cause behind this
error message?

Full stack trace below:

-

2016-04-25 15:24:01,356 WARN
 [B.defaultRpcServer.handler=7,queue=7,port=6] wal.WALProcedureStore:
failed to create log file with id=73

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException):
failed to create file /hbase/MasterProcWALs/state-0073.log
for DFSClient_NONMAPREDUCE_87753856_1 for client 10.66.102.192 because
current leaseholder is trying to recreate file.

    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2988)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2737)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2632)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2519)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:566)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:394)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:635)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:241)
    at com.sun.proxy.$Proxy31.create(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:295)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy32.create(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1738)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1671)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1596)
    at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:397)
    at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:912)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:893)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:790)
    at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:706)
    at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:676)
    at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.checkAndTryRoll(WALProcedureStore.java:655)
    at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.insert(WALProcedureStore.java:355)
    at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.submitProcedure(ProcedureExecutor.java:524)
    at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1481)