The problem appears to be the HFile TTL cleaner, which is controlled by the
configuration *hbase.master.hfilecleaner.ttl*.

When ExportSnapshot copies into the destination HBase filesystem, the in-flight
files are placed under */archive*. The TTL cleaner sweeps that directory at
frequent intervals (I think 15 minutes is the default) and deletes any file
that has no references, and those references are not created until the export
snapshot completes.
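
If you want to confirm what your cluster is currently using, one way (assuming
your HBase version ships the small HBaseConfTool utility and you run it on the
master host with the master's configuration) is:

    hbase org.apache.hadoop.hbase.util.HBaseConfTool hbase.master.hfilecleaner.ttl

It prints the configured value in milliseconds, or null if the key is not set
anywhere in your configuration files.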

*Fix*: Increase the above configuration value to at least as long as the export
snapshot is expected to take, then restart the HMaster. You can revert it once
the export has finished.
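
A minimal hbase-site.xml sketch for the destination cluster's master; the
86400000 below is just an illustrative 24 hours in milliseconds, pick a value
comfortably larger than your expected export time:

    <property>
      <name>hbase.master.hfilecleaner.ttl</name>
      <!-- milliseconds; example: 24 hours, adjust to your export duration -->
      <value>86400000</value>
    </property>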
---
Mallikarjun


On Wed, Feb 9, 2022 at 7:15 PM Hamado Dene <hamadod...@yahoo.com.invalid>
wrote:

> Hi hbase community,
>  I'm trying to make an export snapshot of a table that has large hfiles,
> however.With smaller tables I have not had problems, but bigger tables I
> can't export and I get the exception:
> Error: java.io.FileNotFoundException: File does not exist:
>     /hbase/archive/data/default/mn1_7482_hevents/d37341ab3adad67e2c911edd6d5e6de7/d/27f6d74f99654685b5518a8db1c1496a
>     (inode 604969) Holder DFSClient_attempt_1643276298721_0182_m_000000_0_1307333633_1 does not have any open files.
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2674)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.analyzeFileState(FSDirWriteFileOp.java:521)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.validateAddBlock(FSDirWriteFileOp.java:161)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2555)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:829)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:510)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>     at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850)
>     at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2489)
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
>     at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
>     at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1842)
>     at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1638)
>     at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:704)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does not exist:
>     /hbase/archive/data/default/mn1_7482_hevents/d37341ab3adad67e2c911edd6d5e6de7/d/27f6d74f99654685b5518a8db1c1496a
>     (inode 604969) Holder DFSClient_attempt_1643276298721_0182_m_000000_0_1307333633_1 does not have any open files.
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2674)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.analyzeFileState(FSDirWriteFileOp.java:521)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.validateAddBlock(FSDirWriteFileOp.java:161)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2555)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:829)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:510)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
>     at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850)
>     at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1844)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2489)
>     at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1435)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1345)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>     at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:444)
>     at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
>     at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
>     at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1838)
>     ... 2 more
>
>
>
> attempt_1643276298721_0182_m_000000_0 100.00 FAILED copied 4.7 G/7.2 G (65.4%)
>     from hdfs://rzv-db07:8020/hbase/data/default/mn1_7482_hevents/d37341ab3adad67e2c911edd6d5e6de7/d/27f6d74f99654685b5518a8db1c1496a
>     to hdfs://acv-db10n:8020/hbase/archive/data/default/mn1_7482_hevents/d37341ab3adad67e2c911edd6d5e6de7/d/27f6d74f99654685b5518a8db1c1496a
>
>
> Is there a way to do this export efficiently?
>  The command I run is:
>
>  hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot  -snapshot
> $snapshotName -copy-from $source -copy-to $destination  -overwrite
>
>
> Thanks,
> Hamado Dene
>
