[ https://issues.apache.org/jira/browse/HBASE-15853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu resolved HBASE-15853.
----------------------------
    Resolution: Later

Creation of the directory can be handled outside of HBase.
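
For reference, the workaround is to pre-create the staging parent directory before the first backup run. Below is a minimal sketch using the Hadoop FileSystem API; it must run as an HDFS superuser, and the {{hbase}}/{{hbase}} owner and group are assumptions that should be replaced with the cluster's actual service user:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateBackupStagingParent {
  public static void main(String[] args) throws Exception {
    // Picks up core-site.xml / hdfs-site.xml from the classpath.
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      // MapReduce job submission creates <staging-root>/<user>/.staging;
      // the ExportSnapshot job fails as shown above when this parent is
      // missing or only writable by hdfs (drwxr-xr-x).
      Path userDir = new Path("/user/hbase");
      fs.mkdirs(userDir);
      // Requires HDFS superuser privileges; "hbase"/"hbase" is an assumed
      // owner/group -- substitute the cluster's service user.
      fs.setOwner(userDir, "hbase", "hbase");
    }
  }
}
{code}

The equivalent from the command line is {{hdfs dfs -mkdir -p /user/hbase}} followed by {{hdfs dfs -chown hbase:hbase /user/hbase}}, run as the hdfs user.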

> HBase backups fail if there is no /user/hbase directory on HDFS
> ---------------------------------------------------------------
>
>                 Key: HBASE-15853
>                 URL: https://issues.apache.org/jira/browse/HBASE-15853
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Ted Yu
>            Assignee: Ted Yu
>              Labels: backup
>             Fix For: 2.0.0
>
>         Attachments: hbase-15853.v1.txt
>
>
> [~cartershanklin] reported the following issue.
> When running a backup job without a /user/hbase directory that exists and 
> is writable by the hbase user, the job fails. You have to look in the 
> logs to see the specifics of the failure.
> The error is obscure:
> {code}
> 2016-05-18 00:05:42,616 ERROR [main] util.AbstractHBaseTool: Error running command-line tool
> java.io.IOException: Failed of exporting snapshot snapshot_1463529938818_default_SYSTEM.CATALOG to /tmp/backup/backup_1463529938254/default/SYSTEM.CATALOG/ with reason code 1
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>       at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>       at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>       at org.apache.hadoop.hbase.util.ForeignExceptionUtil.toIOException(ForeignExceptionUtil.java:45)
>       at org.apache.hadoop.hbase.client.HBaseAdmin$TableBackupFuture.convertResult(HBaseAdmin.java:2661)
>       at org.apache.hadoop.hbase.client.HBaseAdmin$TableBackupFuture.convertResult(HBaseAdmin.java:2640)
>       at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4537)
>       at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4471)
>       at org.apache.hadoop.hbase.client.HBaseAdmin.get(HBaseAdmin.java:2618)
>       at org.apache.hadoop.hbase.client.HBaseAdmin.backupTables(HBaseAdmin.java:2634)
>       at org.apache.hadoop.hbase.backup.impl.BackupCommands$CreateCommand.execute(BackupCommands.java:197)
>       at org.apache.hadoop.hbase.backup.BackupDriver.parseAndRun(BackupDriver.java:107)
>       at org.apache.hadoop.hbase.backup.BackupDriver.doWork(BackupDriver.java:122)
>       at org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>       at org.apache.hadoop.hbase.backup.BackupDriver.main(BackupDriver.java:127)
> Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Failed of exporting snapshot snapshot_1463529938818_default_SYSTEM.CATALOG to /tmp/backup/backup_1463529938254/default/SYSTEM.CATALOG/ with reason code 1
>       at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.snapshotCopy(FullTableBackupProcedure.java:323)
>       at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:594)
>       at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:68)
>       at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:107)
>       at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:443)
>       at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:932)
>       at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:736)
>       at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:689)
>       at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:73)
>       at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.run(ProcedureExecutor.java:416)
> {code}
> Here's what ends up in the logs:
> {code}
> 2016-05-18 00:05:41,051 ERROR [ProcedureExecutorThread-1] snapshot.ExportSnapshot: Snapshot export failed
> org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/user/hbase/.staging":hdfs:hdfs:drwxr-xr-x
>       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
>       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
>       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1813)
>       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1797)
>       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1780)
>       at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4002)
>       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1098)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
>       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:644)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2268)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2264)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1719)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2262)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>       at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>       at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
>       at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3066)
>       at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:3034)
>       at org.apache.hadoop.hdfs.DistributedFileSystem$23.doCall(DistributedFileSystem.java:1107)
>       at org.apache.hadoop.hdfs.DistributedFileSystem$23.doCall(DistributedFileSystem.java:1103)
>       at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>       at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1103)
>       at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1096)
>       at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:133)
>       at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:144)
>       at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
>       at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1719)
>       at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
>       at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
>       at org.apache.hadoop.hbase.snapshot.ExportSnapshot.runCopyJob(ExportSnapshot.java:815)
>       at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:1011)
>       at org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyService.copy(MapReduceBackupCopyService.java:294)
>       at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.snapshotCopy(FullTableBackupProcedure.java:318)
>       at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:594)
>       at org.apache.hadoop.hbase.backup.master.FullTableBackupProcedure.executeFromState(FullTableBackupProcedure.java:68)
>       at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:107)
>       at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:443)
>       at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:932)
>       at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:736)
>       at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:689)
>       at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$200(ProcedureExecutor.java:73)
>       at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$1.run(ProcedureExecutor.java:416)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hbase, access=WRITE, inode="/user/hbase/.staging":hdfs:hdfs:drwxr-xr-x
>       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
>       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
>       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1813)
>       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1797)
>       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1780)
>       at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4002)
>       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1098)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
>       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:644)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2268)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2264)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1719)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2262)
>       at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1531)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1481)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1386)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:240)
>       at com.sun.proxy.$Proxy16.mkdirs(Unknown Source)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:588)
>       at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
>       at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>       at com.sun.proxy.$Proxy17.mkdirs(Unknown Source)
>       at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3064)
>       ... 28 more
> 2016-05-18 00:05:41,054 ERROR [ProcedureExecutorThread-1] master.FullTableBackupProcedure: Exporting Snapshot snapshot_1463529938818_default_SYSTEM.CATALOG failed with return code: 1.
> {code}


