[ https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HDFS-4913:
---------------------------------------

    Assignee: Colin Patrick McCabe
      Status: Patch Available  (was: Open)
    
> Deleting file through fuse-dfs when using trash fails requiring root permissions
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-4913
>                 URL: https://issues.apache.org/jira/browse/HDFS-4913
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: fuse-dfs
>    Affects Versions: 2.0.3-alpha
>            Reporter: Stephen Chu
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-4913.002.patch
>
>
> As _root_, I mounted HDFS with fuse-dfs using the -ousetrash option.
> As _testuser_, I cd'd into the mount and touched a test file at
> _/user/testuser/testFile1_. As the same user, I tried to rm the file and ran
> into an error:
> {code}
> [testuser@hdfs-vanilla-1 ~]$ cd /hdfs_mnt/user/testuser
> [testuser@hdfs-vanilla-1 testuser]$ touch testFile1
> [testuser@hdfs-vanilla-1 testuser]$ rm testFile1
> rm: cannot remove `testFile1': Unknown error 255
> {code}
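> For reference, a minimal sketch of the mount step above (the NameNode address, port, and mount point here are assumptions, not taken from this report):
> {code}
> # as root: mount HDFS at /hdfs_mnt with client-side trash emulation enabled
> fuse_dfs_wrapper.sh dfs://namenode-host:8020 /hdfs_mnt -ousetrash
> {code}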
> I checked the fuse-dfs debug output, and it shows that we attempt to mkdir
> /user/root/.Trash, which testuser doesn't have permission to create: because
> root performed the mount, fuse-dfs derives the trash path from root's home
> directory rather than from the user issuing the delete.
> Ideally, we'd be able to remove testFile1 and have it moved into
> /user/testuser/.Trash instead of /user/root/.Trash. (The failing step can
> also be replayed by hand; see the sketch after the trace below.)
> Error in debug:
> {code}
> unlink /user/testuser/testFile1
> hdfsCreateDirectory(/user/root/.Trash/Current/user/testuser): FileSystem#mkdirs error:
> org.apache.hadoop.security.AccessControlException: Permission denied: user=testuser, access=WRITE, inode="/user/root":root:supergroup:drwxr-xr-x
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4716)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4698)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4672)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3035)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2999)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2980)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>         at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2153)
>         at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2122)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:545)
>         at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1913)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=testuser, access=WRITE, inode="/user/root":root:supergroup:drwxr-xr-x
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4716)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4698)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4672)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3035)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2999)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2980)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>         at $Proxy9.mkdirs(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>         at $Proxy9.mkdirs(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:426)
>         at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2151)
>         ... 3 more
> {code}
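> The failing step can be replayed by hand against the same cluster, which also shows what the desired per-user behavior would look like (a sketch; the error line is the same AccessControlException message as in the trace, and it assumes testuser's trash directory doesn't already exist):
> {code}
> # replay the mkdir that fuse-dfs issues on root's behalf; the NameNode
> # rejects it because testuser has no WRITE access to /user/root
> [testuser@hdfs-vanilla-1 ~]$ hadoop fs -mkdir /user/root/.Trash
> mkdir: Permission denied: user=testuser, access=WRITE, inode="/user/root":root:supergroup:drwxr-xr-x
> # the desired behavior: derive the trash root from the calling user instead,
> # where the same operation succeeds because testuser owns /user/testuser
> [testuser@hdfs-vanilla-1 ~]$ hadoop fs -mkdir /user/testuser/.Trash
> {code}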

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
