[jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2014-05-20 Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003462#comment-14003462 ]

Hudson commented on HDFS-4913:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1780 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1780/])
HDFS-4913. Deleting file through fuse-dfs when using trash fails, requiring 
root permissions (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1595371)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_trash.c


> Deleting file through fuse-dfs when using trash fails requiring root 
> permissions
> 
>
> Key: HDFS-4913
> URL: https://issues.apache.org/jira/browse/HDFS-4913
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fuse-dfs
>Affects Versions: 2.0.3-alpha
>Reporter: Stephen Chu
>Assignee: Colin Patrick McCabe
> Fix For: 2.5.0
>
> Attachments: HDFS-4913.002.patch, HDFS-4913.003.patch, 
> HDFS-4913.004.patch
>
>
> As _root_, I mounted HDFS with fuse-dfs using the -ousetrash option.
> As _testuser_, I cd into the mount and touch a test file at 
> _/user/testuser/testFile1_. As the same user, I try to rm the file and run 
> into an error:
> {code}
> [testuser@hdfs-vanilla-1 ~]$ cd /hdfs_mnt/user/testuser
> [testuser@hdfs-vanilla-1 testuser]$ touch testFile1
> [testuser@hdfs-vanilla-1 testuser]$ rm testFile1
> rm: cannot remove `testFile1': Unknown error 255
> {code}
> I check the fuse-dfs debug output, and it shows that we attempt to mkdir 
> /user/root/.Trash, which testuser doesn't have permissions to.
> Ideally, we'd be able to remove testFile1 and have testFile1 be put into 
> /user/testuser/.Trash instead of /user/root/.Trash.
> Error in debug:
> {code}
> unlink /user/testuser/testFile1
> hdfsCreateDirectory(/user/root/.Trash/Current/user/testuser): FileSystem#mkdirs error:
> org.apache.hadoop.security.AccessControlException: Permission denied: user=testuser, access=WRITE, inode="/user/root":root:supergroup:drwxr-xr-x
>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
>  at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
>  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4716)
>  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4698)
>  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4672)
>  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3035)
>  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2999)
>  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2980)
>  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:648)
>  at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
>  at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
>  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
>  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:396)
>  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)
> {code}
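
Per the description above, the fix in fuse_trash.c makes the trash destination depend on the user issuing the delete rather than the user that mounted the filesystem. For context, here is a minimal sketch of that idea in C; the helper name {{build_trash_dir}} and its signature are illustrative assumptions, not the committed code:

{code}
#include <stdio.h>
#include <string.h>

/*
 * Sketch only: derive a per-user trash directory for a path being
 * deleted.  For user = "testuser" and path = "/user/testuser/testFile1"
 * this yields "/user/testuser/.Trash/Current/user/testuser", matching
 * the directory fuse-dfs should have created in the report above.
 */
static int build_trash_dir(const char *user, const char *path,
                           char *out, size_t out_len)
{
    const char *last_slash = strrchr(path, '/');
    /* Length of the parent-directory portion of the deleted path. */
    int parent_len = last_slash ? (int)(last_slash - path) : 0;
    int ret = snprintf(out, out_len, "/user/%s/.Trash/Current%.*s",
                       user, parent_len, path);
    return (ret < 0 || (size_t)ret >= out_len) ? -1 : 0;
}
{code}

The failure above happened because the trash root was instead derived from root (the mounting user), producing /user/root/.Trash/Current/user/testuser, which testuser cannot create.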

[jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2014-05-20 Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003285#comment-14003285 ]

Hudson commented on HDFS-4913:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1754 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1754/])
HDFS-4913. Deleting file through fuse-dfs when using trash fails, requiring 
root permissions (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1595371)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_trash.c



[jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2014-05-20 Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003250#comment-14003250 ]

Hudson commented on HDFS-4913:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #562 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/562/])
HDFS-4913. Deleting file through fuse-dfs when using trash fails, requiring 
root permissions (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1595371)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_trash.c



[jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2014-05-20 Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14002945#comment-14002945 ]

Hudson commented on HDFS-4913:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5606 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5606/])
HDFS-4913. Deleting file through fuse-dfs when using trash fails, requiring 
root permissions (cmccabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1595371)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/native/fuse-dfs/fuse_trash.c



[jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2014-05-16 Colin Patrick McCabe (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000406#comment-14000406 ]

Colin Patrick McCabe commented on HDFS-4913:


The FindBugs warning is clearly bogus, since this patch doesn't change any Java code (and FindBugs only operates on Java). The same goes for the {{TestBPOfferService}} test failure.

Thanks for the reviews-- committing...


[jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2014-05-16 Colin Patrick McCabe (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999247#comment-13999247 ]

Colin Patrick McCabe commented on HDFS-4913:


One small difference I notice between fuse_dfs and the FsShell is that the 
latter now pulls its trash configuration from the NameNode ("server-side 
trash"), but {{fuse_dfs}} still requires you to specify the {{use_trash}} 
option when starting FUSE.  I think this is probably OK, though.  Existing 
{{fuse_dfs}} configurations will continue to work, and I expect use of the 
trash to fade away gradually, as people use snapshots instead.


[jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2014-05-16 Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999486#comment-13999486 ]

Hadoop QA commented on HDFS-4913:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12645087/HDFS-4913.004.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.datanode.TestBPOfferService

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6910//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6910//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6910//console

This message is automatically generated.


[jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2014-05-16 Andrew Wang (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14000145#comment-14000145 ]

Andrew Wang commented on HDFS-4913:
---

+1, thanks Colin. We can definitely punt server-side trash support to a 
separate JIRA.


[jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2014-05-16 Colin Patrick McCabe (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999208#comment-13999208 ]

Colin Patrick McCabe commented on HDFS-4913:


bq. Are the fprintf's to stderr intentional? They look like they're for 
debugging.

Yeah, those were for debugging; removed.

bq. Doc for get_parent_dir, "*parent dir" is missing an underscore, params 
aren't correct.

Fixed

bq. Code comments in get_parent_dir would be nice, it seems like the goal is to 
basically put a null in the last slash that's not the end of the path, then 
return each part.

I added some more comments to the doxygen.

bq. Mention that trash_base is malloc'd? Could also provide an example path 
that would be returned.

Added.

bq. hdfsRename failure case, move ret = errno down for consistency

I can't do that, unfortunately.  The log message may clear the thread-local 
{{errno}} value.  This is one reason why errno-based APIs suck :(
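
To illustrate the ordering constraint, a minimal sketch (the {{ERROR}} macro below stands in for fuse_dfs's logging and {{move_to_trash}} is a hypothetical wrapper, not the patch itself):

{code}
#include <errno.h>
#include <stdio.h>
#include "hdfs.h"  /* libhdfs: hdfsRename() */

/* Hypothetical stand-in for fuse_dfs's logging macro. */
#define ERROR(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)

static int move_to_trash(hdfsFS fs, const char *src, const char *dst)
{
    if (hdfsRename(fs, src, dst) < 0) {
        int ret = errno;  /* capture errno before logging... */
        /* ...because the log call may itself fail (e.g. a failed write
         * to stderr) and overwrite the thread-local errno. */
        ERROR("hdfsRename(%s, %s) failed", src, dst);
        return -ret;
    }
    return 0;
}
{code}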

bq. Can you comment on manual testing?

I did some manual testing on this with trash enabled.


[jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2014-05-15 Andrew Wang (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13998356#comment-13998356 ]

Andrew Wang commented on HDFS-4913:
---

Nice work; I just have nits:

* Are the fprintf's to stderr intentional? They look like they're for debugging.
* Doc for get_parent_dir, "*parent dir" is missing an underscore, params aren't 
correct.
* Code comments in get_parent_dir would be nice, it seems like the goal is to 
basically put a null in the last slash that's not the end of the path, then 
return each part.
* Mention that trash_base is malloc'd? Could also provide an example path that 
would be returned.
* hdfsRename failure case, move {{ret = errno}} down for consistency
* Can you comment on manual testing?

+1 once addressed
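
For reference, a minimal sketch of the parent-directory logic described in the third item above (the committed implementation lives in fuse_trash.c and may differ; this version is illustrative only):

{code}
#include <stdlib.h>
#include <string.h>

/*
 * Return the parent directory of path as a newly malloc'd string that
 * the caller must free, e.g. "/user/testuser/testFile1" ->
 * "/user/testuser".  Returns NULL for paths without a parent or on
 * allocation failure.
 */
static char *get_parent_dir(const char *path)
{
    char *copy = strdup(path);
    char *last;

    if (!copy)
        return NULL;
    /* Ignore a trailing slash so "/a/b/" behaves like "/a/b". */
    last = strrchr(copy, '/');
    if (last && last != copy && last[1] == '\0') {
        *last = '\0';
        last = strrchr(copy, '/');
    }
    if (!last) {  /* relative path with no slash at all */
        free(copy);
        return NULL;
    }
    /* Put a NUL at the last slash that is not the end of the path,
     * keeping only the parent ("/" for top-level entries). */
    if (last == copy)
        last[1] = '\0';
    else
        *last = '\0';
    return copy;
}
{code}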


[jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2014-05-06 Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13991406#comment-13991406 ]

Hadoop QA commented on HDFS-4913:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12643626/HDFS-4913.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/6839//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6839//console

This message is automatically generated.


[jira] [Commented] (HDFS-4913) Deleting file through fuse-dfs when using trash fails requiring root permissions

2013-07-02 Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13698577#comment-13698577 ]

Hadoop QA commented on HDFS-4913:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12590559/HDFS-4913.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4592//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4592//console

This message is automatically generated.
