[ https://issues.apache.org/jira/browse/HDFS-17760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17936076#comment-17936076 ]

ASF GitHub Bot commented on HDFS-17760:
---------------------------------------

LiuGuH opened a new pull request, #7514:
URL: https://github.com/apache/hadoop/pull/7514

   
   ### Description of PR
   As described in [HDFS-17760](https://issues.apache.org/jira/browse/HDFS-17760):
   
   When a directory is moved to the trash and the trash already contains a file inode whose name matches a component of the directory's path, the move throws ParentNotDirectoryException.
   
   This can be reproduced with the steps below (a Java sketch of the same reproduction follows the list).
   
   User `test` does the following:
   1. `hdfs dfs -touch /subdir0`
   2. `hdfs dfs -rmr /subdir0`
      The trash path will be `/user/test/.Trash/Current/subdir0`.
   3. `hdfs dfs -mkdir -p /subdir0/subdir1/subdir2`
   4. `hdfs dfs -rmr /subdir0/subdir1/subdir2`
      The trash path should become `/user/test/.Trash/Current/subdir0<timestamp>/subdir1/subdir2` (the timestamp suffix resolving the name collision), rather than the move throwing ParentNotDirectoryException.
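   
   For reference, here is the same reproduction expressed against the Hadoop FileSystem/Trash client API. This is a minimal sketch, assuming trash is enabled and the process runs as user `test` with home directory `/user/test`; the class name is illustrative. `Trash.moveToAppropriateTrash` is the client-side entry point the `hdfs dfs -rm` shell command uses.
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.fs.Trash;
   
   public class TrashCollisionRepro {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       conf.set("fs.trash.interval", "60");  // enable trash (minutes)
       FileSystem fs = FileSystem.get(conf);
   
       // Steps 1-2: create the *file* /subdir0 and move it to trash.
       // The trash now contains the file /user/test/.Trash/Current/subdir0.
       Path file = new Path("/subdir0");
       fs.create(file).close();
       Trash.moveToAppropriateTrash(fs, file, conf);
   
       // Steps 3-4: create a directory tree with the same leading name and
       // move it to trash. Building the trash destination needs a mkdirs of
       // .Trash/Current/subdir0/subdir1, which runs into the existing file
       // inode "subdir0" and fails with ParentNotDirectoryException instead
       // of renaming the collision aside.
       Path dir = new Path("/subdir0/subdir1/subdir2");
       fs.mkdirs(dir);
       Trash.moveToAppropriateTrash(fs, dir, conf);
     }
   }
   ```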
   
   
   
   Full stack trace of the reported error:
   ```
    Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.fs.ParentNotDirectoryException): /user/test/Current/subdir0 (is not a directory)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkIsDirectory(FSPermissionChecker.java:905)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:578)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:490)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermissionWithContext(FSPermissionChecker.java:527)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:397)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:873)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1907)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1925)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:740)
        at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:53)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3568)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1176)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:742)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:631)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:599)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:583)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1228)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1294)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1216)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1959)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3250)

        at org.apache.hadoop.ipc.Client.warpIOException(Client.java:1596)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1587)
        at org.apache.hadoop.ipc.Client.call(Client.java:1540)
        at org.apache.hadoop.ipc.Client.call(Client.java:1458)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:259)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:140)
        at com.sun.proxy.$Proxy79.mkdirs(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.lambda$mkdirs$20(ClientNamenodeProtocolTranslatorPB.java:611)
        at org.apache.hadoop.ipc.internal.ShadedProtobufHelper.ipc(ShadedProtobufHelper.java:160)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:611)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:437)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:170)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:162)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:100)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:366)
        at com.sun.proxy.$Proxy80.mkdirs(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2555)
        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2531)
        at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1507)
        at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1504)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1521)
        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1496)
   ```
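   
   Not part of this PR's diff, but to make the expected behavior concrete: TrashPolicyDefault already resolves a collision at the leaf trash path by appending a timestamp to the colliding name, while a file inode occupying an *ancestor* of the trash destination reaches the mkdirs call unhandled, producing the exception above. The sketch below illustrates one way such a blocking file inode could be renamed aside before mkdirs; the class and method names are hypothetical and this is not the actual patch.
   
   ```java
   import java.io.IOException;
   import java.util.ArrayDeque;
   import java.util.Deque;
   
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.util.Time;
   
   final class TrashAncestorCollisionSketch {
     /**
      * Before mkdirs(baseTrashPath), rename aside any file inode occupying
      * an ancestor component of the trash destination, suffixing it with a
      * timestamp, mirroring the suffixing TrashPolicyDefault already applies
      * when the leaf trash path itself exists. Hypothetical helper, for
      * illustration only.
      */
     static void renameBlockingFileAside(FileSystem fs, Path baseTrashPath)
         throws IOException {
       // Collect components shallowest-first so we never probe a path that
       // lies beneath a file inode (such a probe can itself surface
       // ParentNotDirectoryException from the NameNode).
       Deque<Path> components = new ArrayDeque<>();
       for (Path p = baseTrashPath; p != null && p.getParent() != null;
           p = p.getParent()) {
         components.addFirst(p);
       }
       for (Path p : components) {
         if (!fs.exists(p)) {
           return;  // nothing deeper exists yet; mkdirs can proceed
         }
         if (fs.getFileStatus(p).isFile()) {
           // e.g. .Trash/Current/subdir0 -> .Trash/Current/subdir0<now>
           fs.rename(p, new Path(p.toString() + Time.now()));
           return;  // at most one component can be the blocking file
         }
       }
     }
   }
   ```
   
   Renaming the blocking file mirrors the existing leaf-collision behavior, so the earlier-deleted file remains recoverable under its timestamped name.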
   




> Fix MoveToTrash throws ParentNotDirectoryException when there is a file inode 
> with the same name  in the trash
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-17760
>                 URL: https://issues.apache.org/jira/browse/HDFS-17760
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: liuguanghua
>            Assignee: liuguanghua
>            Priority: Major
>



