[ https://issues.apache.org/jira/browse/HADOOP-10902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Stephen Chu updated HADOOP-10902:
---------------------------------
    Attachment: HADOOP-10902.1.patch

Attaching a patch that appends the cause to the "Failed to move to trash" error message. Now when a trash move fails through FsShell, users get an explanation, e.g.:

{code}
rm: Failed to move to trash: hdfs://schu-enc2.vpc.com:8020/user/hdfs/snap: The directory /user/hdfs/snap cannot be deleted since /user/hdfs/snap is snapshottable and already has snapshots
{code}

HDFS-6767 is an example of where this improved error message is helpful.

> Deletion of directories with snapshots will not output reason for trash move failure
> -------------------------------------------------------------------------------------
>
>            Key: HADOOP-10902
>            URL: https://issues.apache.org/jira/browse/HADOOP-10902
>        Project: Hadoop Common
>     Issue Type: Improvement
> Affects Versions: 2.4.0
>       Reporter: Stephen Chu
>       Assignee: Stephen Chu
>       Priority: Minor
>    Attachments: HADOOP-10902.1.patch
>
>
> When using a trash-enabled FsShell to delete a directory that has snapshots, we see an error message saying "Failed to move to trash" but no explanation:
>
> {code}
> [hdfs@schu-enc2 ~]$ hdfs dfs -rm -r snap
> 2014-07-28 05:45:29,527 INFO [main] fs.TrashPolicyDefault (TrashPolicyDefault.java:initialize(92)) - Namenode trash configuration: Deletion interval = 1440 minutes, Emptier interval = 0 minutes.
> rm: Failed to move to trash: hdfs://schu-enc2.vpc.com:8020/user/hdfs/snap. Consider using -skipTrash option
> {code}
>
> If we use -skipTrash, then we'll get the explanation: "rm: The directory /user/hdfs/snap cannot be deleted since /user/hdfs/snap is snapshottable and already has snapshots"
>
> It'd be an improvement to make it clear that directories with snapshots cannot be deleted when the trash is enabled.
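For readers following along, the sketch below is a plain illustration of the general pattern, not the attached HADOOP-10902.1.patch: catch the IOException from the trash move and re-throw it with the underlying reason appended so FsShell prints a useful message. The class and method names (TrashMoveExample, moveToTrashOrExplain) are hypothetical; Trash.moveToAppropriateTrash is the existing Hadoop Common helper for trash moves.

{code}
// Illustrative sketch only -- not the attached HADOOP-10902.1.patch.
// Pattern: catch the IOException from the trash move and re-throw it with
// the underlying reason appended, so the shell can print a useful message.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class TrashMoveExample {                      // hypothetical class
  static void moveToTrashOrExplain(FileSystem fs, Path path, Configuration conf)
      throws IOException {
    try {
      // Existing Hadoop Common helper that moves a path into the user's trash.
      Trash.moveToAppropriateTrash(fs, path, conf);
    } catch (IOException e) {
      // Prefer the wrapped cause (e.g. the NameNode's "is snapshottable and
      // already has snapshots" explanation) over the bare top-level message.
      String reason = (e.getCause() != null)
          ? e.getCause().getMessage()
          : e.getMessage();
      throw new IOException("Failed to move to trash: " + path + ": " + reason
          + ". Consider using -skipTrash option", e);
    }
  }
}
{code}

With this shape, the shell's error reporting shows the full reason (as in the first {code} block above) instead of only the path that could not be moved.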