[ https://issues.apache.org/jira/browse/HADOOP-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14747110#comment-14747110 ]
Hudson commented on HADOOP-12374:
---------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #2317 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2317/])
HADOOP-12374. Updated expunge command description. (eyang: rev 2ffe2db95ede7f30aeaece4619db7eb08b84280e)
* hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java

> Description of hdfs expunge command is confusing
> ------------------------------------------------
>
>                 Key: HADOOP-12374
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12374
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: documentation, trash
>    Affects Versions: 2.7.0, 2.7.1
>            Reporter: Weiwei Yang
>            Assignee: Weiwei Yang
>              Labels: docuentation, newbie, suggestions, trash
>         Attachments: HADOOP-12374.001.patch, HADOOP-12374.002.patch, HADOOP-12374.003.patch, HADOOP-12374.004.patch
>
> Usage: hadoop fs -expunge
> Empty the Trash. Refer to the HDFS Architecture Guide for more information on the Trash feature.
>
> This description is confusing. It gives users the impression that the command will empty the trash, but it actually only removes old checkpoints. If a user sets a long value for fs.trash.interval, this command will not remove anything until the checkpoints have existed longer than that value.
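For context, a minimal Java sketch of the checkpoint behavior described above, using the org.apache.hadoop.fs.Trash API (the class name and the file path below are illustrative, not taken from the patch):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.Trash;

public class ExpungeSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Checkpoints are retained for fs.trash.interval minutes (1440 = 24 hours).
    conf.setLong("fs.trash.interval", 1440);

    FileSystem fs = FileSystem.get(conf);
    Trash trash = new Trash(fs, conf);

    // Move a file into the trash (what "hadoop fs -rm" does when trash is enabled).
    trash.moveToTrash(new Path("/user/hadoop/old-data.txt"));  // illustrative path

    // Roll the "Current" trash directory into a timestamped checkpoint.
    trash.checkpoint();

    // What "hadoop fs -expunge" runs: deletes only checkpoints OLDER than
    // fs.trash.interval. A checkpoint created moments ago survives, so the
    // trash is not necessarily emptied by this call.
    trash.expunge();
  }
}

This is the behavior the updated documentation tries to make explicit: expunge prunes expired checkpoints rather than unconditionally emptying the trash.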