[ https://issues.apache.org/jira/browse/HDFS-729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12769124#action_12769124 ]
dhruba borthakur commented on HDFS-729:
---------------------------------------

There are two existing options for handling corrupted files: one moves the file to lost+found, and the other deletes the corrupted file. I would like to add another option, "listCorruptedFiles", that lists the corrupted files, if any. An alternative is to run "fsck -files" and then filter the output on the client to display only corrupted files; but on a cluster with 20 million files, the total amount of data to be transferred to the client (one line of output per file) is huge and introduces a lot of latency.

> fsck option to list only corrupted files
> ----------------------------------------
>
>                 Key: HDFS-729
>                 URL: https://issues.apache.org/jira/browse/HDFS-729
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>
> An option to fsck to list only corrupted files will be very helpful for
> frequent monitoring.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.