[ https://issues.apache.org/jira/browse/HDFS-729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12840378#action_12840378 ]
dhruba borthakur commented on HDFS-729:
---------------------------------------

Code looks good. The only question I have is that BlockManager.getCorruptInodes does the following:

{code}
LinkedHashSet<INode> set = new LinkedHashSet<INode>(this.maxCorruptFilesReturned*2);
{code}

Can you please explain why the multiplication by 2 is needed?

> fsck option to list only corrupted files
> ----------------------------------------
>
>                 Key: HDFS-729
>                 URL: https://issues.apache.org/jira/browse/HDFS-729
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: dhruba borthakur
>            Assignee: Rodrigo Schmidt
>         Attachments: badFiles.txt, badFiles2.txt, corruptFiles.txt, HDFS-729.1.patch, HDFS-729.2.patch, HDFS-729.3.patch
>
>
> An option to fsck to list only corrupted files will be very helpful for frequent monitoring.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
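One plausible reading of the multiplication by 2 in the question above, offered as an assumption rather than an answer from this thread: java.util.LinkedHashSet is backed by a hash map with a default load factor of 0.75, and the HashMap javadoc guarantees that no rehashing occurs as long as the initial capacity is greater than the expected number of entries divided by the load factor. Sizing the set to maxCorruptFilesReturned*2 comfortably satisfies that bound. The sketch below only illustrates the arithmetic; the CapacityDemo class and the value 500 are hypothetical and not taken from the HDFS-729 patch.

{code}
import java.util.LinkedHashSet;

// Hypothetical demo, not part of the HDFS-729 patch.
public class CapacityDemo {
    public static void main(String[] args) {
        // Illustrative value only; not taken from the HDFS code or configuration.
        int maxCorruptFilesReturned = 500;

        // 500 / 0.75 is roughly 667, so an initial capacity of 500 does not
        // rule out a rehash while the set fills up to 500 entries...
        LinkedHashSet<String> tight = new LinkedHashSet<String>(maxCorruptFilesReturned);

        // ...whereas 500 * 2 = 1000 exceeds 667, which (per the HashMap javadoc)
        // guarantees that no rehash ever occurs for that many entries.
        LinkedHashSet<String> roomy = new LinkedHashSet<String>(maxCorruptFilesReturned * 2);

        for (int i = 0; i < maxCorruptFilesReturned; i++) {
            tight.add("file-" + i);
            roomy.add("file-" + i);
        }
        System.out.println("Both sets hold " + roomy.size() + " entries; only the larger"
            + " initial capacity guarantees no intermediate rehash.");
    }
}
{code}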