[ https://issues.apache.org/jira/browse/HDFS-1111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12904363#action_12904363 ]
dhruba borthakur commented on HDFS-1111:
----------------------------------------

The patch posted by Sriram looks good; however, I would add the new API listCorruptFilesAndBlocks to ClientProtocol so that tools can use it.

> getCorruptFiles() should give some hint that the list is not complete
> ---------------------------------------------------------------------
>
>                 Key: HDFS-1111
>                 URL: https://issues.apache.org/jira/browse/HDFS-1111
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>    Affects Versions: 0.22.0
>            Reporter: Rodrigo Schmidt
>            Assignee: Sriram Rao
>             Fix For: 0.22.0
>
>         Attachments: HADFS-1111.0.patch, HDFS-1111-y20.1.patch, HDFS-1111-y20.2.patch, HDFS-1111.trunk.patch
>
>
> The list of corrupt files returned by the namenode says nothing when the number of corrupted files exceeds the call's output limit (which means the list is not complete). There should be a way to hint incompleteness to clients.
> A simple hack would be to add an extra entry, with the value null, to the returned array. Clients could interpret this as a sign that there are other corrupt files in the system.
> We should also rephrase the fsck output to make it more confident when the list is complete and less confident when the list is known to be incomplete.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
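To illustrate the discussion, here is a minimal sketch of how an incompleteness hint could look. This is not the actual HDFS API: the class name, the `truncated` flag, and the `list` helper are all hypothetical; the real patch adds listCorruptFilesAndBlocks to ClientProtocol, whose exact signature is not shown in this thread. The point is that an explicit flag is a clearer signal than the null-sentinel hack mentioned in the issue description.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch (not the real HDFS API): pair the corrupt-file
// list with an explicit truncation flag instead of appending a null
// entry to signal that more corrupt files exist.
public class CorruptFilesResult {
    private final List<String> files;
    private final boolean truncated; // true if the namenode hit its output limit

    public CorruptFilesResult(List<String> files, boolean truncated) {
        this.files = files;
        this.truncated = truncated;
    }

    public List<String> getFiles() {
        return files;
    }

    public boolean isTruncated() {
        return truncated;
    }

    // Simulates a namenode that returns at most 'limit' entries per call.
    public static CorruptFilesResult list(List<String> allCorrupt, int limit) {
        boolean truncated = allCorrupt.size() > limit;
        List<String> page = allCorrupt.subList(0, Math.min(limit, allCorrupt.size()));
        return new CorruptFilesResult(page, truncated);
    }
}
```

A tool such as fsck could then branch on `isTruncated()` to print "at least N corrupt files" rather than "N corrupt files", which is the rephrasing the issue asks for.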