[ https://issues.apache.org/jira/browse/HDFS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
André Oriani updated HDFS-1032:
-------------------------------

    Attachment: hdfs-1032_aoriani_2.patch

* Rework done
* Unit test added
* Upmerged to the last commit of Mar 20th, 2010

I decided not to handle the case where "-corruptfiles" is combined with other options. There is currently little validation of fsck's parameters: invocations like "fsck -invalidoption" and "fsck -move -delete" are allowed today. Handling just the "-corruptfiles" case would lead to low-quality code. In my opinion, a new JIRA should be filed to deal with this issue.

> Extend DFSck with an option to list corrupt files using API from HDFS-729
> -------------------------------------------------------------------------
>
>                 Key: HDFS-1032
>                 URL: https://issues.apache.org/jira/browse/HDFS-1032
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: tools
>            Reporter: Rodrigo Schmidt
>            Assignee: André Oriani
>     Attachments: hdfs-1032_aoriani.patch, hdfs-1032_aoriani_2.patch
>
>
> HDFS-729 created a new API to the namenode that returns the list of corrupt files. We can now extend fsck (DFSck.java) to add an option (e.g. --list_corrupt) that queries the namenode using the new API and lists the corrupt blocks to the users.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
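As an aside on the validation gap the comment describes: the kind of flag checking that a follow-up JIRA would add could look like the sketch below. This is a hypothetical standalone helper, not code from the attached patch or from DFSck.java; the flag set and the conflict rules are assumptions chosen to illustrate rejecting unknown options (e.g. "-invalidoption") and conflicting combinations (e.g. "-move -delete").

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch (not part of the attached patch): stricter validation
// of fsck-style flags, rejecting unknown options and conflicting pairs.
public class FsckArgCheck {
    private static final Set<String> KNOWN = new HashSet<>(Arrays.asList(
        "-move", "-delete", "-files", "-blocks",
        "-locations", "-racks", "-corruptfiles"));

    /** Returns true iff every flag is known and no conflicting pair is present. */
    public static boolean isValid(String[] args) {
        Set<String> seen = new HashSet<>();
        for (String a : args) {
            if (a.startsWith("-")) {
                if (!KNOWN.contains(a)) {
                    return false; // unknown flag, e.g. "fsck -invalidoption"
                }
                seen.add(a);
            }
        }
        // "-move" and "-delete" are mutually exclusive remedies.
        if (seen.contains("-move") && seen.contains("-delete")) {
            return false;
        }
        // In this sketch, "-corruptfiles" is a standalone query and may not
        // be combined with any other option.
        if (seen.contains("-corruptfiles") && seen.size() > 1) {
            return false;
        }
        return true;
    }
}
```

With such a helper, "fsck -corruptfiles" would pass while "fsck -invalidoption", "fsck -move -delete", and "fsck -corruptfiles -move" would all be rejected up front, instead of being silently accepted as they are today.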