[ https://issues.apache.org/jira/browse/HDFS-1032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12849590#action_12849590 ]

Rodrigo Schmidt commented on HDFS-1032:
---------------------------------------

What about the following option:

2) In your summary, if

2.1) count = 0, output: "Unable to locate any corrupt files under [path].\n\n 
Please run a complete fsck to verify if [path] is really 
[NamenodeFsck.HEALTHY_STATUS]"

2.2) count = 1, output: "There is at least 1 corrupt file under [path], which 
is [NamenodeFsck.CORRUPT_STATUS]"

2.3) count > 1, output: "There are at least [count] corrupt files under [path], 
which is [NamenodeFsck.CORRUPT_STATUS]"


As for your description, it might be slightly shorter this way:
_print out corrupt files up to a maximum defined by property 
dfs.corruptfilesreturned.max_

> Extend DFSck with an option to list corrupt files using API from HDFS-729
> -------------------------------------------------------------------------
>
>                 Key: HDFS-1032
>                 URL: https://issues.apache.org/jira/browse/HDFS-1032
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: tools
>            Reporter: Rodrigo Schmidt
>            Assignee: André Oriani
>         Attachments: hdfs-1032_aoriani.patch, hdfs-1032_aoriani_2.patch, 
> hdfs-1032_aoriani_3.patch
>
>
> HDFS-729 created a new API to namenode that returns the list of corrupt files.
> We can now extend fsck (DFSck.java) to add an option (e.g. --list_corrupt) 
> that queries the namenode using the new API and lists the corrupt files to 
> the user.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
