[ https://issues.apache.org/jira/browse/HDFS-4360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13544615#comment-13544615 ]

Hadoop QA commented on HDFS-4360:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12563403/HDFS-4360.patch
  against trunk revision .

    {color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3744//console

This message is automatically generated.
                
> Multiple BlockFixers should be supported in order to improve scalability and 
> reduce the load on a single BlockFixer
> ---------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-4360
>                 URL: https://issues.apache.org/jira/browse/HDFS-4360
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: contrib/raid
>    Affects Versions: 0.22.0
>            Reporter: Jun Jin
>              Labels: patch
>         Attachments: HDFS-4360.patch
>
>
> The current implementation can only run a single BlockFixer, since the fsck (in 
> RaidDFSUtil.getCorruptFiles) checks the whole DFS file system. If multiple 
> BlockFixers were launched, they would all do the same work and try to fix the 
> same files. 
> The change/fix will mainly be in BlockFixer.java and 
> RaidDFSUtil.getCorruptFiles(), to enable fsck to check the different paths 
> defined in separate Raid.xml files for a single RaidNode/BlockFixer
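The idea described above can be sketched in plain Java. This is a hypothetical illustration, not the attached patch: it assumes each BlockFixer is configured with its own source-path prefix (from its own Raid.xml), and that the corrupt-file listing is filtered to that prefix so two fixers never claim the same file. The class and method names (`ScopedCorruptFileFilter`, `corruptFilesUnder`) are invented for this sketch; the stub `allCorruptFiles` stands in for what a whole-filesystem `RaidDFSUtil.getCorruptFiles` would return today.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of path-scoped corrupt-file listing.
// Not the real patch: a stub list stands in for the fsck result.
public class ScopedCorruptFileFilter {

    // Stand-in for the full-filesystem fsck result that a
    // whole-DFS getCorruptFiles call would return.
    static List<String> allCorruptFiles() {
        List<String> files = new ArrayList<>();
        files.add("/raid/a/part-00000");
        files.add("/raid/b/part-00001");
        files.add("/user/x/data.bin");
        return files;
    }

    // Restrict the corrupt-file listing to the path prefix this
    // BlockFixer instance owns (as its Raid.xml would define it).
    static List<String> corruptFilesUnder(String prefix) {
        List<String> scoped = new ArrayList<>();
        for (String path : allCorruptFiles()) {
            if (path.startsWith(prefix)) {
                scoped.add(path);
            }
        }
        return scoped;
    }

    public static void main(String[] args) {
        // Two BlockFixers with disjoint prefixes see disjoint work sets,
        // so they cannot race to fix the same file.
        System.out.println(corruptFilesUnder("/raid/a"));
        System.out.println(corruptFilesUnder("/raid/b"));
    }
}
```

With disjoint prefixes the work sets are disjoint, which is the scalability property the issue asks for; the real change would push the prefix down into the fsck invocation rather than filtering its output.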

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira