[ https://issues.apache.org/jira/browse/HDFS-15009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983251#comment-16983251 ]
Ayush Saxena commented on HDFS-15009:
-------------------------------------

Thanx [~hemanthboyina] for the patch.
{code:java}
-import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.FEDERATION_MOUNT_TABLE_MAX_CACHE_SIZE;
-import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.FEDERATION_MOUNT_TABLE_MAX_CACHE_SIZE_DEFAULT;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.FEDERATION_MOUNT_TABLE_MAX_CACHE_SIZE;
+import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.FEDERATION_MOUNT_TABLE_MAX_CACHE_SIZE_DEFAULT;
{code}
Avoid this change; it reorders the imports unnecessarily. For the test, add a line of comment explaining what it is testing, as done for the other cases. Apart from that, LGTM.

> FSCK "-list-corruptfileblocks" return Invalid Entries
> -----------------------------------------------------
>
> Key: HDFS-15009
> URL: https://issues.apache.org/jira/browse/HDFS-15009
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: hemanthboyina
> Assignee: hemanthboyina
> Priority: Major
> Attachments: HDFS-15009.001.patch, HDFS-15009.002.patch
>
> Scenario: we have two directories, dir1 and dir10, and only dir10 has corrupt files. Now if we run -list-corruptfileblocks for dir1, the corrupt file count shown for dir1 is actually that of dir10.
> {code:java}
> while (blkIterator.hasNext()) {
>   BlockInfo blk = blkIterator.next();
>   final INodeFile inode = getBlockCollection(blk);
>   skip++;
>   if (inode != null) {
>     String src = inode.getFullPathName();
>     if (src.startsWith(path)) {
>       corruptFiles.add(new CorruptFileBlockInfo(src, blk));
>       count++;
>       if (count >= DEFAULT_MAX_CORRUPT_FILEBLOCKS_RETURNED)
>         break;
>     }
>   }
> }
> {code}
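To make the failure mode concrete: {{"/dir10/file".startsWith("/dir1")}} is true, so dir10's entries slip into dir1's results. Below is a minimal, self-contained sketch (not the actual patch; the class and helper names {{PrefixCheckDemo}} and {{isUnderPath}} are hypothetical) of one way to tighten the check by requiring a path-separator boundary after the matched prefix:
{code:java}
// Hypothetical demo class; not part of the HDFS-15009 patch.
public class PrefixCheckDemo {

  // The check as it stands in the snippet above: a plain prefix match,
  // which also matches sibling directories such as /dir10 for path /dir1.
  static boolean buggyMatch(String src, String path) {
    return src.startsWith(path);
  }

  // Tightened check: accept the path itself or entries strictly below it,
  // by requiring a '/' immediately after the matched prefix.
  static boolean isUnderPath(String src, String path) {
    if (!src.startsWith(path)) {
      return false;
    }
    return src.length() == path.length()      // exact match
        || path.equals("/")                   // root matches everything
        || src.charAt(path.length()) == '/';  // true child of path
  }

  public static void main(String[] args) {
    String path = "/dir1";
    System.out.println(buggyMatch("/dir10/file", path));   // true  (wrong)
    System.out.println(isUnderPath("/dir10/file", path));  // false (correct)
    System.out.println(isUnderPath("/dir1/file", path));   // true
    System.out.println(isUnderPath("/dir1", path));        // true
  }
}
{code}
The boundary test is the essential part: it distinguishes /dir1/file (a child of /dir1) from /dir10/file (a sibling subtree) while still matching the queried path itself.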