[
https://issues.apache.org/jira/browse/HDFS-1111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12883369#action_12883369
]
Sanjay Radia commented on HDFS-1111:
------------------------------------
Q. Is the RaidNode accessing the functionality via RPC directly, or via a
method that was added to Hdfs and DistributedFileSystem?
RaidNode should not be accessing the functionality directly via RPC - RPCs are
internal interfaces.
Further, if you believe this functionality is useful to add to Hdfs and
DistributedFileSystem, please make the case (I believe one could make such a
case).
When adding special hooks for private or external tools, one should make a case
that such hooks are generally useful.
I realize that a previous Jira added this functionality, but you were the
author of that Jira and so should be able to make the case.
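To illustrate the layering point above, here is a minimal sketch of what "access via a method added to DistributedFileSystem" could look like. The class and method names (`DistributedFileSystemSketch`, `listCorruptFiles`) and the stand-in `ClientProtocol` interface are assumptions for illustration, not the actual HDFS code.

```java
// Sketch: external tools (e.g. RaidNode) call a public file-system method;
// the RPC stays an internal interface behind it.
public class DistributedFileSystemSketch {

    // Stand-in for the internal namenode RPC interface (assumption).
    interface ClientProtocol {
        String[] getCorruptFiles();
    }

    private final ClientProtocol namenode;

    DistributedFileSystemSketch(ClientProtocol namenode) {
        this.namenode = namenode;
    }

    // Public API surface: callers depend on this method, leaving the
    // underlying RPC free to evolve as an internal interface.
    public String[] listCorruptFiles() {
        return namenode.getCorruptFiles();
    }
}
```

The design point is that only this wrapper, not the RPC signature, becomes a compatibility commitment to external tools.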
> getCorruptFiles() should give some hint that the list is not complete
> ---------------------------------------------------------------------
>
> Key: HDFS-1111
> URL: https://issues.apache.org/jira/browse/HDFS-1111
> Project: Hadoop HDFS
> Issue Type: New Feature
> Reporter: Rodrigo Schmidt
> Assignee: Rodrigo Schmidt
> Attachments: HADFS-1111.0.patch
>
>
> The list of corrupt files returned by the namenode gives no indication when
> the number of corrupt files exceeds the call's output limit (which means the
> list is not complete). There should be a way to hint incompleteness to
> clients.
> A simple hack would be to add an extra entry to the array returned with the
> value null. Clients could interpret this as a sign that there are other
> corrupt files in the system.
> We should also do some rephrasing of the fsck output to make it more
> confident when the list is complete and less confident when the list is
> known to be incomplete.
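The null-sentinel hack proposed in the description could be sketched as follows. The names (`listCorruptFiles`, `MAX_CORRUPT_FILES_RETURNED`) and the cap value are illustrative assumptions, not the actual namenode code.

```java
import java.util.ArrayList;
import java.util.List;

public class CorruptFileListing {
    // Assumed server-side cap on how many corrupt files one call returns.
    static final int MAX_CORRUPT_FILES_RETURNED = 500;

    // Server side: truncate the listing and append a null entry as a
    // sentinel when more corrupt files exist than were returned.
    static String[] listCorruptFiles(List<String> allCorrupt) {
        if (allCorrupt.size() <= MAX_CORRUPT_FILES_RETURNED) {
            return allCorrupt.toArray(new String[0]);
        }
        List<String> result =
            new ArrayList<>(allCorrupt.subList(0, MAX_CORRUPT_FILES_RETURNED));
        result.add(null); // sentinel: the list is incomplete
        return result.toArray(new String[0]);
    }

    // Client side: a trailing null means the listing is known to be
    // incomplete; anything else means it is complete.
    static boolean isComplete(String[] listing) {
        return listing.length == 0 || listing[listing.length - 1] != null;
    }
}
```

A client such as fsck would check `isComplete` before claiming the reported count is exhaustive.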
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.