[
https://issues.apache.org/jira/browse/HADOOP-1259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12488760
]
Doug Cutting commented on HADOOP-1259:
--------------------------------------
> Is this with ChecksumFileSystem or Block level checksums?
DFS block-level checksums, as will be added by HADOOP-1134.
>> If we want to change bytesPerChecksum for a [ ... ].
> I didn't follow this completely.
I was questioning whether, after HADOOP-1134 is implemented, permitting
arbitrary block sizes and bytesPerChecksum values will complicate future
upgrades. The current upgrade, for HADOOP-1134, already has to deal with
this problem; it's too late to fix things for that. I assumed that the
motivation for the proposal in this issue was to simplify post-HADOOP-1134
upgrades to DFS. I am not convinced that it will simplify them much, but
perhaps that was not your motivation.
> DFS should enforce block size is a multiple of io.bytes.per.checksum
> ---------------------------------------------------------------------
>
> Key: HADOOP-1259
> URL: https://issues.apache.org/jira/browse/HADOOP-1259
> Project: Hadoop
> Issue Type: Improvement
> Reporter: Raghu Angadi
>
> DFSClient currently does not enforce that dfs.block.size is a multiple of
> io.bytes.per.checksum. This is not really a problem right now, but it can
> complicate future upgrades like HADOOP-1134 (see one of the comments
> there: http://issues.apache.org/jira/browse/HADOOP-1134#action_12488542).
> I propose that DFSClient fail loudly and ask the user politely to change
> the config to meet this condition. Of course, we will also update the
> documentation for dfs.block.size.
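For illustration, here is a minimal sketch of the proposed check. The class,
method name, and message wording are hypothetical, not actual DFSClient code;
it only shows the modulo test and the loud, user-directed failure described
above.

    import java.io.IOException;

    /** Hypothetical sketch of the check proposed in HADOOP-1259. */
    class BlockSizeCheck {
      // Fail loudly when dfs.block.size is not a positive multiple of
      // io.bytes.per.checksum, and tell the user which keys to change.
      static void checkBlockSize(long blockSize, int bytesPerChecksum)
          throws IOException {
        if (bytesPerChecksum <= 0 || blockSize % bytesPerChecksum != 0) {
          throw new IOException("dfs.block.size (" + blockSize
              + ") must be a multiple of io.bytes.per.checksum ("
              + bytesPerChecksum + "); please change the configuration.");
        }
      }
    }

With the defaults of the era (dfs.block.size = 67108864 and
io.bytes.per.checksum = 512) the check passes, since 67108864 = 512 * 131072.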