[
https://issues.apache.org/jira/browse/HADOOP-1259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12488756
]
Raghu Angadi commented on HADOOP-1259:
--------------------------------------
> But I don't see how it complicates upgrades to permit the final checksum in
> each block to represent fewer bytes than bytesPerChecksum.
Is this about ChecksumFileSystem or block-level checksums?
If you are implying that we can do this for the current ChecksumFileSystem: one
problem with it is that ChecksumFileSystem is not aware of blocks, and it does
not (and should not) know how to handle this situation. Allowing a checksum to
cover fewer bytes than bytesPerChecksum in the middle of the stream would change
the offset at which the 4-byte checksum is located for data in a DFS file.
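To make the offset concern concrete, here is a minimal sketch of the fixed-stride
mapping a checksum file relies on (the header length, class, and names below are
illustrative assumptions, not the actual ChecksumFileSystem code):

    // Sketch only: a checksum file stores one 4-byte CRC per bytesPerChecksum
    // bytes of data, so the checksum covering any data offset is found by
    // simple integer division. HEADER_LENGTH is an assumed constant, not the
    // real ChecksumFileSystem value.
    public class ChecksumOffsetSketch {
      static final int HEADER_LENGTH = 8;
      static final int CHECKSUM_SIZE = 4;

      // Offset within the checksum file of the CRC covering dataOffset.
      static long checksumOffset(long dataOffset, int bytesPerChecksum) {
        return HEADER_LENGTH + (dataOffset / bytesPerChecksum) * CHECKSUM_SIZE;
      }

      public static void main(String[] args) {
        int bytesPerChecksum = 512;
        // Data byte 1,000,000 falls in chunk 1953, so its CRC sits at a
        // predictable place. If an earlier chunk covered fewer than 512 bytes,
        // every later CRC would shift and this arithmetic would break.
        System.out.println(checksumOffset(1000000L, bytesPerChecksum));
      }
    }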
> If we want to change bytesPerChecksum for a [ ... ].
I didn't follow this completely.
> DFS should enforce block size is a multiple of io.bytes.per.checksum
> ---------------------------------------------------------------------
>
> Key: HADOOP-1259
> URL: https://issues.apache.org/jira/browse/HADOOP-1259
> Project: Hadoop
> Issue Type: Improvement
> Reporter: Raghu Angadi
>
> DFSClient currently does not enforce that dfs.block.size is a multiple of
> io.bytes.per.checksum. This is not really a problem currently, but it can
> complicate future upgrades like HADOOP-1134 (see one of the comments there:
> http://issues.apache.org/jira/browse/HADOOP-1134#action_12488542).
> I propose that DFSClient should fail loudly and ask the user politely to
> change the config to meet this condition. Of course, we will also change the
> documentation for dfs.block.size.
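
A minimal sketch of the kind of check being proposed (the method name, message
wording, and placement below are assumptions, not the actual DFSClient patch):

    // Illustrative only: fail loudly at client start-up if dfs.block.size is
    // not a positive multiple of io.bytes.per.checksum.
    import java.io.IOException;

    public class BlockSizeCheckSketch {
      static void checkBlockSize(long blockSize, int bytesPerChecksum)
          throws IOException {
        if (bytesPerChecksum <= 0 || blockSize % bytesPerChecksum != 0) {
          // Ask the user politely (but clearly) to fix the configuration.
          throw new IOException("io.bytes.per.checksum (" + bytesPerChecksum
              + ") must be a positive value that evenly divides dfs.block.size ("
              + blockSize + "). Please adjust the configuration.");
        }
      }

      public static void main(String[] args) throws IOException {
        checkBlockSize(64L * 1024 * 1024, 512);      // fine: 64 MB is a multiple of 512
        checkBlockSize(64L * 1024 * 1024 + 1, 512);  // throws: not a multiple
      }
    }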
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.