[
https://issues.apache.org/jira/browse/HADOOP-1259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12488763
]
Raghu Angadi commented on HADOOP-1259:
--------------------------------------
> for HADOOP-1134, already has to deal with this problem: it's too late to fix
> things for that.
true :-(. Since the fix is simple, I thought it would reduce such cases. Or, as
you mentioned, there may be no such clusters right now anyway.
My motivation was only for this upgrade. Post-HADOOP-1134-upgrade, such a
mismatch will cause a similar issue; for example, to join two blocks we would
need to re-checksum the entire second block. But I am fine with the mismatch
for block-level checksums.
We can close this issue for now, I guess.
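For illustration, here is a minimal sketch of the kind of check proposed above. The class and method names are hypothetical (this is not actual DFSClient code); it only shows reading the two config values and failing loudly when dfs.block.size is not a multiple of io.bytes.per.checksum.

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch of the proposed check; not the actual DFSClient code.
class BlockSizeConfigCheck {

  static void checkBlockSizeConfig(Configuration conf) throws IOException {
    long blockSize = conf.getLong("dfs.block.size", 64 * 1024 * 1024);
    int bytesPerChecksum = conf.getInt("io.bytes.per.checksum", 512);

    if (blockSize % bytesPerChecksum != 0) {
      // Fail loudly and tell the user how to fix the configuration.
      throw new IOException("dfs.block.size (" + blockSize
          + ") must be a multiple of io.bytes.per.checksum ("
          + bytesPerChecksum + "). Please adjust the configuration.");
    }
  }
}
{code}

Such a check could run once when the client writes a file, so a bad configuration is rejected before any blocks are created.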
> DFS should enforce block size is a multiple of io.bytes.per.checksum
> ---------------------------------------------------------------------
>
> Key: HADOOP-1259
> URL: https://issues.apache.org/jira/browse/HADOOP-1259
> Project: Hadoop
> Issue Type: Improvement
> Reporter: Raghu Angadi
>
> DFSClient currently does not enforce that dfs.block.size is a multiple of
> io.bytes.per.checksum. This is not really a problem currently, but it can affect
> future upgrades like HADOOP-1134 (see one of the comments there:
> http://issues.apache.org/jira/browse/HADOOP-1134#action_12488542).
> I propose that DFSClient should fail loudly and ask the user politely to change
> the config to meet this condition. Of course, we will also change the documentation
> for dfs.block.size.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.