[jira] [Updated] (HDFS-5761) DataNode fails to validate integrity for checksum type NULL when DataNode recovers

2015-05-05 Thread Allen Wittenauer (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HDFS-5761:
---
Labels: BB2015-05-TBR  (was: )

> DataNode fails to validate integrity for checksum type NULL when DataNode 
> recovers 
> ---
>
> Key: HDFS-5761
> URL: https://issues.apache.org/jira/browse/HDFS-5761
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.0.0
>Reporter: Kousuke Saruta
>Assignee: Kousuke Saruta
>  Labels: BB2015-05-TBR
> Attachments: HDFS-5761.patch
>
>
> When a DataNode goes down while writing blocks, the blocks are not finalized, 
> and the next time the DataNode recovers, integrity validation will run.
> But if we use NULL for the checksum algorithm (dfs.checksum.type can be set 
> to NULL), the DataNode will fail to validate integrity and cannot start 
> up. 
> The cause is in BlockPoolSlice#validateIntegrity.
> In the method, there is the following code.
> {code}
> long numChunks = Math.min(
>   (blockFileLen + bytesPerChecksum - 1)/bytesPerChecksum, 
>   (metaFileLen - crcHeaderLen)/checksumSize);
> {code}
> When we choose the NULL checksum, checksumSize is 0, so an ArithmeticException 
> (integer division by zero) is thrown and the DataNode cannot start up.
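A minimal sketch of the failure and one possible guard. This is not the attached patch; the method, class name, and sample values here are hypothetical, chosen only to mirror the variables in the quoted snippet (blockFileLen, metaFileLen, bytesPerChecksum, checksumSize, crcHeaderLen):

```java
// Sketch: computing numChunks as in BlockPoolSlice#validateIntegrity, with a
// guard for checksumSize == 0 (the NULL checksum case). All names and values
// below are illustrative, not taken from the actual patch.
public class NumChunksSketch {
    static long numChunks(long blockFileLen, long metaFileLen,
                          int bytesPerChecksum, int checksumSize,
                          int crcHeaderLen) {
        // Chunk count implied by the block file length (ceiling division).
        long chunksFromBlock =
            (blockFileLen + bytesPerChecksum - 1) / bytesPerChecksum;
        if (checksumSize == 0) {
            // NULL checksum: the meta file carries no per-chunk checksums,
            // so dividing by checksumSize would throw ArithmeticException.
            // Fall back to the block-file-derived count alone.
            return chunksFromBlock;
        }
        // Otherwise take the smaller of the two implied chunk counts,
        // exactly as the original expression does.
        return Math.min(chunksFromBlock,
                        (metaFileLen - crcHeaderLen) / checksumSize);
    }

    public static void main(String[] args) {
        // 1000-byte block, 512-byte chunks, NULL checksum (checksumSize 0):
        // the unguarded expression would divide by zero here.
        System.out.println(numChunks(1000, 7, 512, 0, 7));
        // 4-byte checksums, meta file holding two of them after a 7-byte header.
        System.out.println(numChunks(1000, 15, 512, 4, 7));
    }
}
```

With the guard, the NULL-checksum case degrades to trusting the block file length instead of crashing the DataNode during recovery.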



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5761) DataNode fails to validate integrity for checksum type NULL when DataNode recovers

2014-01-13 Thread Kousuke Saruta (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kousuke Saruta updated HDFS-5761:
-

Status: Patch Available  (was: Open)






[jira] [Updated] (HDFS-5761) DataNode fails to validate integrity for checksum type NULL when DataNode recovers

2014-01-13 Thread Kousuke Saruta (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kousuke Saruta updated HDFS-5761:
-

Attachment: HDFS-5761.patch

I've attached a patch for this issue.






[jira] [Updated] (HDFS-5761) DataNode fails to validate integrity for checksum type NULL when DataNode recovers

2014-01-13 Thread Kousuke Saruta (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kousuke Saruta updated HDFS-5761:
-

Summary: DataNode fails to validate integrity for checksum type NULL when 
DataNode recovers   (was: DataNode fail to validate integrity for checksum type 
NULL when DataNode recovers )



