[ https://issues.apache.org/jira/browse/HDDS-2259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Attila Doroszlai updated HDDS-2259:
-----------------------------------
    Description: 
Chunk checksum verification fails for (almost) any file: the checksum is 
computed over the entire buffer, regardless of how many bytes were actually 
read for the chunk.  Since the final read of a chunk usually fills only part 
of the buffer, the computed checksum covers stale trailing bytes and does not 
match the stored checksum.

{code:title=https://github.com/apache/hadoop/blob/55c5436f39120da0d7dabf43d7e5e6404307123b/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/KeyValueContainerCheck.java#L259-L273}
            byte[] buffer = new byte[cData.getBytesPerChecksum()];
...
                v = fs.read(buffer);  // v = bytes actually read; may be < buffer.length
...
                bytesRead += v;
...
                // BUG: checksum is computed over the whole buffer, including
                // stale bytes past index v when the last read is short
                ByteString actual = cal.computeChecksum(buffer)
                    .getChecksums().get(0);
{code}

This results in marking all closed containers as unhealthy.
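
Below is a minimal, self-contained sketch of the intended behavior.  It 
assumes nothing about Ozone internals: java.util.zip.CRC32 stands in for 
Ozone's checksum calculator, and ChunkChecksumSketch/checksumOf are 
illustrative names, not the actual Ozone API.  The point is that the checksum 
must cover only the v bytes returned by read(), never the whole buffer.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.CRC32;

public class ChunkChecksumSketch {

    // Hypothetical stand-in for the checksum calculator: computes the
    // checksum over only the first len bytes of data.
    static long checksumOf(byte[] data, int len) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, len);
        return crc.getValue();
    }

    public static void main(String[] args) throws IOException {
        int bytesPerChecksum = 16;
        byte[] chunk = "a chunk whose size is not a multiple of 16".getBytes();

        try (InputStream fs = new ByteArrayInputStream(chunk)) {
            byte[] buffer = new byte[bytesPerChecksum];
            int v;
            while ((v = fs.read(buffer)) != -1) {
                // Correct: bound the checksum to the v bytes actually read.
                // The buggy variant, checksumOf(buffer, buffer.length), would
                // include stale bytes past index v on the final, short read.
                long actual = checksumOf(buffer, v);
                System.out.printf("read %d bytes, checksum %08x%n", v, actual);
            }
        }
    }
}
{code}

With this pattern, the final short read of a chunk produces a checksum over 
exactly the bytes that belong to the chunk, so verification no longer depends 
on whatever leftover data the buffer happens to contain.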

> Container Data Scrubber computes wrong checksum
> -----------------------------------------------
>
>                 Key: HDDS-2259
>                 URL: https://issues.apache.org/jira/browse/HDDS-2259
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>          Components: Ozone Datanode
>    Affects Versions: 0.5.0
>            Reporter: Attila Doroszlai
>            Assignee: Attila Doroszlai
>            Priority: Critical
>


