[ https://issues.apache.org/jira/browse/HDFS-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15104034#comment-15104034 ]

Junping Du commented on HDFS-9220:
----------------------------------

Thanks [~kihwal] for the clarification. For the layout fix, I think you mean 
HDFS-8791 (together with HDFS-8578), is that right? I am OK with pulling those 
two fixes in if we can make it in time.

> Reading small file (< 512 bytes) that is open for append fails due to 
> incorrect checksum
> ----------------------------------------------------------------------------------------
>
>                 Key: HDFS-9220
>                 URL: https://issues.apache.org/jira/browse/HDFS-9220
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.1
>            Reporter: Bogdan Raducanu
>            Assignee: Jing Zhao
>            Priority: Blocker
>             Fix For: 3.0.0, 2.7.2
>
>         Attachments: HDFS-9220.000.patch, HDFS-9220.001.patch, 
> HDFS-9220.002.patch, test2.java
>
>
> Exception:
> 2015-10-09 14:59:40 WARN  DFSClient:1150 - fetchBlockByteRange(). Got a 
> checksum exception for /tmp/file0.05355529331575182 at 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882:0 from 
> DatanodeInfoWithStorage[10.10.10.10]:5001
> All 3 replicas cause this exception and the read fails entirely with:
> BlockMissingException: Could not obtain block: 
> BP-353681639-10.10.10.10-1437493596883:blk_1075692769_9244882 
> file=/tmp/file0.05355529331575182
> Code to reproduce is attached.
> Does not happen in 2.7.0.
> Data is read correctly if checksum verification is disabled.
> More generally, the failure occurs when reading from the last block of a 
> file and that block contains <= 512 bytes.
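A minimal sketch of the reported scenario, for reference. This is not the attached test2.java; it assumes a running HDFS 2.7.1 cluster reachable via the default Configuration, and the path, write sizes, and class name are all illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendReadRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path p = new Path("/tmp/append-read-repro"); // illustrative path

    // Create a file well under one checksum chunk (512 bytes).
    try (FSDataOutputStream out = fs.create(p, true)) {
      out.write(new byte[100]);
    }

    // Reopen for append and flush more data WITHOUT closing, so the
    // last (and only) block stays open for append.
    FSDataOutputStream append = fs.append(p);
    append.write(new byte[100]);
    append.hflush();

    // A positional read goes through fetchBlockByteRange(), the code
    // path in the stack trace above. On the affected versions this
    // fails with a ChecksumException against every replica.
    byte[] buf = new byte[200];
    try (FSDataInputStream in = fs.open(p)) {
      in.readFully(0, buf);
    }
    System.out.println("read succeeded: " + buf.length + " bytes");

    append.close();
    fs.close();
  }
}
{code}

The positional readFully is used deliberately, since the reported warning comes from fetchBlockByteRange(); per the report, the same data reads back intact when checksum verification is disabled.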



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
