[ https://issues.apache.org/jira/browse/HDFS-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13042208#comment-13042208 ]

Daryn Sharp commented on HDFS-2021:
-----------------------------------

I noticed that you omitted the conditional {{replyAck.isSuccess()}} when you 
moved the code block that updates {{bytesAcked}}.  {{isSuccess()}} isn't tied 
to whether the ack was successfully sent upstream, but rather to whether the 
downstreams were all successful, so it seems like the conditional should be 
reinserted to preserve the current behavior.  Changing the overall logic seems 
fraught with peril...
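
For illustration, here is a rough, self-contained sketch of the guard in 
question.  The {{Ack}} and {{Replica}} classes below are simplified stand-ins, 
not the real {{PipelineAck}}/{{ReplicaInPipeline}} types, and the field and 
method names are only approximations of the actual code:

{code:java}
// Simplified stand-ins; only the pieces needed to show the guarded update.
class Ack {
  private final boolean downstreamsOk;
  Ack(boolean downstreamsOk) { this.downstreamsOk = downstreamsOk; }
  // True only when every downstream datanode reported success for the packet.
  boolean isSuccess() { return downstreamsOk; }
}

class Replica {
  private long bytesAcked;
  long getBytesAcked() { return bytesAcked; }
  void setBytesAcked(long n) { bytesAcked = n; }
}

class ResponderSketch {
  // The behavior to preserve: bytesAcked only advances when the whole
  // downstream pipeline acked the packet, not merely because this datanode
  // wrote the bytes locally.
  void onAckReceived(Ack replyAck, Replica replica, long lastByteInPacket) {
    if (replyAck.isSuccess() && lastByteInPacket > replica.getBytesAcked()) {
      replica.setBytesAcked(lastByteInPacket);
    }
  }
}
{code}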

That said, I'm a bit confused about why a datanode updates its {{bytesAcked}} 
iff all downstreams are successful.  The datanode received and wrote those 
bytes, so it seems like the conditional isn't needed in either case.  Unless... 
{{bytesAcked}} is intended to track exactly how many bytes were written 
throughout the entire pipeline.  I'd think that a pipeline should write as much 
as it can even if downstreams are lost, and then backfill the under-replicated 
blocks afterward.  To satisfy my curiosity, perhaps someone with more knowledge 
of the code will comment.
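
If that guess is right (and it is only a guess), the conditional is what keeps 
a reader's visible length from running ahead of what every replica in the 
pipeline is known to hold.  A hypothetical illustration, not HDFS code:

{code:java}
// Hypothetical: if each replica's bytesAcked only advances on a fully
// successful downstream ack, then the length safely exposed to readers is
// bounded by the slowest replica in the pipeline.
class VisibleLengthSketch {
  static long visibleLength(long[] bytesAckedPerReplica) {
    long min = Long.MAX_VALUE;
    for (long acked : bytesAckedPerReplica) {
      min = Math.min(min, acked);
    }
    return min;
  }
}
{code}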

> HDFS Junit test TestWriteRead failed with inconsistent visible length of a 
> file 
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-2021
>                 URL: https://issues.apache.org/jira/browse/HDFS-2021
>             Project: Hadoop HDFS
>          Issue Type: Bug
>         Environment: Linux RHEL5
>            Reporter: CW Chung
>            Assignee: John George
>            Priority: Minor
>         Attachments: HDFS-2021.patch
>
>
> The junit test failed when iterating a number of times with a larger chunk 
> size on Linux.  Once in a while, the visible number of bytes seen by a reader 
> is slightly less than what it was supposed to be. 
> When run with the following parameters, it failed more often on Linux (as 
> reported by John George) than on my Mac:
>   private static final int WR_NTIMES = 300;
>   private static final int WR_CHUNK_SIZE = 10000;
> After adding more debugging output to the source, this is a sample of the output:
> Caused by: java.io.IOException: readData mismatch in byte read: expected=2770000 ; got 2765312
>         at org.apache.hadoop.hdfs.TestWriteRead.readData(TestWriteRead.java:141)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
