[ https://issues.apache.org/jira/browse/HDFS-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13042384#comment-13042384 ]

Daryn Sharp commented on HDFS-2021:
-----------------------------------

+1
Looks good.  Presumably increasing the number of writes and the chunk size is 
meant to make the problem easier to reproduce.  I hope it doesn't add much 
runtime to the test suite...

> TestWriteRead failed with inconsistent visible length of a file 
> ----------------------------------------------------------------
>
>                 Key: HDFS-2021
>                 URL: https://issues.apache.org/jira/browse/HDFS-2021
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>         Environment: Linux RHEL5
>            Reporter: CW Chung
>            Assignee: John George
>         Attachments: HDFS-2021-2.patch, HDFS-2021.patch
>
>
> The junit test failed when iterating a number of times with a larger chunk 
> size on Linux. Once in a while, the visible number of bytes seen by a reader 
> is slightly less than what it was supposed to be. 
> When run with the following parameters, it failed more often on Linux (as 
> reported by John George) than on my Mac:
>   private static final int WR_NTIMES = 300;
>   private static final int WR_CHUNK_SIZE = 10000;
> After adding more debugging output to the source, this is a sample of the output:
> Caused by: java.io.IOException: readData mismatch in byte read: 
> expected=2770000 ; got 2765312
>         at 
> org.apache.hadoop.hdfs.TestWriteRead.readData(TestWriteRead.java:141)
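
For context, a minimal sketch (not the actual test code) of the write/read
pattern the description suggests: write fixed-size chunks, hflush() after each
one so the bytes become visible, and have a fresh reader count how many bytes
it can see.  The class name, file path, and the byte-counting reader loop are
illustrative assumptions; the real logic lives in TestWriteRead and the
attached patches.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class VisibleLengthSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      FileSystem fs = FileSystem.get(conf);
      Path path = new Path("/tmp/visibleLengthSketch.dat"); // hypothetical path
      int chunkSize = 10000; // mirrors WR_CHUNK_SIZE
      int nTimes = 300;      // mirrors WR_NTIMES
      byte[] chunk = new byte[chunkSize];

      FSDataOutputStream out = fs.create(path, true);
      long written = 0;
      for (int i = 0; i < nTimes; i++) {
        out.write(chunk);
        out.hflush();          // make the bytes visible to new readers
        written += chunk.length;

        // A fresh reader should now see at least 'written' bytes.
        FSDataInputStream in = fs.open(path);
        long seen = 0;
        byte[] buf = new byte[chunkSize];
        int n;
        while ((n = in.read(buf)) > 0) {
          seen += n;
        }
        in.close();
        if (seen < written) {
          // The kind of mismatch the test reports,
          // e.g. expected=2770000 ; got 2765312.
          System.err.println("readData mismatch: expected=" + written
              + " ; got " + seen);
        }
      }
      out.close();
      fs.delete(path, false);
    }
  }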

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
