[ https://issues.apache.org/jira/browse/HDFS-2021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo (Nicholas), SZE updated HDFS-2021:
-----------------------------------------

    Component/s: data-node
       Priority: Major  (was: Minor)
        Summary: TestWriteRead failed with inconsistent visible length of a file  (was: HDFS Junit test TestWriteRead failed with inconsistent visible length of a file)

> TestWriteRead failed with inconsistent visible length of a file 
> ----------------------------------------------------------------
>
>                 Key: HDFS-2021
>                 URL: https://issues.apache.org/jira/browse/HDFS-2021
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>         Environment: Linux RHEL5
>            Reporter: CW Chung
>            Assignee: John George
>         Attachments: HDFS-2021-2.patch, HDFS-2021.patch
>
>
> The JUnit test failed when iterating a number of times with a larger chunk
> size on Linux. Once in a while, the visible number of bytes seen by a reader
> was slightly less than what it was supposed to be.
> When run with the following parameters, it failed more often on Linux (as
> reported by John George) than on my Mac:
>   private static final int WR_NTIMES = 300;
>   private static final int WR_CHUNK_SIZE = 10000;
> With additional debugging output added to the source, this is a sample of the
> failure:
> Caused by: java.io.IOException: readData mismatch in byte read: expected=2770000 ; got 2765312
>         at org.apache.hadoop.hdfs.TestWriteRead.readData(TestWriteRead.java:141)
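
For context, below is a minimal, hypothetical sketch (not the actual
TestWriteRead source) of the write/hflush/read pattern the quoted report
describes, using Hadoop's public FileSystem API. Only WR_NTIMES and
WR_CHUNK_SIZE come from the report; the file path, read buffer size, class
name, and overall structure are illustrative assumptions.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class VisibleLengthSketch {
  private static final int WR_NTIMES = 300;        // iterations, as in the report
  private static final int WR_CHUNK_SIZE = 10000;  // bytes per chunk, as in the report

  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/tmp/visible-length-sketch");  // hypothetical path
    byte[] chunk = new byte[WR_CHUNK_SIZE];

    FSDataOutputStream out = fs.create(file, true);
    long written = 0;
    for (int i = 0; i < WR_NTIMES; i++) {
      out.write(chunk);
      out.hflush();             // flushed bytes should be visible to new readers
      written += WR_CHUNK_SIZE;

      // Re-open the file and count how many bytes a new reader actually sees.
      FSDataInputStream in = fs.open(file);
      long visible = 0;
      byte[] buf = new byte[64 * 1024];
      int n;
      while ((n = in.read(buf)) > 0) {
        visible += n;
      }
      in.close();

      if (visible < written) {
        throw new IOException("readData mismatch in byte read: expected="
            + written + " ; got " + visible);
      }
    }
    out.close();
    fs.close();
  }
}

The check relies on HDFS's hflush contract that flushed bytes become visible
to readers who open the file afterwards; the reported mismatch
(expected=2770000 ; got 2765312) is that visible-length check coming up short.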

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
