[ https://issues.apache.org/jira/browse/HDFS-1057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12883785#action_12883785 ]

sam rash commented on HDFS-1057:
--------------------------------

From the raw console output of Hudson:

     [exec]     [junit] Tests run: 3, Failures: 0, Errors: 1, Time elapsed: 0.624 sec
     [exec]     [junit] Test org.apache.hadoop.hdfs.security.token.block.TestBlockToken FAILED
--
     [exec]     [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0.706 sec
     [exec]     [junit] Test org.apache.hadoop.hdfs.server.common.TestJspHelper FAILED
--
     [exec]     [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 28.477 sec
     [exec]     [junit] Test org.apache.hadoop.hdfsproxy.TestHdfsProxy FAILED

I ran the tests locally and the first two succeed. The third fails on the latest
trunk even without hdfs-1057, so that failure is pre-existing and unrelated to this
patch. From the test perspective, I think this is safe to commit.

1. TestBlockToken

run-test-hdfs:
   [delete] Deleting directory /data/users/srash/apache/hadoop-hdfs/build/test/data
    [mkdir] Created dir: /data/users/srash/apache/hadoop-hdfs/build/test/data
   [delete] Deleting directory /data/users/srash/apache/hadoop-hdfs/build/test/logs
    [mkdir] Created dir: /data/users/srash/apache/hadoop-hdfs/build/test/logs
    [junit] WARNING: multiple versions of ant detected in path for junit
    [junit]          jar:file:/usr/local/ant/lib/ant.jar!/org/apache/tools/ant/Project.class
    [junit]      and jar:file:/home/srash/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
    [junit] Running org.apache.hadoop.hdfs.security.token.block.TestBlockToken
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 1.248 sec


2. TestJspHelper
run-test-hdfs:
   [delete] Deleting directory /data/users/srash/apache/hadoop-hdfs/build/test/data
    [mkdir] Created dir: /data/users/srash/apache/hadoop-hdfs/build/test/data
   [delete] Deleting directory /data/users/srash/apache/hadoop-hdfs/build/test/logs
    [mkdir] Created dir: /data/users/srash/apache/hadoop-hdfs/build/test/logs
    [junit] WARNING: multiple versions of ant detected in path for junit
    [junit]          jar:file:/usr/local/ant/lib/ant.jar!/org/apache/tools/ant/Project.class
    [junit]      and jar:file:/home/srash/.ivy2/cache/ant/ant/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class
    [junit] Running org.apache.hadoop.hdfs.server.common.TestJspHelper
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 1.275 sec


> Concurrent readers hit ChecksumExceptions if following a writer to very end of file
> -----------------------------------------------------------------------------------
>
>                 Key: HDFS-1057
>                 URL: https://issues.apache.org/jira/browse/HDFS-1057
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: data-node
>    Affects Versions: 0.20-append, 0.21.0, 0.22.0
>            Reporter: Todd Lipcon
>            Assignee: sam rash
>            Priority: Blocker
>             Fix For: 0.20-append
>
>         Attachments: conurrent-reader-patch-1.txt, 
> conurrent-reader-patch-2.txt, conurrent-reader-patch-3.txt, 
> HDFS-1057-0.20-append.patch, hdfs-1057-trunk-1.txt, hdfs-1057-trunk-2.txt, 
> hdfs-1057-trunk-3.txt, hdfs-1057-trunk-4.txt, hdfs-1057-trunk-5.txt, 
> hdfs-1057-trunk-6.txt
>
>
> In BlockReceiver.receivePacket, it calls replicaInfo.setBytesOnDisk before 
> calling flush(). Therefore, if there is a concurrent reader, it's possible to 
> race here - the reader will see the new length while those bytes are still in 
> the buffers of BlockReceiver. Thus the client will potentially see checksum 
> errors or EOFs. Additionally, the last checksum chunk of the file is made 
> accessible to readers even though it is not stable.
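
For readers following the description above, here is a minimal, self-contained sketch
of the ordering problem and of the straightforward fix (flush before publishing the new
length). The class and method names other than setBytesOnDisk()/flush() are hypothetical
stand-ins, not the actual BlockReceiver/replica code:

// Sketch only: illustrates why updating the reader-visible length before
// flushing lets a concurrent reader read past the bytes that are on disk.
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;

class ReplicaSketch {
    // Length that a concurrent reader trusts when deciding how far to read.
    private volatile long bytesOnDisk;
    void setBytesOnDisk(long n) { bytesOnDisk = n; }
    long getBytesOnDisk()       { return bytesOnDisk; }
}

class PacketReceiverSketch {
    private final ReplicaSketch replica = new ReplicaSketch();
    private final OutputStream out;   // buffered stream to the block file

    PacketReceiverSketch(OutputStream blockFile) {
        this.out = new BufferedOutputStream(blockFile);
    }

    // Racy ordering (the bug): the visible length is advanced while the last
    // bytes may still be sitting in the BufferedOutputStream, so a reader can
    // observe length newLen before those bytes (and their checksum) are stable.
    void receivePacketRacy(byte[] data, long newLen) throws IOException {
        out.write(data);
        replica.setBytesOnDisk(newLen);
        out.flush();
    }

    // Safe ordering: flush first, then advance the visible length, so any
    // length a reader observes is backed by bytes already written out.
    void receivePacketSafe(byte[] data, long newLen) throws IOException {
        out.write(data);
        out.flush();
        replica.setBytesOnDisk(newLen);
    }
}

The only point is the ordering: any length a concurrent reader can observe must already
be backed by flushed data, otherwise the reader sees EOFs or checksum errors on the
still-buffered tail.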

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
