[ https://issues.apache.org/jira/browse/HDFS-1059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Todd Lipcon updated HDFS-1059:
------------------------------

    Issue Type: Sub-task  (was: Bug)
        Parent: HDFS-1060

> completeFile loops forever if the block's only replica has become corrupt
> -------------------------------------------------------------------------
>
>                 Key: HDFS-1059
>                 URL: https://issues.apache.org/jira/browse/HDFS-1059
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>    Affects Versions: 0.21.0, 0.22.0
>            Reporter: Todd Lipcon
>
> If a writer is appending to a block with replication factor 1, and that block 
> has become corrupt, a reader will report the corruption to the NN. Then when 
> the writer tries to complete the file, it will loop forever with an error 
> like:
>     [junit] 2010-03-21 17:40:08,093 INFO  namenode.FSNamesystem (FSNamesystem.java:checkFileProgress(1613)) - BLOCK* NameSystem.checkFileProgress: block blk_-4256412191814117589_1001{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:56782|RBW]]} has not reached minimal replication 1
>     [junit] 2010-03-21 17:40:08,495 INFO  hdfs.DFSClient (DFSOutputStream.java:completeFile(1435)) - Could not complete file /TestReadWhileWriting/file1 retrying...
> We should add tests that cover the case of a writer appending to a block that becomes corrupt while a reader is accessing it.
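
For context on the "retrying..." message: the issue title says completeFile loops forever, i.e. the client keeps asking the NameNode to complete the file and the NameNode keeps answering no, presumably because the corrupt replica no longer counts toward minimal replication in checkFileProgress. The sketch below is a simplified, illustrative version of such an unbounded completion loop, not the actual HDFS code; the NamenodeStub interface and all names in it are placeholders.

    // Illustrative sketch only: NamenodeStub stands in for the NameNode RPC
    // interface; names and signatures are simplified, not the real HDFS API.
    interface NamenodeStub {
      // Returns true once the file's last block has reached minimal replication.
      boolean complete(String src, String clientName);
    }

    class CompleteFileSketch {
      // With replication factor 1, once the only replica has been reported
      // corrupt there is nothing left to satisfy minimal replication, so
      // complete() never returns true and this loop never terminates, which
      // is the behavior shown in the log above.
      static void completeFile(NamenodeStub namenode, String src, String clientName)
          throws InterruptedException {
        boolean fileComplete = false;
        while (!fileComplete) {
          fileComplete = namenode.complete(src, clientName);
          if (!fileComplete) {
            System.out.println("Could not complete file " + src + " retrying...");
            Thread.sleep(400);   // brief back-off before asking the NameNode again
          }
        }
      }
    }

A test along the lines suggested above would drive this path by appending to a single-replica block, corrupting that replica so a concurrent reader reports it to the NameNode, and then closing the writer; with the current behavior the close would hang in this retry loop.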

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
