[ https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Plamen Jeliazkov updated HDFS-6325:
-----------------------------------

    Attachment: appendTest.patch

Attaching a patch to reproduce the issue. It can be reproduced on a 
MiniDFSCluster of 1 NameNode and 1 DataNode.

Once the NameNode is out of SafeMode but has zero block locations for some 
file X, the following events happen:
The first append to X fails with 'unable to retrieve last block of file X'.
Subsequent appends fail with AlreadyBeingCreatedException until lease recovery 
occurs.
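
The steps above can be scripted roughly as follows. This is only a sketch, not 
the attached appendTest.patch: the class name is made up, and the exact 
MiniDFSCluster/DFSTestUtil calls and the safemode-threshold setting may need 
adjusting for a given Hadoop version.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class AppendNoReplicaRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Let the NameNode leave SafeMode even though blocks are missing.
    conf.setFloat("dfs.namenode.safemode.threshold-pct", 0.0f);

    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      cluster.waitActive();
      Path x = new Path("/X");
      // File X with replication 1: its only replica is on the single DataNode.
      DFSTestUtil.createFile(cluster.getFileSystem(), x, 1024L, (short) 1, 0L);

      // Lose all replicas of the last block: stop the DataNode and restart the
      // NameNode, so it knows the block but has zero locations for it.
      cluster.stopDataNode(0);
      cluster.restartNameNode();

      DistributedFileSystem fs = cluster.getFileSystem();
      // First append fails with "unable to retrieve last block of file X";
      // retries fail with AlreadyBeingCreatedException until lease recovery.
      fs.append(x).close();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}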

> Append should fail if the last block has insufficient number of replicas
> ------------------------------------------------------------------------
>
>                 Key: HDFS-6325
>                 URL: https://issues.apache.org/jira/browse/HDFS-6325
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.2.0
>            Reporter: Konstantin Shvachko
>         Attachments: appendTest.patch
>
>
> Currently append() succeeds on a file whose last block has no replicas. But 
> the subsequent updatePipeline() fails, as there are no replicas, with the 
> exception "Unable to retrieve blocks locations for last block". This leaves 
> the file unclosed, and others cannot do anything with it until its lease 
> expires.
> The solution is to check the replicas of the last block on the NameNode and 
> fail during append() rather than during updatePipeline().
> How many replicas should be present before the NameNode allows the append? I 
> see two options:
> # min-replication: allow append if the last block is minimally replicated (1 
> by default)
> # full-replication: allow append if the last block is fully replicated (3 by 
> default)
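
To illustrate the two options above (a standalone sketch only, not the NameNode 
code; the class and method names are invented), the decision reduces to 
comparing the live replica count of the last block against either the minimum 
replication (dfs.namenode.replication.min, default 1) or the file's replication 
factor (dfs.replication, default 3):

{code:java}
// Illustrative only -- not the NameNode code. Shows the two policy options for
// deciding whether append() should be allowed on the last block of a file.
public final class AppendPolicy {

  enum Mode { MIN_REPLICATION, FULL_REPLICATION }

  // liveReplicas would come from the NameNode's block map; minReplication is
  // dfs.namenode.replication.min (default 1), fileReplication is the file's
  // replication factor (dfs.replication, default 3).
  static boolean allowAppend(Mode mode, int liveReplicas,
                             int minReplication, int fileReplication) {
    int required =
        (mode == Mode.MIN_REPLICATION) ? minReplication : fileReplication;
    return liveReplicas >= required;
  }

  public static void main(String[] args) {
    // Zero live replicas (this issue): rejected under either option.
    System.out.println(allowAppend(Mode.MIN_REPLICATION, 0, 1, 3));  // false
    // One live replica: min-replication allows it, full-replication does not.
    System.out.println(allowAppend(Mode.MIN_REPLICATION, 1, 1, 3));  // true
    System.out.println(allowAppend(Mode.FULL_REPLICATION, 1, 1, 3)); // false
  }
}
{code}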



