[ https://issues.apache.org/jira/browse/HDFS-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13988209#comment-13988209 ]

Konstantin Shvachko commented on HDFS-6325:
-------------------------------------------

An easy way to reproduce this is to create a file on a cluster, then restart the NN 
without the DNs, manually leave SafeMode, and try to append data to the file. On a 
real cluster one can kill 3 DNs (on different racks); some blocks will then be 
missing, and one of them is likely to be the last block of some file.

> Append should fail if the last block has insufficient number of replicas
> ------------------------------------------------------------------------
>
>                 Key: HDFS-6325
>                 URL: https://issues.apache.org/jira/browse/HDFS-6325
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.2.0
>            Reporter: Konstantin Shvachko
>
> Currently append() succeeds on a file whose last block has no replicas. But 
> the subsequent updatePipeline() fails because there are no replicas, with the 
> exception "Unable to retrieve blocks locations for last block". This leaves 
> the file unclosed, and others cannot do anything with it until its lease 
> expires.
> The solution is to check the replicas of the last block on the NameNode and 
> fail during append() rather than during updatePipeline().
> How many replicas should be present before the NN allows the append? I see 
> two options:
> # min-replication: allow append if the last block is minimally replicated (1 
> by default)
> # full-replication: allow append if the last block is fully replicated (3 by 
> default)
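
For option 1, a minimal sketch of the intended check follows; it is not the actual 
FSNamesystem code, and the method and parameter names are made up for illustration. 
The idea is simply to compare the last block's live replica count against the chosen 
threshold and reject the append up front, instead of letting updatePipeline() fail later.

{code:java}
import java.io.IOException;

public class AppendPrecheckSketch {

  /** Option 1 (min-replication): require at least minReplication live replicas. */
  static void checkLastBlockReplication(int liveReplicas, int minReplication,
                                        String path) throws IOException {
    if (liveReplicas < minReplication) {
      // Fail fast in append() rather than later in updatePipeline().
      throw new IOException("append to " + path + " rejected: last block has "
          + liveReplicas + " live replica(s), fewer than the required "
          + minReplication);
    }
  }

  public static void main(String[] args) {
    try {
      // With min-replication = 1 this rejects a last block that has no replicas.
      checkLastBlockReplication(0, 1, "/user/test/afile");
    } catch (IOException e) {
      System.out.println(e.getMessage());
    }
  }
}
{code}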


