[ https://issues.apache.org/jira/browse/HDFS-2932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Srikanth Upputuri resolved HDFS-2932.
-------------------------------------
       Resolution: Duplicate
    Fix Version/s:     (was: 0.24.0)

Closed as duplicate of HDFS-3493. 

> Under replicated block after the pipeline recovery.
> ---------------------------------------------------
>
>                 Key: HDFS-2932
>                 URL: https://issues.apache.org/jira/browse/HDFS-2932
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 0.24.0
>            Reporter: J.Andreina
>            Assignee: Srikanth Upputuri
>
> Started 1 NN, DN1, DN2 and DN3 on the same machine.
> Wrote a huge file of size 2 GB.
> While the write for block id 1005 was in progress, DN3 was brought down.
> After the pipeline recovery happened, the block stamp changed to block_id_1006
> on DN1 and DN2.
> After the write was over, DN3 was brought up and the fsck command was issued.
> The following message is displayed:
> "block-id_1006 is under replicated. Target replicas is 3 but found 2 replicas".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
