[ https://issues.apache.org/jira/browse/HDFS-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13863898#comment-13863898 ]

Vinay commented on HDFS-5723:
-----------------------------

{noformat}2014-01-07 09:47:22,878 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: <ip>:25012:DataXceiver error processing WRITE_BLOCK operation  src: /<ip>:56873 dest: /<ip2>:25012
org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot append to a non-existent replica BP-1746676845-<ip>-1388725564463:blk_1073742062_1268
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:372)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:507)
        at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:93)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:200)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:457)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
        at java.lang.Thread.run(Thread.java:662){noformat}


{noformat}2014-01-07 09:47:46,773 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: <ip>:25012:DataXceiver error processing READ_BLOCK operation  src: /<ip>:35125 dest: /<ip>:25012
java.io.IOException: Replica gen stamp < block genstamp, block=BP-1746676845-<ip>-1388725564463:blk_1073742062_1270, replica=FinalizedReplica, blk_1073742062_1266, FINALIZED
  getNumBytes()     = 6
  getBytesOnDisk()  = 6
  getVisibleLength()= 6
  getVolume()       = /home/vinay/hadoop/dfs/data/current
  getBlockFile()    = /home/vinay/hadoop/dfs/data/current/BP-1746676845-<ip>-1388725564463/current/finalized/blk_1073742062
  unlinked          =false
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:247)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:328)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:101)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:65)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
        at java.lang.Thread.run(Thread.java:662){noformat}
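
Both stack traces reduce to the same condition: DN1's on-disk replica is still at the pre-append generation stamp 1266, while the incoming operations carry a newer stamp (1268 for the WRITE_BLOCK append, 1270 for the READ_BLOCK). Below is a minimal, self-contained illustration of that mismatch; it is not the actual FsDatasetImpl/BlockSender code, and the class and variable names are invented.

{noformat}
// Illustration only (invented names; not Hadoop's FsDatasetImpl/BlockSender code):
// both errors above stem from DN1 holding the replica at the pre-append
// generation stamp while the requested block carries a newer one.
class GenStampMismatchSketch {
    public static void main(String[] args) {
        long replicaGenStampOnDn1 = 1266L; // blk_1073742062_1266, FINALIZED on DN1's disk
        long appendGenStamp       = 1268L; // WRITE_BLOCK -> "Cannot append to a non-existent replica"
        long readGenStamp         = 1270L; // READ_BLOCK  -> "Replica gen stamp < block genstamp"

        // The append path needs a replica matching the requested block; DN1 has none at 1268.
        System.out.println("append finds matching replica: "
                + (replicaGenStampOnDn1 == appendGenStamp));
        // The read path needs the on-disk replica to be at least as new as the requested block.
        System.out.println("read sees up-to-date replica:  "
                + (replicaGenStampOnDn1 >= readGenStamp));
    }
}
{noformat}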

> Append-failed FINALIZED replica should not be accepted as valid when that 
> block is under construction
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5723
>                 URL: https://issues.apache.org/jira/browse/HDFS-5723
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.2.0
>            Reporter: Vinay
>
> Scenario:
> 1. 3-node cluster with 
> dfs.client.block.write.replace-datanode-on-failure.enable set to false.
> 2. A file is written with 3 replicas -- blk_id_gs1.
> 3. One of the datanodes, DN1, goes down.
> 4. The file is opened for append, more data is added and synced (to only the 
> 2 live nodes, DN2 and DN3) -- blk_id_gs2.
> 5. Now DN1 is restarted.
> 6. In its block report, DN1 reports the FINALIZED block blk_id_gs1; this 
> should be marked as corrupt.
> But because the NN still has the appended block's state as UnderConstruction, 
> it does not detect this replica as corrupt and adds it to the valid block 
> locations.
> As long as that namenode stays alive, this datanode will keep being counted 
> as a valid replica location, and reads/appends will fail on it.
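
The step the description calls out is the block-report handling: a FINALIZED replica reported with an older generation stamp for a block the NN still tracks as UnderConstruction should be marked corrupt rather than added as a valid location. The following is a minimal, self-contained sketch of that check under the scenario above; all names are hypothetical, and this is not Hadoop's actual BlockManager code.

{noformat}
// Hypothetical sketch of the missing check (invented names, not Hadoop code):
// a FINALIZED replica reported with a generation stamp older than the NN's
// under-construction block should be treated as corrupt, not as a valid location.
class StaleFinalizedReplicaCheckSketch {

    enum ReplicaState { FINALIZED, RBW, RWR }

    static class ReportedReplica {
        final long blockId;
        final long genStamp;
        final ReplicaState state;
        ReportedReplica(long blockId, long genStamp, ReplicaState state) {
            this.blockId = blockId;
            this.genStamp = genStamp;
            this.state = state;
        }
    }

    // NN-side decision: stale if the DN claims FINALIZED but its generation stamp
    // is behind the stamp the NN holds for the under-construction block.
    static boolean isStale(ReportedReplica reported, long underConstructionGenStamp) {
        return reported.state == ReplicaState.FINALIZED
                && reported.genStamp < underConstructionGenStamp;
    }

    public static void main(String[] args) {
        // DN1 restarts and reports the pre-append replica blk_1073742062_1266
        // while the NN's under-construction block is already at genstamp 1268.
        ReportedReplica fromDn1 =
                new ReportedReplica(1073742062L, 1266L, ReplicaState.FINALIZED);
        System.out.println(isStale(fromDn1, 1268L)
                ? "mark corrupt (expected behaviour)"
                : "accepted as valid location (the reported bug)");
    }
}
{noformat}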


